Help W/Unknown Artifact Error

I am a newbie to Concourse…I am running some test code and keep getting “unknown artifact source:”

I can’t find what this really means…the path to my .yml file in the repo, which from my understanding is being pulled, is there…can someone point me in the right direction, as the errors seem so cryptic…thanks a ton.

Regards

Can you post a snippet of the pipeline/task that is throwing that error? Usually errors like that come from misconfigurations in the YAML.

Hi crsimmons,

Here is my test code:

resource_types:
- name: myimage
  type: docker-image
  source:
    repository: chuckwilliams10/gcp_tf
    tag: latest

resources:
- name: new gcp test
  type: git
  source: {uri: "https://github.com/chuckwilliams10/cw1-gcp-tf"}

jobs:
- name: new gcp test
  plan:
  - get: new gcp test
    public: true
  - task: run-auth
    file: cw1-gcp-tf/blueprint/setup_scripts/concourse/tasks/run-auth.yml

    params:
          GOOGLE_CREDENTIALS: ((creds))
          CUSTOMER_IDENTIFIER_PREFIX: ((name))
          ORG_ID: ((org-id))
          BILLING_ACCOUNT_ID: ((billing-account))
          BUCKET_NAME: ((bucket-name))
          BUCKET_LOCATION: ((bucket-location))

A couple of problems I see in the pipeline and in the task.yml:

  • public: true is an option on the job rather than on the get step.
  • You define a docker image resource in the pipeline and also in the task; you only need one. To use the one defined in the pipeline, you can get: myimage in the job, then add image: myimage at the same level as file and params in the task step.
  • In the task you define
inputs:
  - name: blueprint

but you don’t get that resource in your job. This is most likely the cause of the error you’re seeing.

  • You’re defining parameter interpolation in both the pipeline and the task. You only need the ((value)) in the pipeline. The params block in the task only needs to be:
params:
  GOOGLE_CREDENTIALS:
  CUSTOMER_IDENTIFIER_PREFIX:
  ORG_ID:
  BILLING_ACCOUNT_ID:
  BUCKET_NAME:
  BUCKET_LOCATION:
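To make the get/input matching concrete, here is a minimal sketch (all names below are illustrative, not taken from your pipeline):

```yaml
# Pipeline side: the job must `get` every artifact the task consumes.
jobs:
- name: example
  plan:
  - get: my-repo               # fetches the resource named my-repo
  - task: example-task
    file: my-repo/ci/task.yml  # the task file lives inside that input

# Task side (my-repo/ci/task.yml): every name under `inputs` must match
# a get (or an output of an earlier step) in the job's plan; a mismatch
# is what produces "unknown artifact source: <name>".
platform: linux
inputs:
- name: my-repo
run:
  path: my-repo/ci/run.sh
```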

Hi crsimmons,

Thanks a ton…do you have bandwidth for a quick call on Hangouts or another system…I promise it will be very, very quick…thanks a ton.

@crsimmons–Are you saying the code should look like this now:

task file:

#platform: linux
#image_resource:
# type: docker-image
#  source: {repository: chuckwilliams10/gcp_tf, tag: latest}
#inputs:
# - name: blueprint
run:
  path: blueprint/setup_scripts/concourse/scripts/setup.sh

image: myimage

params:
 GOOGLE_CREDENTIALS: 
 CUSTOMER_IDENTIFIER_PREFIX: 
 ORG_ID: 
 BILLING_ACCOUNT_ID: 
 BUCKET_NAME: 
 BUCKET_LOCATION: 

pipeline file

---
resource_types:
- name: myimage
  type: docker-image
  source:
    repository: chuckwilliams10/gcp_tf
    tag: latest

resources:
- name: new gcp test
  type: git
  source: {uri: "https://github.com/chuckwilliams10/cw1-gcp-tf"}

jobs:
- name: new gcp test
  plan:
  - get: myimage
  - task: run-auth
    file: cw1-gcp-tf/blueprint/setup_scripts/concourse/tasks/run-auth.yml

    params: {((name)), ((org-id)), ((billing-account)), ((bucket-name)), ((bucket-location)), ((cred))}

…thanks a ton for your help as I am just trying to fully understand how it works so I can build some robust pipelines.

Not quite.

Pipeline should be:

resource_types:
- name: myimage
  type: docker-image
  source:
    repository: chuckwilliams10/gcp_tf
    tag: latest

resources:
- name: new-gcp-test
  type: git
  source: {uri: "https://github.com/chuckwilliams10/cw1-gcp-tf"}

jobs:
- name: new gcp test
  public: true
  plan:
  - get: new-gcp-test
  - get: myimage
  - task: run-auth
    file: new-gcp-test/blueprint/setup_scripts/concourse/tasks/run-auth.yml
    image: myimage
    params:
      GOOGLE_CREDENTIALS: ((creds))
      CUSTOMER_IDENTIFIER_PREFIX: ((name))
      ORG_ID: ((org-id))
      BILLING_ACCOUNT_ID: ((billing-account))
      BUCKET_NAME: ((bucket-name))
      BUCKET_LOCATION: ((bucket-location))

Note:

  • the name of the new-gcp-test resource matches the get which also matches the path of the task in file.
  • the get: myimage and the associated image: myimage
  • public: true is at the job level rather than the get level.

Task should be:

---
platform: linux

inputs:
- name: new-gcp-test

run:
  path: new-gcp-test/blueprint/setup_scripts/concourse/scripts/setup.sh

params:
 GOOGLE_CREDENTIALS: 
 CUSTOMER_IDENTIFIER_PREFIX: 
 ORG_ID: 
 BILLING_ACCOUNT_ID: 
 BUCKET_NAME: 
 BUCKET_LOCATION: 

Note:

  • you don’t need to have anything about image here because you have specified which image to use in the pipeline.
  • you still need platform: linux so that Concourse knows what type of worker to run this task on (you likely only have linux workers)
  • you still need the inputs block but the input should be new-gcp-test instead of blueprint as this must match the resource in the get for this job in the pipeline. Defining an input tells Concourse to mount this resource as a volume when the task container is started. As a consequence the path to the shell script in run must specify that the blueprint directory is inside of the new-gcp-test input.

@crsimmons–THANKS A TON…for clearing this up…it is starting to make some sense how Concourse works…I made the suggested changes and am now getting this:

"error: invalid configuration:

invalid jobs:

jobs.new gcp test.plan[1].get.myimage refers to a resource that does not exist "

…strange…isn’t the “get: myimage” code referring to the docker image that should be pulled down to run the tasks (shell scripts) in? My shell scripts are TF commands, and the docker image chuckwilliams10/gcp_tf has the needed version of TF pre-baked in…am I missing something…thanks a ton.

Regards

That one’s my bad. The docker image resource is included in Concourse so you don’t need to specify it as a resource_type.

- name: myimage
  type: docker-image
  source:
    repository: chuckwilliams10/gcp_tf
    tag: latest

should be under resources like your git resource is.

@crsimmons–thanks a ton…I made the last adjustment, and when the pipeline runs…it errors out with
“failed to load new-gcp-test/blueprint/setup_scripts/concourse/tasks/run-auth.yml: invalid task configuration:
missing ‘platform’
missing path to executable to run”

In my repo, does the path to the task file need to be changed…with the name of my repo “cw1-gcp-tf” in front instead of what is there now, “new-gcp-test”?…thanks a ton.

The path to the task file in file: new-gcp-test/blueprint/setup_scripts/concourse/tasks/run-auth.yml is relative to the root of the container. This directory will contain a directory for each input (named after those inputs). In your case you should have a resource defined in your pipeline called new-gcp-test which points to your git repo called cw1-gcp-tf. In your job you have get: new-gcp-test and in your task you have new-gcp-test as an input. Therefore when your container starts there will be a directory in the starting location called new-gcp-test which contains whatever is in the repo cw1-gcp-tf.

The actual name of your repo only matters when defining the git resource at the start of the pipeline. After that you only reference that repo by the name of the resource which in your case is new-gcp-test.

Are you sure you changed your task config file to start with:

---
platform: linux

inputs:
- name: new-gcp-test

run:
  path: new-gcp-test/blueprint/setup_scripts/concourse/scripts/setup.sh

Both your errors suggest your task YAML isn’t right. missing 'platform' should be remedied by adding platform: linux, and the other error suggests whatever path you’ve given isn’t a valid location in the container.

The easiest way to debug/understand this is to intercept the container that is running the task then poke around with tools like ls to see what is actually in there.


@crsimmons–Hi, below is the current state of each:

pipeline file called gcp_test.yml

---
resources:
- name: new-gcp-test
  type: git
  source: {uri: "https://github.com/chuckwilliams10/cw1-gcp-tf"}

- name: myimage
  type: docker-image
  source:
    repository: chuckwilliams10/gcp_tf
    tag: latest

jobs:
- name: new gcp test
  public: true
  plan:
  - get: new-gcp-test
  - get: myimage
  - task: run-auth
    file: new-gcp-test/blueprint/setup_scripts/concourse/tasks/run-auth.yml
    image: myimage
    params:
      GOOGLE_CREDENTIALS: ((creds))
      CUSTOMER_IDENTIFIER_PREFIX: ((name))
      ORG_ID: ((org-id))
      BILLING_ACCOUNT_ID: ((billing-account))
      BUCKET_NAME: ((bucket-name))
      BUCKET_LOCATION: ((bucket-location))

task file called run-auth.yml

---
platform: linux

inputs:
- name: new-gcp-test

run:
  path: new-gcp-test/blueprint/setup_scripts/concourse/scripts/setup.sh

params:
 GOOGLE_CREDENTIALS: 
 CUSTOMER_IDENTIFIER_PREFIX: 
 ORG_ID: 
 BILLING_ACCOUNT_ID: 
 BUCKET_NAME: 
 BUCKET_LOCATION: 

I can’t think of anything that would cause the error. So I am clear…my image only has the Terraform binary and the GCP SDK so that the scripts can run; the scripts have some TF and require authenticating into GCP via gcloud commands…my image does not need files from the repo…correct? Don’t the files from the repo get uploaded to the container that runs the task?

Thanks a ton for all your help here…

I tried running your pipeline with some dummy params on my own Concourse. Since my params aren’t valid I get ERROR: (gcloud.auth.activate-service-account) Could not read json file ./credentials.json: No JSON object could be decoded when I run the job but as far as I can tell the task is starting properly and the repo files are present in the container. When I intercept the task container I see:

$ fly -t my-concourse intercept -j test/new\ gcp\ test
1: build #1, step: myimage, type: get
2: build #1, step: new-gcp-test, type: get
3: build #1, step: run-auth, type: task
choose a container: 3
bash-4.3# ls
credentials.json  new-gcp-test
bash-4.3# ls new-gcp-test
CHANGELOG.md             Dockerfile               bitbucket-pipelines.yml  blueprint.yml            codeship-steps.yml       deployment               docs                     env.encrypted            org-level
DECISIONLOG.md           README.md                blueprint                codeship-services.yml    codeship_scripts         dockercfg.encrypted      env.decrypted.example    modules                  resources

This makes me think any error you are seeing at this point is to do with the script itself.

thanks a ton…I’ll check the scripts and do some testing with a separate test pipeline file, test task file and test script.
When I run my new test…I’m getting this error:

Backend error: Exit status: 500, message: {"Type":"","Message":"runc exec: exit status 1: exec failed: container_linux.go:348: starting container process caused \"exec: \\\"test-pipeline/blueprint/setup_scripts/concourse/scripts/test-script.sh\\\": permission denied\"\n","Handle":"","ProcessID":"","Binary":""}

Now that I think about it…how do I make the script executable so that the container can run it…does the chmod command go somewhere in the task file?…thanks a ton.

You are correct in your suspicion. That error is due to the script not being executable.

You have to make the script file executable in the git repo that the pipeline pulls from. If you’re developing on Linux or macOS then it’s as easy as running chmod +x /path/to/file.sh, then add/commit/push to git. If you’re on Windows then you will need to use git to set the permissions on the file with git update-index --chmod=+x /path/to/file.sh, then commit/push to git.
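A quick sketch of the Linux/macOS route described above (test-script.sh is a stand-in for your actual script, not a file from your repo):

```shell
# Stand-in for your setup script (name is illustrative)
printf '#!/bin/sh\necho hello\n' > test-script.sh

# Set the executable bit; git records this file mode when you commit
chmod +x test-script.sh

# Confirm the bit is set before you add/commit/push
test -x test-script.sh && echo "executable"
```

On Windows the working tree doesn’t track the executable bit, which is why git update-index --chmod=+x sets it directly in the index instead.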

Thanks a ton for your help…my tests are now working and I now have a better understanding of how I can build some robust pipelines with Concourse…thanks a ton again.
