I’ve come across an issue which is either a bug in Concourse, or a misunderstanding on my side of how resource passing between jobs works. Essentially, I have a pipeline structure like this:
```yaml
resources:
- name: sync
  ...

jobs:
- name: job-01
  plan:
  - get: sync
  - task: ... # (produce sync/file-01)
  - put: sync

- name: job-11
  plan:
  - get: sync
    passed: [ job-01 ]
    trigger: true
  - task: ... # (produce sync/file-11)
  - put: sync

- name: job-12
  plan:
  - get: sync
    passed: [ job-01 ]
    trigger: true
  - task: ... # (produce sync/file-12)
  - put: sync

- name: job-21
  plan:
  - get: sync
    passed: [ job-11, job-12 ]
    trigger: true
  - task: ... # (produce sync/file-21)
  - put: sync
```
In English: I have a collection of jobs (`job-01` through `job-21`) which all work on a resource named `sync`. Some of the jobs, namely `job-11` and `job-12`, can/should be executed in parallel. The `sync` resource is an rsync directory (type `rsync` from `mrsixw/concourse-rsync-resource`). Each of the jobs produces a uniquely named file inside `sync` (`file-01` through `file-21`). As a real-world example, this pipeline serves as a compiling system for a piece of C++ software with many dependencies. The `sync` resource is the output folder, gradually getting filled with all its library dependencies.
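For illustration, each of the producing tasks follows the same shape; a minimal sketch of what one looks like (the image and the `echo` payload are simplified placeholders here, the real tasks run the C++ build):

```yaml
# Hypothetical sketch of one producer task config.
# Declaring sync as both input and output lets the task add its file
# to the directory it received from the previous job.
platform: linux
image_resource:
  type: docker-image
  source: { repository: busybox }
inputs:
- name: sync
outputs:
- name: sync
run:
  path: sh
  args:
  - -c
  - echo "output of job-01" > sync/file-01
```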
What I would expect is that, at the end of the pipeline, `sync` would contain four files: `file-01`, `file-11`, `file-12`, and `file-21`.
What is happening instead is that my final result only contains `file-21`, completely bypassing the `job-1x` series of jobs. It turns out that this isn’t an accident: Concourse explicitly shows me that jobs 11, 12, and 21 all receive the same version of `sync`, namely the one produced by `job-01`. In other words, despite the `passed: [ job-11, job-12 ]` requirement, the `sync` resource passed to `job-21` never actually passed through `job-11` and `job-12`.
I’m using vanilla Concourse, version 4.2.2.
Does anyone have any ideas why? Is this a bug, or do I misunderstand something?
If this is intended behavior, can anyone help me understand it, and possibly give a hint about how to implement the desired behavior?
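(The only alternative I can think of is to give each job its own output resource and have `job-21` fetch all of them, roughly like the sketch below, but that gives up the single shared `sync` folder, so I’d prefer a cleaner approach. The `sync-11`/`sync-12` resource names are invented for the sketch.)

```yaml
# Hypothetical fan-in via separate resources: each job puts to its own
# resource, and job-21 gets all of them before merging.
- name: job-21
  plan:
  - get: sync-11        # written only by job-11
    passed: [ job-11 ]
    trigger: true
  - get: sync-12        # written only by job-12
    passed: [ job-12 ]
    trigger: true
  - task: ... # (merge inputs and produce file-21)
  - put: sync
```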
Thanks & Cheers,