Is it possible to Group Workers so they can only run certain pipelines?


#1

The situation is I’m dealing with ~10 AWS accounts that each have a management VPC inside them, which can reach any number of peered VPCs inside those accounts. I have too many VPCs to peer all of them back to a central VPC.

I was wondering if it was possible to do something like deploying my web nodes in a main VPC, peering that VPC to the MGMT VPCs in the other accounts, and running worker nodes in each of those MGMT VPCs. That way, my workers would have network-level access to run Ansible etc. against resources in the VPCs peered to the local MGMT VPC. I think this would work, except I would need a way to say ‘workers in group A’ only run these pipelines, while ‘workers in group B’ only run some other set of pipelines.


#2

You have two approaches:

  • Give tags to workers. You do this by adding the tag to the concourse worker command line (--tag=) or via the CONCOURSE_TAG env variable, and by specifying those tags in each pipeline. See https://concourse-ci.org/resource-types.html#resource-type-tags and https://concourse-ci.org/tags-step-modifier.html. This can become bothersome (see the first sketch below).
  • Use a Concourse team per group, and allocate workers to the teams. You do this with --team= on the concourse worker command line or with the CONCOURSE_TEAM env variable. This has the big advantage that you don’t have to modify the pipelines: dispatching is done based on the team the pipeline has been set in (see the second sketch below).
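
For the tags approach, a minimal sketch; the job, resource, and tag names are hypothetical. Note that the tag has to be repeated on every step that must run inside that account’s network:

```yaml
# Start the worker in account A's MGMT VPC with:
#   concourse worker --tag=account-a ...   (or CONCOURSE_TAG=account-a)
jobs:
- name: deploy-account-a
  plan:
  - get: repo          # hypothetical git resource holding the playbooks
    tags: [account-a]  # fetch on a worker inside account A as well
  - task: run-ansible
    tags: [account-a]  # only workers tagged account-a run this step
    file: repo/ci/run-ansible.yml
```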
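
For the teams approach, a sketch of a team-scoped worker, written here as a docker-compose service purely for illustration (the service name, team name, host, and key paths are all hypothetical; the same CONCOURSE_TEAM variable applies however you run the worker):

```yaml
services:
  worker-account-a:
    image: concourse/concourse
    command: worker
    privileged: true
    environment:
      CONCOURSE_TEAM: team-account-a  # only builds from this team land here
      CONCOURSE_TSA_HOST: web.main-vpc.internal:2222  # web node in the main VPC
      CONCOURSE_TSA_PUBLIC_KEY: /keys/tsa_host_key.pub
      CONCOURSE_TSA_WORKER_PRIVATE_KEY: /keys/worker_key
      CONCOURSE_WORK_DIR: /worker-state
    volumes:
      - ./keys:/keys
```

Pipelines set while logged into that team (fly login -n team-account-a, then fly set-pipeline) will then run only on those workers.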

#3

@marco-m

That setup looks like it would work fine. My next question, since I can’t seem to find it documented (I could be missing it): could a pipeline from ‘Team A’ notify a pipeline from ‘Team B’? As you can imagine, devs push code in one account, we build, and then, if successful, we would want to run some QA in a different account, and so on down the line.


#4

We do something similar at work with AWS. I forgot the details, but what we do is use an S3 bucket as the rendezvous point. The bucket belongs to one of the two AWS accounts. A pipeline in account 1 has permissions to write artifacts to the bucket, and a pipeline in account 2 has permissions to read from that bucket.

We use the Concourse S3 resource. What I remember is this: since S3 is a protocol implemented by various IaaS providers, the Concourse S3 resource deliberately does not support AWS roles, because the pipeline would then work only on AWS (and after a while I realized this makes sense). Instead, we store the S3 credentials (access_key_id, secret_access_key) in SSM and refer to them as Concourse ((params)).
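
A sketch of what the write side looks like; the bucket name, paths, and var names are made up, and the ((…)) vars are resolved by the credential manager (SSM in our case):

```yaml
# Both pipelines declare the same S3 resource:
resources:
- name: handoff
  type: s3
  source:
    bucket: shared-handoff-bucket                # owned by one of the two accounts
    regexp: artifacts/app-(.*).tgz
    access_key_id: ((s3-access-key-id))          # fetched from SSM
    secret_access_key: ((s3-secret-access-key))  # fetched from SSM

# In the account-1 pipeline, the build job writes to it:
jobs:
- name: publish
  plan:
  - get: repo               # hypothetical git resource, declared elsewhere
  - task: build
    file: repo/ci/build.yml # assumed to output built/app-<version>.tgz
  - put: handoff
    params: {file: built/app-*.tgz}
```

In the account-2 pipeline, the same resource appears in a get step with trigger: true, so a new artifact version automatically kicks off the QA job; that get/trigger pair is what provides the “pipeline A notifies pipeline B” behavior asked about in #3.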

This might look hackish, but you have to remember that S3 (or an equivalent) is the classic Concourse channel even between two jobs in the same pipeline, so using the same channel between two pipelines is, in my opinion, natural.


#5

@marco-m

I created a new thread, since it was more involved than this discussion: I built a PoC for the deployment discussed above and can’t quite get the workers to show up.