How to secure the manual trigger for specific jobs in a pipeline?

I have some fairly typical pipelines that consist of many build and test jobs, followed by deployment jobs to dev, test, and production environments. The prod deployment job is configured to run only on a manual trigger. We would like the production deployment job to be secured so that only certain people in our company can trigger it, while still allowing any developer on the project to trigger runs of all the other jobs in the pipeline. The other jobs have automatic triggers, but it is fairly common that we need to manually re-trigger a failed job, for example when an external server dependency was temporarily unreachable.

TL;DR: Is there any mechanism for securing the manual trigger on specific jobs in a pipeline, without locking users out of triggering the other jobs in the pipeline?


I don’t have an answer for this but I wonder if there’s a way to use a custom resource + webhooks to do this?

Or maybe a custom task as the first step in your plan that reaches out to some external secure resource to see if the task should fail (thus failing the job) or pass.

A really simple example would be having a task read the contents of a file in an S3 bucket. The task could pass if the file's contents say "true" and fail otherwise. You could then secure the file in S3 (or whatever system you choose; it doesn't have to be a file in S3, just anything you can control authorization over [git?]) and lock it so only certain users can modify it.
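The gate check itself would be tiny. Here's a minimal sketch in Python, assuming the flag file has already been fetched into the task's working directory (e.g. by an s3 resource `get`); the path and file name are placeholders:

```python
from pathlib import Path

def gate_open(flag_path: str) -> bool:
    """Return True only if the flag file exists and its contents are exactly 'true'."""
    try:
        return Path(flag_path).read_text().strip() == "true"
    except FileNotFoundError:
        return False

# In the task's run script you'd exit nonzero when the gate is closed, e.g.:
#   raise SystemExit(0 if gate_open("deploy-gate/prod-unlocked") else 1)
```

The authorization then lives entirely outside Concourse: anyone can trigger the job, but it only proceeds if someone with write access to the bucket has flipped the flag.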

Hope that gives you some ideas to achieve what you want 🙂

A separate pipeline controlled by a deploy team is right out?

@taylorsilva I’m not sure I follow your suggestion. The workflow would be something like:

  1. Write a file to S3 (or update a file's contents from false to true)
  2. Kick off a deployment in Concourse
  3. Go back and delete the file in S3 (or change it back to false)

I guess you could have the pipeline perform step #3 itself, to prevent accidentally leaving the gate unlocked.
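If I understand correctly, the job wiring would look roughly like the fragment below (a sketch only; the resource, bucket, and task names are made up, and it assumes the standard `s3` resource type):

```yaml
resources:
- name: deploy-gate
  type: s3
  source:
    bucket: my-deploy-gate            # placeholder bucket
    versioned_file: prod-unlocked
    access_key_id: ((aws_access_key))
    secret_access_key: ((aws_secret_key))

jobs:
- name: deploy-prod
  plan:
  - get: deploy-gate
  - task: check-gate                  # fails unless deploy-gate/prod-unlocked says "true"
    file: ci/tasks/check-gate.yml
  - put: prod                         # the actual deployment step
```

The re-locking in step #3 could then be a final `put` back to the same resource (or an `ensure` hook on the job) so the gate never stays open by accident.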

@hfinucane A separate pipeline controlled by a different team was my first thought. It just seems like a lot of work to set up, including passing all the artifact versions from the first pipeline to the second, unless I'm missing something about how to accomplish that. That's why I'm asking here: I'm trying to find a solution that is simple and not error-prone, and was wondering whether there is already an established pattern for this sort of thing.