Is there a way to trigger a pipeline to run from the other pipeline?


I have multiple pipelines, and I don’t want to merge them into a huge pipeline. But I still have a requirement that, say once pipeline1 finishes, I want to trigger pipeline2 to run. Is there any way to support that?


I think that the only way is to go through a resource. Say you have pipeline P1 that should trigger pipeline P2.

For example, as its last step, pipeline P1 can upload a new version of a file via the S3 resource. Pipeline P2 can then detect the new version of the same file via an S3 resource with the same configuration.
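A minimal sketch of that hand-off, assuming Concourse's built-in `s3` resource and a shared bucket; the bucket, key pattern, and credential names here are all illustrative:

```yaml
# Both pipelines declare the same S3 resource.
resources:
- name: handoff
  type: s3
  source:
    bucket: pipeline-handoff           # assumption: a bucket both pipelines can reach
    regexp: handoff/trigger-(.*).txt   # the capture group provides the version
    access_key_id: ((aws-access-key))
    secret_access_key: ((aws-secret-key))

# --- in pipeline P1: the last job uploads a new version ---
jobs:
- name: last-job
  plan:
  - put: handoff
    params:
      file: out/trigger-*.txt   # produced by an earlier task step

# --- in pipeline P2: a new version triggers the first job ---
# jobs:
# - name: first-job
#   plan:
#   - get: handoff
#     trigger: true
```

Because the `regexp` capture group is the version, each upload under a new filename is detected by P2's `get` step and kicks off its job.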


MArco-m's suggestion is good.

We had quite a few pipelines which were linked together through shared resources. For instance, one pipeline would produce a Python module and upload it to a PyPI resource; another pipeline would monitor that resource and, when triggered, build a Docker container and push it to a Docker resource … and so on. Each pipeline was completely independent.
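The downstream half of that chain can be sketched roughly like this, assuming the community `concourse-pypi-resource` resource type; the package and registry names are illustrative:

```yaml
resource_types:
- name: pypi
  type: docker-image
  source:
    repository: cfplatformeng/concourse-pypi-resource  # community PyPI resource type

resources:
- name: my-module
  type: pypi
  source:
    name: my-module                # assumption: the package name the upstream pipeline publishes
- name: my-image
  type: docker-image
  source:
    repository: example/my-image   # illustrative registry path
    username: ((docker-user))
    password: ((docker-pass))

jobs:
- name: build-container
  plan:
  - get: my-module
    trigger: true       # fires when a new version appears on the index
  - put: my-image
    params:
      build: my-module  # assumption: the fetched module contains a Dockerfile
```

Each pipeline stays independent; the only coupling is the shared artifact repository.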

But of course, sometimes it made sense to do that kind of thing within a single pipeline too (e.g. some sub-module integration and testing before publishing a resource). The documentation for that case is quite good, as were the examples.


Going through resources is the cleanest way. There is also a resource that allows you to call fly (and do things like fly trigger-job), if you are adventurous.


@ralekseenkov We are trying to write a resource type to trigger another job from a job. But the tricky thing is that we have to configure the Concourse URL, username, access token, etc. for the Concourse resource, even if both jobs are on the same Concourse cluster. Is there a way to avoid that?


The resource already exists, so you don't have to write anything. Assuming you don't do cross-team job invocations, here is how you can do it today (and how we do it):

  • every team has one “service user”, which your jobs can use to interact with the Concourse API
  • credentials for that “service user” are stored in Vault (or you can use any other credential manager)

then you just basically do:

- put: fly
  params:
    options: "trigger-job -j pipeline/job"

and define resource as:

- name: fly
  type: fly
  source:
    url: ((concourse-url))
    team: ((team))
    username: ((concourse_team_username)) # will be taken from Vault
    password: ((concourse_team_password)) # will be taken from Vault

and resource type as:

- name: fly
  type: docker-image
  source:
    repository: troykinsella/concourse-fly-resource

You can't avoid passing the URL, team, username, and password, but you can store them in the credential manager for every team.


Is there a way to monitor the output of the pipeline under test (the one that got triggered) to know whether it succeeded or not? This could be really useful for testing pipelines (where the pipeline is the deliverable).


Or, at a minimum, see from the command line whether a pipeline ran all the way through successfully or failed.