Trigger a job upon successful execution of previous 2 jobs

I am fairly new to Concourse. I am trying to trigger a job upon successful execution of the previous two jobs. JOB-A and JOB-B run fine, but JOB-C never gets triggered, because the time resource combined with the passed condition never produces a matching version. Any alternative ideas?

jobs:
- name: JOB-A
  serial: true
  plan:
  - get: timestamp
    trigger: true 
  - put: host-A
    params:
      interpreter: /bin/sh
      script: |
        uname -a
        exit 0
    attempts: 2
  - put: timestamp

- name: JOB-B
  serial: true
  plan:
  - get: timestamp
    trigger: true  
  - put: host-B
    params:
      interpreter: /bin/sh
      script: |
        uname -a
        exit 0
    attempts: 2
  - put: timestamp
    
- name: JOB-C
  serial: true
  plan:
  - get: timestamp
    trigger: true  
    passed: [ JOB-A,JOB-B ]  
  - put: host-C 
    params:
      interpreter: /bin/sh
      script: |
        uname -a
        exit 0
    attempts: 2

I think if you combine the three jobs into one, it should be easier to implement your logic.

No, I don’t want to combine them. If a sub-task fails in a job, I have to re-run the whole job (don’t want to do that).

Can you explain why the time resource would never match? I took your sample pipeline, modified it a bit to get it working, and I'm seeing the behavior you expect, based on how I understand your question. Am I missing something?

resources:
- name: timestamp
  type: time
  source:
    interval: 30s


jobs:
- name: JOB-A
  serial: true
  plan:
  - get: timestamp
    trigger: true
  - task: fails
    config:
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: alpine
      run:
        path: true
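        # "true" always exits 0, so this task succeeds as written; change it to "false" to make the task (and the job) actually fail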

- name: JOB-B
  serial: true
  plan:
  - get: timestamp
    trigger: true

- name: JOB-C
  serial: true
  plan:
  - get: timestamp
    trigger: true
    passed: [ JOB-A,JOB-B ]

Here’s the result: JOB-C only runs when both JOB-A and JOB-B are successful. When one of the jobs fails, JOB-C does not run.

You are missing the put: timestamp steps in JOB-A and JOB-B. When you add them, each job pushes its own new version of the timestamp resource after it completes, so there is never a single version that has passed through both JOB-A and JOB-B, and JOB-C’s passed constraint can never be satisfied.
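
If you want to see this, listing the resource versions shows each job pushing its own timestamp version; the target and pipeline names below are just placeholders:

fly -t ci resource-versions -r my-pipeline/timestamp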

I’ve modified the pipeline to split JOB-C into two separate jobs to fix my issue.

I see; why are you using a put step for the time resource? Glad you figured out a solution. I’m wondering what you get from putting your time resource? You only need to put a resource if you’ve changed it.

It’s a little trick I learned: using the time resource to trigger jobs sequentially and run each of them only once. Here’s how it looks. JOB-A is triggered manually and updates the timestamp resource; JOB-B is executed only once, upon successful execution of JOB-A, and it updates the timestamp again. Finally, “Final-Job” is triggered upon successful execution of JOB-B. By setting start and stop on the time resource, along with the passed conditions, I was able to make the jobs run sequentially and execute only once for each updated timestamp.

resources:
- name: timestamp
  type: time
  source:
    location: America/Los_Angeles
    start: 9:05 AM
    stop: 9:05 AM 
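    # start and stop set to the same time: the idea is that the pipeline never auto-triggers on the clock, and new timestamp versions only come from the explicit put steps below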
jobs:
- name: JOB-A
  serial: true
  plan:
  - get: timestamp
    trigger: false 
  - put: host-A
    params:
      interpreter: /bin/sh
      script: |
        uname -a
        exit 0
    attempts: 2
  - put: timestamp
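  # creates the new timestamp version that JOB-B triggers on via its passed: [ JOB-A ] constraint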

- name: JOB-B
  serial: true
  plan:
  - get: timestamp
    trigger: true
    passed: [ JOB-A ]  
  - put: host-B
    params:
      interpreter: /bin/sh
      script: |
        uname -a
        exit 0
    attempts: 2
  - put: timestamp

- name: Final-Job
  serial: true
  plan:
  - get: timestamp
    trigger: true  
    passed: [ JOB-B ]  
  - put: host-C 
    params:
      interpreter: /bin/sh
      script: |
        uname -a
        exit 0
    attempts: 2
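
For completeness, this is roughly how I drive it; the fly target and pipeline name below are just placeholders:

fly -t ci set-pipeline -p sequential-jobs -c pipeline.yml
fly -t ci unpause-pipeline -p sequential-jobs
fly -t ci trigger-job -j sequential-jobs/JOB-A

Triggering JOB-A by hand puts a new timestamp version, and JOB-B and Final-Job then run one after the other through their passed constraints.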