Having multiple jobs output to a single alert resource

I am currently trying to get a single notification for all the jobs that ran in a pipeline.
It seems like I cannot merge the output of different jobs into one output.
Could I get some direction on how to do this?

Thank you.

Welcome to the forum!

To my knowledge, the only way to get all jobs to feed into one put to an alert resource is to make a new job that runs after the jobs you want the notification from. To do this you would need at least one resource that you get in each of those jobs, which is then passed to the alerting job.

Something like:

jobs:
- name: do-thing
  plan:
  - get: resource
  - task: do-thing
- name: do-thing2
  plan:
  - get: resource
  - task: do-thing2
- name: alert
  plan:
  - get: resource
    trigger: true   # run automatically once a version of resource has passed both jobs
    passed:
    - do-thing
    - do-thing2
  - put: alert

If each task produces unique output that you want to emit, you would need to put that output to a resource and then get it in the alerting job.
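
For example, something like this (a rough sketch; thing-output is a placeholder resource, e.g. an s3 bucket, and the out/ output directory and report.txt file are hypothetical names, not from your pipeline):

jobs:
- name: do-thing
  plan:
  - get: resource
  - task: do-thing       # assume the task declares an output directory named "out"
  - put: thing-output    # store the task's output so other jobs can fetch it
    params:
      file: out/report.txt
- name: alert
  plan:
  - get: resource
    trigger: true
    passed: [do-thing]
  - get: thing-output    # fetch the stored output to include in the notification
  - put: alert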

Thank you for your reply.

I was thinking of the same approach but I was not sure whether the status of the completed jobs (success or failure) can be passed around. What kind of resource should I use for this? (I was thinking of the keyval resource but was not sure how to configure the resource_type and resource in the yml file.)

If you want to know if all the jobs have passed then what I wrote above should work out of the box regardless of what resource actually is.

- get: resource
  passed:
  - do-thing
  - do-thing2

reads as "Get the latest available version of resource that has passed both do-thing and do-thing2".
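
If you do end up wanting a keyval-style resource for passing data between jobs, the configuration would look roughly like this (this assumes the community swce/keyval-resource image; check its README for the exact source options):

resource_types:
- name: keyval
  type: docker-image
  source:
    repository: swce/keyval-resource

resources:
- name: build-info
  type: keyval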

Alerting for a failure with a single put would be harder to implement because the failed jobs would block the pipeline from progressing.

As an alternative, the usual pattern is to leverage on_failure and on_success hooks on the jobs to send individual alerts. You can then use YAML anchors to avoid repeating the alert configuration across jobs.
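
A sketch of that pattern, defining the alert step once with an anchor (the params here are placeholders and depend on which alert resource you use):

jobs:
- name: do-thing
  plan:
  - get: resource
    trigger: true
  - task: do-thing
  on_failure: &alert_on_failure   # define the alert step once...
    put: alert
    params:
      text: "job failed"          # placeholder params
- name: do-thing2
  plan:
  - get: resource
    trigger: true
  - task: do-thing2
  on_failure: *alert_on_failure   # ...and reuse it on every job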

What I am trying to do is actually to alert on failures of the jobs through one output resource.
I was thinking of having a put in the on_failure hook of each job that passes the job's exit status to the alert-merging job. Then, once all of the jobs have passed on their exit status, it outputs to the alert resource.

But if I am understanding you correctly: if a job fails, even if we put some output to a resource in the on_failure hook, the job that depends on the failed job (like the alert-merging job) will not be executed because of the previous job's failure?

This is correct. on_failure only triggers on the failure of the immediately preceding step in a job plan, and a passed constraint will never match a version that came from a failed build, so the merging job would never see it. What you'll probably have to do is pass the exit status (pass or fail) to a resource at the end of each job and then concatenate those statuses in your alerting job.
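
A sketch of that (status-log is a hypothetical resource, e.g. a git path or an s3 file, and its put params would depend on which resource you pick):

jobs:
- name: do-thing
  plan:
  - get: resource
    trigger: true
  - task: do-thing
  on_success:
    put: status-log             # record "do-thing: pass"
  on_failure:
    put: status-log             # record "do-thing: fail"
# ...same hooks on do-thing2...
- name: alert
  plan:
  - get: status-log
    trigger: true               # run whenever a status is recorded
  - task: concatenate-statuses  # merge all recorded statuses into one message
  - put: alert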

For this type of scenario I've been using signals/semaphores (an s3 file or a git path). The idea is that you have multiple jobs doing work and writing to some buffer resource. When it's time to notify, there is a put to the signal resource. The notify job triggers on the signal resource, gets the buffer, and puts the notification.
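
A rough sketch of that pattern (buffer, signal, and notification are placeholder resource names; buffer and signal could each be an s3 file or a git path as described above):

jobs:
- name: do-thing
  plan:
  - get: resource
    trigger: true
  - task: do-thing
  - put: buffer                     # each job appends its result to the shared buffer
- name: do-thing2
  plan:
  - get: resource
    trigger: true
  - task: do-thing2
  - put: buffer
- name: raise-signal
  plan:
  - get: resource
    trigger: true
    passed: [do-thing, do-thing2]   # "time to notify" once both jobs are done
  - put: signal                     # flip the semaphore
- name: notify
  plan:
  - get: signal
    trigger: true                   # wake up when the semaphore changes
  - get: buffer                     # collect everything the jobs wrote
  - put: notification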