As long as you trust your Concourse’s availability, it is well suited to running things when things change.
It has retries, conditional execution, composability, visibility, and so on. In terms of flow control you should be covered.
It’s very easy to have a job run after another only if it passed. If they’re in the same pipeline it’s trivial; if they’re in different pipelines you just bridge them with a resource that is an output for one and an input to the other, typically carrying some metadata about the job results.
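The same-pipeline case can be sketched as below. This is a hypothetical example: the job names, resource URI, and task file paths are illustrative, not from the original. The `passed` constraint is what makes `deploy` run only with versions of `repo` that already went through `build` successfully.

```yaml
resources:
  - name: repo
    type: git
    source:
      uri: https://example.com/org/repo.git  # placeholder repository

jobs:
  - name: build
    plan:
      - get: repo
        trigger: true          # run whenever a new commit arrives
      - task: compile
        file: repo/ci/compile.yml

  - name: deploy
    plan:
      - get: repo
        trigger: true
        passed: [build]        # only versions that passed the build job
      - task: ship
        file: repo/ci/ship.yml
```

For the cross-pipeline case, the idea is the same except the `passed` constraint is replaced by a shared resource (say, an S3 object or a git branch) that the upstream pipeline `put`s and the downstream pipeline `get`s with `trigger: true`.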
Jobs themselves can be arbitrarily complex in terms of orchestrating the getting of data, processing it, and putting it somewhere else: you compose parallel, serial, try, on_success, on_failure and all that as you wish (mostly). Your jobs can also run on specific workers, as steps get matched to workers by tags you can apply to both, so you can isolate the throughput-oriented workloads from the latency-oriented ones.
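A sketch of that composition, with invented names throughout (the resources, task files, and the `big-memory` tag are all assumptions for illustration):

```yaml
jobs:
  - name: etl
    plan:
      - in_parallel:              # fetch both inputs concurrently
          - get: source-data
            trigger: true
          - get: scripts
      - try:                      # optional step; its failure won't fail the job
          task: lint
          file: scripts/ci/lint.yml
      - task: transform
        file: scripts/ci/transform.yml
        tags: [big-memory]        # only runs on workers registered with this tag
        on_failure:               # step-level hook
          put: alerts
          params: {text: "transform failed"}
      - put: warehouse
        params: {from: transformed/}
    on_success:                   # job-level hook
      put: alerts
      params: {text: "etl succeeded"}
```

Tags work on both sides: you register a worker with a tag (e.g. `concourse worker --tag big-memory`) and any step carrying the same tag is scheduled only onto such workers.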
Granted, it’s not Mesos, Yarn or Airflow, but it sure can run your scheduled jobs and perhaps spare you the added complexity of bringing another tool to the table just for them.
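For the scheduled-job case specifically, the standard approach is a `time` resource as the trigger. A minimal sketch, assuming a nightly window (the resource name, window, and task are illustrative):

```yaml
resources:
  - name: nightly
    type: time
    source:
      start: 2:00 AM            # trigger once within this daily window
      stop: 3:00 AM
      location: America/New_York

jobs:
  - name: nightly-report
    plan:
      - get: nightly
        trigger: true           # fires when the time resource produces a new version
      - task: report
        config:
          platform: linux
          image_resource:
            type: registry-image
            source: {repository: alpine}
          run:
            path: sh
            args: ["-c", "echo running the nightly report"]
```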