How to use `try` with `get` when that leads to `inputs` being unsatisfied?


#1

Greetings–

I’m using the Concourse pipeline resource to deploy autogenerated pipeline yaml files.

Among the things that it does is watch our source repos for changes that should lead to updated pipelines. For instance, if a new release branch is created, then it will automatically deploy a pipeline for that new branch.

This is all working really well.

But then there’s a snag. What happens when a branch is deleted? Since there is a get for each whitelisted branch, the check step fails and the get step also fails. To work around this, I did a “try get”.

The try+get worked insofar as it let the pipeline run. However, it came to a screeching halt because the inputs were not satisfied.
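For reference, the shape of what I'm doing is roughly this (the resource and task names here are made up for illustration):

```yaml
jobs:
- name: deploy-pipelines
  plan:
  - try:
      get: release-1.2            # check/get fails once the branch is deleted
      trigger: true
  - task: generate-pipelines
    file: ci/tasks/generate.yml   # lists release-1.2 among its inputs, so the
                                  # task errors even though the get was in a try
```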

So, my question is how can I get past this?

Thanks.

–Harley


#2

Are your branches considered versions? If so, versions are immutable in Concourse; you cannot currently delete resource versions.



#3

Thanks for the reply.

My branches are not versions in the concourse resource sense. My branches are pipelines.

So, when a new branch that begets a pipeline is created, either as the master branch of a new repository or as a whitelisted branch in an existing one, my pipeline job comes along and creates the necessary pipeline.yaml.

Part of this pipeline.yaml is the resources: block that refers to the new git branch. Another part of the generated pipeline.yaml is the inputs section that uses the generated resources block.

Then, after the script runs that uses all the inputs, the concourse pipeline resource updates the pipelines associated with every branch. All this works well when I create a new repo/branch in git.

However, if I remove a repo/branch, the pipeline fails. It fails because the auto-generated resources: block for the git resource can no longer be satisfied – after all, the repo or branch was deleted. I attempted to work around this by putting the get: step in a try: step. This helped a bit in that the build was no longer blocked waiting on the resource, but my task still could not run because the inputs for the deleted resource could not be satisfied.

It seems incorrect that an input whose get is wrapped in try must still succeed in order for the task to run. I’m wondering if anyone has any thoughts on that.

I’m also wondering if anyone has ideas to work around this issue.

UPDATE: The error I’m seeing in the task is `missing inputs: <resource-name>`

Thanks!

–Harley


#4

We do the same (one pipeline per feature branch).

We have a “reaper pipeline” that runs periodically, queries the ATC and does a “destroy-pipeline” for all pipelines without a backing branch.

To avoid doing a git clone just to check on a given pipeline, we use the API offered by Bitbucket or GitHub and ask whether a given branch still exists.
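The core of the reaper is just a comparison between the pipelines the ATC reports and the branches the Git host's API reports. A minimal sketch of that decision in Python (the names and the PROJECT-BRANCH prefix convention are illustrative; the actual `fly pipelines` listing and `fly destroy-pipeline` calls wrap around this):

```python
def pipelines_to_reap(pipelines, live_branches, prefix):
    """Return pipeline names whose backing branch no longer exists.

    pipelines: pipeline names currently set on the ATC, e.g. ["pizza-olives"]
    live_branches: branches the Git host API says exist, e.g. {"master", "olives"}
    prefix: project prefix used in pipeline names, e.g. "pizza"
    """
    doomed = []
    for name in pipelines:
        if not name.startswith(prefix + "-"):
            continue  # not a pipeline managed by this reaper
        branch = name[len(prefix) + 1:]
        if branch not in live_branches:
            doomed.append(name)  # candidate for fly destroy-pipeline
    return doomed
```

For example, with pipelines `pizza-master` and `pizza-olives` set but only `master` left on the remote, this returns `["pizza-olives"]`.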


#5

Thank you!

So it sounds like you’re not using the Concourse pipeline resource as the only way of interacting with the concourse deployment from within the pipeline.

We’re also using an API to check whether a pipeline should exist. In our case we use it both to create the pipeline.yaml for the new pipeline and to watch a magic directory in the repo to know whether we should redeploy CI. It is the latter that is causing the problem: if we didn’t trigger on changes to CI in the target repo:branch, we would not have the on-delete problem. Perhaps the better solution is just to redeploy on a schedule in an idempotent fashion.
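If we do go the scheduled route, Concourse’s built-in time resource would be the trigger; a rough sketch (the interval and names are placeholders):

```yaml
resources:
- name: every-10m
  type: time
  source: {interval: 10m}

jobs:
- name: redeploy-pipelines
  plan:
  - get: every-10m
    trigger: true
  - task: regenerate-and-set
    # regenerates all pipeline.yaml files idempotently, then sets them
    file: ci/tasks/regenerate.yml
```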

Unfortunately this means that we have to either refactor our script into a resource or give secrets to a task, neither of which fit in very well with the Concourse model. It’s certainly a way forward though.

–Harley


#6

We do the following:

  • we strive to use a single pipeline template both for the master branch pipeline and the feature branches pipelines
  • we use the pipeline resource ONLY for the master branch pipeline
  • for feature branch pipelines, we invoke fly set-pipeline via a simple script that checks the current git branch and some other configuration. So for example if the project is called pizza and the feat branch is called olives, the pipeline will be named pizza-olives.
  • similarly, the master branch pipeline is always called PROJECT-master.
  • if it is not possible to use the same template for both master and feat branch pipelines, we use spruce to merge two YAML files. We try very hard to stay declarative and not to introduce a full-fledged templating language.
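The helper script boils down to deriving the pipeline name from the project and branch, then handing it to `fly set-pipeline`. A Python sketch of that (the `ci` target name and config path are placeholders):

```python
def set_pipeline_cmd(project, branch, config="pipeline.yml", target="ci"):
    """Build the fly set-pipeline invocation for a feature-branch pipeline.

    Naming convention: PROJECT-BRANCH, e.g. pizza + olives -> pizza-olives.
    """
    name = f"{project}-{branch}"
    return ["fly", "-t", target, "set-pipeline",
            "-p", name, "-c", config, "-n"]  # -n: apply non-interactively
```

So `set_pipeline_cmd("pizza", "olives")` yields a command that sets the `pizza-olives` pipeline.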

So workflow for dev working on branch foo:

  • create branch foo and push
  • call helper script to set the feat branch pipeline
  • iterate
  • merge
  • done, the reaper will destroy the pipeline

Warning: if you go this route, then depending on how many feat branch pipelines you have, you will start to consume MANY resources. You need the reaper pipeline in place before making this approach live, or you risk instability.


#7

We do the following:

  • Avoid use of branches in the primary repository as much as possible; instead use PRs, CI to check the PRs, and branch protection
  • The primary repo describes how CI should be done. We have a few reusable patterns and the ability for the repo to define its own pattern
  • We also use spruce to construct our pipeline files. Even when a pipeline is a one-of-a-kind, we still reuse common definitions via spruce.

Our workflow for dev on a branch is:

  • Dev forks, makes changes on his own feature fork+branch
  • When ready to run CI on it, opens a PR but does not request review
  • Iterates
  • Request review, iterates some more
  • When approved (CI driven technical checks and political considerations), repo maintainer merges
  • CI runs again post-merge to deliver a new last-known-good artifact

So right now, we aren’t huge users of branches. Mostly we want them for releases / release candidates and some other miscellaneous work. But we are very interested in making sure our PRs are checked. I definitely agree with you wrt resources. Our integration tests especially are expensive – standing up the full microservice application with a test driver via helm. But we’ve found that with a pool of k8s namespaces and a relatively quick integration test it’s still manageable.

Thanks for the opportunity to compare notes