Concourse resource caching


We use S3 as our artifact repository. All our build pipelines generate deployable artifacts and push them to a non-versioned S3 bucket. We have two repositories, repo-1 and repo-2, and building them produces a set of artifacts with a common naming convention. Since repo-2 always depends on repo-1, a code change in repo-1 sometimes forces us to rebuild both code bases to generate the latest artifacts.

The problem: we generated the artifacts by building repo-1 and repo-2. Of the resulting artifacts, the first ones are generated by repo-1 and the fourth by repo-2.

Now, due to a change in repo-1, we are forced to rebuild both repo-1 and repo-2.

If you compare the two outputs, only the fourth artifact's name hasn't changed, because there were no commits/changes to repo-2.

Observed: when we deploy the fourth artifact via the pipelines a second time, Concourse does not download the latest artifact. We can tell because the changes that should appear in the fourth artifact (due to the changes in repo-1) are missing after deployment.

I tried running the check-resource command manually before deploying, but it made no difference.
How can I force Concourse to download the S3 artifact on every deploy when the artifact name is the same but its contents have changed?

Concourse resources expect versions to be immutable, so once it detects a version it assumes the object tied to that version will never change. This is true for every resource, not just S3.
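One way to work with that immutability, rather than against it, is to make the S3 resource's version come from the object name itself. A sketch of such a resource definition follows (bucket name, path pattern, and credential var names are all hypothetical); with `regexp`, the captured group becomes the resource version, so an artifact uploaded under a new name is detected as a new version:

```yaml
resources:
- name: repo-2-artifact
  type: s3
  source:
    bucket: my-artifacts                          # hypothetical bucket name
    # The captured group is the version. A rebuild that uploads a file with
    # a new version string in its name will be picked up as a new version.
    regexp: artifacts/repo-2-build-(.*).tar.gz    # hypothetical key pattern
    access_key_id: ((aws_access_key))
    secret_access_key: ((aws_secret_key))
```

The trade-off is that your upload step must produce a distinct name per build, which is exactly the naming-scheme change suggested below.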

There currently isn't a way to clear that cache via fly or the UI. I think there are GitHub issues about this but I can't seem to find them right now. Your only real options are either to change your build naming scheme so that different builds always have different versions, or to recreate the workers every time something like this occurs.
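The naming-scheme option can be as simple as embedding the source commits in the artifact name before uploading. A minimal sketch, assuming the commit IDs are available to the build script (the literal values and the artifact prefix here are hypothetical; in a real pipeline they would come from the git resources for repo-1 and repo-2):

```shell
# Hypothetical commit IDs; in a real pipeline these would come from
# the git resources for repo-1 and repo-2.
REPO1_SHA="a1b2c3d"
REPO2_SHA="d4e5f6a"

# Because repo-2's output depends on repo-1, include both SHAs in the name:
# a rebuild caused by a repo-1 change then yields a different object key.
ARTIFACT="app-${REPO1_SHA}-${REPO2_SHA}.tar.gz"
echo "${ARTIFACT}"

# Upload step (hypothetical bucket), e.g.:
# aws s3 cp build/output.tar.gz "s3://my-artifacts/artifacts/${ARTIFACT}"
```

Since the object key now changes whenever either repository changes, Concourse sees each rebuild as a new, immutable version and fetches it instead of reusing its cache.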