Storing build dependencies

Hi all,

I’m just starting with Concourse, so I apologize in advance if this was discussed before, but I just want to make sure I understand it correctly.

I have a task that uses the docker-image resource, which pulls a public Docker image from Docker Hub and runs a PowerShell script.

Before I can run the script, I need to download some third-party modules. The problem is that the modules are not reused and are re-downloaded every time I run the task, which I assume is normal, since a new temporary container is created on the worker during task execution (from the cached original Docker image).

Naturally, downloading dependencies every time the task runs is not ideal. From my research, it seems that the Concourse solution for this is to package the dependencies into a custom Docker image, store it somewhere locally (MinIO, Artifactory, nginx?), and then use that custom image instead of the public one. This seems like a lot of work for something that could be solved by a shared datastore (mounted on the worker via iSCSI or NFS).

So my question is: am I missing something obvious? For a one-off task run I can just use fly to pass in the local directory with the stored modules, but that's not possible once you transition to a pipeline.

What are you using to store and reuse dependencies for the build? Thanks in advance for any clues!

Try the caches feature of tasks
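For reference, a minimal task config sketch using the caches feature - the image and paths here are illustrative placeholders, not from the thread:

```yaml
platform: linux
image_resource:
  type: registry-image
  source: {repository: alpine}   # placeholder image
inputs:
- name: repo                     # hypothetical input with the build source
caches:
- path: repo/modules             # this directory is persisted between runs of this task
run:
  path: sh
  args: [-c, "cd repo && ./fetch-modules.sh && ./build.sh"]  # placeholder scripts
```

The directory listed under `caches` is kept on the worker between runs of this task, so the download step only does real work when the cache is cold.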

It looks like the cache directory is per task, so unfortunately it would not be possible to share it across multiple tasks.

I guess what I was looking for is a “shared datastore” way of distributing the state, but I suppose just building a custom docker image is not that hard and makes sense given Concourse architecture, so if that’s the best practice I will just use that.

Thank you!

Ah, sounds like you want more of a caching proxy, e.g. you would set up something that caches the packages you download from your package provider (such as npm)?

Yes, I was just trying to figure out what options are available in Concourse for caching the extra packages or modules from the package providers. For example, I've found a project from 2017 that deals with this problem - but I have not had time to explore it further.

As far as I understand, the most common way is to just package the dependencies into a custom Docker image and use that in your pipeline. That’s what I did and it works perfectly fine for my needs.
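As a sketch of that approach - the base image and module name below are placeholders, not something from the thread - you bake the modules into the image once, push it to your internal registry, and point the task's image at it:

```dockerfile
# Start from the public image the task already uses (placeholder shown here)
FROM mcr.microsoft.com/powershell:latest

# Bake the third-party modules into the image so the task never re-downloads them.
# "SomeThirdPartyModule" is a hypothetical module name.
RUN pwsh -Command "Set-PSRepository -Name PSGallery -InstallationPolicy Trusted; \
    Install-Module -Name SomeThirdPartyModule -Scope AllUsers"
```

The trade-off is that dependency updates now require rebuilding and re-pushing the image, but every task run starts with the modules already in place.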

Also, I believe I misread the documentation regarding task caches; it is possible to pass the cache directory from one task to another with the output/input mechanism, so that could also be an option, as you mentioned. I think this page gives a better description of that option -

I suppose a caching proxy might also work.

Yes, you totally can pass them between tasks in the same job.
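A sketch of that within one job, with hypothetical task and directory names - the first task declares an output, and the second task picks it up as an input:

```yaml
jobs:
- name: build
  plan:
  - task: fetch-deps
    config:
      platform: linux
      image_resource:
        type: registry-image
        source: {repository: alpine}   # placeholder image
      outputs:
      - name: deps                     # hypothetical directory holding the modules
      run:
        path: sh
        args: [-c, "echo 'download modules into deps/ here'"]
  - task: run-build
    config:
      platform: linux
      image_resource:
        type: registry-image
        source: {repository: alpine}   # placeholder image
      inputs:
      - name: deps                     # same directory, handed over from fetch-deps
      run:
        path: sh
        args: [-c, "ls deps"]
```

Note this hand-off only works between steps in the same job; sharing across jobs would still need a resource (or the custom-image approach above).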

I misunderstood and assumed you wanted them to be available/shared among multiple jobs.