I’m just starting with Concourse, so I apologize in advance if this has been discussed before, but I want to make sure I understand things correctly.
I have a task that uses the docker-image resource to pull a public Docker image from the Microsoft Container Registry (mcr.microsoft.com/powershell:latest) and run a PowerShell script.
Before the script can run, it needs to download some third-party modules. The problem is that these modules are never reused: they are re-downloaded every time the task runs. I assume this is expected, since a new, temporary container is created on the worker for each task execution (from the locally cached image).
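For concreteness, here’s a stripped-down version of what my task config looks like (the module name and script path are just placeholders):

```yaml
# task.yml (simplified)
platform: linux

image_resource:
  type: docker-image
  source:
    repository: mcr.microsoft.com/powershell
    tag: latest

run:
  path: pwsh
  args:
    - -Command
    - |
      # runs in a fresh container each time, so the module is
      # re-downloaded from the PowerShell Gallery on every execution
      Install-Module -Name Pester -Force -Scope CurrentUser
      ./scripts/run-tests.ps1
```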
Naturally, downloading the dependencies on every run is not ideal. From my research, it seems the Concourse solution is to bake the dependencies into a custom Docker image, store it somewhere locally (MinIO, Artifactory, nginx?), and use that custom image instead of the public one. That seems like a lot of work for something that could be solved by a shared datastore mounted on the worker over iSCSI or NFS.
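If I understand that approach correctly, it amounts to something like this (a sketch only; the module name is a placeholder, and the image would then be pushed to an internal registry and referenced from `image_resource` in place of the public one):

```dockerfile
# Custom image with the third-party modules pre-installed
FROM mcr.microsoft.com/powershell:latest
RUN pwsh -Command "Install-Module -Name Pester -Force -Scope AllUsers"
```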
So my question is: am I missing something obvious? For a one-off task run I can use fly to pass in a local directory with the pre-downloaded modules, but that isn’t possible once the task moves into a pipeline.
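For example, something like this works from my machine (the target name and paths are made up, and the task config would need to declare a matching `modules` input):

```sh
# one-off run: pass a local directory of pre-downloaded modules as a task input
fly -t my-target execute -c task.yml -i modules=./psmodules
```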
What are you using to store and reuse build dependencies? Thanks in advance for any clues!