Hey folks (cross-post from https://github.com/concourse/concourse/issues/3543)
We recently had a customer approach our team with an issue regarding Concourse and the tasks we publish.
Due to the nature of our backup product, our Concourse tasks often produce multi-gigabyte files. Increasing the size of the workers to accommodate large files being transferred between tasks doesn’t seem like a solution that will scale for the customer. They also seem to be encountering a problem where, if a task fails, the files it created are not garbage-collected for some amount of time; these leftover files make the disk-space problem worse.
We believe we are using Concourse idiomatically at the moment, but we would like to help the customer with this issue. Does the Concourse team have any ideas for solutions or mitigations? The suggestion so far was to collapse everything into one big task, but that doesn’t seem like a clean solution to us.
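For context, a minimal sketch of the "one big task" workaround we were pointed at might look like the following (script names and paths here are hypothetical, just to illustrate the shape). The idea is that the large file is written and consumed inside a single task's container, so it is never registered as a task output and never streamed between containers or workers:

```yaml
# Hypothetical sketch: one combined task instead of a
# create-backup task whose large output feeds an upload task.
plan:
- get: source
- task: backup-and-upload
  config:
    platform: linux
    image_resource:
      type: registry-image
      source: {repository: alpine}  # placeholder image
    inputs:
    - name: source
    run:
      path: sh
      args:
      - -ec
      - |
        # Large file stays on this container's local disk only;
        # it is never declared as an output, so it is not
        # streamed to any downstream step.
        ./source/scripts/create-backup.sh /tmp/backup
        ./source/scripts/upload-backup.sh /tmp/backup
```

This works, but it loses the per-step visibility and reuse that made us split the pipeline into separate tasks in the first place, which is why it doesn’t feel clean to us.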
tl;dr: We have tasks that produce big files -> the files are transferred between tasks -> workers run out of disk space -> any ideas?