Running out of volume space on workers

I am running Concourse deployed to Kubernetes, with each worker having a working dir of 30GB, and that filesystem is now 100% used. If I exec into the running container, I see a filesystem mounted at concourse-work-dir containing the directories 3.9.0, depot, and volumes, plus the file volumes.img. In our case the entire mounted concourse-work-dir is 30GB. The volumes.img file seems to "claim" 20GB (though only 6.9GB is actually in use), while the volumes directory contains a subfolder called live holding the "artifacts" of old containers: their own volumes, which can be the filesystems of the images we run tasks in, but also actual downloaded resources (for example, a full git repository).

So it looks as though Concourse keeps both the image backing the volumes directory and the volumes directory itself (into which the image is mounted?) under the same mount, which makes the data appear to take up twice the space. When we run it in K8s, all we specify is the 30GB. Where does Concourse decide to make the .img file 20GB?
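Part of the confusing accounting may simply be how Linux reports sparse files: a file's apparent size (what `ls -l` shows) can be far larger than the blocks actually allocated on disk (what `du` shows), which would explain a volumes.img that "claims" 20GB while using 6.9GB. This is a general Linux illustration, not something specific to Concourse's layout:

```shell
# Create a sparse file with a 1 GiB apparent size but (almost) no
# allocated blocks -- analogous to a volumes.img that "claims" more
# than it uses.
truncate -s 1G sparse.img

ls -l sparse.img     # apparent size: 1073741824 bytes
du -k sparse.img     # actual allocated blocks: close to 0 KiB

rm sparse.img
```

Similarly, if the loopback image is mounted at volumes/, a naive `du` over the whole work dir can count the same blocks twice (once through the .img file, once through the mountpoint), so the directory may look fuller than the filesystem really is.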
And regarding the "cached" containers: does anyone have a strategy for curating these (cleaning them out) once in a while, to avoid hitting a full disk at some point?
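For routine curation, the `fly` CLI has a few relevant commands. A sketch, assuming a logged-in target named `ci` (the target name, pipeline, job, and step names here are placeholders; flag spellings are from memory, so check `fly <command> --help`). Note that container and volume garbage collection is normally driven automatically by the web node, so manual pruning is mostly for stalled workers or runaway caches:

```shell
# List workers and their container/volume counts to spot the offender
fly -t ci workers

# Remove a stalled or retired worker so its volumes stop being tracked
fly -t ci prune-worker --worker my-worker-name

# Clear a task step's cache if cached task caches are eating disk
fly -t ci clear-task-cache -j my-pipeline/my-job -s my-task-step
```

These need a live Concourse deployment, so treat them as a usage fragment rather than a runnable script.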

We are using Concourse on BOSH, and our fix sadly was to delete and redeploy Concourse… or you scale the disk…