Concourse volume is too large and doesn't reset to 0 after deletion


I’m not sure if this is a bug, but my Concourse container is behaving strangely. Its volume is taking up a lot of disk space, and after I delete the volumes the space is not reclaimed; it doesn’t start fresh.

I’m using the docker-compose.yml file shown in the official docs for integrating with Vault.

After running 100 or so builds I was consuming about 20 GB of disk space, so I deleted everything on the server related to Concourse:

docker stop e_commerce_ci_concourse_1
docker stop e_commerce_ci_vault_1
docker stop e_commerce_ci_concourse-db_1

docker rm e_commerce_ci_concourse_1
docker rm e_commerce_ci_vault_1
docker rm e_commerce_ci_concourse-db_1

docker volume prune
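For reference, I believe the same cleanup can be done in one step with docker-compose (the project name is mine; as I understand it, the -v flag is what removes the volumes the compose file declares):

```shell
# Stop and remove the containers and networks created by this
# compose project; -v also removes named volumes declared in
# docker-compose.yml and anonymous volumes attached to the containers.
docker-compose down -v

# Note: docker volume prune only removes volumes that are no longer
# referenced by ANY container (including stopped ones), so the
# containers have to be removed first for it to reclaim anything.
docker volume prune -f
```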

Then I reboot the server. I now have 19 GB of free space.

When I run docker-compose up -d to bring Concourse back, and finish the setup, this is my output:

docker system df -v

VOLUME NAME                                                        LINKS               SIZE
d30b7480b520fe21ffbf255ec923ac720f128d32335d0933441f84e6cb61ba9a   1                   0B
2c89e6293378e91eeecc4917ef4945469482659cb2c3037d935554527f93ac49   1                   0B
1895bb84baca711c8e20067e995d1aba1eab14b8105272109d0d8aef294bcbaf   1                   24.01kB
27cb56b087fc3e53af16a75dfd0996ec687f575be9192aea62e6b848fb17cf10   1                   15.22GB

I literally just deleted everything and restarted the services, yet one of the newly created volumes already shows 15 GB of usage. That’s the same amount it had before I deleted it!

The volume consuming all that space is attached to the Concourse container. I can confirm this by trying docker volume rm <volume-id>, which fails with an error saying the volume is in use by a container whose ID matches the Concourse container’s ID.
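A more direct way to see which container holds the volume (this uses the docker ps volume filter, matching by volume name; the long name here is the one from my df output above) is:

```shell
# List all containers, including stopped ones, that mount the
# given volume. An empty result means nothing references it and
# docker volume rm / prune should be able to remove it.
docker ps -a --filter volume=27cb56b087fc3e53af16a75dfd0996ec687f575be9192aea62e6b848fb17cf10
```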

Doing df -h gives me:

Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       25G  6.5G   18G  27% /

So there’s a mismatch between the usage Docker reports and the free space the system sees. However, after running two builds of one pipeline, df -h now gives me:

Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       25G   14G   11G  56% /

And it’s worse than it looks, because the system’s actual usage quickly catches up to the space the volume claims to be consuming.
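One thing I wonder is whether the 15 GB figure is the apparent size of a sparse file rather than real disk usage, which would explain the initial mismatch with df -h. A quick standalone demonstration of how apparent size can exceed on-disk usage (the /tmp path is just for the demo):

```shell
# Create a sparse file: it claims 15G but allocates no data blocks.
truncate -s 15G /tmp/sparse-demo.img

du -h --apparent-size /tmp/sparse-demo.img   # reports 15G
du -h /tmp/sparse-demo.img                   # reports ~0 (real usage)

rm /tmp/sparse-demo.img
```

If the Concourse volume contains a file like this, writes from builds would gradually turn the apparent size into real usage, matching what I’m seeing.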

My worry is that as my pipeline keeps running, it will consume more and more space that I can’t get rid of, even after wiping everything, unless I get a new server. I don’t want to have to do that; I should be able to wipe the disk if necessary, especially since Concourse is pretty much stateless.

Why is the Concourse container “remembering” its volume size from before the delete? How do I get rid of this data?