How does a worker restrict a container's resource consumption?


Before putting our Concourse cluster into production, I'm thinking about, actually worrying about, one issue. Since users may run their own Docker images, if someone runs a monster Maven build that consumes 10GB of memory, it will eat up a worker node's memory; and if several users do the same thing, the entire Concourse cluster may stop working.

Does Concourse have any mechanism to limit a single container's resource consumption, so that normal tasks won't be affected by those monster tasks?


From my understanding, Concourse uses runc, and runc does have a `runc update` command that lets you specify resource constraints on a running container.
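For reference, this is roughly what such an invocation looks like on a worker host (the container ID is a placeholder, and the exact flags may vary by runc version):

```shell
# Limit an existing container to 2 GiB of memory and 512 CPU shares.
# <container-id> stands in for the real runc container ID on the host.
runc update --memory 2147483648 --cpu-share 512 <container-id>
```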

However, I don’t believe Concourse currently exposes or controls any of these settings.

For your situation, one possible solution is to use multiple workers with various resource levels and tag them. For example, if you know a monster Maven build needs 10GB of memory, create a worker with sufficient memory, tag it (e.g. ‘big-worker’), and then specify that those jobs are only allowed to run on that worker.
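As a sketch of what that looks like: the worker is registered with a tag (via `concourse worker --tag`), and the pipeline pins the heavy task to it with the `tags` step modifier. The job, task, and image names below are made up for illustration:

```yaml
# On the big worker host, register the worker with a tag:
#   concourse worker --tag big-worker ...

jobs:
- name: monster-maven-build
  plan:
  - task: build
    tags: [big-worker]   # only schedules on workers tagged 'big-worker'
    config:
      platform: linux
      image_resource:
        type: registry-image
        source: {repository: maven}
      run:
        path: mvn
        args: [package]
```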


I see two options of `concourse web` that are for resource limitation:

--default-task-cpu-limit=     Default max number of CPU shares per task, 0 means unlimited
--default-task-memory-limit=  Default maximum memory per task, 0 means unlimited

By setting these two options, we should be able to prevent monster containers from eating up worker resources.
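Something like the following when starting the web node, I believe (the specific limit values are just examples, and I'd double-check the accepted memory format against your Concourse version's docs; the flags should also be settable via the usual `CONCOURSE_*` environment variables):

```shell
# Cap every task by default: 1024 CPU shares and 2GB of memory.
concourse web \
  --default-task-cpu-limit=1024 \
  --default-task-memory-limit=2GB \
  ...   # plus your existing web flags (postgres, keys, etc.)
```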