How do folks deal with running tasks that require obscene amounts of RAM? I’ve been working with a customer who has a legitimate need to run tests that spin up headless browsers and mobile emulators, and so require many GiBs of RAM.
As I understand it, there’s no way to tell Concourse the compute requirements of a task. As a result, these massive tasks sometimes all get lumped onto the same worker, whilst (for example) 250 lightweight check containers sit on another worker with oodles of spare memory.
Is my understanding incorrect? It normally is.
- How do folks with similar requirements manage this? Workers with different characteristics, and tagging jobs that need all the RAMs?
- Are there any plans to implement compute-requirement-based scheduling? That sounds like quite a hassle.
- Could this be added if Kubernetes-as-a-Container-Runtime is implemented? If containers were scheduled as pods on Kubes, then presumably the clever scheduling in Kubernetes (which already understands resource requests and limits) could be leveraged instead.
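For concreteness, the tagging workaround from the first question can be sketched roughly like this (the tag, job, and task names are made up for illustration):

```yaml
# Start the big worker with a tag, e.g. CONCOURSE_TAG=large-mem
# (or `concourse worker --tag large-mem`).

# Then in the pipeline, pin the heavy step to tagged workers:
jobs:
- name: browser-tests          # hypothetical job
  plan:
  - get: source-code
  - task: run-emulator-tests   # hypothetical task
    tags: [large-mem]          # only placed on workers registered with this tag
    file: source-code/ci/tests.yml
```

Untagged steps (like all those check containers) stay eligible for the untagged workers, so this at least keeps the RAM-hungry tasks off the small boxes, even if it doesn’t balance them among the big ones.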