Whagwan with containerd?

'ello folks,

What’s going on with the containerd support, and more specifically, why are y’all adding containerd support? I can see a lot of activity on GitHub making progress with it, but I’m a bit unclear as to the motivations.

I’d always naively assumed (I am a simpleton) that there would be more benefit in using Kubes as a backend directly, and that using another container runtime directly wouldn’t really help with things like auto-scaling, or convincing all the Cloud Native cool kids that Concourse isn’t a CI system for boomers.

Is the containerd support a vital step towards working more natively with Kubes? Or is it a stepping stone, driving out abstractions by supporting something simpler than Kubes whilst still not being Garden?


Heyo, good question, this probably could have used a bit more publicity about the strategy.

First off: we’re starting on Kubernetes work this week.

The motivations for doing containerd first are:

  • We’re not going “Kubernetes only”: Concourse is a tool for everyone, not just DevOps nerds who can afford to run and maintain a Kubernetes cluster.
  • This means we’ll still have to maintain first-class support for a simpler way of running workers, e.g. on your own VMs or hardware.
  • With the future of Garden somewhat nebulous as CF switches to K8s, we need a way to ensure this use case remains supported.

By switching from Garden to containerd first, we can accomplish a few things while setting the stage for Kubernetes work:

  • Independence from Garden.
  • Build up local expertise on Concourse’s containerization approach so that we can better understand the decisions we have to make while adding additional runtimes.
  • Leverage direct access to containerd so that we can optimize our containerization for Concourse’s use case, rather than CF’s (as Garden always has).
    • For example, by having privileged containers still use user namespaces, we can eliminate the longstanding performance overhead when using the overlay driver with privileged tasks/resources.
  • Switch from our bespoke image format (rootfs/ + metadata.json) to the OCI image format, which should help make the transition to Kubernetes easier.
    • Being OCI everywhere will let us make decisions like ‘upload the artifact referenced by image: in a task step to a registry and reference it from the task’s pod’.
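For anyone unfamiliar with the two formats mentioned above, the on-disk difference looks roughly like this (an illustrative sketch only; the bespoke layout is as named in the bullet, and the OCI side follows the standard OCI image-spec layout):

```
# Concourse's bespoke image format:
image/
├── rootfs/          # extracted container filesystem
└── metadata.json    # container metadata (env, user, etc.)

# OCI image layout (per the OCI image spec):
image/
├── oci-layout       # layout version marker
├── index.json       # top-level index pointing at manifests
└── blobs/
    └── sha256/      # content-addressed manifests, configs, and layer tarballs
```

The OCI layout is what registries and runtimes like containerd speak natively, which is why standardizing on it makes handing artifacts to a registry (and eventually a pod) much more straightforward.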

From here we plan to work on Kubernetes in parallel with a couple more epics to wrap up the containerd track.

Hope this helps!


Awesome, that’s really helpful. I figured it might be those sorts of things, and it’s really great to see the thinking behind it. May I strongly +1 the “Concourse isn’t just for DevOps nerds” thing.
