Pipeline for Yocto/Bitbake builds?


I would like to set up a Concourse pipeline that will let me run Yocto builds.

A clean build takes about 4-5 hours with my 2.5 GHz i7 blasting at full capacity the whole time. It keeps a work directory that uses about 60 GB of disk space. If you keep the work folder around, subsequent builds are considerably quicker.

How can I set up a pipeline that clears the work directory only on weekends?
Can I somehow auto-scale a high-powered worker just before a Yocto build starts? Our other jobs are much, much more modest and I don’t want to waste heaps of money keeping a high-powered cloud instance running that we only use a couple of hours a week.
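For the weekend-only cleanup, one sketch I’ve been considering (untested; job names, image, and the cache path are all placeholders) is a job triggered by a time resource restricted to weekend days:

```yaml
# Hypothetical pipeline fragment -- names and paths are placeholders.
resources:
- name: weekend-window
  type: time
  source:
    days: [Saturday]     # only fires on Saturdays
    start: 1:00 AM
    stop: 2:00 AM

jobs:
- name: clean-yocto-workdir
  plan:
  - get: weekend-window
    trigger: true
  - task: wipe-workdir
    config:
      platform: linux
      image_resource:
        type: registry-image
        source: {repository: busybox}
      run:
        path: sh
        # assumes the build dir lives on a volume mounted at /workdir
        args: ["-exc", "rm -rf /workdir/*"]
```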

All tips and tricks are very much appreciated.



This is a tough one! Maybe you can try a mix of approaches.


Caches look like they might work for OE build dirs. @erikhh – did you ever try this?
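For reference, task caches are declared per task and survive across builds on the same worker. Something like this (the image and paths are assumptions, not a tested Yocto setup):

```yaml
# Hypothetical task config; cache paths are relative to the task working directory.
platform: linux
image_resource:
  type: registry-image
  source: {repository: ubuntu}   # assumed build image
caches:
- path: build           # the Yocto work/tmp directory, reused between runs
- path: sstate-cache    # shared-state cache, the biggest time saver
run:
  path: repo/ci/build.sh   # hypothetical build script from the source repo
```

Note the cache is per worker, per task, so the first build on a fresh worker still starts cold.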

Good information – definitely helps point me in the right direction.

For this particular task, AWS EC2 instances work better than Digital Ocean, because you can preserve the disk state of an EC2 instance while powering it down. With EC2, you are only charged for the disk space, not the CPU time, while the instance is powered off.

With DO, it is all or nothing – you get charged whether the instance is running or not. If you destroy the instance, then you lose the disk as well.

We could cache the build on a DO block-storage volume. I guess with all tasks running in containers, the actual state of the instance would not matter a lot, but it would still be nice to have persistent EC2 workers so you would not have to install Docker, download the needed Docker images, etc. for every build.

Sure; I was not proposing to use Digital Ocean for this particular task. I was proposing to take inspiration from that approach and apply it to AWS; the same logic can be used (I think – I didn’t validate it fully).

but it would still be nice to have persistent EC2 workers so you would not have to install Docker, download the needed Docker images, etc. for every build.

If you are talking about Concourse workers, you don’t have to install Docker: the workers do not use Docker, they use Garden, which uses runc.

If on the other hand you are talking about installing Docker to run the Concourse worker, then (and this applies to everything, not only Concourse) what I suggest is to take the immutable-infrastructure approach and bake your custom AMIs with Packer. Then the EC2 instance is always ready to go, nothing to install.
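As a rough sketch of what that looks like (all IDs, names, and the provisioning script are placeholders, not a working template):

```hcl
# Hypothetical Packer template (HCL2) baking a Concourse worker AMI.
source "amazon-ebs" "concourse-worker" {
  ami_name      = "concourse-worker-{{timestamp}}"
  instance_type = "c5.4xlarge"    # placeholder; pick a size that suits Yocto builds
  region        = "eu-west-1"     # placeholder region
  source_ami    = "ami-xxxxxxxx"  # placeholder base image
  ssh_username  = "ubuntu"
}

build {
  sources = ["source.amazon-ebs.concourse-worker"]

  provisioner "shell" {
    # hypothetical script that installs and configures the Concourse worker
    scripts = ["install-concourse-worker.sh"]
  }
}
```

Once the AMI is baked, scaling up for a build is just launching an instance from it – nothing to install at boot time.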

This is a misunderstanding. The Docker images are downloaded only once per worker, they are then cached transparently.

@marco-m – thanks for the clarifications.

Is a task cache stored in the worker host file system, or a Baggageclaim volume?

A task cache is stored as a volume. Your question makes me think you want to work around the Concourse model. I suggest embracing the Concourse model instead; it is Concourse’s reason for existing :slight_smile:

Another option, which requires a certain experience with Concourse and an understanding of the trade-offs, is to bypass containers and run on https://github.com/vito/houdini. But all warranties are void in this case.

Just wondering if a Concourse volume can efficiently handle a 20-60 GB build directory – maybe it can … :slight_smile:

I haven’t had time to have an actual go at this yet.
But in the meantime I’ve read on a mailing list somewhere that there’s a possibility to let Yocto store sstate in S3. I think I’ll go explore that route (whenever I get around to it). Storing the relevant state externally is very much in line with the Concourse philosophy, I think. That should make it all a pretty straightforward matter of just setting up autoscaling for the workers.
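For anyone else exploring this: Yocto can pull shared state from any HTTP mirror via `SSTATE_MIRRORS`, so an S3 bucket exposed over HTTP should work. A minimal `local.conf` fragment (the bucket name is a placeholder; `PATH` is literal, BitBake substitutes it):

```conf
# Hypothetical local.conf fragment -- bucket name is a placeholder.
SSTATE_MIRRORS ?= "file://.* https://my-sstate-bucket.s3.amazonaws.com/PATH;downloadfilename=PATH"
```

Populating the bucket after a build (e.g. syncing the local `sstate-cache` directory up to S3) would be a separate step in the pipeline.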