What kind of topics would you like to see on our blog?

Hey there Concourse community!

As some of you know, I post a lot of content on our Medium blog https://medium.com/concourse-ci

Lately I’ve been a bit lazy about posting weekly updates because I’m not quite sure that they’re helpful or interesting. It was also easier to post weekly updates when we were just trying to get Concourse 5 out the door.

So, with that said, do y’all have any specific requests around content you’d like to see? More feature previews? UI overviews? Feature research updates? Would someone here be interested in writing a blog post as a guest?

Anything around the topic of building and testing Microservices would be interesting.

Late last year I constructed a CI system for this purpose using Concourse with a Docker Stack (Swarm Mode) which ran DinD (Docker in Docker). That worked, however it was pretty slow. Recently I started to wonder about using a “Service Broker” to create the Docker Stack in a cloud (probably local K8s, perhaps using Cloud Foundry?) and then have a normal Concourse Docker build just connect to that cloud stack by setting some environment variables (for Redis, ZooKeeper endpoints etc.) … and avoid all the DinD mess.
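To make the “connect via environment variables” idea concrete, here is a minimal sketch of a Concourse task config, assuming the stack (Redis, ZooKeeper, etc.) has already been provisioned somewhere else. The image name, variable names, and `((…))` var references are all placeholders:

```yaml
# Sketch of a Concourse task that talks to an externally provisioned
# stack instead of running DinD. All names here are hypothetical.
platform: linux
image_resource:
  type: registry-image
  source: {repository: my-test-image, tag: latest}  # placeholder image
params:
  # Endpoints of the stack created by the "service broker" / Helm step;
  # ((...)) are Concourse vars, e.g. filled from a credential manager.
  REDIS_ENDPOINT: ((redis-endpoint))
  ZOOKEEPER_ENDPOINT: ((zookeeper-endpoint))
run:
  path: sh
  args:
    - -ec
    - |
      # The test suite reads the endpoints from the environment,
      # so nothing needs to be started inside the build container.
      ./run-integration-tests.sh
```

The nice property is that the task container stays unprivileged and small; all the heavy lifting lives in the cluster.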

Perhaps the people developing Concourse have some techniques to share for achieving that? Or other approaches for testing Microservices (more than one Container), and perhaps managing DevOps style deployments?


Interesting! I don’t think we have much experience with microservices on the team, but I could definitely ask around to see what other folks have done. I know some folks on the Spring team do some testing with Concourse, so there might be some useful info I can pull out of that team. Thanks for the feedback!

I’ve got an idea that it could be possible to use Helm to deploy the Microservice Bundle (i.e. Redis/ZooKeeper/Mongo) into a K8s cluster as a Helm Chart, using a Helm Container with CLI, and then somehow get the Docker container from the Concourse Build to connect to that deployed Chart in the K8s cluster. Would then need a cleanup to delete the Chart (i.e. deferred/delayed task).

The Helm commands are not difficult; it’s probably possible to run them as Bash commands in a fairly simple Docker Container.
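A rough sketch of how that could hang together in a Concourse job, using a step-level `ensure:` hook for the “deferred/delayed” cleanup (it runs whether the tests pass or fail). The release name, chart path, and image are placeholders, the `helm` syntax shown is the Helm 3 style, and the task image is assumed to already have cluster credentials available:

```yaml
# Hypothetical job fragment: install the chart, run tests, always clean up.
plan:
  - get: source
  - do:
      - task: deploy-bundle
        config:
          platform: linux
          image_resource:
            type: registry-image
            source: {repository: alpine/helm}  # any image with the helm CLI
          inputs: [{name: source}]
          run:
            path: sh
            args:
              - -ec
              - |
                # BUILD_ID is build metadata provided by Concourse, so
                # concurrent builds get distinct release names.
                helm install "bundle-$BUILD_ID" source/charts/bundle --wait
      - task: integration-tests
        file: source/ci/tests.yml
    ensure:
      task: delete-bundle
      config:
        platform: linux
        image_resource:
          type: registry-image
          source: {repository: alpine/helm}
        run:
          path: sh
          args: [-ec, 'helm uninstall "bundle-$BUILD_ID"']
```

This is only a sketch of the shape of the thing, not a tested pipeline.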

There are also Helm Charts for Concourse … so it could be a very neat solution, if everything is running in the same K8s Cluster.

I would like to see some “best practices for using Concourse” from the Concourse team’s perspective. For example, some tests need to launch Docker containers, but people are usually concerned about docker-in-docker. How should such tests be run on Concourse?
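For what it’s worth, the common pattern (not an official recommendation) is a privileged task that starts its own Docker daemon. A rough sketch, assuming an image that bundles `dockerd` and the `docker` CLI, such as the official `docker:dind` image:

```yaml
# Rough sketch of docker-in-docker inside a Concourse task.
- task: container-tests
  privileged: true                 # DinD requires a privileged container
  config:
    platform: linux
    image_resource:
      type: registry-image
      source: {repository: docker, tag: dind}  # or any dind-capable image
    inputs: [{name: source}]
    run:
      path: sh
      args:
        - -ec
        - |
          # Start the inner Docker daemon in the background and wait
          # until it responds before launching any containers.
          dockerd-entrypoint.sh >/tmp/dockerd.log 2>&1 &
          until docker info >/dev/null 2>&1; do sleep 1; done
          # Now the tests can launch whatever containers they need.
          docker run --rm alpine echo "inner docker works"
```

The exact entrypoint and daemon flags vary by image, so treat the script body as a starting point.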


Hi James,

I just wanted to say that I really like the weekly updates, it’s great to hear about what the team is up to. But I totally understand they’re a lot of effort - how about posting something monthly instead?

Some ideas for interesting topics:

  • tips for optimising pipeline performance (e.g. caching options, container placement etc.)
  • elastic scaling (our setup is virtually idle for 8 hours a day but scaling up and down is hard)
  • deep dive into the architecture
  • comparisons with other tools

Thanks!

I miss the weekly updates, personally. :slight_smile:


That’s a good idea…maybe monthly is a more sustainable cadence. Tbh maybe I should just dive back into it given that folks seem to actually enjoy the content.

I’m really struggling with this: how to set up a microservices CI process. Should I build data into the database services and keep them running (e.g. some tests depend on a set of terminologies loaded into the database)? And how do I handle multiple services that need to be tested against each other but are based on separate repos?

I ended up having short pipelines for each Microservice, so that they could all build independently. The rationale was that the overall build system is less brittle. The pipelines would trigger from Resources and push updates to a Docker repository, so as soon as one Microservice was built (and tested), any other dependent Pipelines could be triggered.
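A minimal fragment of what that wiring could look like (repository names, URIs, and file paths are made up):

```yaml
# Sketch: this service's pipeline re-runs whenever an upstream
# microservice publishes a new image. All names are placeholders.
resources:
  - name: source
    type: git
    source: {uri: https://example.com/my-service.git}
  - name: upstream-image
    type: registry-image
    source: {repository: myorg/upstream-service}
jobs:
  - name: build-and-test
    plan:
      - get: upstream-image
        trigger: true    # fire as soon as the upstream image is updated
      - get: source
        trigger: true
      - task: build-and-integration-test
        file: source/ci/build.yml
```

Each microservice gets its own copy of a pipeline like this, so a failure in one stays localized to that pipeline.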

The Microservice build itself used DinD for testing and took whatever was latest from the Docker repository. So, if you have a suite of integration tests for that gaggle of Microservices, it would be OK.

If I had a large amount of data that was necessary for the testing then I would keep that running in a database somewhere. And I would have a Concourse Pipeline to build and deploy that too!

But anyway … using Docker in Docker, having short Pipelines, using the Repositories and triggers to push changes through the Build System … keeping failures localized so that the rest of the Build System still worked.

@trulede Thanks for sharing your approach - that makes a lot of sense.