GitLab CI vs Concourse


#1

Hi,

I am looking for a comparison of GitLab CI/CD vs Concourse, but I haven’t found a good one. Maybe even more important is the future of Concourse vs GitLab CI/CD. I have experience only with Concourse.

If somebody has experience with both, I would like to read their opinion :slight_smile:


#2

Stay with the light side of the force :wink:

No, I haven’t done a comparison myself, but I think this would be a very nice idea for a blog post. I plan to write a post comparing CircleCI and Concourse.


#3

@gdenn Can you say a few words here about CircleCI vs Concourse?


#4

Not yet. I am still getting into CircleCI and am not yet comfortable making a comparison :slight_smile:

But we are at a point where we cannot just turn our back on other CI/CD solutions, as our clients demand a justification for why we should use Concourse.

Nevertheless, I am a Concourse fanboy: I love the community and I love the freedom we have with it.


#5

How about this topic? It would be great to read or watch something by somebody who has experience with both.


#6

Here is my experience with Drone.io and Jenkins compared to Concourse CI. I am not trying to compare all features, but here are some:

License

  • Drone.io -> Apache License v2 (Community Edition); the Enterprise Edition is licensed under the separate Drone Enterprise License.
  • Jenkins -> MIT
  • Concourse CI -> Apache License v2

Pipeline definition:

  • Drone.io (until 0.8) -> Declarative pipeline definitions using YAML. Supporting services (DBs, dependent containers, etc.) are supported. Pipelines and code live in the same repository, so pipeline changes require a commit; if something goes wrong, you have to commit the fix. If you change the pipeline definition in one branch, the change is not visible to other branches until you push it to them. Drone.io uses GitHub webhooks to kick off the build process, and every pipeline has to run from start to end.
  • Jenkins -> Immature declarative pipeline definitions; Jenkins’s declarative pipeline support is still very young and very limited.
  • Concourse CI -> Pipeline definitions are written in YAML. Pipelines have resources and jobs, and each job consists of a number of tasks. Pipeline changes are applied via the Fly CLI, so fixes do not require a new commit each time, which makes developing pipelines fast. In addition, jobs in a pipeline can be rerun if they fail.
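For readers who haven’t seen one, here is a minimal sketch of what a Concourse pipeline looks like. All the names in it (my-app, the repo URL, the test script) are made up for illustration:

```yaml
# Hypothetical minimal pipeline: one git resource, one job with one task.
resources:
- name: my-app            # made-up resource name
  type: git
  source:
    uri: https://github.com/example/my-app.git
    branch: master

jobs:
- name: unit-tests
  plan:
  - get: my-app
    trigger: true          # run the job when a new commit appears
  - task: run-tests
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: alpine}
      inputs:
      - name: my-app
      run:
        path: sh
        args: ["-c", "cd my-app && ./ci/test.sh"]
```

It is applied (not committed) with something like `fly -t my-target set-pipeline -p my-app -c pipeline.yml`, which is what makes the fast edit/apply loop possible.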

Concepts:

  • Drone.io (until 0.8) -> Pipeline with steps. Steps are often as big as an entire job in Concourse CI. Allows defining additional support services.
  • Jenkins -> Pipeline with steps.
  • Concourse CI -> Pipeline with jobs and resources. Jobs contain tasks.

Server model:

  • Drone.io -> Master and workers.
  • Jenkins -> Master and workers.
  • Concourse CI -> Master and workers (there are more components like Garden and ATC etc.).

Server authentication/authorization:

  • Drone.io -> GitHub
  • Jenkins -> I think multiple, but by default users have to be created in Jenkins.
  • Concourse CI -> Multiple at the same time.

Secrets Management:

  • Drone.io (v0.7 and v0.8) -> Secrets are obfuscated to the viewer, but stored in the database. Generally, hard to manage.
  • Jenkins -> I do not remember the details here.
  • Concourse CI -> Supports various secret backends, does not store the secrets in the DB.
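To illustrate the Concourse side: secrets are referenced in pipeline YAML with `((var))` syntax and resolved at runtime from whichever credential backend is configured. A sketch (resource and variable names are hypothetical):

```yaml
# The actual values live in the configured backend (e.g. Vault),
# not in Concourse's own database.
resources:
- name: my-image
  type: docker-image
  source:
    repository: example/my-image
    username: ((docker-username))   # resolved from the secret backend at runtime
    password: ((docker-password))   # never written into the pipeline config
```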

Plugins:

  • Drone.io -> Not well supported; plugins can only be written in Go.
  • Jenkins -> Hundreds of plugins, hard to figure out what you need.
  • Concourse CI -> Well supported, and many features are already built in (AWS ECR support, secrets backends, etc.).

Developer team size:

  • Drone.io -> Not sure, but usually only one person responds.
  • Jenkins -> large
  • Concourse CI -> Seems to be a couple of full-time developers from Pivotal, but the community contributes as well.

Community:

  • Drone.io -> Changed a lot in 2018. Went from a forum to Reddit to Slack.
  • Jenkins -> Large community.
  • Concourse CI -> Smaller community compared to Jenkins, but very determined people.

Kubernetes Helm Chart support:

  • Drone.io -> Yes.
  • Jenkins -> Yes.
  • Concourse CI -> Yes.

Documentation:

  • Drone.io (until 0.8) -> Often outdated and hard to find.
  • Jenkins -> Good documentation and many third party blogs and resources.
  • Concourse CI -> Most features are documented and some third-party tutorials exist. However, the documentation could be better; more third-party blog posts in particular would be nice to have.

Stability:

  • Drone.io -> Changed a lot in 2018, with many breaking changes.
  • Jenkins -> very stable.
  • Concourse CI -> stable.

Price:

  • Drone.io -> Free for small teams (5 users); otherwise starts at $500 a month, depending on your team size.
  • Jenkins -> Free
  • Concourse CI -> Free

One thing which is not obvious from just comparing the products is the fact that Concourse CI is very well designed. The internal design is based on the idea of keeping the core small: authentication and secrets management are not directly part of Concourse CI. However, Concourse CI makes it easy to integrate with other services for both secrets management and authentication/authorization.

Furthermore, Concourse CI has very well designed concepts and keeps many aspects abstract. It really strikes a good level of abstraction: the pipeline, job, task, and resource concepts are easily understood, but at the same time they are not over-abstracted. In other words, they make things easier while still being very flexible.

As for the documentation and community: Concourse CI has several companies using it and a smaller but very engaged community. The documentation is good, but third-party blog posts in particular are rare.
Drone.io does not do a good job with its documentation, and the community consists of one core developer and a couple of fanboys. In other words, not a very healthy community.
Jenkins is a large project and has been around for a long time. It has both a large community and a lot of documentation. However, the declarative pipeline support in Jenkins lacks both a large community and good documentation.


#7

I think you’re now a bit out of date on Jenkins. Jenkins pipeline support is pretty good, with a balance between declarative and scripted styles (they can be mixed). The big thing here is that it uses a pseudo-Groovy interpreter: writing in pure Groovy style often makes you fight with CPS. But it has the advantage of being close to the code. Sometimes that’s better, sometimes not, but in my experience being close to the code is more often better than keeping it external, as long as you have a good design and externalize the things that can actually change (i.e. infrastructure, URLs, credentials, etc.).

Jenkins’s main problem is the master. It serves the web UI, manages slaves, dispatches work, stores job data and metadata, and more. It scales very badly. Changes are in progress, such as pluggable storage, and there is also some “work” (design?) toward a microservice-like architecture. As always with Jenkins, things are continuously changing.

Another problem I have with Jenkins (and many other tools, including GitLab CI) is that it lacks real “pipeline” support. I mean that you can generally only cut your pipeline into sequential stages and parallelize inside them, while Concourse offers fan-in/fan-out: you can consume produced resources as soon as they are available. This can really speed up your pipeline, since you can start long checks very early and join at the latest possible moment.
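To make the fan-in/fan-out point concrete, here is a sketch (job and resource names are made up): two jobs consume the same commit in parallel, and a third job fans them back in with `passed:` constraints.

```yaml
jobs:
- name: unit
  plan:
  - get: repo
    trigger: true
  # ... task running unit tests ...
- name: lint
  plan:
  - get: repo
    trigger: true
  # ... task running linters ...
- name: deploy
  plan:
  - get: repo
    trigger: true
    passed: [unit, lint]   # fan-in: only versions that passed both jobs
```

The long checks start as soon as the commit lands; the downstream job runs the moment the same version has passed both upstream jobs.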

My final problem with Jenkins is plugins/extensions. In fact, there are several problems here, because Jenkins has many different ways of integrating new tools and capabilities.

So, let’s start with plugins. Plugins are installed once for everyone, which means an update can sometimes break something for people who didn’t want it (rare, but it happens), and different teams may need entirely different versions, not to mention incompatibilities between plugin versions. On top of that, only Jenkins admins can manage plugins (install/update/remove), generating a lot of work, regression testing, etc. Plugin management is really painful.

Now let’s talk about tools (e.g. JDK, Node, etc.). Jenkins has added many strategies over the years. First there were snowflake workers, with preinstalled tools that you either manage manually or via additional tools such as NVM or RVM. Then came Tools (and the Custom Tools Plugin), which let you install new tools (or new versions), but you have to handle decommissioning (i.e. pruning) manually or your worker disks will fill up with tools over time. And more “recently”, Docker integration, with at least three different plugins and ways to integrate.

In contrast, Concourse relies natively on containers for both jobs and resources. Moreover, images and their versions are managed by each team, so everyone can choose what best fits their needs, and Concourse cleans up containers and images itself.

Storage management in Concourse, however, is not satisfactory. Handling NPM/Yarn/Maven/etc. caches and local repositories is hard and painful, because you have very little access to storage management. I know of only three ways to “manage” storage: resources, jobs, and the task cache feature, and none of them meets the need. Jenkins lets you access the slave’s local storage, and GitLab CI has a “dynamic key”-based cache that lets you define much more precisely how such data is shared.
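For reference, the task cache feature looks roughly like this (a sketch; image and directory names are illustrative). The cache is kept per worker and per task, which is exactly why it often falls short of a keyed cache like GitLab CI’s:

```yaml
# task.yml (sketch): "caches" persists the directory between runs of this
# task, but only on the same worker; it is not shared or keyed.
platform: linux
image_resource:
  type: docker-image
  source: {repository: node, tag: "10"}
inputs:
- name: repo
caches:
- path: npm-cache
run:
  path: sh
  args:
  - -c
  - |
    npm config set cache "$PWD/npm-cache"
    cd repo && npm install && npm test
```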

Another problem I have with Concourse is sharing pieces of code/pipeline. I have successfully PoC-ed an auto-updating shared pipeline: https://github.com/loganmzz/concourse-presentation-introduction/tree/master/src/pipeline/05-Indus/02-autoupdate/indus-common. But I don’t know how to provide customization on top of that; maybe some templating is required. And even then, a pipeline is basically just a workflow, and sometimes the flow changes under certain conditions. The scripted nature of Jenkins pipelines really shines in this domain. I assume some dynamic behavior can be introduced into a Concourse pipeline through resources, but it is still very limited.
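One partial answer to the customization question is fly’s template variables: a shared pipeline.yml can contain `((var))` placeholders that each team fills in at set-pipeline time. A sketch (target, pipeline, and variable names are hypothetical):

```shell
# Per-team customization of a shared pipeline definition
fly -t ci set-pipeline -p team-a-app \
  -c shared/pipeline.yml \
  -v git_uri=https://github.com/example/team-a-app.git \
  -v notify_channel='#team-a'

# Or keep the values in a versioned vars file
fly -t ci set-pipeline -p team-a-app -c shared/pipeline.yml -l team-a/vars.yml
```

Note that this only substitutes values; it cannot change the flow of the pipeline, which is the limitation described above.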

To complete this pseudo-review, I need to talk about secret management. Jenkins has credentials that can be scoped and can store many different types of secrets: user/password, text, file, SSH key, certificates, and so on. Concourse integrates with external dedicated tools: Vault, CredHub, Amazon SSM, and Amazon Secrets Manager. I have never looked into whether Jenkins can also integrate with them, but I’m curious about it.
As far as I know, GitLab CI only has variables for this. Not the best way to handle such data, I think.

I have said nothing about community/manpower, but it seems obvious: Jenkins has been around for many years with thousands of users, so finding help, support, and people to work with is very easy. Concourse, on the other hand, is very limited here. I think GitLab CI will also do better on this point.

A final note: I forgot to talk about documentation. I have never used GitLab CI (and so have never read its documentation). For Concourse, there is some behavior that isn’t written down in the documentation, but nothing terrible; the main part of the DSL is documented well enough, and resources/tasks just use images. For Jenkins, the main problem is that the documentation is not versioned (only the latest is available). Most plugins are poorly documented (or not at all); even the step reference doesn’t always describe all the parameters well, or how to format them.


#8

My org uses CircleCI and I’m switching us to Concourse.

circle advantages:

  • builds the commit using that commit’s version of the pipeline (Concourse requires an explicit set-pipeline, meaning you cannot correlate the commit version with the pipeline config)

  • extremely powerful caching support based on keys (easy to re-use cache from a PR build for your master build, for example)

  • easy to re-run jobs without cache

  • sidecar services, launch multiple docker images alongside your workspace image (e.g. for local postgres or redis)

circle disadvantages:

  • pipelines correlate to repos, want 5 pipelines? you need 5 repos

  • everything must start in a base docker container workspace, so you use a lot of “docker in docker”

  • due to “docker in docker”, and using “remote docker” machines, it’s impossible to mount local volumes into the child docker containers

  • it’s a hosted service that has a lot of other hosted dependencies, we’ve seen numerous outages over the past year

  • difficult to run locally for testing, but possible

  • expensive if you need a lot of parallel work. Sometimes it takes a minute or two just to get a remote Docker machine allocated for your task; meanwhile that task is using up your “container limit”, so when there’s a rush of builds it can be a WHILE before other builds get through, unless you want to pay a bunch for more build containers (compare this to Concourse, where you can in theory run as many containers per host as you want, until resource contention forces you to scale up hosts)
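On the first point (correlating commits to pipeline config), one workaround I have seen suggested, sketched here with made-up names, is to keep pipeline.yml in the repo and re-apply it after each merge, so the active config at least tracks the branch:

```shell
# Re-apply the pipeline definition stored in the repository after each merge,
# so the running pipeline matches the committed config as closely as possible.
# -n (--non-interactive) skips the confirmation prompt for unattended use.
fly -t ci set-pipeline -n -p my-app -c my-app/ci/pipeline.yml
```

This narrows the gap but does not close it: in-flight builds may still run against an older pipeline version.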


#9

@eedwards-sk Thanks for the review. Especially valuable for me is this part:

it’s a hosted service that has a lot of other hosted dependencies, we’ve seen numerous outages over the past year

This is exactly why I did not want to use a hosted service in our environment. Now I have a good argument if somebody asks.

Furthermore, did you figure out how to do sidecar service containers in concourse?


#10

Furthermore, did you figure out how to do sidecar service containers in concourse?

No, it’s a gap in concourse today.

see https://github.com/concourse/concourse/issues/324


#11

Thank you for all your comparisons.

Can somebody share experience about https://argoproj.github.io ? It looks interesting.

I can add a few words about Concourse from my point of view. I will write mainly about disadvantages, from a subjective point of view, because that is what is most valuable in a comparison:

  • I found that Concourse gets stuck when you do git push --force and change the first commit in a branch. The only way to fix it is to remove the pipeline and create it again in Concourse. The issue is that Concourse then can’t see any new commits, because it is looking for commits after that first one, which doesn’t exist anymore.
  • It is a cool CD tool, but not a CI tool. At the moment there is no user-friendly way to show branch reports in the UI. So when a developer creates a branch new-feature, it is hard for everybody to know whether this feature currently passes the tests and to see the history of that. But I know the Concourse team wants to change this.
  • Migrating from an old version of Concourse to a new one brings issues, and you will probably lose history, because you start from an empty Concourse DB and have to recreate everything (which is at least quite easy). But this is probably an issue with maintaining any open source CI/CD solution.
  • Clicking tasks in the UI is frustrating, at least with the touchpad on OS X in Safari (I didn’t test anything else). You really have to focus on not moving the cursor even 1 mm while clicking; otherwise it won’t open the task.
  • If you want to use pre-built images with all dependencies to speed up processing of new commits, Concourse doesn’t do a good job with caching, especially for multi-stage Dockerfile builds. You can always push a pre-built Docker image and use it that way, but that increases the complexity of the pipeline.
  • Personally, I don’t like the new official documentation. I can never find what I’m looking for; I click “randomly” on links a few times and then I am where I want to be :slight_smile:
  • I have been watching Concourse for a few years. It has huge potential, but it is moving a little slowly. After these few years I still don’t feel comfortable enough to make it my tool of choice and be confident about it.

One thing that is super great for me about Concourse: everything is code, with no clickable UI settings for pipelines. This is a huge advantage and exactly what I want in a CI/CD tool.

In the end, I haven’t found my CI/CD solution of choice so far. Still looking for the right one.


#12

Branches are by their nature an anti-pattern to CI, so I’m curious if you could expand on why you think Concourse is not good for CI? If you’re branching, you’re not integrating, so I would argue that Concourse actually, by convention, enforces best practices around CI by not encouraging feature branching.


#13

@eedwards-sk It is true that long-lived branches are against the definition of CI.

But consider the following: if the only way I have to validate whether my code is broken is to commit to master and wait for the reference pipeline to turn red, well, I think I am going backwards.

Also having a generic pipeline dedicated to PRs is not good enough in my opinion, because again, to know if I broke the build, I have to create a PR.

So being able to run CI on feature branches is very important. Whether a branch is long-lived or short-lived is almost orthogonal to the CI system, I would argue.


#14

I suppose I tend to side more with the spirit of this argument: http://www.davefarley.net/?p=247

if the only way I have to validate if my code is broken or not is to commit to master and wait for the reference pipeline to become red, well, I am going backwards I think.

I think that’s exactly how you go forward!

I don’t like the idea of additional branches beyond your master. Your local machine may have its own branch, but that branch’s upstream should be master, and changes should go onto master “dark” behind feature toggles. If you break master, well, that’s the point: you want to break master, because then you’re not doing break/fix twice (locally first and then again with everyone else’s work); you’re doing it all at once. You also aren’t testing twice, etc. Your local builds and tests are meaningless until you merge. Your CI is the source of truth.

It’s your ci’s job to be really good at generating and testing artifacts, and the dev’s job to generate change and get fast feedback from the CI system. If it breaks, it’s everyone’s responsibility.

Either way, your points are valid. I believe strongly in modern principles so this is mostly a religious argument :slightly_smiling_face:

Not all teams may be ready to actually do CI, and I suspect most aren’t even when they think they are.