No Workers on Kubernetes


#1

Hey guys,

I am trying to deploy the Concourse Helm chart (stable/concourse) on Kubernetes, image version 4.2.2.
My Postgres instance is a managed service outside of the cluster. The web and worker are both up and running with logLevel: debug, and I can connect to web and deploy a pipeline.
However, when I try to run the pipeline I get the error message “No Workers”… and that’s all. Neither the web nor the worker prints any error messages.

Additional Information:

  • I am running inside IBM Cloud.
  • In the chart values I changed ClusterIP to NodePort (see the sketch after this list).
  • My cluster does not have an external IP address (yet?).
  • When I try to deploy the chart plain, without any configuration, web can’t find Postgres.
  • A plain deployment on minikube works perfectly fine, as does building pipelines.
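
For reference, this is roughly the shape of my values override (the NodePort change mentioned above). I’m writing it from memory, and the exact key names depend on the chart version, so treat it as a sketch and check the chart’s values.yaml rather than copying it verbatim:

## Sketch only - key names may differ between stable/concourse chart versions.
web:
  service:
    type: NodePort     # changed from the default ClusterIP
postgresql:
  enabled: false       # skip the bundled Postgres; I point web at my managed
                       # instance via the chart's postgres host/user/password values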

I have no idea what to do now. Please help :frowning:


#2

I have the same problem with docker-for-desktop kubernetes deployment with helm.

I can create pipelines, everything seems to work fine. Under the hood though, the workers fail to start with the following error:
{"timestamp":"1550586577.247741938","source":"worker","message":"worker.setup.unpacking","log_level":1,"data":{"session":"1"}}
{"timestamp":"1550586586.582898617","source":"worker","message":"worker.setup.done","log_level":1,"data":{"session":"1"}}
{"timestamp":"1550586586.597593546","source":"worker","message":"worker.garden.extract-resources.extract.extracting","log_level":1,"data":{"resource-type":"bosh-io-release","session":"2.1.1"}}
exit status 2
{"timestamp":"1550586602.246784449","source":"worker","message":"worker.garden.extract-resources.extract.failed-to-extract-resource","log_level":2,"data":{"error":"exit status 2","output":"/concourse-work-dir/4.2.2/assets/bin/tar: etc/ssl/certs/02265526.0: Cannot utime: No such file or directory\n/concourse-work-dir/4.2.2/assets/bin/tar: etc/ssl/certs/03179a64.0: Cannot utime

And running any simple task obviously returns with “No Workers”
Is that something you recognise?


#3

In the Docker for Desktop Kubernetes cluster, make sure that worker persistence is turned off (I don’t think that’s a massive issue given the use cases), e.g.:

persistence:
  ## Enable persistence using Persistent Volume Claims.
  ##
  enabled: false

I had exactly the same issues until doing that - it looked fine when exec’ing into the containers, and the PVs were there - but there was clearly a compatibility issue.
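
For anyone finding this later: the same override can also be applied without editing a values file, e.g. by passing --set persistence.enabled=false to helm install / helm upgrade for the stable/concourse release (standard Helm flags, nothing chart-specific).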


#4

Awesome! Thank you @jprelph!


#5

@jprelph
Although I am not using Docker for Desktop Kubernetes, this solved my issue as well.
I am currently on IBM Cloud.

What are the persistent volume claims actually used for? Could it be an issue if I do not use them?

Anyway thank you very very much. You saved me!


#6

So if you don’t use persistence, the images the worker uses will be stored in an emptyDir and will eventually fill up the node. This is fine for testing, where you don’t expect it to be a long-lived worker anyway, but you don’t want to do that in a real system.
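
For a real system you’d keep persistence on and size the worker volume for the images and volumes the workers accumulate. Something along these lines - the worker sub-keys (storageClass, accessMode, size) are from my reading of the chart’s values.yaml and may be named differently in your chart version:

persistence:
  ## PVC-backed worker storage for long-lived workers.
  enabled: true
  worker:
    ## Key names below may vary by chart version; check values.yaml.
    storageClass: standard      # placeholder - pick a class available in your cluster
    accessMode: ReadWriteOnce
    size: 20Gi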