If you have attempted to deploy Jenkins into Kubernetes using the official Jenkins image from Docker Hub, you have likely run into the following issue: the Jenkins container cannot write to your persistent volume storage.
kubectl describe pod [pod]
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
37s 37s 1 {default-scheduler } Normal Scheduled Successfully assigned jenkins-app-2964352851-9px36 to ip-172-20-0-177.ec2.internal
27s 27s 1 {kubelet ip-172-20-0-177.ec2.internal} spec.containers{jenkins-app} Normal Created Created container with docker id 58646e30400e
27s 27s 1 {kubelet ip-172-20-0-177.ec2.internal} spec.containers{jenkins-app} Normal Started Started container with docker id 58646e30400e
27s 27s 1 {kubelet ip-172-20-0-177.ec2.internal} spec.containers{jenkins-app} Normal Created Created container with docker id b6bc2e6956dd
27s 27s 1 {kubelet ip-172-20-0-177.ec2.internal} spec.containers{jenkins-app} Normal Started Started container with docker id b6bc2e6956dd
26s 25s 2 {kubelet ip-172-20-0-177.ec2.internal} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "jenkins-app" with CrashLoopBackOff: "Back-off 10s restarting failed container=jenkins-app pod=jenkins-app-2964352851-9px36_default(cbf2251d-5800-11e6-a96e-0ab4a66cdb89)"
28s 11s 3 {kubelet ip-172-20-0-177.ec2.internal} spec.containers{jenkins-app} Normal Pulling pulling image "jenkins"
27s 11s 3 {kubelet ip-172-20-0-177.ec2.internal} spec.containers{jenkins-app} Normal Pulled Successfully pulled image "jenkins"
11s 11s 1 {kubelet ip-172-20-0-177.ec2.internal} spec.containers{jenkins-app} Normal Created Created container with docker id 8ef01c412806
11s 11s 1 {kubelet ip-172-20-0-177.ec2.internal} spec.containers{jenkins-app} Normal Started Started container with docker id 8ef01c412806
26s 10s 3 {kubelet ip-172-20-0-177.ec2.internal} spec.containers{jenkins-app} Warning BackOff Back-off restarting failed docker container
10s 10s 1 {kubelet ip-172-20-0-177.ec2.internal} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "jenkins-app" with CrashLoopBackOff: "Back-off 20s restarting failed container=jenkins-app pod=jenkins-app-2964352851-9px36_default(cbf2251d-5800-11e6-a96e-0ab4a66cdb89)"
kubectl logs [pod]
touch: cannot touch ‘/var/jenkins_home/copy_reference_file.log’: Permission denied
Can not write to /var/jenkins_home/copy_reference_file.log. Wrong volume permissions?
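What the entrypoint's failing `touch` boils down to is a plain Unix permission check: the mounted volume is owned by root, the Jenkins process runs as uid 1000, and neither the group nor the world bits grant it write access. As a rough illustration (not the entrypoint's actual code, and ignoring details like supplementary groups), the check looks like this:

```python
import os
import stat

def writable_by(path: str, uid: int, gid: int) -> bool:
    """Rough Unix write check: which permission bits apply depends
    on whether uid/gid match the file's owner and group."""
    st = os.stat(path)
    if st.st_uid == uid:
        return bool(st.st_mode & stat.S_IWUSR)  # owner write bit
    if st.st_gid == gid:
        return bool(st.st_mode & stat.S_IWGRP)  # group write bit
    return bool(st.st_mode & stat.S_IWOTH)      # world write bit
```

With a root-owned mount at mode 755, a process running as uid 1000 falls through to the world bits and gets no write access, which is exactly the `Permission denied` above.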
This is because the official Jenkins image runs as the user ‘jenkins’:
Excerpt from Jenkins Dockerfile
ARG user=jenkins
ARG group=jenkins
ARG uid=1000
ARG gid=1000
...
...
USER ${user}
While this is easily handled in a plain Docker setup with docker run options, a pod can land on any node, so configuring permissions by hand on whichever node the scheduler picks is neither practical nor automated. After some research I found that there are security context settings you can set in your configuration to control volume security (as well as container-level and pod-level security).
See: http://kubernetes.io/docs/user-guide/security-context/
Setting the Security Context
As Jenkins runs with uid 1000, all you need to do is set fsGroup in your pod spec to 1000 and volume access will be a breeze.
spec:
  securityContext:
    fsGroup: 1000
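Under the hood, when fsGroup is set, the kubelet recursively changes the group ownership of the volume to that gid, makes it group-writable, and sets the setgid bit on directories so newly created files inherit the group. A simplified Python sketch of that behaviour (the helper name apply_fs_group is mine, and this glosses over the kubelet's actual implementation details):

```python
import os
import stat

def apply_fs_group(path: str, gid: int) -> None:
    """Roughly what the kubelet does for fsGroup: chgrp the volume
    recursively, grant group read/write, and set the setgid bit on
    directories so new files inherit the group."""
    for root, dirs, files in os.walk(path):
        os.chown(root, -1, gid)  # -1 leaves the owner untouched
        mode = os.stat(root).st_mode
        os.chmod(root, mode | stat.S_IRWXG | stat.S_ISGID)
        for name in files:
            p = os.path.join(root, name)
            os.chown(p, -1, gid)
            os.chmod(p, os.stat(p).st_mode | stat.S_IRGRP | stat.S_IWGRP)
```

After this runs, any process whose gid (or supplementary group) is 1000 can write to the volume, which is why the Jenkins user gets access without the image itself changing.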
Working Configuration for Jenkins
So, the TL;DR to get up and running: use the file below, making sure you set up your own EBS volume and update the volumeID value.
apiVersion: v1
kind: Service
metadata:
  name: jenkins-service
spec:
  ports:
  - port: 80
    name: http
    targetPort: 8080
  - port: 50000
    name: jenkins-master
  selector:
    name: jenkins-app
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-app
spec:
  template:
    metadata:
      labels:
        name: jenkins-app
    spec:
      securityContext:
        fsGroup: 1000
      containers:
      - name: jenkins-app
        image: jenkins
        ports:
        - containerPort: 8080
        - containerPort: 50000
        volumeMounts:
        - name: jenkins-vol
          mountPath: /var/jenkins_home
      volumes:
      - name: jenkins-vol
        awsElasticBlockStore:
          volumeID: vol-123456
          fsType: ext4
Tell your friends...