I. Installation of Jenkins
1. Installing Storage Server
Find a server to act as the NFS server. See "Ubuntu 16.04 NFS Installation" for details.
System: Ubuntu 16.04
IP: 172.18.1.13
apt install nfs-common nfs-kernel-server -y

# Export (mount) configuration
cat /etc/exports
/data/k8s *(rw,sync,no_root_squash)

# Add permissions to the directory
chmod -R 777 /data/k8s

# Start the service
/etc/init.d/nfs-kernel-server start

# Enable it at boot
systemctl enable nfs-kernel-server
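Before moving on, it can help to confirm the export is actually visible. A quick check, run from any machine with the NFS client utilities installed (using the server IP above):

# List the exports published by the NFS server; /data/k8s should appear
showmount -e 172.18.1.13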
2. Installing Jenkins in the Kubernetes Cluster
# This directory holds the self-signed certificates for the test domain jenkins.mytest.io
ls jenkins.mytest.io/
cacerts.pem  cacerts.srl  cakey.pem  create_self-signed-cert.sh  jenkins.mytest.io.crt  jenkins.mytest.io.csr  jenkins.mytest.io.key  openssl.cnf  tls.crt  tls.key

cd jenkins.mytest.io

# Create the namespace where Jenkins lives
kubectl create namespace kube-ops

# Add the secrets to kube-ops
# Server certificate and private key secret
kubectl -n kube-ops create \
  secret tls tls-jenkins-ingress \
  --cert=./tls.crt \
  --key=./tls.key

# CA certificate secret
kubectl -n kube-ops create secret \
  generic tls-ca \
  --from-file=cacerts.pem
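As an optional sanity check, the namespace and both secrets should now exist:

# Confirm the namespace and the two secrets created above
kubectl get ns kube-ops
kubectl -n kube-ops get secret tls-jenkins-ingress tls-ca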
3. Create Jenkins
cat jenkins-pvc.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Delete
  nfs:
    server: 172.18.1.13
    path: /data/k8s
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: jenkins-pvc
  namespace: kube-ops
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
cat rbac.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: kube-ops
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins
rules:
- apiGroups: ["extensions", "apps"]
  resources: ["deployments"]
  verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
- apiGroups: [""]
  resources: ["services"]
  verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: jenkins
  namespace: kube-ops
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: kube-ops
cat jenkins.yaml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins
  namespace: kube-ops
spec:
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccount: jenkins
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
          name: web
          protocol: TCP
        - containerPort: 50000
          name: agent
          protocol: TCP
        resources:
          limits:
            cpu: 1000m
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 512Mi
        livenessProbe:
          httpGet:
            path: /login
            port: 8080
          initialDelaySeconds: 60
          timeoutSeconds: 5
          failureThreshold: 12
        readinessProbe:
          httpGet:
            path: /login
            port: 8080
          initialDelaySeconds: 60
          timeoutSeconds: 5
          failureThreshold: 12
        volumeMounts:
        - name: jenkinshome
          subPath: jenkins
          mountPath: /var/jenkins_home
        env:
        - name: LIMITS_MEMORY
          valueFrom:
            resourceFieldRef:
              resource: limits.memory
              divisor: 1Mi
        - name: JAVA_OPTS
          value: -Xmx$(LIMITS_MEMORY)m -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85 -Duser.timezone=Asia/Shanghai
      securityContext:
        fsGroup: 1000
      volumes:
      - name: jenkinshome
        persistentVolumeClaim:
          claimName: jenkins-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: kube-ops
  labels:
    app: jenkins
spec:
  selector:
    app: jenkins
  type: NodePort
  ports:
  - name: web
    port: 8080
    targetPort: web
    nodePort: 30002
  - name: agent
    port: 50000
    targetPort: agent
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins-lb
  namespace: kube-ops
spec:
  tls:
  - secretName: tls-jenkins-ingress
  rules:
  - host: jenkins.mytest.io
    http:
      paths:
      - backend:
          serviceName: jenkins
          servicePort: 8080
Create the Jenkins resources:
kubectl create -f jenkins-pvc.yaml
kubectl create -f rbac.yaml
kubectl create -f jenkins.yaml
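Once the manifests are applied, you can confirm the Pod, Service, and Ingress defined above come up as expected. A quick check using the names from the manifests:

# Watch for the Jenkins Pod to become Running and Ready
kubectl -n kube-ops get pods -l app=jenkins
# Confirm the NodePort service (30002) and the Ingress for jenkins.mytest.io
kubectl -n kube-ops get svc jenkins
kubectl -n kube-ops get ingress jenkins-lb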
II. Jenkins Configuration
Configure domain name resolution in /etc/hosts:
kube-ip jenkins.mytest.io
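Here kube-ip stands for the address of a cluster node that the NodePort service or your ingress controller is reachable on (an assumption based on the Service and Ingress defined above). You can list the node addresses with:

# Show node internal/external IPs to use for the hosts entry
kubectl get nodes -o wide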
1. Initial Configuration
Open https://jenkins.mytest.io
Install the plugins and accept the defaults.
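The setup wizard first asks for the initial admin password, which Jenkins writes under the Jenkins home inside the Pod. One way to retrieve it, assuming the app=jenkins label from jenkins.yaml (the Pod name will differ in your cluster):

# Read the initial admin password from the running Jenkins Pod
kubectl -n kube-ops exec -it \
  $(kubectl -n kube-ops get pods -l app=jenkins -o jsonpath='{.items[0].metadata.name}') \
  -- cat /var/jenkins_home/secrets/initialAdminPassword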
2. Plug-in Configuration
With the Kubernetes plugin, Jenkins can call the Kubernetes API to dynamically provision Jenkins slave Pods.
https://github.com/jenkinsci/kubernetes-plugin
2.1 Install the Kubernetes Plugin
Manage Jenkins -> Manage Plugins -> Available -> check Kubernetes plugin and install it.
2.2 Configure the Kubernetes Plugin
- Manage Jenkins -> Configure System -> (scroll to the bottom) Add a new cloud -> select Kubernetes, then fill in the Kubernetes and Jenkins configuration information.
- Kubernetes URL: use the in-cluster API discovery address https://kubernetes.default.svc.cluster.local
- Kubernetes Namespace: kube-ops. Then click Test Connection; if "Connection test successful" is shown, Jenkins can communicate with the Kubernetes API properly.
- Jenkins URL: http://jenkins.kube-ops.svc.cluster.local:8080
Also note that if Test Connection fails here, it may be a permissions issue: add the secret token belonging to the jenkins ServiceAccount we created as a credential in Jenkins and select it in the cloud configuration.
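A sketch of how to obtain that token, assuming the secret auto-generated for the jenkins ServiceAccount (paste the decoded value into a Jenkins credential and select it here):

# Find the secret attached to the jenkins ServiceAccount and decode its token
SECRET=$(kubectl -n kube-ops get sa jenkins -o jsonpath='{.secrets[0].name}')
kubectl -n kube-ops get secret $SECRET -o jsonpath='{.data.token}' | base64 -d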
2.3 Configure the Kubernetes Pod Template
This is where we configure the Pod template that the Jenkins slave runs in. The namespace is again kube-ops. The Labels field is important here; we will need this value when we run a Job later. We use the image cnych/jenkins:jnlp, which is based on the official JNLP image with some practical tools such as kubectl added.
2.4 Add Volume Mounts to the Container
Also note that we need to mount two host paths here. The first is /var/run/docker.sock, which lets the containers in the slave Pod share the host's Docker daemon (the so-called Docker-in-Docker approach); the Docker binary is already packaged in the image above. The second is the /root/.kube directory, which we mount to /home/jenkins/.kube inside the container so that the kubectl tool can reach our Kubernetes cluster from within the slave Pod, allowing us to deploy Kubernetes applications from the slave later.
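Because both of these are host paths, they must actually exist on every node that may schedule a slave Pod. A quick check to run on each node, assuming Docker is the container runtime and a root kubeconfig is in place as the mounts above imply:

# Run on each Kubernetes node that can schedule slave Pods
ls -l /var/run/docker.sock   # Docker socket shared into the slave container
ls /root/.kube/config        # kubeconfig that gets mounted to /home/jenkins/.kube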
2.5 Add a Service Account
Some people hit permission problems when the slave Pod runs after this configuration, because no ServiceAccount has been set for the slave Pod. Click Advanced under the slave Pod template and add the corresponding ServiceAccount.
In my testing, if the account is not added, the build reports that it has no permissions.
Add the jenkins ServiceAccount created in the Kubernetes cluster in the template's Advanced settings.
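You can verify ahead of time that the jenkins ServiceAccount has the permissions granted by rbac.yaml. A couple of spot checks (run as a cluster admin, since --as requires impersonation rights):

# Check that the jenkins ServiceAccount can manage Pods and Deployments in kube-ops
kubectl auth can-i create pods -n kube-ops --as=system:serviceaccount:kube-ops:jenkins
kubectl auth can-i create deployments -n kube-ops --as=system:serviceaccount:kube-ops:jenkins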
III. Testing
Create a test task
Add the following script to the Pipeline box:
def label = "jnlp-slave"

podTemplate(inheritFrom: 'jnlp-slave', instanceCap: 0, label: 'jnlp-slave', name: '', namespace: 'kube-ops',
    nodeSelector: '', podRetention: always(), serviceAccount: '',
    workspaceVolume: emptyDirWorkspaceVolume(false), yaml: '') {
  node(label) {
    container('jnlp-slave') {
      stage('Run shell') {
        sh 'docker info'
        sh 'kubectl get pods -n kube-ops'
      }
    }
  }
}
Start the build.
Build output:
Started by user admin
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] podTemplate
[Pipeline] {
[Pipeline] node
Still waiting to schedule task
'Jenkins' doesn't have label 'jnlp-slave'
Agent jnlp-slave-tbdnl is provisioned from template Kubernetes Pod Template
Agent specification [Kubernetes Pod Template] (jnlp-slave):
* [jnlp-slave] cnych/jenkins:jnlp
Running on jnlp-slave-tbdnl in /home/jenkins/workspace/test-jnlp-slave
[Pipeline] {
[Pipeline] container
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Run shell)
[Pipeline] sh
+ docker info
Containers: 15
 Running: 12
 Paused: 0
 Stopped: 3
Images: 12
Server Version: 18.09.6
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: bb71b10fd8f58240ca47fbb579b9d1028eea7c84
runc version: 2b18fe1d885ee5083ef9f0838fee39b62d653e30
init version: fec3683
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.15.0-1049-azure
Operating System: Ubuntu 16.04.6 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 7.768GiB
Name: test-kube-node-04
ID: YFTJ:FVHK:TAF3:HTAJ:HJ2A:5SFW:73RW:VQY5:Y64U:UGIR:KMJ2:XPRL
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Registry Mirrors:
 https://kv3qfp85.mirror.aliyuncs.com/
Live Restore Enabled: false
WARNING: No swap limit support
[Pipeline] sh
+ kubectl get pods -n kube-ops
NAME                      READY   STATUS    RESTARTS   AGE
jenkins-6b874b8d7-q28h4   1/1     Running   0          3h
jnlp-slave-tbdnl          2/2     Running   0          15s
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
Finished: SUCCESS
Concluding remarks
Most of the steps above follow the approach in "Jenkins-based CI/CD (1)" on Yang Ming's blog, but because of differences in environment and gaps in my own understanding, I ran into all kinds of errors and was stuck for a whole day without progress.
In the end I could only dig through the logs bit by bit with kubectl -n kube-ops logs -f jenkins-xxxxx. I searched plenty of posts that went through the same motions, but none of them addressed the underlying problem; however polished a setup looks, it is useless if it will not run. Eventually I went back to the kubernetes-plugin documentation on GitHub, read it against my own error messages, made a few small adjustments, and finally got it working.
See the following excellent articles for reference:
Jenkins-based CI/CD (1) (Yang Ming's article is excellent)
Building a CI/CD Environment with Kubernetes, Jenkins, and GitLab (II)
GitHub: jenkinsci/kubernetes-plugin
Rancher official site: Deploying and Scaling Jenkins on Kubernetes