- Tip: after installing the tools below, run each command a few times to get familiar with it
- kind installation
  ```shell
  curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.9.0/kind-linux-amd64
  chmod +x ./kind
  mv ./kind /${some-dir-in-your-PATH}/kind
  ```
- kubectl installation
  ```shell
  curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
  chmod +x kubectl
  mkdir -p ~/.local/bin
  mv ./kubectl ~/.local/bin/kubectl
  ```
- Create a default cluster:

  ```shell
  kind create cluster
  ```
- Create from a specific node image:

  ```shell
  kind create cluster --image kindest/node:latest
  ```
- List clusters:

  ```shell
  kind get clusters
  ```
- List nodes:

  ```shell
  kind get nodes
  ```
- Delete the default cluster:

  ```shell
  kind delete cluster
  ```
- Delete a cluster by name:

  ```shell
  kind delete cluster --name clusterName
  ```
- Delete all clusters:

  ```shell
  kind delete clusters --all
  ```
- View the available kubectl contexts (kind writes one context per cluster, so this is equivalent to listing the clusters):

  ```shell
  kubectl config get-contexts
  # CURRENT   NAME              CLUSTER           AUTHINFO          NAMESPACE
  # *         kind-my-cluster   kind-my-cluster   kind-my-cluster
  ```
- Switch clusters:

  ```shell
  # after viewing the context list, switch as needed
  kubectl config use-context kind-my-cluster
  # or check a cluster with
  kubectl cluster-info --context kind-my-cluster
  ```
- Load a local image into kind's nodes (mainly useful in offline environments):

  ```shell
  kind load docker-image nginx --name kind
  ```
- Configuring a multi-node cluster: kind_cluster.yaml

  ```yaml
  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4
  name: my-cluster
  # 1 control-plane node and 3 workers
  nodes:
  # the control plane node config
  - role: control-plane
  # the three workers
  - role: worker
  - role: worker
  - role: worker
  ```

  Create it with `kind create cluster --config=kind_cluster.yaml`. You can also pass `--name my-cluster` on the command line, but writing the name into the YAML makes it easier to reuse later.
View nodes:

```shell
kubectl get nodes
# NAME                       STATUS   ROLES    AGE   VERSION
# my-cluster-control-plane   Ready    master   48m   v1.19.1
# my-cluster-worker          Ready    <none>   47m   v1.19.1
# my-cluster-worker2         Ready    <none>   47m   v1.19.1
# my-cluster-worker3         Ready    <none>   47m   v1.19.1
```
Multiple control planes

A production Kubernetes cluster generally uses multiple control planes for high availability, and a kind config makes it easy to create one. The following config creates a cluster with 3 control-plane nodes and 3 worker nodes:

```yaml
# this config file contains all config fields with comments
# NOTE: this is not a particularly useful config file
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
# the control plane node config
- role: control-plane
- role: control-plane
- role: control-plane
# the three workers
- role: worker
- role: worker
- role: worker
```
You can see the three control-plane nodes:

```shell
kubectl get nodes
# NAME                  STATUS   ROLES    AGE   VERSION
# kind-control-plane    Ready    master   15m   v1.19.1
# kind-control-plane2   Ready    master   14m   v1.19.1
# kind-control-plane3   Ready    master   13m   v1.19.1
# kind-worker           Ready    <none>   12m   v1.19.1
# kind-worker2          Ready    <none>   12m   v1.19.1
# kind-worker3          Ready    <none>   12m   v1.19.1
```
Specify the version of Kubernetes
You can choose the Kubernetes version by specifying the version of the node image. Image tags can be found on the official release page; it is recommended to pin a tag together with its sha256 digest, for example:

```
kindest/node:v1.19.1@sha256:98cf5288864662e37115e362b23e4369c8c4a408f99cbc06e58ac30ddc721600
```

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.16.4@sha256:b91a2c2317a000f3a783489dfb755064177dbc3a0b2f4147d50f04825d016f55
- role: worker
  image: kindest/node:v1.16.4@sha256:b91a2c2317a000f3a783489dfb755064177dbc3a0b2f4147d50f04825d016f55
```
Map node ports to the host

You can map a node's port to the host in the following way; this example maps container port 80 to host port 80:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    listenAddress: "0.0.0.0" # Optional, defaults to "0.0.0.0"
    protocol: udp # Optional, defaults to tcp
```
One drawback of kind is that cluster configuration (enabling IPv6, configuring NodePorts, etc.) can only be "updated" by re-creating the cluster; updating the control plane of a running cluster is not officially supported yet, see this issue. See the official documentation for more configuration options.
ingress deployment
You can forward traffic from the host to a node's ingress controller through kind's extraPortMappings configuration option.
You can set custom node labels through kubeadm's InitConfiguration, to be used by the ingress controller's nodeSelector.
Create cluster
Create a cluster with extraPortMappings and node labels:

```shell
cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
EOF
```
Deploy ingress controller
kind works with several ingress controllers; here we deploy NGINX Ingress:
```shell
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml
```
During the ingress deployment you may hit an error that the secret `ingress-nginx-admission` cannot be found. The likely cause is that the two admission Jobs in the manifest failed to start; see this issue. If the images cannot be pulled from the external network, first download the deploy.yaml file to the local host, pull the images manually, and then load them into the nodes with the `kind load docker-image` command mentioned above.
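The offline workaround above can be sketched as follows. This is a hypothetical helper, not part of the ingress-nginx project: the `list_images` function and the stand-in manifest are illustrative. It lists the unique images referenced in a manifest, so each one can then be pulled with `docker pull` and loaded with `kind load docker-image`.

```shell
#!/bin/sh
# Hypothetical helper: print the unique container images referenced in a
# Kubernetes manifest, so they can be pulled and loaded into kind nodes.
list_images() {
  grep -E '^[[:space:]]*image:' "$1" | awk '{print $2}' | sort -u
}

# Demonstrate on a tiny stand-in manifest (a real run would use the
# deploy.yaml downloaded from ingress-nginx):
cat > /tmp/deploy.yaml <<'EOF'
spec:
  containers:
  - name: controller
    image: k8s.gcr.io/ingress-nginx/controller:v0.40.2
  - name: certgen
    image: docker.io/jettech/kube-webhook-certgen:v1.3.0
EOF

list_images /tmp/deploy.yaml
# then, for each printed image:
#   docker pull "$img" && kind load docker-image "$img" --name kind
```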
Test ingress
Create the following resources (save the manifest below as usage.yaml):

```shell
kubectl apply -f usage.yaml
```
```yaml
kind: Pod
apiVersion: v1
metadata:
  name: foo-app
  labels:
    app: foo
spec:
  containers:
  - name: foo-app
    image: hashicorp/http-echo:0.2.3
    args:
    - "-text=foo"
---
kind: Service
apiVersion: v1
metadata:
  name: foo-service
spec:
  selector:
    app: foo
  ports:
  # Default port used by the image
  - port: 5678
---
kind: Pod
apiVersion: v1
metadata:
  name: bar-app
  labels:
    app: bar
spec:
  containers:
  - name: bar-app
    image: hashicorp/http-echo:0.2.3
    args:
    - "-text=bar"
---
kind: Service
apiVersion: v1
metadata:
  name: bar-service
spec:
  selector:
    app: bar
  ports:
  # Default port used by the image
  - port: 5678
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - http:
      paths:
      - path: /foo
        backend:
          serviceName: foo-service
          servicePort: 5678
      - path: /bar
        backend:
          serviceName: bar-service
          servicePort: 5678
```
From a remote machine, curl the foo and bar services on the host; both respond, which shows the network path works. The traffic enters through host port 80, exposed by the ingress via kind's extraPortMappings configuration:

```shell
C:\Users\liuch>curl 192.168.100.11/foo
foo

C:\Users\liuch>curl 192.168.100.11/bar
bar
```
Summary:
kind is a very convenient Kubernetes deployment tool that can quickly stand up multiple Kubernetes clusters. It does have some rough edges: kind does not support upgrading a cluster in place, and manually loading images is cumbersome. On the whole, though, these flaws do not outweigh its merits.
FAQ:
- "The connection to the server localhost:8080 was refused - did you specify the right host or port?", and the cluster cannot be switched with commands such as `kubectl config use-context kind-kind`.

  Look up the available context names in /root/.kube/config (in the example below the context name is kind-kind), then run `kubectl config use-context kind-kind`:

  ```yaml
  apiVersion: v1
  clusters:
  - cluster:
      certificate-authority-data: ...
      server: https://127.0.0.1:39923
    name: kind-kind
  contexts:
  - context:
      cluster: kind-kind
      user: kind-kind
    name: kind-kind
  current-context: kind-kind
  kind: Config
  preferences: {}
  users:
  - name: kind-kind
    user:
      client-certificate-data: ...
  ```
- The API server of a cluster created by kind listens on 127.0.0.1 by default and cannot be reached remotely. You can change the API server's address and port with the following config:

  ```yaml
  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4
  networking:
    apiServerPort: 6000
  ```
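  The snippet above only changes the port; to also change the bind address, kind's v1alpha4 config provides `networking.apiServerAddress`. A minimal sketch (the IP below is an example, adjust it to your host):

  ```yaml
  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4
  networking:
    # bind the API server to a routable address instead of 127.0.0.1
    apiServerAddress: "192.168.100.11"
    apiServerPort: 6443
  ```

  Note that kind must be told this at cluster creation time; as mentioned above, changing it afterwards requires re-creating the cluster.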
- If you cannot connect to the kind cluster, check whether other docker networks are interfering (for example, inspect them with `docker network ls`).