Multiple Cluster Deployment Management with Istio: Single Control Plane Gateway Connection Topology

Posted by robert.access on Sat, 06 Jun 2020 05:49:32 +0200

In a single control plane topology, multiple Kubernetes clusters share a single Istio control plane running on one of the clusters. The control plane's Pilot manages services on the local and remote clusters and configures the Envoy sidecar proxies for all clusters.

Cluster-aware service routing

Cluster-aware service routing was introduced in Istio 1.1. With a single control plane topology, service requests can be routed to other clusters through their ingress gateways using Istio's split-horizon EDS (Endpoint Discovery Service) functionality. Istio is able to route requests to different endpoints based on the location of the request source.

In this configuration, requests from a sidecar proxy to a service in the same cluster are still forwarded to the local service IP. If the target workload is running in another cluster, the gateway IP of the remote cluster is used to connect to the service.

As shown, the primary cluster, cluster 1, runs the full set of Istio control plane components, while cluster 2 runs only Citadel, the sidecar injector, and the ingress gateway. No VPN connection and no direct network access between workloads in different clusters are required.

An intermediate CA certificate is generated for each cluster's Citadel from a shared root CA, which enables mutual TLS communication across clusters. For illustration purposes, we use the sample root CA certificate provided in the samples/certs directory of the Istio installation for both clusters. In a real deployment, you would likely use a different CA certificate for each cluster, all signed by a common root CA.

In each Kubernetes cluster, including cluster1 and cluster2 in this example, run the following commands to create a Kubernetes secret for the generated CA certificates:

kubectl create namespace istio-system
kubectl create secret generic cacerts -n istio-system \
  --from-file=samples/certs/ca-cert.pem \
  --from-file=samples/certs/ca-key.pem \
  --from-file=samples/certs/root-cert.pem \
  --from-file=samples/certs/cert-chain.pem
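
Before installing Istio, you can confirm the secret exists in each cluster; this is a quick sanity check, not a step from the original walkthrough:

kubectl get secret cacerts -n istio-system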

Istio Control Plane Components

In cluster 1, which runs the full set of Istio control plane components, follow these steps:

1. Install Istio's CRDs and wait a few seconds for them to be committed to the Kubernetes API server, as follows:

for i in install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl apply -f $i; done

2. Deploy the Istio control plane in cluster 1.

If the Helm dependencies are missing or out of date, update them with helm dep update. Note that because istio-cni is not used, it can be temporarily removed from requirements.yaml before running the update. The command is as follows:

helm template --name=istio --namespace=istio-system \
  --set global.mtls.enabled=true \
  --set security.selfSigned=false \
  --set global.controlPlaneSecurityEnabled=true \
  --set global.meshExpansion.enabled=true \
  --set global.meshNetworks.network2.endpoints[0].fromRegistry=n2-k8s-config \
  --set global.meshNetworks.network2.gateways[0].address=0.0.0.0 \
  --set global.meshNetworks.network2.gateways[0].port=15443 \
  install/kubernetes/helm/istio > ./istio-auth.yaml

Note that the gateway address is set to 0.0.0.0. This is a temporary placeholder that will be updated to the public IP of the cluster 2 gateway after cluster 2 is deployed.
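
For reference, the --set flags above produce a meshNetworks entry in the generated configuration roughly like the following sketch; the 0.0.0.0 placeholder is what gets replaced in the istio-remote steps below:

meshNetworks:
  network2:
    endpoints:
    - fromRegistry: n2-k8s-config
    gateways:
    - address: 0.0.0.0
      port: 15443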

Deploy Istio to cluster 1 as follows:

kubectl apply -f ./istio-auth.yaml

Ensure that the above steps are successfully performed in the Kubernetes cluster.
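
One quick way to verify this, not part of the original steps, is to confirm that the control plane pods in cluster 1 are running:

kubectl get pods -n istio-system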

3. Create a gateway to access remote services, as follows:
kubectl create -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cluster-aware-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 15443
      name: tls
      protocol: TLS
    tls:
      mode: AUTO_PASSTHROUGH
    hosts:
    - "*"
EOF

The above gateway is configured with a dedicated port, 15443, to deliver incoming traffic to the target service specified in the SNI header of the request, using a mutual TLS connection from the source service all the way to the target service.

Note that although this gateway definition is applied in cluster 1, because both clusters communicate with the same Pilot, the gateway configuration also applies to cluster 2.

istio-remote component

Deploy the istio-remote component in the other cluster, cluster 2, as follows:

1. First, get the ingress gateway address of cluster 1, as follows:

export LOCAL_GW_ADDR=$(kubectl get svc --selector=app=istio-ingressgateway \
  -n istio-system -o jsonpath="{.items[0].status.loadBalancer.ingress[0].ip}")

Use Helm to create an Istio remote deployment YAML file by executing the following command:

helm template --name istio-remote --namespace=istio-system \
  --values install/kubernetes/helm/istio/values-istio-remote.yaml \
  --set global.mtls.enabled=true \
  --set gateways.enabled=true \
  --set security.selfSigned=false \
  --set global.controlPlaneSecurityEnabled=true \
  --set global.createRemoteSvcEndpoints=true \
  --set global.remotePilotCreateSvcEndpoint=true \
  --set global.remotePilotAddress=${LOCAL_GW_ADDR} \
  --set global.remotePolicyAddress=${LOCAL_GW_ADDR} \
  --set global.remoteTelemetryAddress=${LOCAL_GW_ADDR} \
  --set gateways.istio-ingressgateway.env.ISTIO_META_NETWORK="network2" \
  --set global.network="network2" \
  install/kubernetes/helm/istio > istio-remote-auth.yaml

2. Deploy the Istio remote component to cluster 2 as follows:

kubectl apply -f ./istio-remote-auth.yaml

Ensure that the above steps are successfully performed in the Kubernetes cluster.
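
A similar sanity check for cluster 2, assuming $CTX_REMOTE is a kubectl context pointing at cluster 2 (as used in the next step), is:

kubectl get pods -n istio-system --context=$CTX_REMOTE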

3. Update the istio ConfigMap in cluster 1 with the ingress gateway address of cluster 2, obtained as follows:

export REMOTE_GW_ADDR=$(kubectl get --context=$CTX_REMOTE svc --selector=app=istio-ingressgateway \
  -n istio-system -o jsonpath="{.items[0].status.loadBalancer.ingress[0].ip}")

Edit the istio ConfigMap in the istio-system namespace of cluster 1 and replace the gateway address of network2, changing it from 0.0.0.0 to the cluster 2 gateway address ${REMOTE_GW_ADDR}, as sketched below. After saving, Pilot automatically picks up the updated network configuration.
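
A minimal sketch of this edit; the ConfigMap name istio and the meshNetworks layout follow from the Helm values set earlier, but verify them in your own installation:

kubectl edit cm istio -n istio-system
# Under the meshNetworks data key, change the network2 gateway address:
#   network2:
#     endpoints:
#     - fromRegistry: n2-k8s-config
#     gateways:
#     - address: <REMOTE_GW_ADDR>   # replaces the 0.0.0.0 placeholder
#       port: 15443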

4. Create a kubeconfig for cluster 2. Generate a kubeconfig for the istio-multi service account on cluster 2 and save it to a file named n2-k8s-config with the following commands:

CLUSTER_NAME="cluster2"
SERVER=$(kubectl config view --minify=true -o "jsonpath={.clusters[].cluster.server}")
SECRET_NAME=$(kubectl get sa istio-multi -n istio-system -o jsonpath='{.secrets[].name}')
CA_DATA=$(kubectl get secret ${SECRET_NAME} -n istio-system -o "jsonpath={.data['ca\.crt']}")
TOKEN=$(kubectl get secret ${SECRET_NAME} -n istio-system -o "jsonpath={.data['token']}" | base64 --decode)
cat <<EOF > n2-k8s-config
apiVersion: v1
kind: Config
clusters:
  - cluster:
      certificate-authority-data: ${CA_DATA}
      server: ${SERVER}
    name: ${CLUSTER_NAME}
contexts:
  - context:
      cluster: ${CLUSTER_NAME}
      user: ${CLUSTER_NAME}
    name: ${CLUSTER_NAME}
current-context: ${CLUSTER_NAME}
users:
  - name: ${CLUSTER_NAME}
    user:
      token: ${TOKEN}
EOF

5. Add cluster 2 to the Istio control plane.

Run the following commands in cluster 1 to add the kubeconfig of cluster 2 generated above as a secret in cluster 1. Once this is done, Istio Pilot in cluster 1 starts watching the services and instances of cluster 2, just as it does for the services and instances in cluster 1:

kubectl create secret generic n2-k8s-secret --from-file n2-k8s-config -n istio-system
kubectl label secret n2-k8s-secret istio/multiCluster=true -n istio-system
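
To confirm the new cluster registration is in place, a sanity check not part of the original steps, you can list the secrets carrying the multi-cluster label that was just applied:

kubectl get secret -n istio-system -l istio/multiCluster=true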

Deploy the Sample Application

To demonstrate cross-cluster access, deploy the sleep service and version v1 of the helloworld service in the first cluster (cluster 1), deploy version v2 of the helloworld service in the second cluster (cluster 2), and then verify that the sleep application can invoke the helloworld service in either the local or the remote cluster.

1. Deploy the sleep service and version v1 of the helloworld service to cluster 1 by executing the following commands:

kubectl create namespace app1
kubectl label namespace app1 istio-injection=enabled
kubectl apply -n app1 -f samples/sleep/sleep.yaml
kubectl apply -n app1 -f samples/helloworld/service.yaml
kubectl apply -n app1 -f samples/helloworld/helloworld.yaml -l version=v1
export SLEEP_POD=$(kubectl get -n app1 pod -l app=sleep -o jsonpath={.items..metadata.name})

2. Deploy version v2 of the helloworld service to cluster 2 by executing the following commands:

kubectl create namespace app1
kubectl label namespace app1 istio-injection=enabled
kubectl apply -n app1 -f samples/helloworld/service.yaml
kubectl apply -n app1 -f samples/helloworld/helloworld.yaml -l version=v2

3. Log in to the istio-pilot container in the istio-system namespace and run curl localhost:8080/v1/registration | grep helloworld -A 11 -B 2. If the output contains endpoints for both versions, then the v1 and v2 helloworld services are both registered in the Istio control plane. One way to run this check is sketched below.
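
A minimal sketch of that check, assuming the Pilot pod carries the istio=pilot label and its main container is named discovery (the defaults in the Istio 1.1 Helm chart; adjust to your installation):

PILOT_POD=$(kubectl get pod -n istio-system -l istio=pilot -o jsonpath='{.items[0].metadata.name}')
kubectl exec -n istio-system ${PILOT_POD} -c discovery -- \
  curl -s localhost:8080/v1/registration | grep helloworld -A 11 -B 2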

4. Verify that the sleep service in cluster 1 can invoke the helloworld service in both the local and the remote cluster. Execute the following command in cluster 1:

kubectl exec -it -n app1 $SLEEP_POD sh

Once inside the container, run curl helloworld.app1:5000/hello.

If everything is set up correctly, the responses returned include both versions of the helloworld service, and you can verify the endpoint IP addresses that were accessed by looking at the istio-proxy container log in the sleep pod, as sketched below.
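
A minimal sketch of both checks; the first command runs inside the sleep container, the second from your workstation, and the exact log contents depend on the proxy's access-log settings:

# inside the sleep container: call the service a few times to reach both versions
for i in $(seq 1 4); do curl -s helloworld.app1:5000/hello; done

# from outside the pod: inspect the sidecar log for the endpoints that were used
kubectl logs -n app1 $SLEEP_POD -c istio-proxy | grep helloworld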


Topics: Kubernetes network VPN