Istio pilot-agent & Envoy startup process

Posted by virtual_odin on Wed, 27 May 2020 13:02:34 +0200

Introduction

In the previous article, Istio Sidecar injection principle, we saw that the sidecar is injected at the moment the application deployment is submitted to Kubernetes.

If you look carefully, you will also notice that besides the istio-proxy container, an init container named istio-init is injected as well. Let's take a look at what each of these two injected containers does.

istio-init

The istio-init init container is used to set up iptables rules so that inbound/outbound traffic passes through the sidecar proxy. Init containers differ from application containers in the following ways:

  • They run before the application containers are started, and they run to completion.
  • If there are multiple init containers, each one must complete successfully before the next one starts.

Let's look at the pod for the sleep deployment:

kubectl describe pod sleep-54f94cbff5-jmwtf
Name:         sleep-54f94cbff5-jmwtf
Namespace:    default
Priority:     0
Node:         minikube/172.17.0.3
Start Time:   Wed, 27 May 2020 12:14:08 +0800
Labels:       app=sleep
              istio.io/rev=
              pod-template-hash=54f94cbff5
              security.istio.io/tlsMode=istio
Annotations:  sidecar.istio.io/interceptionMode: REDIRECT
              sidecar.istio.io/status:
                {"version":"d36ff46d2def0caba37f639f09514b17c4e80078f749a46aae84439790d2b560","initContainers":["istio-init"],"containers":["istio-proxy"]...
              traffic.sidecar.istio.io/excludeInboundPorts: 15020
              traffic.sidecar.istio.io/includeOutboundIPRanges: *
Status:       Running
IP:           172.18.0.11
IPs:
  IP:           172.18.0.11
Controlled By:  ReplicaSet/sleep-54f94cbff5
Init Containers:
  istio-init:
    Container ID:  docker://f5c88555b666c18e5aa343b3f452355f96d66dc4268fa306f93432e0f98c3950
    Image:         docker.io/istio/proxyv2:1.6.0
    Image ID:      docker-pullable://istio/proxyv2@sha256:821cc14ad9a29a2cafb9e351d42096455c868f3e628376f1d0e1763c3ce72ca6
    Port:          <none>
    Host Port:     <none>
    Args:
      istio-iptables
      -p
      15001
      -z
      15006
      -u
      1337
      -m
      REDIRECT
      -i
      *
      -x
      
      -b
      *
      -d
      15090,15021,15020
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 27 May 2020 12:14:12 +0800
      Finished:     Wed, 27 May 2020 12:14:13 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  50Mi
    Requests:
      cpu:     10m
      memory:  10Mi
    Environment:
      DNS_AGENT:  
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from sleep-token-zq2wv (ro)
Containers:
  sleep:
    Container ID:  docker://a5437e12f6ea25d828531ba0dc4fab78374d5e9f746b6a199c4ed03b5d53c8f7
    Image:         governmentpaas/curl-ssl
    Image ID:      docker-pullable://governmentpaas/curl-ssl@sha256:b8d0e024380e2a02b557601e370be6ceb8b56b64e80c3ce1c2bcbd24f5469a23
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sleep
      3650d
    State:          Running
      Started:      Wed, 27 May 2020 12:14:14 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /etc/sleep/tls from secret-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from sleep-token-zq2wv (ro)
  istio-proxy:
    Container ID:  docker://d03a43d3f257c057b664cf7ab03bcd301799a9e849da35fe54fdb0c9ea5516a4
    Image:         docker.io/istio/proxyv2:1.6.0
    Image ID:      docker-pullable://istio/proxyv2@sha256:821cc14ad9a29a2cafb9e351d42096455c868f3e628376f1d0e1763c3ce72ca6
    Port:          15090/TCP
    Host Port:     0/TCP
    Args:
      proxy
      sidecar
      --domain
      $(POD_NAMESPACE).svc.cluster.local
      --serviceCluster
      sleep.$(POD_NAMESPACE)
      --proxyLogLevel=warning
      --proxyComponentLogLevel=misc:error
      --trust-domain=cluster.local
      --concurrency
      2
    State:          Running
      Started:      Wed, 27 May 2020 12:14:17 +0800
    Ready:          True
    Restart Count:  0

As you can see from the output, the State of the istio-init container is Terminated and the Reason is Completed. Only two containers keep running: the main application container (curl-ssl) and the istio-proxy (proxyv2) container.

If we format the Args of istio-init, we find that it executes the following command:

istio-iptables -p 15001 -z 15006 -u 1337 -m REDIRECT -i * -x  -b * -d 15090,15021,15020

You can see that the entrypoint of the istio-init container is the istio-iptables command line, a binary compiled from Go. It invokes the iptables command to create a series of iptables rules that hijack the traffic in the Pod. Roughly: -p 15001 is the port outbound traffic is redirected to, -z 15006 is the port inbound traffic is redirected to, -u 1337 is the UID (the user Envoy runs as) whose traffic is left untouched, -m REDIRECT is the interception mode, -i and -b are the outbound IP ranges and inbound ports to include (* means all), and -x and -d are the ranges and ports to exclude. The source code entry of this command-line tool is tools/istio-iptables/main.go. Next, let's see which iptables rules it actually sets up.

This article uses minikube. Because the istio-init container exits once initialization is done, there is no way to log into it directly. However, the iptables rules it applies are visible from the other containers in the same Pod, since they share the same network namespace, so we can inspect the rules from there. The steps are as follows.

Enter minikube and switch to root:

minikube ssh
sudo -i

List the containers belonging to the sleep application:

docker ps | grep sleep

d03a43d3f257        istio/proxyv2              "/usr/local/bin/pilo..."   2 hours ago         Up 2 hours                              k8s_istio-proxy_slee-54f94cbff5-jmwtf_default_70c72535-cbfb-4201-af07-feb0948cc0c6_0
a5437e12f6ea        8c797666f87b               "/bin/sleep 3650d"       2 hours ago         Up 2 hours                              k8s_sleep_sleep-54f94cbff5-jmwtf_default_70c72535-cbfb-4201-af07-feb0948cc0c6_0
efdbb69b77c0        k8s.gcr.io/pause:3.2       "/pause"                 2 hours ago         Up 2 hours                              k8s_POD_sleep-54f94cbff5-jmwtf_default_70c72535-cbfb-4201-af07-feb0948cc0c6_0

If you enter one of these Docker containers directly and try to run iptables, you cannot read the rules, because the container does not have sufficient privileges:

iptables -t nat -L -v

iptables v1.6.1: can't initialize iptables table `nat': Permission denied (you must be root)
Perhaps iptables or your kernel needs to be upgraded.

Instead, look up the process ID of one of the Pod's containers and use nsenter to view the rules from its network namespace (see the nsenter man page for details). Here the process ID is 8533:

docker inspect efdbb69b77c0 --format '{{ .State.Pid }}'
8533

nsenter -t 8533 -n iptables -t nat -S

-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N ISTIO_INBOUND
-N ISTIO_IN_REDIRECT
-N ISTIO_OUTPUT
-N ISTIO_REDIRECT
-A PREROUTING -p tcp -j ISTIO_INBOUND
-A OUTPUT -p tcp -j ISTIO_OUTPUT
-A ISTIO_INBOUND -p tcp -m tcp --dport 22 -j RETURN
-A ISTIO_INBOUND -p tcp -m tcp --dport 15090 -j RETURN
-A ISTIO_INBOUND -p tcp -m tcp --dport 15021 -j RETURN
-A ISTIO_INBOUND -p tcp -m tcp --dport 15020 -j RETURN
-A ISTIO_INBOUND -p tcp -j ISTIO_IN_REDIRECT
-A ISTIO_IN_REDIRECT -p tcp -j REDIRECT --to-ports 15006
-A ISTIO_OUTPUT -s 127.0.0.6/32 -o lo -j RETURN
-A ISTIO_OUTPUT ! -d 127.0.0.1/32 -o lo -m owner --uid-owner 1337 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -m owner ! --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -m owner --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT ! -d 127.0.0.1/32 -o lo -m owner --gid-owner 1337 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -m owner ! --gid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -m owner --gid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -d 127.0.0.1/32 -j RETURN
-A ISTIO_OUTPUT -j ISTIO_REDIRECT
-A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001

Now view the detailed rule configuration in the nat table:

nsenter -t 8533 -n iptables -t nat -L -v
Chain PREROUTING (policy ACCEPT 3435 packets, 206K bytes)
 pkts bytes target     prot opt in     out     source               destination         
 3435  206K ISTIO_INBOUND  tcp  --  any    any     anywhere             anywhere            

Chain INPUT (policy ACCEPT 3435 packets, 206K bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 599 packets, 54757 bytes)
 pkts bytes target     prot opt in     out     source               destination         
   22  1320 ISTIO_OUTPUT  tcp  --  any    any     anywhere             anywhere            

Chain POSTROUTING (policy ACCEPT 599 packets, 54757 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain ISTIO_INBOUND (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 RETURN     tcp  --  any    any     anywhere             anywhere             tcp dpt:22
    1    60 RETURN     tcp  --  any    any     anywhere             anywhere             tcp dpt:15090
 3434  206K RETURN     tcp  --  any    any     anywhere             anywhere             tcp dpt:15021
    0     0 RETURN     tcp  --  any    any     anywhere             anywhere             tcp dpt:15020
    0     0 ISTIO_IN_REDIRECT  tcp  --  any    any     anywhere             anywhere            

Chain ISTIO_IN_REDIRECT (3 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 REDIRECT   tcp  --  any    any     anywhere             anywhere             redir ports 15006

Chain ISTIO_OUTPUT (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 RETURN     all  --  any    lo      127.0.0.6            anywhere            
    0     0 ISTIO_IN_REDIRECT  all  --  any    lo      anywhere            !localhost            owner UID match 1337
    0     0 RETURN     all  --  any    lo      anywhere             anywhere             ! owner UID match 1337
   22  1320 RETURN     all  --  any    any     anywhere             anywhere             owner UID match 1337
    0     0 ISTIO_IN_REDIRECT  all  --  any    lo      anywhere            !localhost            owner GID match 1337
    0     0 RETURN     all  --  any    lo      anywhere             anywhere             ! owner GID match 1337
    0     0 RETURN     all  --  any    any     anywhere             anywhere             owner GID match 1337
    0     0 RETURN     all  --  any    any     anywhere             localhost           
    0     0 ISTIO_REDIRECT  all  --  any    any     anywhere             anywhere            

Chain ISTIO_REDIRECT (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 REDIRECT   tcp  --  any    any     anywhere             anywhere             redir ports 15001

To summarize the dump above: all inbound TCP traffic, except for ports 22, 15090, 15021 and 15020, is redirected to Envoy's inbound port 15006, while all locally generated (outbound) TCP traffic, except for traffic destined to 127.0.0.1, most other loopback traffic, and traffic generated by Envoy itself (UID/GID 1337), is redirected to Envoy's outbound port 15001. For the iptables syntax itself, please refer to the iptables command documentation.
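
To make the ISTIO_OUTPUT chain easier to follow, here is a small Go model that walks the rules above in order and reports what happens to a locally generated packet. This is only an illustration of the dump, with hypothetical names such as decision and viaLoopback; it is not code from Istio.

package main

import "fmt"

// decision walks the ISTIO_OUTPUT chain shown above, in rule order, and
// returns the action taken for an outbound packet. Simplified model:
// srcIP/dstIP of the packet, uid/gid of the sending process, and whether
// the packet goes out via the loopback interface.
func decision(srcIP, dstIP string, uid, gid int, viaLoopback bool) string {
	switch {
	case viaLoopback && srcIP == "127.0.0.6":
		return "RETURN (inbound passthrough traffic, leave it alone)"
	case viaLoopback && dstIP != "127.0.0.1" && uid == 1337:
		return "ISTIO_IN_REDIRECT (to Envoy inbound port 15006)"
	case viaLoopback && uid != 1337:
		return "RETURN (application talking to itself over loopback)"
	case uid == 1337:
		return "RETURN (Envoy's own outbound traffic, avoids a loop)"
	case viaLoopback && dstIP != "127.0.0.1" && gid == 1337:
		return "ISTIO_IN_REDIRECT (to Envoy inbound port 15006)"
	case viaLoopback && gid != 1337:
		return "RETURN"
	case gid == 1337:
		return "RETURN"
	case dstIP == "127.0.0.1":
		return "RETURN (explicit localhost traffic)"
	default:
		return "ISTIO_REDIRECT (to Envoy outbound port 15001)"
	}
}

func main() {
	// The application (uid 1000) calling an external service: redirected to Envoy.
	fmt.Println(decision("172.18.0.11", "10.96.0.10", 1000, 1000, false))
	// Envoy itself (uid 1337) forwarding that request upstream: left untouched.
	fmt.Println(decision("172.18.0.11", "10.96.0.10", 1337, 1337, false))
}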

Now let's look back at the corresponding Go source code.

tools/istio-iptables/pkg/constants/constants.go

// Constants for iptables commands
const (
	IPTABLES         = "iptables"
	IPTABLESRESTORE  = "iptables-restore"
	IPTABLESSAVE     = "iptables-save"
	IP6TABLES        = "ip6tables"
	IP6TABLESRESTORE = "ip6tables-restore"
	IP6TABLESSAVE    = "ip6tables-save"
	IP               = "ip"
)

// iptables tables
const (
	MANGLE = "mangle"
	NAT    = "nat"
	FILTER = "filter"
)

// Built-in iptables chains
const (
	INPUT       = "INPUT"
	OUTPUT      = "OUTPUT"
	FORWARD     = "FORWARD"
	PREROUTING  = "PREROUTING"
	POSTROUTING = "POSTROUTING"
)

......

tools/istio-iptables/pkg/cmd/root.go

var rootCmd = &cobra.Command{
	Use:   "istio-iptables",
	Short: "Set up iptables rules for Istio Sidecar",
	Long:  "Script responsible for setting up port forwarding for Istio sidecar.",
	Run: func(cmd *cobra.Command, args []string) {
		cfg := constructConfig()
		var ext dep.Dependencies
		if cfg.DryRun {
			ext = &dep.StdoutStubDependencies{}
		} else {
			ext = &dep.RealDependencies{}
		}

		iptConfigurator := NewIptablesConfigurator(cfg, ext)
		if !cfg.SkipRuleApply {
            // Entry to rule execution
			iptConfigurator.run()
		}
	},
}
func (iptConfigurator *IptablesConfigurator) run() {
	
	iptConfigurator.logConfig()

	// ... (a long section is omitted here)

	// Create a new chain for redirecting outbound traffic to the common Envoy port.
	// In both chains, '-j RETURN' bypasses Envoy and '-j ISTIOREDIRECT'
	// redirects to Envoy.
	iptConfigurator.iptables.AppendRuleV4(
		constants.ISTIOREDIRECT, constants.NAT, "-p", constants.TCP, "-j", constants.REDIRECT, "--to-ports", iptConfigurator.cfg.ProxyPort)
	// Use this chain also for redirecting inbound traffic to the common Envoy port
	// when not using TPROXY.

	iptConfigurator.iptables.AppendRuleV4(constants.ISTIOINREDIRECT, constants.NAT, "-p", constants.TCP, "-j", constants.REDIRECT,
		"--to-ports", iptConfigurator.cfg.InboundCapturePort)

	iptConfigurator.handleInboundPortsInclude()

	// TODO: change the default behavior to not intercept any output - user may use http_proxy or another
	// iptablesOrFail wrapper (like ufw). Current default is similar with 0.1
	// Jump to the ISTIOOUTPUT chain from OUTPUT chain for all tcp traffic.
	iptConfigurator.iptables.AppendRuleV4(constants.OUTPUT, constants.NAT, "-p", constants.TCP, "-j", constants.ISTIOOUTPUT)
	// Apply port based exclusions. Must be applied before connections back to self are redirected.
	if iptConfigurator.cfg.OutboundPortsExclude != "" {
		for _, port := range split(iptConfigurator.cfg.OutboundPortsExclude) {
			iptConfigurator.iptables.AppendRuleV4(constants.ISTIOOUTPUT, constants.NAT, "-p", constants.TCP, "--dport", port, "-j", constants.RETURN)
		}
	}

	// 127.0.0.6 is bind connect from inbound passthrough cluster
	iptConfigurator.iptables.AppendRuleV4(constants.ISTIOOUTPUT, constants.NAT, "-o", "lo", "-s", "127.0.0.6/32", "-j", constants.RETURN)

	
	// Skip redirection for Envoy-aware applications and
	// container-to-container traffic both of which explicitly use
	// localhost.
	iptConfigurator.iptables.AppendRuleV4(constants.ISTIOOUTPUT, constants.NAT, "-d", "127.0.0.1/32", "-j", constants.RETURN)
	// Apply outbound IPv4 exclusions. Must be applied before inclusions.
	for _, cidr := range ipv4RangesExclude.IPNets {
		iptConfigurator.iptables.AppendRuleV4(constants.ISTIOOUTPUT, constants.NAT, "-d", cidr.String(), "-j", constants.RETURN)
	}
    
    // ... (a long section is omitted here)
    
    // How to actually execute iptables
	iptConfigurator.executeCommands()
}

The iptConfigurator.executeCommands() call can ultimately be traced to tools/istio-iptables/pkg/dependencies/implementation.go, where you can see that Go's exec.Command is used to execute the OS-level iptables command.

func (r *RealDependencies) execute(cmd string, redirectStdout bool, args ...string) error {
	//Execute the real iptables command
	externalCommand := exec.Command(cmd, args...)
	externalCommand.Stdout = os.Stdout
	//TODO Check naming and redirection logic
	if !redirectStdout {
		externalCommand.Stderr = os.Stderr
	}
	return externalCommand.Run()
}
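
Putting these pieces together: rootCmd builds the config and chooses either the dry-run stub or the real executor, run() assembles the rules, and execute() finally shells out to iptables with a plain argument list. Below is a compressed, self-contained sketch of that flow. The names (Dependencies, appendRule, RunIptables) are simplified stand-ins, not Istio's actual types, and only two of the rules are shown.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// Dependencies mirrors the idea behind dep.Dependencies: rules can either be
// printed (dry run) or really applied.
type Dependencies interface {
	RunIptables(args ...string) error
}

// stubExec only prints the command, like dep.StdoutStubDependencies.
type stubExec struct{}

func (stubExec) RunIptables(args ...string) error {
	fmt.Println("iptables", strings.Join(args, " "))
	return nil
}

// realExec executes the command, like dep.RealDependencies above.
type realExec struct{}

func (realExec) RunIptables(args ...string) error {
	c := exec.Command("iptables", args...)
	c.Stdout, c.Stderr = os.Stdout, os.Stderr
	return c.Run()
}

// appendRule mimics AppendRuleV4: "-t <table> -A <chain> <params...>".
func appendRule(ext Dependencies, chain, table string, params ...string) error {
	return ext.RunIptables(append([]string{"-t", table, "-A", chain}, params...)...)
}

func main() {
	dryRun := true // corresponds to cfg.DryRun
	var ext Dependencies
	if dryRun {
		ext = stubExec{}
	} else {
		ext = realExec{} // needs root/NET_ADMIN, as in the istio-init container
	}

	// Two of the rules visible in the nat table dump earlier.
	rules := [][]string{
		{"ISTIO_REDIRECT", "-p", "tcp", "-j", "REDIRECT", "--to-ports", "15001"},
		{"ISTIO_IN_REDIRECT", "-p", "tcp", "-j", "REDIRECT", "--to-ports", "15006"},
	}
	for _, r := range rules {
		if err := appendRule(ext, r[0], "nat", r[1:]...); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}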

After these commands have been executed, istio-init has completed its mission.

The iptables traffic-interception details will be covered in a separate article.

istio-proxy

From the pod description at the beginning we can also see the istio-proxy container:

    Image:         docker.io/istio/proxyv2:1.6.0
    Image ID:      docker-pullable://istio/proxyv2@sha256:821cc14ad9a29a2cafb9e351d42096455c868f3e628376f1d0e1763c3ce72ca6
    Port:          15090/TCP
    Host Port:     0/TCP
    Args:
      proxy
      sidecar
      --domain
      $(POD_NAMESPACE).svc.cluster.local
      --serviceCluster
      sleep.$(POD_NAMESPACE)
      --proxyLogLevel=warning
      --proxyComponentLogLevel=misc:error
      --trust-domain=cluster.local
      --concurrency
      2
    State:          Running

We can inspect the image on Docker Hub: https://hub.docker.com/r/istio/proxyv2/tags

Let's take a look at the Dockerfile behind the 1.6.0 image. In the Istio source code it is located at pilot/docker/Dockerfile.proxyv2:

ADD file:c3e6bb316dfa6b81dd4478aaa310df532883b1c0a14edeec3f63d641980c1789 in /

/bin/sh -c [ -z "$(apt-get indextargets)" ]
/bin/sh -c mkdir -p /run/systemd && echo 'docker' > /run/systemd/container
CMD ["/bin/bash"]
ENV DEBIAN_FRONTEND=noninteractive

# ... (a long section is omitted here)
COPY envoy /usr/local/bin/envoy
COPY pilot-agent /usr/local/bin/pilot-agent

ENTRYPOINT ["/usr/local/bin/pilot-agent"]

We can see that the envoy and pilot-agent binaries are copied into the proxyv2 image, and pilot-agent is the container entrypoint. Combining this with the container arguments shown earlier, we get the following command:

pilot-agent proxy sidecar --domain default.svc.cluster.local --serviceCluster sleep.default --proxyLogLevel=warning --proxyComponentLogLevel=misc:error --trust-domain=cluster.local --concurrency 2

So what does this command do once it runs? Following the same steps as above:

minikube ssh
sudo -i
docker ps |grep sleep

d03a43d3f257        istio/proxyv2              "/usr/local/bin/pilo..."   3 hours ago         Up 3 hours                              k8s_istio-proxy_slee-54f94cbff5-jmwtf_default_70c72535-cbfb-4201-af07-feb0948cc0c6_0
a5437e12f6ea        8c797666f87b               "/bin/sleep 3650d"       3 hours ago         Up 3 hours                              k8s_sleep_sleep-54f94cbff5-jmwtf_default_70c72535-cbfb-4201-af07-feb0948cc0c6_0
efdbb69b77c0        k8s.gcr.io/pause:3.2       "/pause"                 3 hours ago         Up 3 hours                              k8s_POD_sleep-54f94cbff5-jmwtf_default_70c72535-cbfb-4201-af07-feb0948cc0c6_0

This time we enter the proxyv2 container d03a43d3f257 and look at the processes running inside it:

docker exec -it d03a43d3f257 /bin/bash
ps -ef | grep sleep

UID        PID  PPID  C STIME TTY          TIME CMD
istio-p+     1     0  0 04:14 ?        00:00:06 /usr/local/bin/pilot-agent proxy sidecar --domain default.svc.cluster.local --serviceCluster sleep.default --proxyLogLevel=warning --proxyComponentLogLevel=misc:error --trust-domain=cluster.local --concurrency 2

istio-p+    17     1  0 04:14 ?        00:00:26 /usr/local/bin/envoy -c etc/istio/proxy/envoy-rev0.json --restart-epoch 0 --drain-time-s 45 --parent-shutdown-time-s 60 --service-cluster sleep.default --service-node sidecar~172.18.0.11~sleep-54f94cbff5-jmwtf.default~default.svc.cluster.local --max-obj-name-len 189 --local-address-ip-version v4 --log-format %Y-%m-%dT%T.%fZ.%l.envoy %n.%v -l warning --component-log-level misc:error --concurrency 2

Looking at the PID and PPID columns, we can see that the envoy process is started by pilot-agent: envoy's parent process (PPID 1) is the pilot-agent process.
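
This parent/child relationship is simply pilot-agent launching Envoy with exec.Command and then waiting on it. The sketch below shows the bare supervision pattern; it is a simplification and not Istio's actual agent code, which additionally handles aborts and hot-restart epochs.

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// pilot-agent is PID 1 inside the istio-proxy container, so anything it
	// starts, such as Envoy, shows up with PPID 1 as in the ps output above.
	cmd := exec.Command("/usr/local/bin/envoy", "-c", "/etc/istio/proxy/envoy-rev0.json")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr

	if err := cmd.Start(); err != nil {
		log.Fatalf("failed to start envoy: %v", err)
	}

	// Report the exit status on a channel, similar in spirit to the
	// runWait/statusCh pattern in the agent code shown later.
	status := make(chan error, 1)
	go func() { status <- cmd.Wait() }()

	if err := <-status; err != nil {
		log.Printf("envoy exited with error: %v", err)
	}
}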

The source code entry of the pilot-agent command is in pilot/cmd/pilot-agent/main.go; the usage of the command is documented in the pilot-agent command reference.

proxyCmd = &cobra.Command{
		Use:   "proxy",
		Short: "Envoy proxy agent",
		RunE: func(c *cobra.Command, args []string) error {
			// ... (a long section is omitted here)

			proxyConfig, err := constructProxyConfig()
			if out, err := gogoprotomarshal.ToYAML(&proxyConfig); err != nil {
				log.Infof("Failed to serialize to YAML: %v", err)
			} else {
				log.Infof("Effective config: %s", out)
			}

			// ... (a long section is omitted here)

			envoyProxy := envoy.NewProxy(envoy.ProxyConfig{
				Config:              proxyConfig,
				Node:                role.ServiceNode(),
				LogLevel:            proxyLogLevel,
				ComponentLogLevel:   proxyComponentLogLevel,
				PilotSubjectAltName: pilotSAN,
				MixerSubjectAltName: mixerSAN,
				NodeIPs:             role.IPAddresses,
				PodName:             podName,
				PodNamespace:        podNamespace,
				PodIP:               podIP,
				STSPort:             stsPort,
				ControlPlaneAuth:    proxyConfig.ControlPlaneAuthPolicy == meshconfig.AuthenticationPolicy_MUTUAL_TLS,
				DisableReportCalls:  disableInternalTelemetry,
				OutlierLogPath:      outlierLogPath,
				PilotCertProvider:   pilotCertProvider,
				ProvCert:            citadel.ProvCert,
			})

			agent := envoy.NewAgent(envoyProxy, features.TerminationDrainDuration())

			// Watch the TLS certificates and (re)start envoy when they change; the startup logic is in agent.Restart
			watcher := envoy.NewWatcher(tlsCerts, agent.Restart)
			go watcher.Run(ctx)

			return agent.Run(ctx)
		},
	}
)
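
Note the envoy.NewWatcher(tlsCerts, agent.Restart) line: the watcher monitors the certificate files and calls agent.Restart when they change, which is what drives new Envoy epochs. Below is a very rough polling sketch of that idea; hashFiles, watch, and the certificate paths are hypothetical, and Istio's real watcher is implemented differently.

package main

import (
	"crypto/sha256"
	"fmt"
	"os"
	"time"
)

// hashFiles returns a combined hash of the watched files (e.g. TLS certs).
func hashFiles(paths []string) [sha256.Size]byte {
	h := sha256.New()
	for _, p := range paths {
		if b, err := os.ReadFile(p); err == nil {
			h.Write(b)
		}
	}
	var sum [sha256.Size]byte
	copy(sum[:], h.Sum(nil))
	return sum
}

// watch polls the files and invokes restart whenever their contents change.
func watch(paths []string, interval time.Duration, restart func()) {
	last := hashFiles(paths)
	restart() // kick off the first epoch; the real watcher also triggers the initial start
	for range time.Tick(interval) {
		if cur := hashFiles(paths); cur != last {
			last = cur
			restart()
		}
	}
}

func main() {
	certs := []string{"/etc/certs/cert-chain.pem", "/etc/certs/key.pem"} // example paths
	watch(certs, 10*time.Second, func() {
		fmt.Println("certs changed, would call agent.Restart and start a new Envoy epoch")
	})
}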

The agent.Restart method:

func (a *agent) Restart(config interface{}) {
	// Only one restart is allowed to execute at a time
	a.restartMutex.Lock()
	defer a.restartMutex.Unlock()

	if reflect.DeepEqual(a.currentConfig, config) {
		// If there is no change to the configuration file, do nothing and exit directly
		a.mutex.Unlock()
		return
	}

	// The configuration has changed: increment the epoch number and create a new envoy instance
	epoch := a.currentEpoch + 1
	log.Infof("Received new config, creating new Envoy epoch %d", epoch)
    
    // Start the new envoy epoch in a separate goroutine
	go a.runWait(config, epoch, abortCh)
}

The a.runWait method called above:

func (a *agent) runWait(config interface{}, epoch int, abortCh <-chan error) {
	// Run the proxy (envoy) instance directly and block until it exits
	err := a.proxy.Run(config, epoch, abortCh)
	a.proxy.Cleanup(epoch)
	a.statusCh <- exitStatus{epoch: epoch, err: err}
}

The proxy.Run method:

func (e *envoy) Run(config interface{}, epoch int, abort <-chan error) error {
	var fname string
	// If the startup parameters specify a custom config file, use it; otherwise generate the default one
	if len(e.Config.CustomConfigFile) > 0 {
		fname = e.Config.CustomConfigFile
	} else {
        // Generate the /etc/istio/proxy/envoy-rev0.json bootstrap config file required to start envoy.
        // The 0 (the epoch) increases with each restart, but only the file name changes; the configuration content stays the same
		out, err := bootstrap.New(bootstrap.Config{
			Node:                e.Node,
			Proxy:               &e.Config,
			PilotSubjectAltName: e.PilotSubjectAltName,
			MixerSubjectAltName: e.MixerSubjectAltName,
			LocalEnv:            os.Environ(),
			NodeIPs:             e.NodeIPs,
			PodName:             e.PodName,
			PodNamespace:        e.PodNamespace,
			PodIP:               e.PodIP,
			STSPort:             e.STSPort,
			ControlPlaneAuth:    e.ControlPlaneAuth,
			DisableReportCalls:  e.DisableReportCalls,
			OutlierLogPath:      e.OutlierLogPath,
			PilotCertProvider:   e.PilotCertProvider,
			ProvCert:            e.ProvCert,
		}).CreateFileForEpoch(epoch)
		fname = out
	}
    
    // ... (a long section is omitted here)

	// Arguments required to start envoy,
	// i.e. --restart-epoch 0 --drain-time-s 45 --parent-shutdown-time-s 60 ...
	args := e.args(fname, epoch, istioBootstrapOverrideVar.Get())

	// A familiar pattern: call a system command to start envoy.
	// e.Config.BinaryPath defaults to /usr/local/bin/envoy;
	// see pkg/config/constants/constants.go for the related default constants
	cmd := exec.Command(e.Config.BinaryPath, args...)
	
    // ... (a long section is omitted here)
}
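
To make the epoch naming and the resulting command line concrete, here is a small sketch that reproduces the bootstrap file name generated per epoch and, approximately, the arguments visible in the ps output earlier. bootstrapPath and envoyArgs are hypothetical helpers; the drain and shutdown values are the defaults seen above.

package main

import (
	"fmt"
	"strings"
)

// bootstrapPath mimics the naming of the generated bootstrap config:
// /etc/istio/proxy/envoy-rev0.json, envoy-rev1.json, ... one per restart epoch.
func bootstrapPath(dir string, epoch int) string {
	return fmt.Sprintf("%s/envoy-rev%d.json", dir, epoch)
}

// envoyArgs assembles roughly the flags seen in the ps output for the envoy process.
func envoyArgs(configPath string, epoch int, serviceCluster string, concurrency int) []string {
	return []string{
		"-c", configPath,
		"--restart-epoch", fmt.Sprint(epoch),
		"--drain-time-s", "45",
		"--parent-shutdown-time-s", "60",
		"--service-cluster", serviceCluster,
		"--concurrency", fmt.Sprint(concurrency),
	}
}

func main() {
	epoch := 0
	cfg := bootstrapPath("/etc/istio/proxy", epoch)
	fmt.Println("/usr/local/bin/envoy", strings.Join(envoyArgs(cfg, epoch, "sleep.default", 2), " "))
}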

In fact, the whole startup process is quite complex; what we covered here is only the most basic flow of starting envoy. It also includes:

  1. Starting SDS (Secret Discovery Service)

  2. Starting the metrics polling service

  3. Watching for configuration updates and hot-restarting envoy

  4. Receiving the system kill signal and shutting down the envoy process gracefully (a minimal sketch of this pattern follows below)
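
For item 4, the usual Go pattern applies: trap SIGTERM/SIGINT, allow a drain period (Istio uses a configurable termination drain duration, features.TerminationDrainDuration above), then stop the Envoy child process. The sketch below illustrates the pattern only; it is not the agent's actual shutdown code, and the 5-second drain is an arbitrary example value.

package main

import (
	"log"
	"os"
	"os/exec"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	// Stand-in for the envoy child process started by the agent.
	cmd := exec.Command("/usr/local/bin/envoy", "-c", "/etc/istio/proxy/envoy-rev0.json")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Start(); err != nil {
		log.Fatalf("failed to start envoy: %v", err)
	}

	// Trap the termination signal sent by Kubernetes when the pod is deleted.
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT)
	<-sigs

	// Drain period: let in-flight requests finish before stopping Envoy.
	drain := 5 * time.Second
	log.Printf("termination signal received, draining for %s", drain)
	time.Sleep(drain)

	// Ask Envoy to exit and wait for it; the real agent aborts the epoch and waits too.
	_ = cmd.Process.Signal(syscall.SIGTERM)
	_ = cmd.Wait()
}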

Application container

The application container starts exactly as it would outside the mesh. It has no dependency on Istio other than the protocols it uses: as long as the application speaks a protocol supported by Istio, its traffic can be intercepted and managed by Istio. This is the strength of Istio. Currently, Istio supports automatic load balancing for HTTP, gRPC, WebSocket, and TCP traffic.

References

https://istio.io/zh/blog/2019/data-plane-setup/#traffic-flow-from-application-container-to-sidecar-proxy

https://jimmysong.io/blog/sidecar-injection-iptables-and-traffic-routing/

https://preliminary.istio.io/zh/docs/reference/commands/pilot-agent/
