Introduction
The previous article, Istio Sidecar Injection Internals, showed that when an application is submitted to Kubernetes for deployment, a sidecar is injected alongside it. If you looked closely, you may also have noticed that besides the istio-proxy container, an istio-init init container is injected as well. Let's take a look at what each of these two injected containers actually does.
istio-init
The istio-init init container sets up iptables rules so that inbound and outbound traffic is routed through the sidecar proxy. An init container differs from an application container in the following ways:
- It runs before the application containers start, and it runs until it completes.
- If there are multiple init containers, each must complete successfully before the next one starts.
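Where does this show up in the Kubernetes API? Below is a minimal client-go sketch (purely illustrative; the kubeconfig path and pod name are assumptions matching the pod we inspect next) that prints each init container's terminal state. For istio-init it must report Terminated/Completed with exit code 0 before the app containers are started.

```go
package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Build a clientset from a local kubeconfig (path is an assumption).
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }
    // Read the pod and report how each init container finished.
    pod, err := clientset.CoreV1().Pods("default").Get(
        context.TODO(), "sleep-54f94cbff5-jmwtf", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    for _, s := range pod.Status.InitContainerStatuses {
        // For istio-init this prints: istio-init Reason=Completed ExitCode=0
        if t := s.State.Terminated; t != nil {
            fmt.Printf("%s Reason=%s ExitCode=%d\n", s.Name, t.Reason, t.ExitCode)
        }
    }
}
```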
Let's look at the pod of the sleep application:
kubectl describe pod sleep-54f94cbff5-jmwtf
Name: sleep-54f94cbff5-jmwtf
Namespace: default
Priority: 0
Node: minikube/172.17.0.3
Start Time: Wed, 27 May 2020 12:14:08 +0800
Labels: app=sleep
istio.io/rev=
pod-template-hash=54f94cbff5
security.istio.io/tlsMode=istio
Annotations: sidecar.istio.io/interceptionMode: REDIRECT
sidecar.istio.io/status:
{"version":"d36ff46d2def0caba37f639f09514b17c4e80078f749a46aae84439790d2b560","initContainers":["istio-init"],"containers":["istio-proxy"]...
traffic.sidecar.istio.io/excludeInboundPorts: 15020
traffic.sidecar.istio.io/includeOutboundIPRanges: *
Status: Running
IP: 172.18.0.11
IPs:
IP: 172.18.0.11
Controlled By: ReplicaSet/sleep-54f94cbff5
Init Containers:
istio-init:
Container ID: docker://f5c88555b666c18e5aa343b3f452355f96d66dc4268fa306f93432e0f98c3950
Image: docker.io/istio/proxyv2:1.6.0
Image ID: docker-pullable://istio/proxyv2@sha256:821cc14ad9a29a2cafb9e351d42096455c868f3e628376f1d0e1763c3ce72ca6
Port: <none>
Host Port: <none>
Args:
istio-iptables
-p
15001
-z
15006
-u
1337
-m
REDIRECT
-i
*
-x
-b
*
-d
15090,15021,15020
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 27 May 2020 12:14:12 +0800
Finished: Wed, 27 May 2020 12:14:13 +0800
Ready: True
Restart Count: 0
Limits:
cpu: 100m
memory: 50Mi
Requests:
cpu: 10m
memory: 10Mi
Environment:
DNS_AGENT:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from sleep-token-zq2wv (ro)
Containers:
sleep:
Container ID: docker://a5437e12f6ea25d828531ba0dc4fab78374d5e9f746b6a199c4ed03b5d53c8f7
Image: governmentpaas/curl-ssl
Image ID: docker-pullable://governmentpaas/curl-ssl@sha256:b8d0e024380e2a02b557601e370be6ceb8b56b64e80c3ce1c2bcbd24f5469a23
Port: <none>
Host Port: <none>
Command:
/bin/sleep
3650d
State: Running
Started: Wed, 27 May 2020 12:14:14 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/etc/sleep/tls from secret-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from sleep-token-zq2wv (ro)
istio-proxy:
Container ID: docker://d03a43d3f257c057b664cf7ab03bcd301799a9e849da35fe54fdb0c9ea5516a4
Image: docker.io/istio/proxyv2:1.6.0
Image ID: docker-pullable://istio/proxyv2@sha256:821cc14ad9a29a2cafb9e351d42096455c868f3e628376f1d0e1763c3ce72ca6
Port: 15090/TCP
Host Port: 0/TCP
Args:
proxy
sidecar
--domain
$(POD_NAMESPACE).svc.cluster.local
--serviceCluster
sleep.$(POD_NAMESPACE)
--proxyLogLevel=warning
--proxyComponentLogLevel=misc:error
--trust-domain=cluster.local
--concurrency
2
State: Running
Started: Wed, 27 May 2020 12:14:17 +0800
Ready: True
Restart Count: 0
From this output we can see that the State of the istio-init container is Terminated, and the Reason is Completed. Only two containers are left running: the main application container (the curl-ssl image) and the istio-proxy container.
Formatting the Args of istio-init, we find that it executed the following command:
istio-iptables -p 15001 -z 15006 -u 1337 -m REDIRECT -i * -x -b * -d 15090,15021,15020
The istio-init container's entrypoint is the istio-iptables command-line tool, a Go binary that invokes the iptables command to create a series of rules that hijack the Pod's traffic. Reading the flags above: -p 15001 is the port outbound traffic is redirected to, -z 15006 the port inbound traffic is redirected to, -u 1337 the UID the proxy runs under, -m REDIRECT the interception mode, -i '*' the outbound IP ranges to include, -x the outbound ranges to exclude (empty here), -b '*' the inbound ports to include, and -d 15090,15021,15020 the inbound ports to exclude. The tool's source entrypoint is tools/istio-iptables/main.go. Next, let's see which iptables rules it actually installs.
This article runs on minikube. Because the istio-init container exits as soon as initialization completes, there is no way to log into it directly. However, the iptables rules it applied are visible from the other containers in the same Pod (they share a network namespace), so we can log into one of those containers and inspect the rules. Run the following commands:
Enter minikube and switch to the root user
minikube ssh
sudo -i
Find the containers belonging to the sleep application
docker ps | grep sleep
d03a43d3f257 istio/proxyv2 "/usr/local/bin/pilo…" 2 hours ago Up 2 hours k8s_istio-proxy_sleep-54f94cbff5-jmwtf_default_70c72535-cbfb-4201-af07-feb0948cc0c6_0
a5437e12f6ea 8c797666f87b "/bin/sleep 3650d" 2 hours ago Up 2 hours k8s_sleep_sleep-54f94cbff5-jmwtf_default_70c72535-cbfb-4201-af07-feb0948cc0c6_0
efdbb69b77c0 k8s.gcr.io/pause:3.2 "/pause" 2 hours ago Up 2 hours k8s_POD_sleep-54f94cbff5-jmwtf_default_70c72535-cbfb-4201-af07-feb0948cc0c6_0
Pick one of the containers listed above and look up its process ID on the host; here it is 8533 (obtained with docker inspect below). Note that simply exec-ing into the container and running iptables there fails, because the container lacks the required privileges:
iptables -t nat -L -v
iptables v1.6.1: can't initialize iptables table `nat': Permission denied (you must be root)
Perhaps iptables or your kernel needs to be upgraded.
We need nsenter to enter the namespace with root privileges and view the rules (see the nsenter documentation for details):
docker inspect efdbb69b77c0 --format '{{ .State.Pid }}'
8533
nsenter -t 8533 -n iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N ISTIO_INBOUND
-N ISTIO_IN_REDIRECT
-N ISTIO_OUTPUT
-N ISTIO_REDIRECT
-A PREROUTING -p tcp -j ISTIO_INBOUND
-A OUTPUT -p tcp -j ISTIO_OUTPUT
-A ISTIO_INBOUND -p tcp -m tcp --dport 22 -j RETURN
-A ISTIO_INBOUND -p tcp -m tcp --dport 15090 -j RETURN
-A ISTIO_INBOUND -p tcp -m tcp --dport 15021 -j RETURN
-A ISTIO_INBOUND -p tcp -m tcp --dport 15020 -j RETURN
-A ISTIO_INBOUND -p tcp -j ISTIO_IN_REDIRECT
-A ISTIO_IN_REDIRECT -p tcp -j REDIRECT --to-ports 15006
-A ISTIO_OUTPUT -s 127.0.0.6/32 -o lo -j RETURN
-A ISTIO_OUTPUT ! -d 127.0.0.1/32 -o lo -m owner --uid-owner 1337 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -m owner ! --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -m owner --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT ! -d 127.0.0.1/32 -o lo -m owner --gid-owner 1337 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -m owner ! --gid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -m owner --gid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -d 127.0.0.1/32 -j RETURN
-A ISTIO_OUTPUT -j ISTIO_REDIRECT
-A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001
View the detailed rule configuration in the NAT table:
nsenter -t 8533 -n iptables -t nat -L -v
Chain PREROUTING (policy ACCEPT 3435 packets, 206K bytes)
pkts bytes target prot opt in out source destination
3435 206K ISTIO_INBOUND tcp -- any any anywhere anywhere
Chain INPUT (policy ACCEPT 3435 packets, 206K bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 599 packets, 54757 bytes)
pkts bytes target prot opt in out source destination
22 1320 ISTIO_OUTPUT tcp -- any any anywhere anywhere
Chain POSTROUTING (policy ACCEPT 599 packets, 54757 bytes)
pkts bytes target prot opt in out source destination
Chain ISTIO_INBOUND (1 references)
pkts bytes target prot opt in out source destination
0 0 RETURN tcp -- any any anywhere anywhere tcp dpt:22
1 60 RETURN tcp -- any any anywhere anywhere tcp dpt:15090
3434 206K RETURN tcp -- any any anywhere anywhere tcp dpt:15021
0 0 RETURN tcp -- any any anywhere anywhere tcp dpt:15020
0 0 ISTIO_IN_REDIRECT tcp -- any any anywhere anywhere
Chain ISTIO_IN_REDIRECT (3 references)
pkts bytes target prot opt in out source destination
0 0 REDIRECT tcp -- any any anywhere anywhere redir ports 15006
Chain ISTIO_OUTPUT (1 references)
pkts bytes target prot opt in out source destination
0 0 RETURN all -- any lo 127.0.0.6 anywhere
0 0 ISTIO_IN_REDIRECT all -- any lo anywhere !localhost owner UID match 1337
0 0 RETURN all -- any lo anywhere anywhere ! owner UID match 1337
22 1320 RETURN all -- any any anywhere anywhere owner UID match 1337
0 0 ISTIO_IN_REDIRECT all -- any lo anywhere !localhost owner GID match 1337
0 0 RETURN all -- any lo anywhere anywhere ! owner GID match 1337
0 0 RETURN all -- any any anywhere anywhere owner GID match 1337
0 0 RETURN all -- any any anywhere localhost
0 0 ISTIO_REDIRECT all -- any any anywhere anywhere
Chain ISTIO_REDIRECT (1 references)
pkts bytes target prot opt in out source destination
0 0 REDIRECT tcp -- any any anywhere anywhere redir ports 15001
Reading the chains together: inbound TCP hits PREROUTING → ISTIO_INBOUND and, except for ports 22, 15090, 15021 and 15020, is redirected to port 15006 (envoy's inbound listener); outbound TCP hits OUTPUT → ISTIO_OUTPUT and, except for loopback traffic and packets owned by UID/GID 1337 (the proxy itself, whose own traffic must not be looped back into it), is redirected to port 15001 (envoy's outbound listener). For the rule syntax itself, refer to the iptables command documentation.
With that behavior in mind, let's go back to the corresponding Go source code.
tools/istio-iptables/pkg/constants/constants.go
// Constants for iptables commands
const (
    IPTABLES         = "iptables"
    IPTABLESRESTORE  = "iptables-restore"
    IPTABLESSAVE     = "iptables-save"
    IP6TABLES        = "ip6tables"
    IP6TABLESRESTORE = "ip6tables-restore"
    IP6TABLESSAVE    = "ip6tables-save"
    IP               = "ip"
)

// iptables tables
const (
    MANGLE = "mangle"
    NAT    = "nat"
    FILTER = "filter"
)

// Built-in iptables chains
const (
    INPUT       = "INPUT"
    OUTPUT      = "OUTPUT"
    FORWARD     = "FORWARD"
    PREROUTING  = "PREROUTING"
    POSTROUTING = "POSTROUTING"
)
......
tools/istio-iptables/pkg/cmd/root.go
var rootCmd = &cobra.Command{
    Use:   "istio-iptables",
    Short: "Set up iptables rules for Istio Sidecar",
    Long:  "Script responsible for setting up port forwarding for Istio sidecar.",
    Run: func(cmd *cobra.Command, args []string) {
        cfg := constructConfig()
        var ext dep.Dependencies
        if cfg.DryRun {
            ext = &dep.StdoutStubDependencies{}
        } else {
            ext = &dep.RealDependencies{}
        }
        iptConfigurator := NewIptablesConfigurator(cfg, ext)
        if !cfg.SkipRuleApply {
            // Entry point for applying the rules
            iptConfigurator.run()
        }
    },
}
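Note the cfg.DryRun branch: the same rule-building code can either apply rules or merely print them, because every iptables invocation goes through the small dep.Dependencies abstraction. Here is a condensed, self-contained sketch of that pattern (the real interface has more methods; the names here are simplified):

```go
package main

import (
    "fmt"
    "log"
    "os"
    "os/exec"
    "strings"
)

// Dependencies abstracts command execution, so the same rule builder can
// either really apply rules or just print them (dry run).
type Dependencies interface {
    RunOrFail(cmd string, args ...string)
}

// RealDependencies shells out to the actual binary.
type RealDependencies struct{}

func (r *RealDependencies) RunOrFail(cmd string, args ...string) {
    c := exec.Command(cmd, args...)
    c.Stdout, c.Stderr = os.Stdout, os.Stderr
    if err := c.Run(); err != nil {
        log.Fatalf("%s %v failed: %v", cmd, args, err)
    }
}

// StdoutStubDependencies only prints what would have been executed.
type StdoutStubDependencies struct{}

func (s *StdoutStubDependencies) RunOrFail(cmd string, args ...string) {
    fmt.Printf("%s %s\n", cmd, strings.Join(args, " "))
}

func main() {
    // Pretend a dry-run flag was passed: print the rules instead of applying them.
    var ext Dependencies = &StdoutStubDependencies{}
    ext.RunOrFail("iptables", "-t", "nat", "-N", "ISTIO_REDIRECT")
    ext.RunOrFail("iptables", "-t", "nat", "-A", "ISTIO_REDIRECT",
        "-p", "tcp", "-j", "REDIRECT", "--to-ports", "15001")
}
```

With that plumbing in mind, the run() method below is what actually assembles the rule set we dumped with iptables -t nat -S.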
func (iptConfigurator *IptablesConfigurator) run() {
    iptConfigurator.logConfig()
    // ... (long section omitted) ...
    // Create a new chain for redirecting outbound traffic to the common Envoy port.
    // In both chains, '-j RETURN' bypasses Envoy and '-j ISTIOREDIRECT'
    // redirects to Envoy.
    iptConfigurator.iptables.AppendRuleV4(
        constants.ISTIOREDIRECT, constants.NAT, "-p", constants.TCP, "-j", constants.REDIRECT, "--to-ports", iptConfigurator.cfg.ProxyPort)
    // Use this chain also for redirecting inbound traffic to the common Envoy port
    // when not using TPROXY.
    iptConfigurator.iptables.AppendRuleV4(constants.ISTIOINREDIRECT, constants.NAT, "-p", constants.TCP, "-j", constants.REDIRECT,
        "--to-ports", iptConfigurator.cfg.InboundCapturePort)
    iptConfigurator.handleInboundPortsInclude()
    // TODO: change the default behavior to not intercept any output - user may use http_proxy or another
    // iptablesOrFail wrapper (like ufw). Current default is similar with 0.1
    // Jump to the ISTIOOUTPUT chain from OUTPUT chain for all tcp traffic.
    iptConfigurator.iptables.AppendRuleV4(constants.OUTPUT, constants.NAT, "-p", constants.TCP, "-j", constants.ISTIOOUTPUT)
    // Apply port based exclusions. Must be applied before connections back to self are redirected.
    if iptConfigurator.cfg.OutboundPortsExclude != "" {
        for _, port := range split(iptConfigurator.cfg.OutboundPortsExclude) {
            iptConfigurator.iptables.AppendRuleV4(constants.ISTIOOUTPUT, constants.NAT, "-p", constants.TCP, "--dport", port, "-j", constants.RETURN)
        }
    }
    // 127.0.0.6 is bind connect from inbound passthrough cluster
    iptConfigurator.iptables.AppendRuleV4(constants.ISTIOOUTPUT, constants.NAT, "-o", "lo", "-s", "127.0.0.6/32", "-j", constants.RETURN)
    // Skip redirection for Envoy-aware applications and
    // container-to-container traffic both of which explicitly use
    // localhost.
    iptConfigurator.iptables.AppendRuleV4(constants.ISTIOOUTPUT, constants.NAT, "-d", "127.0.0.1/32", "-j", constants.RETURN)
    // Apply outbound IPv4 exclusions. Must be applied before inclusions.
    for _, cidr := range ipv4RangesExclude.IPNets {
        iptConfigurator.iptables.AppendRuleV4(constants.ISTIOOUTPUT, constants.NAT, "-d", cidr.String(), "-j", constants.RETURN)
    }
    // ... (long section omitted) ...
    // This is where the iptables rules are actually executed
    iptConfigurator.executeCommands()
}
The iptConfigurator.executeCommands() call can ultimately be traced into tools/istio-iptables/pkg/dependencies/implementation.go, where we can see it simply uses Go's exec.Command to run the underlying OS command:
func (r *RealDependencies) execute(cmd string, redirectStdout bool, args ...string) error {
    // Run the real iptables command
    externalCommand := exec.Command(cmd, args...)
    externalCommand.Stdout = os.Stdout
    //TODO Check naming and redirection logic
    if !redirectStdout {
        externalCommand.Stderr = os.Stderr
    }
    return externalCommand.Run()
}
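That is the whole trick: build an argv, wire up stdout/stderr, run. As a standalone usage example (my own sketch, not Istio code), the Go equivalent of the inspection we did by hand earlier looks like this; it needs root and must run inside the Pod's network namespace to show the rules above:

```go
package main

import (
    "log"
    "os"
    "os/exec"
)

func main() {
    // Equivalent of running `iptables -t nat -S` from a shell.
    cmd := exec.Command("iptables", "-t", "nat", "-S")
    cmd.Stdout = os.Stdout
    cmd.Stderr = os.Stderr
    if err := cmd.Run(); err != nil {
        log.Fatal(err)
    }
}
```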
Once this command has run, istio-init has fulfilled its mission. How iptables actually intercepts the traffic deserves a separate article of its own.
istio-proxy
From the pod description at the beginning we can see there is also the istio-proxy container:
Image: docker.io/istio/proxyv2:1.6.0
Image ID: docker-pullable://istio/proxyv2@sha256:821cc14ad9a29a2cafb9e351d42096455c868f3e628376f1d0e1763c3ce72ca6
Port: 15090/TCP
Host Port: 0/TCP
Args:
proxy
sidecar
--domain
$(POD_NAMESPACE).svc.cluster.local
--serviceCluster
sleep.$(POD_NAMESPACE)
--proxyLogLevel=warning
--proxyComponentLogLevel=misc:error
--trust-domain=cluster.local
--concurrency
2
State: Running
We can browse this image on Docker Hub: https://hub.docker.com/r/istio/proxyv2/tags . Below is the (abbreviated) Dockerfile history for the image's 1.6.0 tag; in the Istio source tree the Dockerfile lives at pilot/docker/Dockerfile.proxyv2:
ADD file:c3e6bb316dfa6b81dd4478aaa310df532883b1c0a14edeec3f63d641980c1789 in /
/bin/sh -c [ -z "$(apt-get indextargets)" ]
/bin/sh -c mkdir -p /run/systemd && echo 'docker' > /run/systemd/container
CMD ["/bin/bash"]
ENV DEBIAN_FRONTEND=noninteractive
// ...此處省略1萬字...
COPY envoy /usr/local/bin/envoy
COPY pilot-agent /usr/local/bin/pilot-agent
ENTRYPOINT ["/usr/local/bin/pilot-agent"]
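If the image is already pulled locally, the same layer history can also be recovered with the standard docker CLI (run on the minikube node) instead of Docker Hub:
docker history --no-trunc istio/proxyv2:1.6.0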
We can see that the envoy and pilot-agent binaries are copied into the proxyv2 image, and pilot-agent is set as the entrypoint. Merging the entrypoint with its runtime arguments yields the following command:
pilot-agent proxy sidecar --domain default.svc.cluster.local --serviceCluster sleep.default --proxyLogLevel=warning --proxyComponentLogLevel=misc:error --trust-domain=cluster.local --concurrency 2
So what does this command do once it runs? Let's repeat the inspection steps from above:
minikube ssh
sudo -i
docker ps |grep sleep
d03a43d3f257 istio/proxyv2 "/usr/local/bin/pilo…" 3 hours ago Up 3 hours k8s_istio-proxy_sleep-54f94cbff5-jmwtf_default_70c72535-cbfb-4201-af07-feb0948cc0c6_0
a5437e12f6ea 8c797666f87b "/bin/sleep 3650d" 3 hours ago Up 3 hours k8s_sleep_sleep-54f94cbff5-jmwtf_default_70c72535-cbfb-4201-af07-feb0948cc0c6_0
efdbb69b77c0 k8s.gcr.io/pause:3.2 "/pause" 3 hours ago Up 3 hours k8s_POD_sleep-54f94cbff5-jmwtf_default_70c72535-cbfb-4201-af07-feb0948cc0c6_0
This time we need to exec into the proxyv2 container d03a43d3f257 and look at the processes running inside it:
docker exec -it d03a43d3f257 /bin/bash
ps -ef | grep sleep
UID PID PPID C STIME TTY TIME CMD
istio-p+ 1 0 0 04:14 ? 00:00:06 /usr/local/bin/pilot-agent proxy sidecar --domain default.svc.cluster.local --serviceCluster sleep.default --proxyLogLevel=warning --proxyComponentLogLevel=misc:error --trust-domain=cluster.local --concurrency 2
istio-p+ 17 1 0 04:14 ? 00:00:26 /usr/local/bin/envoy -c etc/istio/proxy/envoy-rev0.json --restart-epoch 0 --drain-time-s 45 --parent-shutdown-time-s 60 --service-cluster sleep.default --service-node sidecar~172.18.0.11~sleep-54f94cbff5-jmwtf.default~default.svc.cluster.local --max-obj-name-len 189 --local-address-ip-version v4 --log-format %Y-%m-%dT%T.%fZ.%l.envoy %n.%v -l warning --component-log-level misc:error --concurrency 2
Comparing the PID and PPID columns shows that pilot-agent, once started, launched the envoy process.
The source entrypoint of the pilot-agent command is pilot/cmd/pilot-agent/main.go; see the pilot-agent command reference for its full usage.
proxyCmd = &cobra.Command{
    Use:   "proxy",
    Short: "Envoy proxy agent",
    RunE: func(c *cobra.Command, args []string) error {
        // ... (long section omitted) ...
        proxyConfig, err := constructProxyConfig()
        if out, err := gogoprotomarshal.ToYAML(&proxyConfig); err != nil {
            log.Infof("Failed to serialize to YAML: %v", err)
        }
        // ... (long section omitted) ...
        envoyProxy := envoy.NewProxy(envoy.ProxyConfig{
            Config:              proxyConfig,
            Node:                role.ServiceNode(),
            LogLevel:            proxyLogLevel,
            ComponentLogLevel:   proxyComponentLogLevel,
            PilotSubjectAltName: pilotSAN,
            MixerSubjectAltName: mixerSAN,
            NodeIPs:             role.IPAddresses,
            PodName:             podName,
            PodNamespace:        podNamespace,
            PodIP:               podIP,
            STSPort:             stsPort,
            ControlPlaneAuth:    proxyConfig.ControlPlaneAuthPolicy == meshconfig.AuthenticationPolicy_MUTUAL_TLS,
            DisableReportCalls:  disableInternalTelemetry,
            OutlierLogPath:      outlierLogPath,
            PilotCertProvider:   pilotCertProvider,
            ProvCert:            citadel.ProvCert,
        })
        agent := envoy.NewAgent(envoyProxy, features.TerminationDrainDuration())
        // Watch envoy until it starts successfully; the startup logic lives in agent.Restart
        watcher := envoy.NewWatcher(tlsCerts, agent.Restart)
        go watcher.Run(ctx)
        return agent.Run(ctx)
    },
}
)
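Before drilling into Restart, note the wiring at the end: the watcher is given agent.Restart as a callback and triggers it whenever the watched certificates change. Conceptually it behaves like the toy polling sketch below (purely illustrative; Istio's real watcher is event-driven, and the file paths here are assumptions):

```go
package main

import (
    "crypto/sha256"
    "fmt"
    "os"
    "time"
)

// watchFiles polls the given files and invokes updateFunc with a combined
// hash whenever any of them changes. The contract is the same as Istio's
// watcher: "config changed, so call Restart".
func watchFiles(paths []string, interval time.Duration, updateFunc func(hash [32]byte)) {
    var last [32]byte
    for {
        h := sha256.New()
        for _, p := range paths {
            if b, err := os.ReadFile(p); err == nil {
                h.Write(b)
            }
        }
        var cur [32]byte
        copy(cur[:], h.Sum(nil))
        if cur != last {
            last = cur
            updateFunc(cur)
        }
        time.Sleep(interval)
    }
}

func main() {
    certs := []string{"/etc/certs/cert-chain.pem", "/etc/certs/key.pem"}
    go watchFiles(certs, 5*time.Second, func(hash [32]byte) {
        fmt.Printf("certs changed (hash %x...), restarting envoy\n", hash[:4])
    })
    select {} // block forever, like agent.Run
}
```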
The agent.Restart method:
func (a *agent) Restart(config interface{}) {
    // Only one restart is allowed to run at a time
    a.restartMutex.Lock()
    defer a.restartMutex.Unlock()

    a.mutex.Lock()
    if reflect.DeepEqual(a.currentConfig, config) {
        // The config has not changed, so there is nothing to do; exit right away
        a.mutex.Unlock()
        return
    }
    // The watched config has changed: bump the epoch and create a new envoy instance
    epoch := a.currentEpoch + 1
    log.Infof("Received new config, creating new Envoy epoch %d", epoch)
    // ... (bookkeeping and abortCh creation omitted) ...
    // Start the new envoy in a fresh goroutine
    go a.runWait(config, epoch, abortCh)
}
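To see the epoch mechanism in isolation, here is a self-contained toy version of this restart flow (again illustrative, not Istio code): identical configs are deduplicated with reflect.DeepEqual, and every real change bumps the epoch and launches a new worker.

```go
package main

import (
    "fmt"
    "reflect"
    "sync"
    "time"
)

type agent struct {
    mu            sync.Mutex
    currentConfig interface{}
    currentEpoch  int
}

// Restart mirrors the logic above: skip identical configs, otherwise
// bump the epoch and start a new worker with the new config.
func (a *agent) Restart(config interface{}) {
    a.mu.Lock()
    defer a.mu.Unlock()
    if reflect.DeepEqual(a.currentConfig, config) {
        return // config unchanged: nothing to do
    }
    a.currentEpoch++
    a.currentConfig = config
    fmt.Printf("received new config, starting epoch %d\n", a.currentEpoch)
    go runWait(config, a.currentEpoch)
}

// runWait stands in for starting envoy with an epoch-numbered bootstrap file.
func runWait(config interface{}, epoch int) {
    fmt.Printf("epoch %d running with config %v\n", epoch, config)
}

func main() {
    a := &agent{}
    a.Restart("cert-hash-1") // starts epoch 1
    a.Restart("cert-hash-1") // no-op: config unchanged
    a.Restart("cert-hash-2") // starts epoch 2
    time.Sleep(100 * time.Millisecond) // let the workers print
}
```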
The a.runWait method:
func (a *agent) runWait(config interface{}, epoch int, abortCh <-chan error) {
    // Invoke the proxy directly and block until it exits
    err := a.proxy.Run(config, epoch, abortCh)
    a.proxy.Cleanup(epoch)
    a.statusCh <- exitStatus{epoch: epoch, err: err}
}
The proxy.Run method:
func (e *envoy) Run(config interface{}, epoch int, abort <-chan error) error {
    var fname string
    // If the startup options specify a custom config file, use it; otherwise use the default config
    if len(e.Config.CustomConfigFile) > 0 {
        fname = e.Config.CustomConfigFile
    } else {
        // This creates the /etc/istio/proxy/envoy-rev0.json config file envoy needs to start.
        // The 0 suffix is the epoch and increments with every restart, but only the
        // file name changes; the configuration inside stays the same.
        out, err := bootstrap.New(bootstrap.Config{
            Node:                e.Node,
            Proxy:               &e.Config,
            PilotSubjectAltName: e.PilotSubjectAltName,
            MixerSubjectAltName: e.MixerSubjectAltName,
            LocalEnv:            os.Environ(),
            NodeIPs:             e.NodeIPs,
            PodName:             e.PodName,
            PodNamespace:        e.PodNamespace,
            PodIP:               e.PodIP,
            STSPort:             e.STSPort,
            ControlPlaneAuth:    e.ControlPlaneAuth,
            DisableReportCalls:  e.DisableReportCalls,
            OutlierLogPath:      e.OutlierLogPath,
            PilotCertProvider:   e.PilotCertProvider,
            ProvCert:            e.ProvCert,
        }).CreateFileForEpoch(epoch)
        fname = out
    }
    // ... (long section omitted) ...
    // The arguments envoy is started with, i.e. the
    // --restart-epoch 0 --drain-time-s 45 --parent-shutdown-time-s 60 ... part
    args := e.args(fname, epoch, istioBootstrapOverrideVar.Get())
    // A familiar pattern: start envoy via a system command.
    // e.Config.BinaryPath is /usr/local/bin/envoy;
    // the related default constants are in the source file pkg/config/constants/constants.go
    cmd := exec.Command(e.Config.BinaryPath, args...)
    // ... (long section omitted) ...
}
The full startup sequence is actually quite involved; here we have only analyzed the most basic flow of getting envoy running. A closer look would also cover:

- starting SDS
- starting the pilot metrics service
- hot-restarting envoy after a watched configuration change
- gracefully shutting down envoy when the process receives a kill signal
The application container
As for the application container itself, it starts exactly the way it always would: apart from protocol constraints it has no dependency on Istio whatsoever. As long as the application speaks a protocol Istio supports, its traffic can be intercepted and managed by Istio, and that is precisely Istio's strength. Istio currently provides automatic load balancing for HTTP, gRPC, WebSocket, and TCP traffic.
References
https://jimmysong.io/blog/sidecar-injection-iptables-and-traffic-routing/
https://preliminary.istio.io/zh/docs/reference/commands/pilot-agent/