We strive hard to prove ourselves to others and to the world, yet only once you have truly achieved something do you understand: you need prove nothing to anyone, so long as you can surpass yourself. ...
This is the ninth article in the Spring Cloud series; reading the previous eight articles will help you better understand this one.
I. Sleuth Overview
As a business grows, the system scales up with it, and the call relationships between microservices become increasingly intricate. A request initiated by a client typically passes through several different microservices in the back end, which cooperate to produce the final result. In a complex microservice architecture, almost every front-end request forms a complex distributed call chain, and a single dependent service along the chain that errors out or responds too slowly can cause the whole request to fail. Full-chain tracing of every request therefore becomes ever more important: by tracing request calls we can quickly locate the root cause of errors, and monitor and analyze the performance bottlenecks along each request chain.
Among the Spring Cloud components, Spring Cloud Sleuth provides a complete solution; the following sections introduce how to use it.
II. Sleuth Quick Start
1. To keep the other modules clean, set up a fresh consumer (springcloud-consumer-sleuth) and provider (springcloud-provider-sleuth). Both are identical to the ones used earlier, and the registry is still the one from the previous examples (springcloud-eureka-server/8700); see the sample source code for details.
2. With that in place, we add tracing to the service provider and the service consumer. Thanks to Spring Cloud Sleuth's encapsulation, enabling service tracing is very easy: just add the spring-cloud-starter-sleuth dependency to both the provider and the consumer.
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
3. Call the consumer endpoint, then check the console logs.
Log printed by the consumer (springcloud-consumer-sleuth):
2019-12-05 12:30:20.178 INFO [springcloud-consumer-sleuth,f6fb983680aab32b,f6fb983680aab32b,false] 8992 --- [nio-9090-exec-1] c.s.controller.SleuthConsumerController : === consumer hello ===
Log printed by the provider (springcloud-provider-sleuth):
2019-12-05 12:30:20.972 INFO [springcloud-provider-sleuth,f6fb983680aab32b,c70932279d3b3a54,false] 788 --- [nio-8080-exec-1] c.s.controller.SleuthProviderController : === provider hello ===
In the console output above we can see extra log elements of the form [springcloud-consumer-sleuth,f6fb983680aab32b,c70932279d3b3a54,false]. These elements are the essential building blocks of distributed service tracing, and each value means the following:
- First value: springcloud-consumer-sleuth. The name of the application, i.e. the spring.application.name property configured in application.properties.
- Second value: f6fb983680aab32b. An ID generated by Spring Cloud Sleuth called the Trace ID, which identifies a single request chain. A request chain contains one Trace ID and multiple Span IDs.
- Third value: c70932279d3b3a54. Another ID generated by Spring Cloud Sleuth, called the Span ID, which represents a basic unit of work, such as sending an HTTP request.
- Fourth value: false. Whether this information should be exported to a service such as Zipkin for collection and display.

Of these four values, the Trace ID and Span ID are the core of Spring Cloud Sleuth's distributed service tracing. Throughout one request chain, the same Trace ID is preserved and propagated, stitching together the tracing information scattered across the different microservice processes. In the output above, springcloud-consumer-sleuth and springcloud-provider-sleuth serve the same front-end request, so they share the same Trace ID and belong to the same request chain.
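The bracketed log prefix can be picked apart mechanically. The sketch below does so with plain string handling; the SleuthPrefix class and its parseSleuthPrefix method are illustrative helpers written for this article, not part of Sleuth:

```java
// Minimal parser for the Sleuth log prefix "[appName,traceId,spanId,exported]".
// SleuthPrefix and parseSleuthPrefix are illustrative names, not Sleuth API.
public class SleuthPrefix {
    public static String[] parseSleuthPrefix(String prefix) {
        // Strip the surrounding brackets, then split on commas:
        // index 0 = application name, 1 = Trace ID, 2 = Span ID, 3 = export flag
        return prefix.substring(1, prefix.length() - 1).split(",");
    }

    public static void main(String[] args) {
        String[] p = parseSleuthPrefix(
                "[springcloud-provider-sleuth,f6fb983680aab32b,c70932279d3b3a54,false]");
        System.out.println("app=" + p[0] + ", traceId=" + p[1]
                + ", spanId=" + p[2] + ", exported=" + p[3]);
    }
}
```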
III. How Tracing Works
Service tracing in a distributed system is not complicated in theory; it boils down to the following two points:
- To trace a request, the tracing framework only needs to create a unique identifier for the request when it arrives at the entry point of the distributed system, and keep propagating that identifier as the request flows through the system until it is returned to the caller. This identifier is the Trace ID mentioned earlier; by recording it, we can correlate all the logs produced while handling the request.
- To measure the latency of each processing unit, a unique identifier is likewise used to mark the start, the intermediate steps, and the end whenever a request reaches a service component or the processing logic reaches some state. This identifier is the Span ID mentioned earlier. Every Span must have a start node and an end node; by recording the timestamps of both, we can compute the Span's latency. Besides timestamps, a Span can also carry other metadata, such as an event name or request information.
In the example in [II. Sleuth Quick Start], we got log-level tracing with almost no effort, and that is entirely thanks to the spring-cloud-starter-sleuth component: once the dependency is added to a Spring Boot application, it automatically instruments the common communication channels of the application, for example:
- Requests passed through RabbitMQ or Kafka (or any other message middleware wired up by a Spring Cloud Stream binder)
- Requests routed through a Zuul proxy
- Requests made with RestTemplate
In the [II. Sleuth Quick Start] example, the request from springcloud-consumer-sleuth to springcloud-provider-sleuth is made with RestTemplate, so spring-cloud-starter-sleuth instruments it. Before the request is sent to springcloud-provider-sleuth, Sleuth adds the key tracing information to the request headers, chiefly the following (see the source of org.springframework.cloud.sleuth.Span for the full set of header definitions):
- X-B3-TraceId: the unique identifier of a request chain (Trace); required.
- X-B3-SpanId: the unique identifier of a unit of work (Span); required.
- X-B3-ParentSpanId: identifies the unit of work that spawned the current one; empty for the Root Span (the first unit of work in the chain).
- X-B3-Sampled: the sampling flag; 1 means the span should be exported, 0 means it should not.
- X-B3-Name: the name of the unit of work.
We can modify the springcloud-provider-sleuth implementation a little to print these headers, as follows:
private final Logger logger = Logger.getLogger(SleuthProviderController.class.getName());

@RequestMapping("/hello")
public String hello(HttpServletRequest request) {
    logger.info("=== provider hello ===,Traced={" + request.getHeader("X-B3-TraceId")
            + "},SpanId={" + request.getHeader("X-B3-SpanId") + "}");
    return "Trace";
}
After this change, restart the applications and hit the endpoint again; in the logs we can see that the provider now prints the TraceId and SpanId it is processing.
Log printed by the consumer (springcloud-consumer-sleuth):
2019-12-05 13:15:01.457 INFO [springcloud-consumer-sleuth,41697d7fa118c150,41697d7fa118c150,false] 10036 --- [nio-9090-exec-2] c.s.controller.SleuthConsumerController : === consumer hello ===
Log printed by the provider (springcloud-provider-sleuth):
2019-12-05 13:15:01.865 INFO [springcloud-provider-sleuth,41697d7fa118c150,863a1245c86b580e,false] 11088 --- [nio-8080-exec-1] c.s.controller.SleuthProviderController : === provider hello ===,Traced={41697d7fa118c150},SpanId={863a1245c86b580e}
IV. Sample Collection
With the Trace ID and Span ID we can already trace requests through a distributed system, and the recorded tracing information is ultimately gathered by an analysis system to power monitoring and analysis of the distributed system, for example alerting on request chains with excessive latency, or drilling into the call details of a chain. This raises a question when integrating with such an analysis system: how much tracing information should it collect?
In theory, the more tracing information we collect, the more faithfully we can reflect the system's actual behavior and the more precise our alerts and analysis can be. But in a high-concurrency distributed system, the flood of requests produces an enormous volume of tracing logs; collecting too much would measurably hurt the performance of the whole system, and storing so much log data would also cost considerable space. Sleuth therefore uses sampling to tag each piece of tracing information with a collection flag: the fourth, boolean value we saw earlier in the log output, which indicates whether the information should be picked up and stored by a downstream trace collector. The sampling decision is abstracted behind the Sampler class:
public abstract class Sampler {

    /**
     * Returns true if the trace ID should be measured.
     *
     * @param traceId The trace ID to be decided on, can be ignored
     */
    public abstract boolean isSampled(long traceId);
}
Spring Cloud Sleuth calls isSampled when tracing information is produced, to generate the flag that decides whether the information is collected. Note that even when isSampled returns false, it only means the tracing information is not exported to the downstream analysis system (such as Zipkin); the request is still traced, which is why we still see log records with a collection flag of false.
By default, Sleuth uses a percentage-based sampling strategy backed by SamplerProperties, collecting tracing information for a configurable fraction of requests. The percentage can be set in application.yml with the parameter below; its default value is 0.1, meaning 10% of request traces are collected.
spring:
  sleuth:
    sampler:
      probability: 0.1
During development and debugging we usually want to collect all tracing information and export it to the remote store, so we can set the value to 1, or register a Sampler bean to override the default strategy, for example:
@Bean
public Sampler defaultSampler() {
    return Sampler.ALWAYS_SAMPLE;
}
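The percentage-based decision itself is simple. The following sketch mimics the behavior of a probability sampler in plain Java; it is an illustration of the idea, not Sleuth's actual implementation:

```java
import java.util.Random;

// Illustrative percentage-based sampling decision, mimicking the behavior of a
// probability sampler. Sleuth's real implementation differs in its details.
public class ProbabilitySamplerSketch {
    private final double probability;         // e.g. 0.1 => keep ~10% of traces
    private final Random random = new Random();

    public ProbabilitySamplerSketch(double probability) {
        this.probability = probability;
    }

    public boolean isSampled() {
        if (probability >= 1.0) return true;  // export everything
        if (probability <= 0.0) return false; // export nothing
        return random.nextDouble() < probability;
    }

    public static void main(String[] args) {
        ProbabilitySamplerSketch s = new ProbabilitySamplerSketch(0.1);
        System.out.println(s.isSampled()); // sampling decision for one trace
    }
}
```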
Since tracing data is usually only valuable for a limited recent period, say a week, a sampling strategy should mainly aim to make full use of the storage available within the log-retention window, without noticeably impacting system performance.
V. Integrating with Zipkin
Because log files are scattered across the file systems of the individual service instances, analyzing our request chains just by reading log files is still quite painful, so we need tooling to centrally collect, store, and search this tracing information, such as an ELK logging platform. ELK's powerful collection, storage, and search features make managing and using tracing data far more convenient, but its analysis dimensions lack a focus on the per-stage latency within a request chain. Often we trace a request chain precisely to locate the high-latency bottleneck in the call path, or to build latency monitoring for the distributed system; for such time-oriented needs, an ELK-style log analysis system falls short. This is exactly the problem that introducing Zipkin solves.
Zipkin is an open-source Twitter project based on Google Dapper. We can use it to collect request-chain tracing data from each server, and query that data through its REST API to build monitoring for the distributed system, promptly detecting rising latencies and tracking down the root cause of performance bottlenecks. Besides the developer-facing API, it also provides a convenient UI component for searching traces and analyzing request-chain details intuitively, for example querying the processing time of user requests within a given period.
The figure below shows Zipkin's basic architecture, which is made up of four core components:
- Collector: the collector component. It processes tracing information sent from external systems, converting it into Zipkin's internal Span format to support subsequent storage, analysis, and display.
- Storage: the storage component. It handles the tracing information received by the collector and stores it in memory by default. The storage strategy can be changed to persist the tracing information to a database via another storage component.
- RESTful API: the API component. It provides the external access interface, e.g. for showing tracing information to clients, or for external systems to access when implementing monitoring.
- Web UI: the UI component, an upper-layer application built on the API component. Through it, users can query and analyze tracing information conveniently and intuitively.
1. Building the Zipkin server
As of Spring Cloud's F (Finchley) release train, you no longer need to build a Zipkin server yourself; just download the jar from: https://dl.bintray.com/openzipkin/maven/io/zipkin/zipkin-server/
Zipkin's GitHub address: https://github.com/openzipkin
2. After downloading the jar, run it as follows:
java -jar zipkin-server-2.10.1-exec.jar
3. Adding and configuring the Zipkin client in the applications
We need a little configuration so that the applications send their tracing information to the Zipkin server. Taking the consumer (springcloud-consumer-sleuth) and provider (springcloud-provider-sleuth) from [II. Sleuth Quick Start] as the example, add the Zipkin integration dependency to both:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-zipkin</artifactId>
</dependency>
4. Add the Zipkin server configuration to both the consumer (springcloud-consumer-sleuth) and the provider (springcloud-provider-sleuth), as shown below; the default connection address is http://localhost:9411.
spring:
  zipkin:
    base-url: http://localhost:9411
5. Testing and analysis
At this point all the basic Zipkin setup is done. Call the consumer endpoint http://localhost:9090/consumer/hello a few times; whenever the last value of the tracing information in the log is true, that trace is exported to the Zipkin server, as in the logs below:
2019-12-05 15:47:25.600 INFO [springcloud-consumer-sleuth,cbdbbebaf32355ab,cbdbbebaf32355ab,false] 8564 --- [nio-9090-exec-9] c.s.controller.SleuthConsumerController : === consumer hello ===
2019-12-05 15:47:27.483 INFO [springcloud-consumer-sleuth,8f332a4da3c05f62,8f332a4da3c05f62,false] 8564 --- [nio-9090-exec-6] c.s.controller.SleuthConsumerController : === consumer hello ===
2019-12-05 15:47:42.127 INFO [springcloud-consumer-sleuth,61b922906800ac60,61b922906800ac60,true] 8564 --- [nio-9090-exec-2] c.s.controller.SleuthConsumerController : === consumer hello ===
2019-12-05 15:47:42.457 INFO [springcloud-consumer-sleuth,1acae9ebecc4d36d,1acae9ebecc4d36d,false] 8564 --- [nio-9090-exec-4] c.s.controller.SleuthConsumerController : === consumer hello ===
2019-12-05 15:47:42.920 INFO [springcloud-consumer-sleuth,b2db9e00014ceb88,b2db9e00014ceb88,false] 8564 --- [nio-9090-exec-7] c.s.controller.SleuthConsumerController : === consumer hello ===
2019-12-05 15:47:43.457 INFO [springcloud-consumer-sleuth,ade4d5a7d97ca16b,ade4d5a7d97ca16b,false] 8564 --- [nio-9090-exec-9] c.s.controller.SleuthConsumerController : === consumer hello ===
Now, in the Zipkin server's UI, choose suitable query criteria and click the Find Traces button to look up the tracing information that just appeared in the logs (you can also search by a Trace ID from the log output using the input box in the top-right corner). The page looks like this:
Clicking the springcloud-consumer-sleuth endpoint's trace below reveals the detailed information Sleuth captured, including what we care about most: the time spent on the request.
Clicking the "Dependencies" item in the navigation bar shows the system's request-chain dependency graph that the Zipkin server derives from the tracing information, as shown below.
VI. Storing Zipkin Data in Elasticsearch
In [V. Integrating with Zipkin], the collected trace data is stored in the Zipkin service's memory by default, so it is lost whenever the Zipkin service restarts. In a development environment, storing data in memory is fine for convenience, but in production the data must be persisted. We could store it in MySQL, but in practice the data volume can be large, so MySQL is not a great choice; Elasticsearch is a better fit, given its inherent strength in search.
1. The zipkin-server-2.10.1-exec.jar used in the steps above was downloaded earlier; here we use Zipkin server version 2.19.2 instead, download address: https://dl.bintray.com/openzipkin/maven/io/zipkin/zipkin-server/
2. zipkin-server-2.19.2-exec.jar only supports Elasticsearch versions 5.x-7.x, so mind the version match. Download and install a matching Elasticsearch from the elastic website yourself, and make sure the ES service is up and running.
3. Start the Zipkin service with the following command:
java -DSTORAGE_TYPE=elasticsearch -DES_HOSTS=http://47.112.11.147:9200 -jar zipkin-server-2.19.2-exec.jar
There are further configurable parameters; see: https://github.com/openzipkin/zipkin/tree/master/zipkin-server#elasticsearch-storage
* `ES_HOSTS`: A comma separated list of elasticsearch base urls to connect to ex. http://host:9200. Defaults to "http://localhost:9200".
* `ES_PIPELINE`: Indicates the ingest pipeline used before spans are indexed. No default.
* `ES_TIMEOUT`: Controls the connect, read and write socket timeouts (in milliseconds) for Elasticsearch Api. Defaults to 10000 (10 seconds)
* `ES_INDEX`: The index prefix to use when generating daily index names. Defaults to zipkin.
* `ES_DATE_SEPARATOR`: The date separator to use when generating daily index names. Defaults to '-'.
* `ES_INDEX_SHARDS`: The number of shards to split the index into. Each shard and its replicas are assigned to a machine in the cluster. Increasing the number of shards and machines in the cluster will improve read and write performance. Number of shards cannot be changed for existing indices, but new daily indices will pick up changes to the setting. Defaults to 5.
* `ES_INDEX_REPLICAS`: The number of replica copies of each shard in the index. Each shard and its replicas are assigned to a machine in the cluster. Increasing the number of replicas and machines in the cluster will improve read performance, but not write performance. Number of replicas can be changed for existing indices. Defaults to 1. It is highly discouraged to set this to 0 as it would mean a machine failure results in data loss.
* `ES_USERNAME` and `ES_PASSWORD`: Elasticsearch basic authentication, which defaults to empty string. Use when X-Pack security (formerly Shield) is in place.
* `ES_HTTP_LOGGING`: When set, controls the volume of HTTP logging of the Elasticsearch Api. Options are BASIC, HEADERS, BODY
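Several of these options can be combined on one startup command line. The sketch below is only an illustration; the host and index settings are placeholder values to adjust for your environment:

```shell
# Illustrative startup combining several of the storage options above.
# ES_HOSTS, ES_INDEX, and the shard/replica counts shown are placeholder values.
STORAGE_TYPE=elasticsearch \
ES_HOSTS=http://localhost:9200 \
ES_INDEX=zipkin \
ES_INDEX_SHARDS=5 \
ES_INDEX_REPLICAS=1 \
java -jar zipkin-server-2.19.2-exec.jar
```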
4. Modify the application.yml of springcloud-provider-sleuth and springcloud-consumer-sleuth to raise the sampling probability to 1, which makes testing easier:
spring:
  sleuth:
    sampler:
      probability: 1
5. Call the http://localhost:9090/consumer/hello endpoint a few times, then open Kibana and you can see the index has been created.
6. You can see it already holds data.
7. Open Zipkin and the trace information is visible.
8. But the dependency view shows nothing.
9. Zipkin creates indices in ES that start with zipkin and end with the date, split by day by default. When ES storage is used, Zipkin cannot display the dependency information on its own; as the Zipkin site explains, it has to be computed with the zipkin-dependencies tool.
10. Generating dependency links with zipkin-dependencies
zipkin-dependencies generates the global call chain as a Spark job; download it from the addresses below.
The zipkin-dependencies version used here is 2.4.1.
GitHub address: https://github.com/openzipkin/zipkin-dependencies
Download address: https://dl.bintray.com/openzipkin/maven/io/zipkin/dependencies/zipkin-dependencies/
11. Start it once the download completes
Don't try to run this jar on Windows: it won't start, and you'll question your life choices trying. Run it on Linux.
The official documentation gives a Linux example:
STORAGE_TYPE=cassandra3 java -jar zipkin-dependencies.jar `date -u -d '1 day ago' +%F`
STORAGE_TYPE is the storage type; since we are using ES, change it to elasticsearch. The trailing date command displays or sets the system date and time; look it up if you are unfamiliar with it.
The startup command is:
ZIPKIN_LOG_LEVEL=DEBUG ES_NODES_WAN_ONLY=true STORAGE_TYPE=elasticsearch ES_HOSTS=http://47.112.11.147:9200 java -Xms256m -Xmx1024m -jar zipkin-dependencies-2.4.1.jar `date -u -d '1 day ago' +%F`
After the download, start zipkin-dependencies with the command above. Note that the program computes the dependency links only once, from the Zipkin data of the specified day, and then exits (Done). Our data was collected into ES yesterday (2019-12-17), so today (2019-12-18) we pass the previous day in the startup command; the dependency data is then written to ES under the index zipkin:dependency-2019-12-17. To keep the dependencies up to date, zipkin-dependencies must be run periodically, for example via a crontab schedule on Linux.
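The periodic scheduling mentioned above can be sketched as a crontab entry. The jar path and ES host below are placeholders, and note that crontab requires `%` to be escaped:

```shell
# Illustrative crontab entry: recompute yesterday's dependency links daily at
# 00:10 server time. The jar path and ES_HOSTS are placeholder values; inside
# crontab, '%' must be escaped as '\%'.
10 0 * * * ES_NODES_WAN_ONLY=true STORAGE_TYPE=elasticsearch ES_HOSTS=http://47.112.11.147:9200 java -Xms256m -Xmx1024m -jar /usr/local/zipkin-dependencies-2.4.1.jar $(date -u -d '1 day ago' +\%F)
```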
The log from the run is as follows:
[root@VM_0_8_centos local]# ZIPKIN_LOG_LEVEL=DEBUG ES_NODES_WAN_ONLY=true STORAGE_TYPE=elasticsearch ES_HOSTS=http://47.112.11.147:9200 java -Xms256m -Xmx1024m -jar zipkin-dependencies-2.4.1.jar `date -u -d '1 day ago' +%F` 19/12/18 21:44:10 WARN Utils: Your hostname, VM_0_8_centos resolves to a loopback address: 127.0.0.1; using 172.21.0.8 instead (on interface eth0) 19/12/18 21:44:10 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address 19/12/18 21:44:10 DEBUG ElasticsearchDependenciesJob: Spark conf properties: spark.ui.enabled=false 19/12/18 21:44:10 DEBUG ElasticsearchDependenciesJob: Spark conf properties: es.index.read.missing.as.empty=true 19/12/18 21:44:10 DEBUG ElasticsearchDependenciesJob: Spark conf properties: es.nodes.wan.only=true 19/12/18 21:44:10 DEBUG ElasticsearchDependenciesJob: Spark conf properties: es.net.ssl.keystore.location= 19/12/18 21:44:10 DEBUG ElasticsearchDependenciesJob: Spark conf properties: es.net.ssl.keystore.pass= 19/12/18 21:44:10 DEBUG ElasticsearchDependenciesJob: Spark conf properties: es.net.ssl.truststore.location= 19/12/18 21:44:10 DEBUG ElasticsearchDependenciesJob: Spark conf properties: es.net.ssl.truststore.pass= 19/12/18 21:44:10 INFO ElasticsearchDependenciesJob: Processing spans from zipkin:span-2019-12-17/span 19/12/18 21:44:10 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 19/12/18 21:44:12 WARN Java7Support: Unable to load JDK7 types (annotations, java.nio.file.Path): no Java7 support added 19/12/18 21:44:13 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release. 19/12/18 21:44:13 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release. 19/12/18 21:44:13 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. 
Type names are deprecated and will be removed in a later release. 19/12/18 21:44:16 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release. 19/12/18 21:44:16 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release. 19/12/18 21:44:16 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release. 19/12/18 21:44:16 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release. 19/12/18 21:44:17 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release. 19/12/18 21:44:17 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release. 19/12/18 21:44:17 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release. 19/12/18 21:44:17 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release. 19/12/18 21:44:17 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release. 19/12/18 21:44:17 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release. 
19/12/18 21:44:18 DEBUG DependencyLinker: building trace tree: traceId=a5253479e359638b 19/12/18 21:44:18 DEBUG DependencyLinker: traversing trace tree, breadth-first 19/12/18 21:44:18 DEBUG DependencyLinker: processing {"traceId":"a5253479e359638b","id":"a5253479e359638b","kind":"SERVER","name":"get /consumer/hello","timestamp":1576591155280041,"duration":6191,"localEndpoint":{"serviceName":"springcloud-consumer-sleuth","ipv4":"192.168.0.104"},"remoteEndpoint":{"ipv6":"::1","port":62085},"tags":{"http.method":"GET","http.path":"/consumer/hello","mvc.controller.class":"SleuthConsumerController","mvc.controller.method":"hello"}} 19/12/18 21:44:18 DEBUG DependencyLinker: root's client is unknown; skipping 19/12/18 21:44:18 DEBUG DependencyLinker: processing {"traceId":"a5253479e359638b","parentId":"a5253479e359638b","id":"8d6b8fb1bbb4f48c","kind":"CLIENT","name":"get","timestamp":1576591155281192,"duration":3999,"localEndpoint":{"serviceName":"springcloud-consumer-sleuth","ipv4":"192.168.0.104"},"tags":{"http.method":"GET","http.path":"/provider/hello"}} 19/12/18 21:44:18 DEBUG DependencyLinker: processing {"traceId":"a5253479e359638b","parentId":"a5253479e359638b","id":"8d6b8fb1bbb4f48c","kind":"SERVER","name":"get /provider/hello","timestamp":1576591155284040,"duration":1432,"localEndpoint":{"serviceName":"springcloud-provider-sleuth","ipv4":"192.168.0.104"},"remoteEndpoint":{"ipv4":"192.168.0.104","port":62182},"tags":{"http.method":"GET","http.path":"/provider/hello","mvc.controller.class":"SleuthProviderController","mvc.controller.method":"hello"},"shared":true} 19/12/18 21:44:18 DEBUG DependencyLinker: found remote ancestor {"traceId":"a5253479e359638b","parentId":"a5253479e359638b","id":"8d6b8fb1bbb4f48c","kind":"CLIENT","name":"get","timestamp":1576591155281192,"duration":3999,"localEndpoint":{"serviceName":"springcloud-consumer-sleuth","ipv4":"192.168.0.104"},"tags":{"http.method":"GET","http.path":"/provider/hello"}} 19/12/18 21:44:18 DEBUG 
DependencyLinker: incrementing link springcloud-consumer-sleuth -> springcloud-provider-sleuth 19/12/18 21:44:18 DEBUG DependencyLinker: building trace tree: traceId=54af196ac59ee13e 19/12/18 21:44:18 DEBUG DependencyLinker: traversing trace tree, breadth-first 19/12/18 21:44:18 DEBUG DependencyLinker: processing {"traceId":"54af196ac59ee13e","id":"54af196ac59ee13e","kind":"SERVER","name":"get /consumer/hello","timestamp":1576591134958091,"duration":139490,"localEndpoint":{"serviceName":"springcloud-consumer-sleuth","ipv4":"192.168.0.104"},"remoteEndpoint":{"ipv6":"::1","port":62085},"tags":{"http.method":"GET","http.path":"/consumer/hello","mvc.controller.class":"SleuthConsumerController","mvc.controller.method":"hello"}} 19/12/18 21:44:18 DEBUG DependencyLinker: root's client is unknown; skipping 19/12/18 21:44:18 DEBUG DependencyLinker: processing {"traceId":"54af196ac59ee13e","parentId":"54af196ac59ee13e","id":"1a827ae864bd2399","kind":"CLIENT","name":"get","timestamp":1576591134962066,"duration":133718,"localEndpoint":{"serviceName":"springcloud-consumer-sleuth","ipv4":"192.168.0.104"},"tags":{"http.method":"GET","http.path":"/provider/hello"}} 19/12/18 21:44:18 DEBUG DependencyLinker: processing {"traceId":"54af196ac59ee13e","parentId":"54af196ac59ee13e","id":"1a827ae864bd2399","kind":"SERVER","name":"get /provider/hello","timestamp":1576591135064214,"duration":37707,"localEndpoint":{"serviceName":"springcloud-provider-sleuth","ipv4":"192.168.0.104"},"remoteEndpoint":{"ipv4":"192.168.0.104","port":62089},"tags":{"http.method":"GET","http.path":"/provider/hello","mvc.controller.class":"SleuthProviderController","mvc.controller.method":"hello"},"shared":true} 19/12/18 21:44:18 DEBUG DependencyLinker: found remote ancestor 
{"traceId":"54af196ac59ee13e","parentId":"54af196ac59ee13e","id":"1a827ae864bd2399","kind":"CLIENT","name":"get","timestamp":1576591134962066,"duration":133718,"localEndpoint":{"serviceName":"springcloud-consumer-sleuth","ipv4":"192.168.0.104"},"tags":{"http.method":"GET","http.path":"/provider/hello"}} 19/12/18 21:44:18 DEBUG DependencyLinker: incrementing link springcloud-consumer-sleuth -> springcloud-provider-sleuth 19/12/18 21:44:18 INFO ElasticsearchDependenciesJob: Saving dependency links to zipkin:dependency-2019-12-17/dependency 19/12/18 21:44:18 WARN Resource: Detected type name in resource [zipkin:dependency-2019-12-17/dependency]. Type names are deprecated and will be removed in a later release. 19/12/18 21:44:18 WARN Resource: Detected type name in resource [zipkin:dependency-2019-12-17/dependency]. Type names are deprecated and will be removed in a later release. 19/12/18 21:44:18 WARN Resource: Detected type name in resource [zipkin:dependency-2019-12-17/dependency]. Type names are deprecated and will be removed in a later release. 19/12/18 21:44:18 WARN Resource: Detected type name in resource [zipkin:dependency-2019-12-17/dependency]. Type names are deprecated and will be removed in a later release. 19/12/18 21:44:19 WARN Resource: Detected type name in resource [zipkin:dependency-2019-12-17/dependency]. Type names are deprecated and will be removed in a later release. 19/12/18 21:44:19 WARN Resource: Detected type name in resource [zipkin:dependency-2019-12-17/dependency]. Type names are deprecated and will be removed in a later release. 19/12/18 21:44:19 WARN Resource: Detected type name in resource [zipkin:dependency-2019-12-17/dependency]. Type names are deprecated and will be removed in a later release. 19/12/18 21:44:19 WARN Resource: Detected type name in resource [zipkin:dependency-2019-12-17/