The ResourceManager (RM) is responsible for tracking the resources in the cluster and for scheduling applications (for example, MapReduce jobs). Before Hadoop 2.4, a cluster had only a single ResourceManager, so if it went down, the whole cluster was affected. The High Availability (HA) feature adds redundancy in the form of an Active/Standby ResourceManager pair, so that failover is possible.
The YARN HA architecture is shown in the figure below.
In this example, the roles are assigned to the nodes as shown in the following table:
| Node | Roles |
|---|---|
| centos01 | ResourceManager, NodeManager |
| centos02 | ResourceManager, NodeManager |
| centos03 | NodeManager |
The following walks through the YARN HA configuration step by step.
7.1 Configuring the yarn-site.xml file
(1) Modify the yarn-site.xml file and add the following content:
<!-- YARN HA configuration -->
<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.cluster-id</name>
  <value>cluster1</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>centos01</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>centos02</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address.rm1</name>
  <value>centos01:8088</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address.rm2</name>
  <value>centos02:8088</value>
</property>
<property>
  <name>yarn.resourcemanager.zk-address</name>
  <value>centos01:2181,centos02:2181,centos03:2181</value>
</property>
<property>
  <!-- Enable the RM restart (recovery) feature; defaults to false -->
  <name>yarn.resourcemanager.recovery.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.store.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
Explanation of the configuration parameters above:
yarn.resourcemanager.ha.enabled: enables the RM HA feature.
yarn.resourcemanager.cluster-id: identifies the cluster the RMs belong to. When this option is set, make sure every RM has its own id in the configuration.
yarn.resourcemanager.ha.rm-ids: the list of logical ids for the RMs. The values can be chosen freely; here they are set to "rm1,rm2". Later configuration entries refer to these ids.
yarn.resourcemanager.hostname.rm1: specifies the hostname corresponding to each RM. Alternatively, each of the RM's individual service addresses can be set explicitly, as shown in the sketch after this list.
yarn.resourcemanager.webapp.address.rm1: specifies the web UI address of each RM.
yarn.resourcemanager.zk-address: specifies the address of the integrated ZooKeeper ensemble.
yarn.resourcemanager.recovery.enabled: enables the RM restart (recovery) feature; defaults to false.
yarn.resourcemanager.store.class: the class used for state storage. The default is org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore, an implementation based on a Hadoop file system. It can also be set to org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore, a ZooKeeper-based implementation, which is the class specified here.
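As noted for yarn.resourcemanager.hostname.rm1, each RM service address can also be configured explicitly per rm-id instead of relying on the hostname shorthand. A minimal sketch for rm1, assuming the standard default ports (8032 for client requests, 8030 for the scheduler, 8031 for the resource tracker); rm2 would be configured analogously with centos02:
<property>
  <name>yarn.resourcemanager.address.rm1</name>
  <value>centos01:8032</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address.rm1</name>
  <value>centos01:8030</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
  <value>centos01:8031</value>
</property>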
(2) After yarn-site.xml has been configured, it needs to be sent to the other nodes in the cluster.
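A minimal sketch using scp, assuming Hadoop is installed under /opt/modules/hadoop-2.7.1 on every node (a hypothetical location; adjust the path to your own install directory):
[hadoop@centos01 hadoop-2.7.1]$ scp etc/hadoop/yarn-site.xml hadoop@centos02:/opt/modules/hadoop-2.7.1/etc/hadoop/
[hadoop@centos01 hadoop-2.7.1]$ scp etc/hadoop/yarn-site.xml hadoop@centos03:/opt/modules/hadoop-2.7.1/etc/hadoop/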
(3) With HDFS from the previous chapter already running, continue by starting YARN.
Run the following command on the centos01 and centos02 nodes respectively to start the ResourceManager:
[hadoop@centos01 hadoop-2.7.1]$ sbin/yarn-daemon.sh start resourcemanager
Run the following command on the centos01, centos02, and centos03 nodes respectively to start the NodeManager:
[hadoop@centos01 hadoop-2.7.1]$ sbin/yarn-daemon.sh start nodemanager
(4) After YARN has started, check the Java processes on each node:
[hadoop@centos01 hadoop-2.7.1]$ jps
3360 QuorumPeerMain
4080 DFSZKFailoverController
4321 NodeManager
4834 Jps
3908 JournalNode
3702 DataNode
4541 ResourceManager
3582 NameNode
[hadoop@centos02 hadoop-2.7.1]$ jps
4486 Jps
3815 DFSZKFailoverController
4071 NodeManager
4359 ResourceManager
3480 NameNode
3353 QuorumPeerMain
3657 JournalNode
3563 DataNode
[hadoop@centos03 hadoop-2.7.1]$ jps
3496 JournalNode
4104 Jps
3836 NodeManager
3293 QuorumPeerMain
3390 DataNode
Now enter the address http://centos01:8088 in a browser to access the Active ResourceManager and check YARN's status, as shown in the figure below.
If you access the Standby ResourceManager at http://centos02:8088, you will find that it automatically redirects to http://centos01:8088. This is because the Active ResourceManager is currently on the centos01 node; accessing the ResourceManager on the Standby node automatically redirects to the Active node.
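The Active/Standby state of each ResourceManager can also be queried from the command line with the yarn rmadmin tool, using the logical ids (rm1, rm2) configured above. The output shown here is what one would expect for this cluster, not a captured transcript:
[hadoop@centos01 hadoop-2.7.1]$ bin/yarn rmadmin -getServiceState rm1
active
[hadoop@centos01 hadoop-2.7.1]$ bin/yarn rmadmin -getServiceState rm2
standby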
7.2 Testing YARN automatic failover
Run the default MapReduce WordCount example on the centos01 node. While the map phase is running, open a new SSH shell window, kill the ResourceManager process on centos01, and observe how the job proceeds. The command for running the default MapReduce WordCount example is as follows:
[hadoop@centos01 hadoop-2.7.1]$ bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /input /output
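While the job is still in its map phase, the ResourceManager process on centos01 can be killed from the second shell. A minimal sketch, looking up the PID with jps (4541 is the PID from the jps output above; use whatever PID jps reports on your node):
[hadoop@centos01 hadoop-2.7.1]$ jps | grep ResourceManager
4541 ResourceManager
[hadoop@centos01 hadoop-2.7.1]$ kill -9 4541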
The execution output is as follows:
[hadoop@centos01 hadoop-2.7.1]$ bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /input /output
18/03/16 10:48:22 INFO input.FileInputFormat: Total input paths to process : 1
18/03/16 10:48:22 INFO mapreduce.JobSubmitter: number of splits:1
18/03/16 10:48:23 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1521168402181_0001
18/03/16 10:48:23 INFO impl.YarnClientImpl: Submitted application application_1521168402181_0001
18/03/16 10:48:23 INFO mapreduce.Job: The url to track the job: http://centos01:8088/proxy/application_1521168402181_0001/
18/03/16 10:48:23 INFO mapreduce.Job: Running job: job_1521168402181_0001
18/03/16 10:48:56 INFO mapreduce.Job: Job job_1521168402181_0001 running in uber mode : false
18/03/16 10:48:57 INFO mapreduce.Job: map 0% reduce 0%
18/03/16 10:50:21 INFO mapreduce.Job: map 100% reduce 0%
18/03/16 10:50:32 INFO mapreduce.Job: map 100% reduce 100%
18/03/16 10:50:36 INFO mapreduce.Job: Job job_1521168402181_0001 completed successfully
18/03/16 10:50:37 INFO mapreduce.Job: Counters: 49
    File System Counters
        FILE: Number of bytes read=1321
        FILE: Number of bytes written=239335
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=1094
        HDFS: Number of bytes written=971
        HDFS: Number of read operations=6
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
    Job Counters
        Launched map tasks=1
        Launched reduce tasks=1
        Data-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=14130
        Total time spent by all reduces in occupied slots (ms)=7851
        Total time spent by all map tasks (ms)=14130
        Total time spent by all reduce tasks (ms)=7851
        Total vcore-seconds taken by all map tasks=14130
        Total vcore-seconds taken by all reduce tasks=7851
        Total megabyte-seconds taken by all map tasks=14469120
        Total megabyte-seconds taken by all reduce tasks=8039424
    Map-Reduce Framework
        Map input records=29
        Map output records=109
        Map output bytes=1368
        Map output materialized bytes=1321
        Input split bytes=101
        Combine input records=109
        Combine output records=86
        Reduce input groups=86
        Reduce shuffle bytes=1321
        Reduce input records=86
        Reduce output records=86
        Spilled Records=172
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=188
        CPU time spent (ms)=1560
        Physical memory (bytes) snapshot=278478848
        Virtual memory (bytes) snapshot=4195344384
        Total committed heap usage (bytes)=140480512
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=993
    File Output Format Counters
        Bytes Written=971
As the results above show, even though the ResourceManager process was killed, the YARN job still ran smoothly to completion, which demonstrates that automatic failover took effect: when the ResourceManager failed, YARN automatically switched over to the centos02 node and continued running. At this point, the Standby ResourceManager's web address http://centos02:8088 can be accessed successfully in a browser, and it shows that the job completed successfully.
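The failover can also be confirmed from the command line. A minimal sketch: query the state of rm2, which should now report active, and read the job output (part-r-00000 is the conventional name of a single-reducer output file, assumed here):
[hadoop@centos01 hadoop-2.7.1]$ bin/yarn rmadmin -getServiceState rm2
active
[hadoop@centos01 hadoop-2.7.1]$ bin/hdfs dfs -cat /output/part-r-00000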
At this point, the YARN HA cluster setup is complete.