Create three virtual machines with the IP addresses 192.168.169.101, 192.168.169.102, and 192.168.169.103.
Use 192.168.169.102 as the namenode, and 192.168.169.101 and 192.168.169.103 as datanodes.
Disable the firewall, install JDK 1.8, set up passwordless SSH login, and download hadoop-2.8.2.tar.gz into the /hadoop directory.
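The hostnames hadoop01, hadoop02 and hadoop03 are used throughout the steps below, so each VM needs name resolution plus key-based SSH before the Hadoop scripts can reach the other nodes. A minimal sketch of these prerequisites (the firewall commands assume CentOS 7; adjust for your distribution):

# On every node: add the hostname mappings to /etc/hosts
192.168.169.101 hadoop01
192.168.169.102 hadoop02
192.168.169.103 hadoop03

# On every node: stop and disable the firewall (CentOS 7 assumed)
systemctl stop firewalld
systemctl disable firewalld

# On hadoop02, as the hadoop user: create a key and push it to all three nodes
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
ssh-copy-id hadoop@hadoop01
ssh-copy-id hadoop@hadoop02
ssh-copy-id hadoop@hadoop03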
1 Install the namenode
Extract hadoop-2.8.2.tar.gz into the hadoop user's home directory /hadoop on 192.168.169.102:
[hadoop@hadoop02 ~]$ pwd
/hadoop
[hadoop@hadoop02 ~]$ tar zxvf hadoop-2.8.2.tar.gz
... ...
[hadoop@hadoop02 ~]$ cd hadoop-2.8.2/
[hadoop@hadoop02 hadoop-2.8.2]$ pwd
/hadoop/hadoop-2.8.2
[hadoop@hadoop02 hadoop-2.8.2]$ ls -l
total 132
drwxr-xr-x 2 hadoop hadoop  4096 Oct 20 05:11 bin
drwxr-xr-x 3 hadoop hadoop    19 Oct 20 05:11 etc
drwxr-xr-x 2 hadoop hadoop   101 Oct 20 05:11 include
drwxr-xr-x 3 hadoop hadoop    19 Oct 20 05:11 lib
drwxr-xr-x 2 hadoop hadoop  4096 Oct 20 05:11 libexec
-rw-r--r-- 1 hadoop hadoop 99253 Oct 20 05:11 LICENSE.txt
-rw-r--r-- 1 hadoop hadoop 15915 Oct 20 05:11 NOTICE.txt
-rw-r--r-- 1 hadoop hadoop  1366 Oct 20 05:11 README.txt
drwxr-xr-x 2 hadoop hadoop  4096 Oct 20 05:11 sbin
drwxr-xr-x 4 hadoop hadoop    29 Oct 20 05:11 share
[hadoop@hadoop02 hadoop-2.8.2]$
2 Configure the Hadoop environment variables
[hadoop@hadoop02 bin]$ vi /hadoop/.bash_profile
export HADOOP_HOME=/hadoop/hadoop-2.8.2
export PATH=$PATH:$HADOOP_HOME/bin
Note: the other two virtual machines must be configured the same way.
Run source ~/.bash_profile to make the configuration take effect, then verify:
[hadoop@hadoop02 bin]$ source ~/.bash_profile
[hadoop@hadoop02 bin]$ echo $HADOOP_HOME
/hadoop/hadoop-2.8.2
[hadoop@hadoop02 bin]$ echo $PATH
/usr/java/jdk1.8.0_151/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/hadoop/.local/bin:/hadoop/bin:/hadoop/.local/bin:/hadoop/bin:/hadoop/hadoop-2.8.2/bin
[hadoop@hadoop02 bin]$
3 Create the Hadoop working directories
[hadoop@hadoop02 bin]$ mkdir -p /hadoop/hadoop/dfs/name /hadoop/hadoop/dfs/data /hadoop/hadoop/tmp
4 Edit the Hadoop configuration files
Seven configuration files are modified in total:
hadoop-env.sh: Java environment variables
yarn-env.sh: the Java runtime environment for the YARN framework; YARN separates resource management from the processing components, so a YARN-based architecture is not tied to MapReduce
slaves: lists the datanode servers
core-site.xml: core cluster settings, such as the default filesystem URI
hdfs-site.xml: HDFS (filesystem) configuration
mapred-site.xml: MapReduce job configuration
yarn-site.xml: YARN framework configuration, mainly the addresses at which its services run
4.1 /hadoop/hadoop-2.8.2/etc/hadoop/hadoop-env.sh
[hadoop@hadoop02 hadoop]$ vi /hadoop/hadoop-2.8.2/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_151/
4.2 /hadoop/hadoop-2.8.2/etc/hadoop/yarn-env.sh
[hadoop@hadoop02 hadoop]$ vi /hadoop/hadoop-2.8.2/etc/hadoop/yarn-env.sh
JAVA_HOME=/usr/java/jdk1.8.0_151/
4.3 /hadoop/hadoop-2.8.2/etc/hadoop/slaves
[hadoop@hadoop02 hadoop]$ vi /hadoop/hadoop-2.8.2/etc/hadoop/slaves
hadoop01
hadoop03
4.4 /hadoop/hadoop-2.8.2/etc/hadoop/core-site.xml
[hadoop@hadoop02 hadoop]$ vi /hadoop/hadoop-2.8.2/etc/hadoop/core-site.xml
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/hadoop/hadoop/tmp</value>  <!-- the directory created manually in step 3 -->
    <final>true</final>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.169.102:9000</value>
    <final>true</final>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
</configuration>
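Note that fs.default.name is the deprecated Hadoop 1.x name of this setting; fs.defaultFS is the 2.x equivalent, and either is accepted here. After editing, the value actually in effect can be checked with hdfs getconf (an extra sanity check, not part of the original session); it should print hdfs://192.168.169.102:9000:

[hadoop@hadoop02 hadoop]$ hdfs getconf -confKey fs.defaultFS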
4.5 /hadoop/hadoop-2.8.2/etc/hadoop/hdfs-site.xml
[hadoop@hadoop02 hadoop]$ vi /hadoop/hadoop-2.8.2/etc/hadoop/hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/hadoop/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/hadoop/hadoop/dfs/data</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop02:9001</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>

The properties must sit inside a <configuration> element, added above. dfs.name.dir and dfs.data.dir are the older (Hadoop 1.x) names for dfs.namenode.name.dir and dfs.datanode.data.dir; Hadoop 2.8 still accepts them.
4.6 /hadoop/hadoop-2.8.2/etc/hadoop/mapred-site.xml
[hadoop@hadoop02 hadoop]$ cp mapred-site.xml.template mapred-site.xml
[hadoop@hadoop02 hadoop]$ vi /hadoop/hadoop-2.8.2/etc/hadoop/mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoop02:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoop02:19888</value>
  </property>
</configuration>
4.7 /hadoop/hadoop-2.8.2/etc/hadoop/yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>hadoop02:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>hadoop02:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>hadoop02:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>hadoop02:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>hadoop02:8088</value>
  </property>
</configuration>
5 Install the datanodes
On 192.168.169.102:
[hadoop@hadoop02 ~]$ scp -rp hadoop-2.8.2 hadoop@hadoop01:~/
[hadoop@hadoop02 ~]$ scp -rp hadoop-2.8.2 hadoop@hadoop03:~/
6 Format the namenode
[hadoop@hadoop02 ~]$ pwd
/hadoop
[hadoop@hadoop02 ~]$ ./hadoop-2.8.2/bin/hdfs namenode -format
17/11/05 21:10:43 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: user = hadoop
STARTUP_MSG: host = hadoop02/192.168.169.102
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.8.2
STARTUP_MSG: classpath = /hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/share/hadoop/common/lib/activation-1.1.jar:/hadoop/hadoop-2.8.2/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/hadoop/hadoop- ......
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 66c47f2a01ad9637879e95f80c41f798373828fb; compiled by 'jdu' on 2017-10-19T20:39Z
STARTUP_MSG: java = 1.8.0_151
************************************************************/
17/11/05 21:10:43 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/11/05 21:10:43 INFO namenode.NameNode: createNameNode [-format]
17/11/05 21:10:43 WARN common.Util: Path /hadoop/hadoop/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
17/11/05 21:10:43 WARN common.Util: Path /hadoop/hadoop/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-206dbc0f-21a2-4c5e-bad1-c296ed9f705a
17/11/05 21:10:44 INFO namenode.FSEditLog: Edit logging is async:false
17/11/05 21:10:44 INFO namenode.FSNamesystem: KeyProvider: null
17/11/05 21:10:44 INFO namenode.FSNamesystem: fsLock is fair: true
17/11/05 21:10:44 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
17/11/05 21:10:44 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
17/11/05 21:10:44 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
17/11/05 21:10:44 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
17/11/05 21:10:44 INFO blockmanagement.BlockManager: The block deletion will start around 2017 Nov 05 21:10:44
17/11/05 21:10:44 INFO util.GSet: Computing capacity for map BlocksMap
17/11/05 21:10:44 INFO util.GSet: VM type = 64-bit
17/11/05 21:10:44 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
17/11/05 21:10:44 INFO util.GSet: capacity = 2^21 = 2097152 entries
17/11/05 21:10:44 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
17/11/05 21:10:44 INFO blockmanagement.BlockManager: defaultReplication = 2
17/11/05 21:10:44 INFO blockmanagement.BlockManager: maxReplication = 512
17/11/05 21:10:44 INFO blockmanagement.BlockManager: minReplication = 1
17/11/05 21:10:44 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
17/11/05 21:10:44 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
17/11/05 21:10:44 INFO blockmanagement.BlockManager: encryptDataTransfer = false
17/11/05 21:10:44 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
17/11/05 21:10:44 INFO namenode.FSNamesystem: fsOwner = hadoop (auth:SIMPLE)
17/11/05 21:10:44 INFO namenode.FSNamesystem: supergroup = supergroup
17/11/05 21:10:44 INFO namenode.FSNamesystem: isPermissionEnabled = false
17/11/05 21:10:44 INFO namenode.FSNamesystem: HA Enabled: false
17/11/05 21:10:44 INFO namenode.FSNamesystem: Append Enabled: true
17/11/05 21:10:45 INFO util.GSet: Computing capacity for map INodeMap
17/11/05 21:10:45 INFO util.GSet: VM type = 64-bit
17/11/05 21:10:45 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
17/11/05 21:10:45 INFO util.GSet: capacity = 2^20 = 1048576 entries
17/11/05 21:10:45 INFO namenode.FSDirectory: ACLs enabled? false
17/11/05 21:10:45 INFO namenode.FSDirectory: XAttrs enabled? true
17/11/05 21:10:45 INFO namenode.NameNode: Caching file names occurring more than 10 times
17/11/05 21:10:45 INFO util.GSet: Computing capacity for map cachedBlocks
17/11/05 21:10:45 INFO util.GSet: VM type = 64-bit
17/11/05 21:10:45 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
17/11/05 21:10:45 INFO util.GSet: capacity = 2^18 = 262144 entries
17/11/05 21:10:45 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
17/11/05 21:10:45 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
17/11/05 21:10:45 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
17/11/05 21:10:45 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
17/11/05 21:10:45 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
17/11/05 21:10:45 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
17/11/05 21:10:45 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
17/11/05 21:10:45 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
17/11/05 21:10:45 INFO util.GSet: Computing capacity for map NameNodeRetryCache
17/11/05 21:10:45 INFO util.GSet: VM type = 64-bit
17/11/05 21:10:45 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
17/11/05 21:10:45 INFO util.GSet: capacity = 2^15 = 32768 entries
17/11/05 21:10:45 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1476203169-192.168.169.102-1509887445494
17/11/05 21:10:45 INFO common.Storage: Storage directory /hadoop/hadoop/dfs/name has been successfully formatted.
17/11/05 21:10:45 INFO namenode.FSImageFormatProtobuf: Saving image file /hadoop/hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
17/11/05 21:10:45 INFO namenode.FSImageFormatProtobuf: Image file /hadoop/hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 323 bytes saved in 0 seconds.
17/11/05 21:10:45 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/11/05 21:10:45 INFO util.ExitUtil: Exiting with status 0
17/11/05 21:10:45 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop02/192.168.169.102
************************************************************/
[hadoop@hadoop02 ~]$
Verify:
[hadoop@hadoop02 ~]$ cd /hadoop/hadoop/dfs/name/current
[hadoop@hadoop02 current]$ pwd
/hadoop/hadoop/dfs/name/current
[hadoop@hadoop02 current]$ ls
fsimage_0000000000000000000 fsimage_0000000000000000000.md5 seen_txid VERSION
[hadoop@hadoop02 current]$
7 Start HDFS
[hadoop@hadoop02 sbin]$ pwd
/hadoop/hadoop-2.8.2/sbin
[hadoop@hadoop02 sbin]$ ./start-dfs.sh
Starting namenodes on [hadoop02]
The authenticity of host 'hadoop02 (192.168.169.102)' can't be established.
ECDSA key fingerprint is f7:ef:fb:e5:7e:0f:59:40:63:23:99:9a:ca:e2:03:e8.
Are you sure you want to continue connecting (yes/no)? yes
hadoop02: Warning: Permanently added 'hadoop02,192.168.169.102' (ECDSA) to the list of known hosts.
hadoop02: starting namenode, logging to /hadoop/hadoop-2.8.2/logs/hadoop-hadoop-namenode-hadoop02.out
hadoop03: starting datanode, logging to /hadoop/hadoop-2.8.2/logs/hadoop-hadoop-datanode-hadoop03.out
hadoop01: starting datanode, logging to /hadoop/hadoop-2.8.2/logs/hadoop-hadoop-datanode-hadoop01.out
Starting secondary namenodes [hadoop02]
hadoop02: starting secondarynamenode, logging to /hadoop/hadoop-2.8.2/logs/hadoop-hadoop-secondarynamenode-hadoop02.out
[hadoop@hadoop02 sbin]$
Verify.
On 192.168.169.102:
[hadoop@hadoop02 sbin]$ ps -aux | grep namenode
hadoop 13502 3.0 6.2 2820308 241808 ? Sl 21:18 0:09 /usr/java/jdk1.8.0_151//bin/java -Dproc_namenode -Xmx1000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,console -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=hadoop-hadoop-namenode-hadoop02.log -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.namenode.NameNode
hadoop 13849 2.1 4.5 2784012 174604 ? Sl 21:18 0:06 /usr/java/jdk1.8.0_151//bin/java -Dproc_secondarynamenode -Xmx1000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,console -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=hadoop-hadoop-secondarynamenode-hadoop02.log -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode
hadoop 14264 0.0 0.0 112660 968 pts/1 S+ 21:23 0:00 grep --color=auto namenode
On 192.168.169.101:
[hadoop@hadoop01 hadoop]$ ps -aux | grep datanode
hadoop 45401 24.5 4.0 2811244 165268 ? Sl 21:31 0:10 /usr/java/jdk1.8.0_151//bin/java -Dproc_datanode -Xmx1000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,console -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=hadoop-hadoop-datanode-hadoop01.log -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.datanode.DataNode
hadoop 45479 0.0 0.0 112660 968 pts/0 S+ 21:32 0:00 grep --color=auto datanode
On 192.168.169.103:
[hadoop@hadoop03 hadoop]$ ps -aux | grep datanode
hadoop 10608 7.4 3.9 2806140 158464 ? Sl 21:31 0:08 /usr/java/jdk1.8.0_151//bin/java -Dproc_datanode -Xmx1000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,console -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=hadoop-hadoop-datanode-hadoop03.log -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.datanode.DataNode
hadoop 10757 0.0 0.0 112660 968 pts/0 S+ 21:33 0:00 grep --color=auto datanode
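A quicker check than grepping ps (an alternative not shown in the original session) is the JDK's jps tool, which lists running Java processes by class name. Given the PIDs above, hadoop02 should show something like the following, while hadoop01 and hadoop03 should each list a DataNode:

[hadoop@hadoop02 sbin]$ jps
13502 NameNode
13849 SecondaryNameNode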
8 Start YARN
[hadoop@hadoop02 sbin]$ ./start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /hadoop/hadoop-2.8.2/logs/yarn-hadoop-resourcemanager-hadoop02.out
hadoop01: starting nodemanager, logging to /hadoop/hadoop-2.8.2/logs/yarn-hadoop-nodemanager-hadoop01.out
hadoop03: starting nodemanager, logging to /hadoop/hadoop-2.8.2/logs/yarn-hadoop-nodemanager-hadoop03.out
Verify.
On 192.168.169.102:
[hadoop@hadoop02 sbin]$ ps -aux | grep resourcemanage
hadoop 16256 21.6 7.1 2991540 277336 pts/1 Sl 21:36 0:22 /usr/java/jdk1.8.0_151//bin/java -Dproc_resourcemanager -Xmx1000m -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dyarn.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=yarn-hadoop-resourcemanager-hadoop02.log -Dyarn.log.file=yarn-hadoop-resourcemanager-hadoop02.log -Dyarn.home.dir= -Dyarn.id.str=hadoop -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dyarn.policy.file=hadoop-policy.xml -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dyarn.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=yarn-hadoop-resourcemanager-hadoop02.log -Dyarn.log.file=yarn-hadoop-resourcemanager-hadoop02.log -Dyarn.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -classpath /hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/share/hadoop/common/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/common/*:/hadoop/hadoop-2.8.2/share/hadoop/hdfs:/hadoop/hadoop-2.8.2/share/hadoop/hdfs/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/hdfs/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/*:/hadoop/hadoop-2.8.2/share/hadoop/mapreduce/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/mapreduce/*:/hadoop/hadoop-2.8.2/contrib/capacity-scheduler/*.jar:/hadoop/hadoop-2.8.2/contrib/capacity-scheduler/*.jar:/hadoop/hadoop-2.8.2/contrib/capacity-scheduler/*.jar:/hadoop/hadoop-2.8.2/share/hadoop/yarn/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/lib/*:/hadoop/hadoop-2.8.2/etc/hadoop/rm-config/log4j.properties org.apache.hadoop.yarn.server.resourcemanager.ResourceManager
hadoop 16541 0.0 0.0 112660 972 pts/1 S+ 21:38 0:00 grep --color=auto resourcemanage
On 192.168.169.101:
[hadoop@hadoop01 hadoop]$ ps -aux | grep nodemanager
hadoop 45543 10.9 6.6 2847708 267304 ? Sl 21:36 0:18 /usr/java/jdk1.8.0_151//bin/java -Dproc_nodemanager -Xmx1000m -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dyarn.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=yarn-hadoop-nodemanager-hadoop01.log -Dyarn.log.file=yarn-hadoop-nodemanager-hadoop01.log -Dyarn.home.dir= -Dyarn.id.str=hadoop -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dyarn.policy.file=hadoop-policy.xml -server -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dyarn.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=yarn-hadoop-nodemanager-hadoop01.log -Dyarn.log.file=yarn-hadoop-nodemanager-hadoop01.log -Dyarn.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -classpath /hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/share/hadoop/common/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/common/*:/hadoop/hadoop-2.8.2/share/hadoop/hdfs:/hadoop/hadoop-2.8.2/share/hadoop/hdfs/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/hdfs/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/*:/hadoop/hadoop-2.8.2/share/hadoop/mapreduce/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar:/hadoop/hadoop-2.8.2/share/hadoop/yarn/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/lib/*:/hadoop/hadoop-2.8.2/etc/hadoop/nm-config/log4j.properties org.apache.hadoop.yarn.server.nodemanager.NodeManager
hadoop 45669 0.0 0.0 112660 964 pts/0 S+ 21:39 0:00 grep --color=auto nodemanager
On 192.168.169.103:
[hadoop@hadoop03 hadoop]$ ps -aux | grep nodemanager
hadoop 10808 8.4 6.4 2841680 258220 ? Sl 21:36 0:21 /usr/java/jdk1.8.0_151//bin/java -Dproc_nodemanager -Xmx1000m -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dyarn.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=yarn-hadoop-nodemanager-hadoop03.log -Dyarn.log.file=yarn-hadoop-nodemanager-hadoop03.log -Dyarn.home.dir= -Dyarn.id.str=hadoop -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dyarn.policy.file=hadoop-policy.xml -server -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dyarn.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=yarn-hadoop-nodemanager-hadoop03.log -Dyarn.log.file=yarn-hadoop-nodemanager-hadoop03.log -Dyarn.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -classpath /hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/share/hadoop/common/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/common/*:/hadoop/hadoop-2.8.2/share/hadoop/hdfs:/hadoop/hadoop-2.8.2/share/hadoop/hdfs/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/hdfs/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/*:/hadoop/hadoop-2.8.2/share/hadoop/mapreduce/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar:/hadoop/hadoop-2.8.2/share/hadoop/yarn/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/lib/*:/hadoop/hadoop-2.8.2/etc/hadoop/nm-config/log4j.properties org.apache.hadoop.yarn.server.nodemanager.NodeManager
hadoop 11077 0.0 0.0 112660 968 pts/0 S+ 21:40 0:00 grep --color=auto nodemanager
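A cluster-level check (again an addition to the original steps) is to ask the ResourceManager which NodeManagers have registered; with both datanodes up it should report Total Nodes:2 and list hadoop01 and hadoop03 as RUNNING:

[hadoop@hadoop02 sbin]$ yarn node -list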
9 Start the JobHistory server (for viewing job status)
[hadoop@hadoop02 sbin]$ ./mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to /hadoop/hadoop-2.8.2/logs/mapred-hadoop-historyserver-hadoop02.out
[hadoop@hadoop02 sbin]$
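The history server listens on the ports set in mapred-site.xml in step 4.6, so one quick check (not in the original walkthrough) is to confirm that its web UI answers on port 19888; once the daemon is up, this should print 200:

[hadoop@hadoop02 sbin]$ curl -s -o /dev/null -w "%{http_code}\n" http://hadoop02:19888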
10 View HDFS status
[hadoop@hadoop02 bin]$ hdfs dfsadmin -report
Configured Capacity: 97679564800 (90.97 GB)
Present Capacity: 87752962048 (81.73 GB)
DFS Remaining: 87752953856 (81.73 GB)
DFS Used: 8192 (8 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Pending deletion blocks: 0

-------------------------------------------------
Live datanodes (2):

Name: 192.168.169.101:50010 (hadoop01)
Hostname: hadoop01
Decommission Status : Normal
Configured Capacity: 48839782400 (45.49 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 4984066048 (4.64 GB)
DFS Remaining: 43855712256 (40.84 GB)
DFS Used%: 0.00%
DFS Remaining%: 89.80%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sun Nov 05 22:22:53 CST 2017

Name: 192.168.169.103:50010 (hadoop03)
Hostname: hadoop03
Decommission Status : Normal
Configured Capacity: 48839782400 (45.49 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 4942536704 (4.60 GB)
DFS Remaining: 43897241600 (40.88 GB)
DFS Used%: 0.00%
DFS Remaining%: 89.88%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sun Nov 05 22:22:53 CST 2017
If instead the report comes back empty, like this:
[hadoop@hadoop02 hadoop]$ hdfs dfsadmin -report
Configured Capacity: 0 (0 B)
Present Capacity: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used: 0 (0 B)
DFS Used%: NaN%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Pending deletion blocks: 0
then the problem is likely in one of two places (both can be checked with the commands below):
1. fs.default.name in core-site.xml is misconfigured;
2. the firewall was not disabled.
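These diagnostic commands are an addition to the original text (the firewall ones assume CentOS 7; substitute your distribution's equivalent). Re-check the filesystem URI with hdfs getconf as in the note under step 4.4, and then on every node:

systemctl status firewalld
systemctl stop firewalld
systemctl disable firewalld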
View the file blocks:
[hadoop@hadoop02 bin]$ hdfs fsck / -files -blocks
Connecting to namenode via http://hadoop02:50070/fsck?ugi=hadoop&files=1&blocks=1&path=%2F
FSCK started by hadoop (auth:SIMPLE) from /192.168.169.102 for path / at Sun Nov 05 22:25:18 CST 2017
/ <dir>
/tmp <dir>
/tmp/hadoop-yarn <dir>
/tmp/hadoop-yarn/staging <dir>
/tmp/hadoop-yarn/staging/history <dir>
/tmp/hadoop-yarn/staging/history/done <dir>
/tmp/hadoop-yarn/staging/history/done_intermediate <dir>
Status: HEALTHY
Total size: 0 B
Total dirs: 7
Total files: 0
Total symlinks: 0
Total blocks (validated): 0
Minimally replicated blocks: 0
Over-replicated blocks: 0
Under-replicated blocks: 0
Mis-replicated blocks: 0
Default replication factor: 2
Average block replication: 0.0
Corrupt blocks: 0
Missing replicas: 0
Number of data-nodes: 2
Number of racks: 1
FSCK ended at Sun Nov 05 22:25:18 CST 2017 in 6 milliseconds

The filesystem under path '/' is HEALTHY
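The report above validates an empty filesystem, so a simple smoke test (an added step; the /test path and the README.txt file are arbitrary choices) is to upload a file and confirm it yields a replicated block:

[hadoop@hadoop02 ~]$ hdfs dfs -mkdir -p /test
[hadoop@hadoop02 ~]$ hdfs dfs -put /hadoop/hadoop-2.8.2/README.txt /test/
[hadoop@hadoop02 ~]$ hdfs fsck /test -files -blocks

With dfs.replication=2, fsck should now count one file and one minimally replicated block.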
View HDFS in a web browser:
http://192.168.169.102:50070
View the YARN cluster in a web browser:
http://192.168.169.102:8088
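Finally, a sample MapReduce job exercises HDFS, YARN and the JobHistory server end to end. This closing check is an addition to the original text, and the /input and /output paths are arbitrary:

[hadoop@hadoop02 ~]$ hdfs dfs -mkdir -p /input
[hadoop@hadoop02 ~]$ hdfs dfs -put /hadoop/hadoop-2.8.2/README.txt /input/
[hadoop@hadoop02 ~]$ hadoop jar /hadoop/hadoop-2.8.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.2.jar wordcount /input /output
[hadoop@hadoop02 ~]$ hdfs dfs -cat /output/part-r-00000

While it runs, the job shows up in the cluster UI at http://192.168.169.102:8088, and after completion in the JobHistory UI at http://192.168.169.102:19888.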