Installing Hadoop-2.8.2 on RHEL 7.2

Source: http://www.cnblogs.com/ccskun/archive/2017/11/06/7795479.html



Create three virtual machines with the IP addresses 192.168.169.101, 192.168.169.102, and 192.168.169.103.

192.168.169.102 will be the namenode; 192.168.169.101 and 192.168.169.103 will be datanodes.

Disable the firewall, install JDK 1.8, set up passwordless SSH login, and download hadoop-2.8.2.tar.gz into the /hadoop directory.

1 Install the namenode

   Extract hadoop-2.8.2.tar.gz into /hadoop, the home directory of the hadoop user on 192.168.169.102:

[hadoop@hadoop02 ~]$ pwd
/hadoop
[hadoop@hadoop02 ~]$ tar zxvf hadoop-2.8.2.tar.gz 
... ...
[hadoop@hadoop02 ~]$ cd hadoop-2.8.2/
[hadoop@hadoop02 hadoop-2.8.2]$ pwd
/hadoop/hadoop-2.8.2
[hadoop@hadoop02 hadoop-2.8.2]$ ls -l
total 132
drwxr-xr-x 2 hadoop hadoop  4096 Oct 20 05:11 bin
drwxr-xr-x 3 hadoop hadoop    19 Oct 20 05:11 etc
drwxr-xr-x 2 hadoop hadoop   101 Oct 20 05:11 include
drwxr-xr-x 3 hadoop hadoop    19 Oct 20 05:11 lib
drwxr-xr-x 2 hadoop hadoop  4096 Oct 20 05:11 libexec
-rw-r--r-- 1 hadoop hadoop 99253 Oct 20 05:11 LICENSE.txt
-rw-r--r-- 1 hadoop hadoop 15915 Oct 20 05:11 NOTICE.txt
-rw-r--r-- 1 hadoop hadoop  1366 Oct 20 05:11 README.txt
drwxr-xr-x 2 hadoop hadoop  4096 Oct 20 05:11 sbin
drwxr-xr-x 4 hadoop hadoop    29 Oct 20 05:11 share
[hadoop@hadoop02 hadoop-2.8.2]$ 

 2 Configure the Hadoop environment variables

[hadoop@hadoop02 bin]$ vi /hadoop/.bash_profile
export HADOOP_HOME=/hadoop/hadoop-2.8.2
export PATH=$PATH:$HADOOP_HOME/bin

 Note: the other two virtual machines must be configured the same way.

 Run source ~/.bash_profile to make the settings take effect, then verify:

[hadoop@hadoop02 bin]$ source ~/.bash_profile
[hadoop@hadoop02 bin]$ echo $HADOOP_HOME
/hadoop/hadoop-2.8.2
[hadoop@hadoop02 bin]$ echo $PATH
/usr/java/jdk1.8.0_151/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/hadoop/.local/bin:/hadoop/bin:/hadoop/.local/bin:/hadoop/bin:/hadoop/hadoop-2.8.2/bin
[hadoop@hadoop02 bin]$ 
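The echo $PATH output above shows /hadoop/bin and /hadoop/.local/bin duplicated, which is what happens when the export line is re-sourced. A small guard avoids that; this sketch also appends $HADOOP_HOME/sbin, an optional convenience (the start scripts used in step 7 live there) that the original profile does not include:

```shell
# Append a directory to PATH only if it is not already there.
append_path() {
  case ":$PATH:" in
    *":$1:"*) ;;            # already present: do nothing
    *) PATH="$PATH:$1" ;;
  esac
}

HADOOP_HOME=/hadoop/hadoop-2.8.2
append_path "$HADOOP_HOME/bin"
append_path "$HADOOP_HOME/bin"   # second call is a no-op
append_path "$HADOOP_HOME/sbin"  # optional: start-dfs.sh etc. live here
echo "$PATH"
```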

 3 Create the Hadoop working directories

[hadoop@hadoop02 bin]$ mkdir -p /hadoop/hadoop/dfs/name /hadoop/hadoop/dfs/data /hadoop/hadoop/tmp

 4 Edit the Hadoop configuration files

    Seven configuration files are modified in total:

    hadoop-env.sh: Java environment variables
    yarn-env.sh: sets the Java runtime environment for the YARN framework; YARN separates resource management from job processing, so a YARN-based architecture is not tied to MapReduce.
    slaves: lists the datanode servers
    core-site.xml: core cluster settings, including the default filesystem URI
    hdfs-site.xml: HDFS configuration
    mapred-site.xml: MapReduce job configuration
    yarn-site.xml: YARN framework configuration, mainly the addresses its services listen on
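After editing, it is worth confirming each *-site.xml is still well-formed XML — a stray `//` comment (which XML does not support) is an easy mistake to make. A minimal Python sketch of the check, run here against an inline sample rather than the real files:

```python
import xml.etree.ElementTree as ET

# Inline stand-in for one of the *-site.xml files; on a real node you
# would pass the file path to ET.parse() instead.
sample = """<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/hadoop/hadoop/tmp</value>
    </property>
</configuration>"""

root = ET.fromstring(sample)  # raises ParseError if the XML is malformed
props = {p.findtext("name"): p.findtext("value") for p in root.iter("property")}
print(props)  # {'hadoop.tmp.dir': '/hadoop/hadoop/tmp'}
```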

4.1 /hadoop/hadoop-2.8.2/etc/hadoop/hadoop-env.sh

[hadoop@hadoop02 hadoop]$ vi /hadoop/hadoop-2.8.2/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_151/

 4.2 /hadoop/hadoop-2.8.2/etc/hadoop/yarn-env.sh

[hadoop@hadoop02 hadoop]$ vi /hadoop/hadoop-2.8.2/etc/hadoop/yarn-env.sh
JAVA_HOME=/usr/java/jdk1.8.0_151/

 4.3 /hadoop/hadoop-2.8.2/etc/hadoop/slaves

[hadoop@hadoop02 hadoop]$ vi /hadoop/hadoop-2.8.2/etc/hadoop/slaves
hadoop01
hadoop03

 4.4 /hadoop/hadoop-2.8.2/etc/hadoop/core-site.xml

[hadoop@hadoop02 hadoop]$ vi /hadoop/hadoop-2.8.2/etc/hadoop/core-site.xml
<configuration>
    <property>   
        <name>hadoop.tmp.dir</name>   
        <value>/hadoop/hadoop/tmp</value>   <!-- the directory created manually in step 3 -->
        <final>true</final>  
        <description>A base for other temporary directories.</description>   
    </property>   
    <property>   
        <name>fs.default.name</name>   
        <value>hdfs://192.168.169.102:9000</value>  
        <final>true</final>   
    </property>   
    <property>    
         <name>io.file.buffer.size</name>    
         <value>131072</value>    
    </property>
</configuration>

 4.5 /hadoop/hadoop-2.8.2/etc/hadoop/hdfs-site.xml

[hadoop@hadoop02 hadoop]$ vi /hadoop/hadoop-2.8.2/etc/hadoop/hdfs-site.xml
        <property>   
            <name>dfs.replication</name>   
            <value>2</value>   
        </property>   
        <property>   
            <name>dfs.name.dir</name>   
            <value>/hadoop/hadoop/dfs/name</value>   
        </property>   
        <property>   
            <name>dfs.data.dir</name>   
            <value>/hadoop/hadoop/dfs/data</value>   
        </property>   
        <property>    
             <name>dfs.namenode.secondary.http-address</name>    
             <value>hadoop02:9001</value>    
        </property>    
        <property>    
             <name>dfs.webhdfs.enabled</name>    
             <value>true</value>    
        </property>    
        <property>    
             <name>dfs.permissions</name>    
             <value>false</value>    
        </property>  
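As an aside: dfs.name.dir and dfs.data.dir still work in 2.8 but are deprecated aliases, and the namenode -format log in step 6 warns that these paths should be specified as URIs. A sketch of the equivalent settings in the current style (same directories as above):

```xml
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///hadoop/hadoop/dfs/name</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///hadoop/hadoop/dfs/data</value>
</property>
```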

 4.6 /hadoop/hadoop-2.8.2/etc/hadoop/mapred-site.xml

[hadoop@hadoop02 hadoop]$ cp mapred-site.xml.template mapred-site.xml
[hadoop@hadoop02 hadoop]$ vi /hadoop/hadoop-2.8.2/etc/hadoop/mapred-site.xml
        <property>    
              <name>mapreduce.framework.name</name>    
              <value>yarn</value>    
        </property>
        <property>
              <name>mapreduce.jobhistory.address</name>
              <value>hadoop02:10020</value>
        </property>
        <property>
              <name>mapreduce.jobhistory.webapp.address</name>
              <value>hadoop02:19888</value>
        </property>

 4.7 /hadoop/hadoop-2.8.2/etc/hadoop/yarn-site.xml

    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>hadoop02:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>hadoop02:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>hadoop02:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>hadoop02:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>hadoop02:8088</value>
    </property>
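The file pins five ResourceManager endpoints to distinct ports on hadoop02. A small sketch (ports copied from the file above) that verifies no two endpoints collide:

```python
# Ports taken from the yarn-site.xml above; all bound on hadoop02.
addresses = {
    "yarn.resourcemanager.address": "hadoop02:8032",
    "yarn.resourcemanager.scheduler.address": "hadoop02:8030",
    "yarn.resourcemanager.resource-tracker.address": "hadoop02:8031",
    "yarn.resourcemanager.admin.address": "hadoop02:8033",
    "yarn.resourcemanager.webapp.address": "hadoop02:8088",
}

ports = sorted(int(v.split(":")[1]) for v in addresses.values())
assert len(ports) == len(set(ports)), "two endpoints share a port"
print(ports)  # [8030, 8031, 8032, 8033, 8088]
```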

 5 Install the datanodes

    On 192.168.169.102:

[hadoop@hadoop02 ~]$ scp -rp hadoop-2.8.2 hadoop@hadoop01:~/
[hadoop@hadoop02 ~]$ scp -rp hadoop-2.8.2 hadoop@hadoop03:~/
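For later config edits it is enough to re-copy just etc/hadoop rather than the whole tree. A sketch of a loop over the datanodes; the leading echo makes it print the commands instead of running them, so drop it on the real cluster:

```shell
# Print (not run) the scp commands that would re-sync the config
# directory to each datanode; remove the leading echo to execute them.
hosts="hadoop01 hadoop03"
for host in $hosts; do
  echo scp -rp /hadoop/hadoop-2.8.2/etc/hadoop "hadoop@$host:/hadoop/hadoop-2.8.2/etc/"
done
```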

 6 Format the namenode

[hadoop@hadoop02 ~]$ pwd
/hadoop
[hadoop@hadoop02 ~]$ ./hadoop-2.8.2/bin/hdfs namenode -format
17/11/05 21:10:43 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   user = hadoop
STARTUP_MSG:   host = hadoop02/192.168.169.102
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.8.2
STARTUP_MSG:   classpath = /hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/share/hadoop/common/lib/activation-1.1.jar:/hadoop/hadoop-2.8.2/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/hadoop/hadoop-
......
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 66c47f2a01ad9637879e95f80c41f798373828fb; compiled by 'jdu' on 2017-10-19T20:39Z
STARTUP_MSG:   java = 1.8.0_151
************************************************************/
17/11/05 21:10:43 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/11/05 21:10:43 INFO namenode.NameNode: createNameNode [-format]
17/11/05 21:10:43 WARN common.Util: Path /hadoop/hadoop/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
17/11/05 21:10:43 WARN common.Util: Path /hadoop/hadoop/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-206dbc0f-21a2-4c5e-bad1-c296ed9f705a
17/11/05 21:10:44 INFO namenode.FSEditLog: Edit logging is async:false
17/11/05 21:10:44 INFO namenode.FSNamesystem: KeyProvider: null
17/11/05 21:10:44 INFO namenode.FSNamesystem: fsLock is fair: true
17/11/05 21:10:44 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
17/11/05 21:10:44 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
17/11/05 21:10:44 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
17/11/05 21:10:44 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
17/11/05 21:10:44 INFO blockmanagement.BlockManager: The block deletion will start around 2017 Nov 05 21:10:44
17/11/05 21:10:44 INFO util.GSet: Computing capacity for map BlocksMap
17/11/05 21:10:44 INFO util.GSet: VM type       = 64-bit
17/11/05 21:10:44 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
17/11/05 21:10:44 INFO util.GSet: capacity      = 2^21 = 2097152 entries
17/11/05 21:10:44 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
17/11/05 21:10:44 INFO blockmanagement.BlockManager: defaultReplication         = 2
17/11/05 21:10:44 INFO blockmanagement.BlockManager: maxReplication             = 512
17/11/05 21:10:44 INFO blockmanagement.BlockManager: minReplication             = 1
17/11/05 21:10:44 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
17/11/05 21:10:44 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
17/11/05 21:10:44 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
17/11/05 21:10:44 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
17/11/05 21:10:44 INFO namenode.FSNamesystem: fsOwner             = hadoop (auth:SIMPLE)
17/11/05 21:10:44 INFO namenode.FSNamesystem: supergroup          = supergroup
17/11/05 21:10:44 INFO namenode.FSNamesystem: isPermissionEnabled = false
17/11/05 21:10:44 INFO namenode.FSNamesystem: HA Enabled: false
17/11/05 21:10:44 INFO namenode.FSNamesystem: Append Enabled: true
17/11/05 21:10:45 INFO util.GSet: Computing capacity for map INodeMap
17/11/05 21:10:45 INFO util.GSet: VM type       = 64-bit
17/11/05 21:10:45 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
17/11/05 21:10:45 INFO util.GSet: capacity      = 2^20 = 1048576 entries
17/11/05 21:10:45 INFO namenode.FSDirectory: ACLs enabled? false
17/11/05 21:10:45 INFO namenode.FSDirectory: XAttrs enabled? true
17/11/05 21:10:45 INFO namenode.NameNode: Caching file names occurring more than 10 times
17/11/05 21:10:45 INFO util.GSet: Computing capacity for map cachedBlocks
17/11/05 21:10:45 INFO util.GSet: VM type       = 64-bit
17/11/05 21:10:45 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
17/11/05 21:10:45 INFO util.GSet: capacity      = 2^18 = 262144 entries
17/11/05 21:10:45 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
17/11/05 21:10:45 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
17/11/05 21:10:45 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
17/11/05 21:10:45 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
17/11/05 21:10:45 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
17/11/05 21:10:45 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
17/11/05 21:10:45 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
17/11/05 21:10:45 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
17/11/05 21:10:45 INFO util.GSet: Computing capacity for map NameNodeRetryCache
17/11/05 21:10:45 INFO util.GSet: VM type       = 64-bit
17/11/05 21:10:45 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
17/11/05 21:10:45 INFO util.GSet: capacity      = 2^15 = 32768 entries
17/11/05 21:10:45 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1476203169-192.168.169.102-1509887445494
17/11/05 21:10:45 INFO common.Storage: Storage directory /hadoop/hadoop/dfs/name has been successfully formatted.
17/11/05 21:10:45 INFO namenode.FSImageFormatProtobuf: Saving image file /hadoop/hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
17/11/05 21:10:45 INFO namenode.FSImageFormatProtobuf: Image file /hadoop/hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 323 bytes saved in 0 seconds.
17/11/05 21:10:45 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/11/05 21:10:45 INFO util.ExitUtil: Exiting with status 0
17/11/05 21:10:45 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop02/192.168.169.102
************************************************************/
[hadoop@hadoop02 ~]$ 

 Verify:

[hadoop@hadoop02 ~]$ cd /hadoop/hadoop/dfs/name/current
[hadoop@hadoop02 current]$ pwd
/hadoop/hadoop/dfs/name/current
[hadoop@hadoop02 current]$ ls
fsimage_0000000000000000000  fsimage_0000000000000000000.md5  seen_txid  VERSION
[hadoop@hadoop02 current]$

7 Start HDFS

[hadoop@hadoop02 sbin]$ pwd
/hadoop/hadoop-2.8.2/sbin
[hadoop@hadoop02 sbin]$ ./start-dfs.sh
Starting namenodes on [hadoop02]
The authenticity of host 'hadoop02 (192.168.169.102)' can't be established.
ECDSA key fingerprint is f7:ef:fb:e5:7e:0f:59:40:63:23:99:9a:ca:e2:03:e8.
Are you sure you want to continue connecting (yes/no)? yes
hadoop02: Warning: Permanently added 'hadoop02,192.168.169.102' (ECDSA) to the list of known hosts.
hadoop02: starting namenode, logging to /hadoop/hadoop-2.8.2/logs/hadoop-hadoop-namenode-hadoop02.out
hadoop03: starting datanode, logging to /hadoop/hadoop-2.8.2/logs/hadoop-hadoop-datanode-hadoop03.out
hadoop01: starting datanode, logging to /hadoop/hadoop-2.8.2/logs/hadoop-hadoop-datanode-hadoop01.out
Starting secondary namenodes [hadoop02]
hadoop02: starting secondarynamenode, logging to /hadoop/hadoop-2.8.2/logs/hadoop-hadoop-secondarynamenode-hadoop02.out
[hadoop@hadoop02 sbin]$ 

 Verify:

On 192.168.169.102:

[hadoop@hadoop02 sbin]$ ps -aux | grep namenode
hadoop    13502  3.0  6.2 2820308 241808 ?      Sl   21:18   0:09 /usr/java/jdk1.8.0_151//bin/java -Dproc_namenode -Xmx1000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,console -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=hadoop-hadoop-namenode-hadoop02.log -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.namenode.NameNode
hadoop    13849  2.1  4.5 2784012 174604 ?      Sl   21:18   0:06 /usr/java/jdk1.8.0_151//bin/java -Dproc_secondarynamenode -Xmx1000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,console -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=hadoop-hadoop-secondarynamenode-hadoop02.log -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode
hadoop    14264  0.0  0.0 112660   968 pts/1    S+   21:23   0:00 grep --color=auto namenode

 On 192.168.169.101:

[hadoop@hadoop01 hadoop]$ ps -aux | grep datanode
hadoop    45401 24.5  4.0 2811244 165268 ?      Sl   21:31   0:10 /usr/java/jdk1.8.0_151//bin/java -Dproc_datanode -Xmx1000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,console -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=hadoop-hadoop-datanode-hadoop01.log -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.datanode.DataNode
hadoop    45479  0.0  0.0 112660   968 pts/0    S+   21:32   0:00 grep --color=auto datanode

 On 192.168.169.103:

[hadoop@hadoop03 hadoop]$ ps -aux | grep datanode
hadoop    10608  7.4  3.9 2806140 158464 ?      Sl   21:31   0:08 /usr/java/jdk1.8.0_151//bin/java -Dproc_datanode -Xmx1000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,console -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=hadoop-hadoop-datanode-hadoop03.log -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.datanode.DataNode
hadoop    10757  0.0  0.0 112660   968 pts/0    S+   21:33   0:00 grep --color=auto datanode
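Rather than reading the very long ps command lines above, one can reduce each matching line to the daemon's class name, which is always the last token. A Python sketch over a trimmed sample of the hadoop02 output:

```python
# Trimmed sample of `ps aux` output from hadoop02; the full command
# line always ends with the daemon's main class.
sample = """\
hadoop 13502 3.0 6.2 ... org.apache.hadoop.hdfs.server.namenode.NameNode
hadoop 13849 2.1 4.5 ... org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode"""

# Last whitespace-separated token, then the part after the final dot.
daemons = [line.split()[-1].rsplit(".", 1)[-1] for line in sample.splitlines()]
print(daemons)  # ['NameNode', 'SecondaryNameNode']
```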

 8 Start YARN

[hadoop@hadoop02 sbin]$ ./start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /hadoop/hadoop-2.8.2/logs/yarn-hadoop-resourcemanager-hadoop02.out
hadoop01: starting nodemanager, logging to /hadoop/hadoop-2.8.2/logs/yarn-hadoop-nodemanager-hadoop01.out
hadoop03: starting nodemanager, logging to /hadoop/hadoop-2.8.2/logs/yarn-hadoop-nodemanager-hadoop03.out

 Verify:

On 192.168.169.102:

[hadoop@hadoop02 sbin]$ ps -aux | grep resourcemanage
hadoop    16256 21.6  7.1 2991540 277336 pts/1  Sl   21:36   0:22 /usr/java/jdk1.8.0_151//bin/java -Dproc_resourcemanager -Xmx1000m -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dyarn.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=yarn-hadoop-resourcemanager-hadoop02.log -Dyarn.log.file=yarn-hadoop-resourcemanager-hadoop02.log -Dyarn.home.dir= -Dyarn.id.str=hadoop -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dyarn.policy.file=hadoop-policy.xml -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dyarn.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=yarn-hadoop-resourcemanager-hadoop02.log -Dyarn.log.file=yarn-hadoop-resourcemanager-hadoop02.log -Dyarn.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -classpath /hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/share/hadoop/common/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/common/*:/hadoop/hadoop-2.8.2/share/hadoop/hdfs:/hadoop/hadoop-2.8.2/share/hadoop/hdfs/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/hdfs/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/*:/hadoop/hadoop-2.8.2/share/hadoop/mapreduce/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/mapreduce/*:/hadoop/hadoop-2.8.2/contrib/capacity-scheduler/*.jar:/hadoop/hadoop-2.8.2/contrib/capacity-scheduler/*.jar:/hadoop/hadoop-2.8.2/contrib/capacity-scheduler/*.jar:/hadoop/hadoop-2.8.2/share/hadoop/yarn/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/lib/*:/hadoop/hadoop-2.8.2/etc/hadoop/rm-config/log4j.properties org.apache.hadoop.yarn.server.resourcemanager.ResourceManager
hadoop    16541  0.0  0.0 112660   972 pts/1    S+   21:38   0:00 grep --color=auto resourcemanage

 On 192.168.169.101:

[hadoop@hadoop01 hadoop]$ ps -aux | grep nodemanager
hadoop    45543 10.9  6.6 2847708 267304 ?      Sl   21:36   0:18 /usr/java/jdk1.8.0_151//bin/java -Dproc_nodemanager -Xmx1000m -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dyarn.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=yarn-hadoop-nodemanager-hadoop01.log -Dyarn.log.file=yarn-hadoop-nodemanager-hadoop01.log -Dyarn.home.dir= -Dyarn.id.str=hadoop -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dyarn.policy.file=hadoop-policy.xml -server -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dyarn.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=yarn-hadoop-nodemanager-hadoop01.log -Dyarn.log.file=yarn-hadoopnodemanager-hadoop01.log -Dyarn.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -classpath /hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/share/hadoop/common/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/common/*:/hadoop/hadoop-2.8.2/share/hadoop/hdfs:/hadoop/hadoop-2.8.2/share/hadoop/hdfs/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/hdfs/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/*:/hadoop/hadoop-2.8.2/share/hadoop/mapreduce/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar:/hadoop/hadoop-2.8.2/share/hadoop/yarn/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/lib/*:/hadoop/hadoop-2.8.2/etc/hadoop/nm-config/log4j.properties org.apache.hadoop.yarn.server.nodemanager.NodeManager
hadoop    45669  0.0  0.0 112660   964 pts/0    S+   21:39   0:00 grep --color=auto nodemanager

 On 192.168.169.103:

[hadoop@hadoop03 hadoop]$ ps -aux | grep nodemanager
hadoop    10808  8.4  6.4 2841680 258220 ?      Sl   21:36   0:21 /usr/java/jdk1.8.0_151//bin/java -Dproc_nodemanager -Xmx1000m -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dyarn.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=yarn-hadoop-nodemanager-hadoop03.log -Dyarn.log.file=yarn-hadoop-nodemanager-hadoop03.log -Dyarn.home.dir= -Dyarn.id.str=hadoop -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dyarn.policy.file=hadoop-policy.xml -server -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dyarn.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=yarn-hadoop-nodemanager-hadoop03.log -Dyarn.log.file=yarn-hadoopnodemanager-hadoop03.log -Dyarn.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -classpath /hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/share/hadoop/common/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/common/*:/hadoop/hadoop-2.8.2/share/hadoop/hdfs:/hadoop/hadoop-2.8.2/share/hadoop/hdfs/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/hdfs/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/*:/hadoop/hadoop-2.8.2/share/hadoop/mapreduce/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar:/hadoop/hadoop-2.8.2/share/hadoop/yarn/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/lib/*:/hadoop/hadoop-2.8.2/etc/hadoop/nm-config/log4j.properties org.apache.hadoop.yarn.server.nodemanager.NodeManager
hadoop    11077  0.0  0.0 112660   968 pts/0    S+   21:40   0:00 grep --color=auto nodemanager

 9 Start the job history server (to view job status)

[hadoop@hadoop02 sbin]$ ./mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to /hadoop/hadoop-2.8.2/logs/mapred-hadoop-historyserver-hadoop02.out
[hadoop@hadoop02 sbin]$

 10 Check HDFS status

[hadoop@hadoop02 bin]$ hdfs dfsadmin -report
Configured Capacity: 97679564800 (90.97 GB)
Present Capacity: 87752962048 (81.73 GB)
DFS Remaining: 87752953856 (81.73 GB)
DFS Used: 8192 (8 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Pending deletion blocks: 0

-------------------------------------------------
Live datanodes (2):

Name: 192.168.169.101:50010 (hadoop01)
Hostname: hadoop01
Decommission Status : Normal
Configured Capacity: 48839782400 (45.49 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 4984066048 (4.64 GB)
DFS Remaining: 43855712256 (40.84 GB)
DFS Used%: 0.00%
DFS Remaining%: 89.80%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sun Nov 05 22:22:53 CST 2017


Name: 192.168.169.103:50010 (hadoop03)
Hostname: hadoop03
Decommission Status : Normal
Configured Capacity: 48839782400 (45.49 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 4942536704 (4.60 GB)
DFS Remaining: 43897241600 (40.88 GB)
DFS Used%: 0.00%
DFS Remaining%: 89.88%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sun Nov 05 22:22:53 CST 2017

 If the report instead comes back all zeros, like this:

[hadoop@hadoop02 hadoop]$ hdfs dfsadmin -report
Configured Capacity: 0 (0 B)
Present Capacity: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used: 0 (0 B)
DFS Used%: NaN%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Pending deletion blocks: 0

 then the problem is most likely one of two things:
1 the fs.default.name setting in core-site.xml is wrong;
2 the firewall was not disabled.
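To tell the two cases apart, check from a datanode whether TCP port 9000 on the namenode is reachable at all. The helper below is a generic sketch; it is exercised here against a throwaway local listener, while on the cluster you would call can_connect("192.168.169.102", 9000) from hadoop01 or hadoop03:

```python
import socket

def can_connect(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a throwaway local listener instead of the real namenode.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
print(can_connect("127.0.0.1", port))  # True
srv.close()
```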

Check the file blocks:

[hadoop@hadoop02 bin]$ hdfs fsck / -files -blocks
Connecting to namenode via http://hadoop02:50070/fsck?ugi=hadoop&files=1&blocks=1&path=%2F
FSCK started by hadoop (auth:SIMPLE) from /192.168.169.102 for path / at Sun Nov 05 22:25:18 CST 2017
/ <dir>
/tmp <dir>
/tmp/hadoop-yarn <dir>
/tmp/hadoop-yarn/staging <dir>
/tmp/hadoop-yarn/staging/history <dir>
/tmp/hadoop-yarn/staging/history/done <dir>
/tmp/hadoop-yarn/staging/history/done_intermediate <dir>
Status: HEALTHY
 Total size:	0 B
 Total dirs:	7
 Total files:	0
 Total symlinks:		0
 Total blocks (validated):	0
 Minimally replicated blocks:	0
 Over-replicated blocks:	0
 Under-replicated blocks:	0
 Mis-replicated blocks:		0
 Default replication factor:	2
 Average block replication:	0.0
 Corrupt blocks:		0
 Missing replicas:		0
 Number of data-nodes:		2
 Number of racks:		1
FSCK ended at Sun Nov 05 22:25:18 CST 2017 in 6 milliseconds


The filesystem under path '/' is HEALTHY
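For use in a monitoring script, the fsck summary is easy to check mechanically; a sketch that keys on the Status line (sample text taken from the output above):

```python
# Trimmed fsck output from above; a monitoring script would capture
# this from `hdfs fsck /` instead of hard-coding it.
report = """\
Status: HEALTHY
 Total size:    0 B
 Corrupt blocks:        0
The filesystem under path '/' is HEALTHY"""

healthy = any(line.strip() == "Status: HEALTHY" for line in report.splitlines())
print(healthy)  # True
```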

 

View HDFS in a browser:
http://192.168.169.102:50070
View the cluster in a browser:
http://192.168.169.102:8088

 



