Note: everything in this article is done on virtual machines running CentOS, with the JDK and Hadoop already installed. The cluster consists of four hosts, hadoop01 through hadoop04, with IPs 192.168.80.101, 192.168.80.102, 192.168.80.103, and 192.168.80.104.
1. Start from one virtual machine, hostname hadoop01, and set up its hosts mapping
1. Edit /etc/hosts // add the IP-to-hostname mappings
127.0.0.1 localhost
192.168.80.101 hadoop01
192.168.80.102 hadoop02
192.168.80.103 hadoop03
192.168.80.104 hadoop04
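The four cluster entries above can also be generated and appended in one step. A minimal sketch (the `hosts_entries` helper name is just for illustration; hostnames and IPs are taken from the table above):

```shell
# Generate the four /etc/hosts lines for the cluster, then append them as root.
hosts_entries() {
    local i
    for i in 1 2 3 4; do
        printf '192.168.80.10%d hadoop0%d\n' "$i" "$i"
    done
}
hosts_entries                 # review the lines first
hosts_entries > hosts.add     # then: sudo tee -a /etc/hosts < hosts.add
```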
2. Change the hostname: /etc/hostname and /etc/sysconfig/network
2. Configure fully distributed mode (${HADOOP_HOME}/etc/hadoop)
[core-site.xml] // <value> is the NameNode hostname (or its mapped IP)
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop01/</value>
  </property>
</configuration>

[hdfs-site.xml] // <value> is the replication factor (here 3, matching the number of DataNodes)
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>

[mapred-site.xml] // note: cp mapred-site.xml.template mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

[yarn-site.xml] // the first <value> is the ResourceManager hostname (here the NameNode host, or its mapped IP)
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop01</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
// Configure the slaves file with the DataNode hostnames
hadoop02
hadoop03
hadoop04
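The slaves file above can be written in one command. A sketch, assuming HADOOP_HOME points at the install (it falls back to the current directory for a dry run):

```shell
# Write the three DataNode hostnames into the slaves file.
SLAVES_FILE="${HADOOP_HOME:-.}/etc/hadoop/slaves"
mkdir -p "$(dirname "$SLAVES_FILE")"
printf '%s\n' hadoop02 hadoop03 hadoop04 > "$SLAVES_FILE"
cat "$SLAVES_FILE"
```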
3. Prepare the remaining hosts of the cluster
1. Clone three more machines (hadoop02–hadoop04)
2. On each clone, update the hostname and IP address files
[/etc/hostname] [/etc/sysconfig/network]
hadoop02
Change the IP address:
[/etc/sysconfig/network-scripts/ifcfg-ethxxx]
IPADDR=192.168.80.102
Because the machine is a clone, delete the UUID and MAC address (HWADDR) lines,
then remove the file: rm -f /etc/udev/rules.d/70-persistent-net.rules
3. Restart the network
sudo service network restart
4. Edit /etc/resolv.conf
nameserver 192.168.80.2
5. Repeat steps 2–4 for the remaining clones
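The ifcfg edits above can be collected into one helper, so each clone needs a single call. A sketch (`set_clone_ip` is a hypothetical name): it sets the clone's static IP and strips the stale UUID/HWADDR lines inherited from the original machine.

```shell
# Set the static IP in an ifcfg file and drop the cloned UUID/MAC lines.
set_clone_ip() {
    local ifcfg="$1" ip="$2"
    sed -i \
        -e "s/^IPADDR=.*/IPADDR=$ip/" \
        -e '/^UUID=/d' \
        -e '/^HWADDR=/d' \
        "$ifcfg"
}
# e.g. on hadoop02:
# set_clone_ip /etc/sysconfig/network-scripts/ifcfg-eth0 192.168.80.102
```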
4. Prepare passwordless ssh among the cluster hosts
1. Delete /home/hadoop/.ssh/* on each host
2. Generate a key pair on hadoop01
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
3. Copy hadoop01's public key file id_rsa.pub to hosts hadoop01–hadoop04,
placing it at ~/.ssh/authorized_keys on each
$>scp id_rsa.pub hadoop@hadoop01:/home/centos/.ssh/authorized_keys
$>scp id_rsa.pub hadoop@hadoop02:/home/centos/.ssh/authorized_keys
$>scp id_rsa.pub hadoop@hadoop03:/home/centos/.ssh/authorized_keys
$>scp id_rsa.pub hadoop@hadoop04:/home/centos/.ssh/authorized_keys
If the system has no scp command, install it:
yum -y install openssh-clients
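The four scp commands above can be collapsed into one loop. This sketch is a dry run: it only prints each command; drop the leading echo to actually copy the key.

```shell
# Print the scp command for each node (remove echo to execute).
for h in hadoop01 hadoop02 hadoop03 hadoop04; do
    echo scp ~/.ssh/id_rsa.pub "hadoop@$h:/home/centos/.ssh/authorized_keys"
done
```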
Also: remember to change ownership of the .ssh directory to hadoop:hadoop
4. ssh hadoop01
ssh hadoop02
ssh hadoop03
ssh hadoop04
Test each login; none should prompt for a password.
5. Format the file system
1. Before formatting, delete the temporary directories
cd /tmp
rm -rf hadoop-hadoop
ssh hadoop02 rm -rf /tmp/hadoop-hadoop
....
2. Delete the Hadoop log files
cd /soft/hadoop/logs
rm -rf *
ssh hadoop02 rm -rf /soft/hadoop/logs/*
....
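The per-node cleanup above can be driven by one loop. Printed here as a dry run; drop the leading echo to run the deletions over ssh (passwordless ssh from step 4 is assumed, and hadoop01 itself is cleaned locally as shown above).

```shell
# Print the cleanup command for each worker node (remove echo to execute).
for h in hadoop02 hadoop03 hadoop04; do
    echo "ssh $h rm -rf /tmp/hadoop-hadoop /soft/hadoop/logs/*"
done
```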
3. Format the file system
hadoop namenode -format
4. Start the Hadoop daemons
start-all.sh
6. Check the running daemons with jps
Visit http://192.168.80.101:50070 to view node information.