1. Download from the official site
wget http://apache.fayea.com/hadoop/common/hadoop-3.0.0-alpha1/hadoop-3.0.0-alpha1.tar.gz
2. Unpack
tar -zxvf hadoop-3.0.0-alpha1.tar.gz
ln -s hadoop-3.0.0-alpha1 hadoop3
3. Environment variables
vi /etc/profile
#Hadoop 3.0
export HADOOP_HOME=/usr/local/hadoop3
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
source /etc/profile
Note: /usr/local/hadoop3 is the unpack path (the hadoop3 symlink from step 2, assuming the tarball was unpacked under /usr/local).
4. Configuration files (hadoop3/etc/hadoop)
1) hadoop-env.sh
Note: /usr/java/jdk1.8 is the JDK install path.
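This file only needs JAVA_HOME pointed at that JDK; a minimal sketch:
export JAVA_HOME=/usr/java/jdk1.8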
2) core-site.xml
Note: in hdfs://ha01:9000, ha01 is the master node's hostname.
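A minimal core-site.xml consistent with the note above; hadoop.tmp.dir is an assumed scratch directory, not something the original specifies:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ha01:9000</value>
  </property>
  <!-- assumed local scratch directory -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop3/tmp</value>
  </property>
</configuration>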
3) hdfs-site.xml
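The original does not show this file's contents; a plausible minimal version for this three-node setup (the replication factor and storage directories are assumptions):
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <!-- assumed NameNode/DataNode storage directories -->
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/usr/local/hadoop3/hdfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/usr/local/hadoop3/hdfs/data</value>
  </property>
</configuration>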
4) yarn-site.xml
Note: ha01 is the master node's hostname.
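A minimal sketch consistent with that note; the mapreduce_shuffle aux-service is the standard setting for running MapReduce on YARN:
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>ha01</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>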
5) mapred-site.xml
Note: if mapreduce.admin.user.env and yarn.app.mapreduce.am.env are not set, MapReduce jobs may fail at runtime.
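A sketch with those two properties plus the usual framework setting; pointing HADOOP_MAPRED_HOME at the install path is an assumption (it is the commonly reported fix for the MRAppMaster class-not-found error):
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <!-- assumed value; adjust to your install path -->
  <property>
    <name>mapreduce.admin.user.env</name>
    <value>HADOOP_MAPRED_HOME=/usr/local/hadoop3</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=/usr/local/hadoop3</value>
  </property>
</configuration>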
6) workers
This file plays the same role as the old slaves file: list the hostnames of the worker nodes, one per line.
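Assuming ha02 and ha03 are the workers (they are the scp targets in step 5), the file would read:
ha02
ha03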
5. Copy to the other nodes
scp -r hadoop-3.0.0-alpha1 root@ha02:/usr/local
scp -r hadoop-3.0.0-alpha1 root@ha03:/usr/local
6. Format the NameNode (on the master only, and only once)
hdfs namenode -format
7. Start the cluster
start-dfs.sh
start-yarn.sh
hdfs dfs -ls /
8. Run wordcount
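The commands below assume the input already exists in HDFS; a sketch of uploading a local wordcount.log first (the local filename is hypothetical):
hdfs dfs -mkdir -p /logs
hdfs dfs -put wordcount.log /logs/wordcount.log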
hdfs dfs -cat /logs/wordcount.log
hadoop jar /usr/local/hadoop3/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-alpha1.jar wordcount /logs/wordcount.log /output/wordcount
Note: /usr/local/hadoop3 is the unpack path, /logs/wordcount.log is the input file path, and /output/wordcount is the output path (it must not exist before the job runs).
View the result: hdfs dfs -text /output/wordcount/part-r-00000
9. Web UIs
http://ha01:9870/ (HDFS NameNode UI; Hadoop 3 moved this from port 50070 to 9870)
http://ha01:8088/ (YARN ResourceManager UI)