I. Backing up the namenode metadata
The metadata on the namenode is critical: if it is lost or corrupted, the entire filesystem becomes unusable. It should therefore be backed up regularly, ideally to an off-site location.
1. Copy the metadata to a remote site
(1) The following script copies the secondary namenode's metadata into a directory named after the current timestamp, then ships that directory to another machine with scp:
#!/bin/bash
# Copy the secondary namenode's checkpoint into an hour-stamped directory,
# ship it to slave1 over scp, then remove the local staging copy.
export dirname=/mnt/tmphadoop/dfs/namesecondary/current/`date +%y%m%d%H`
if [ ! -d ${dirname} ]
then
    mkdir ${dirname}
    # cp without -r copies the checkpoint files; the freshly created
    # ${dirname} subdirectory itself is skipped (with a warning).
    cp /mnt/tmphadoop/dfs/namesecondary/current/* ${dirname}
fi
scp -r ${dirname} slave1:/mnt/namenode_backup/
rm -r ${dirname}
(2) Configure crontab to run this job on a schedule:
0 0,8,14,20 * * * bash /mnt/scripts/namenode_backup_script.sh
2. On the remote site, start a local namenode daemon and try to load these backup files, to confirm that the backup is actually usable.
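A minimal sketch of such a check, assuming the backup landed in /mnt/namenode_backup/2015030120 and that a scratch configuration directory /mnt/verify/conf (both paths are hypothetical) has dfs.name.dir pointing at /mnt/verify/name:

# 1. Place the backed-up files where the verification namenode expects them.
mkdir -p /mnt/verify/name/current
cp /mnt/namenode_backup/2015030120/* /mnt/verify/name/current/

# 2. Start a namenode in the foreground against the scratch configuration;
#    if the metadata is intact, it replays the fsimage and edits and comes up.
HADOOP_CONF_DIR=/mnt/verify/conf hadoop namenode

If the namenode starts cleanly, stop it with Ctrl-C; if it fails to load here, the backup cannot be trusted.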
II. Data backup
Important data cannot be entrusted to HDFS alone; it needs separate backups. Keep the following points in mind:
(1) Back up off-site whenever possible.
(2) If you use distcp to back up to another HDFS cluster, do not run the same Hadoop version on both clusters, so that a bug in one version of Hadoop cannot corrupt both copies of the data.
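For instance (a sketch with made-up cluster addresses), reading the source over HFTP lets distcp copy between clusters running different Hadoop versions, since HFTP is a read-only, version-independent interface:

# Run on the destination cluster; 50070 is the default namenode web port.
hadoop distcp hftp://source-namenode:50070/data hdfs://dest-namenode:8020/backup/data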
III. Filesystem check
Run HDFS's fsck tool over the entire filesystem periodically to proactively look for missing or corrupt blocks; running it once a day is recommended.
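To automate the daily run, a crontab entry in the same style as the backup job above could look like this (the schedule and log path are illustrative; note that % must be escaped as \% inside crontab):

0 3 * * * hadoop fsck / > /mnt/logs/fsck_`date +\%Y\%m\%d`.log 2>&1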
[jediael@master ~]$ hadoop fsck /
……output omitted (errors, if any, would appear here; otherwise only dots are printed, one dot per file)……
.........Status: HEALTHY
 Total size:    14466494870 B
 Total dirs:    502
 Total files:   1592 (Files currently being written: 2)
 Total blocks (validated):      1725 (avg. block size 8386373 B)
 Minimally replicated blocks:   1725 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       648 (37.565216 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    2
 Average block replication:     2.0
 Corrupt blocks:                0
 Missing replicas:              760 (22.028986 %)
 Number of data-nodes:          2
 Number of racks:               1
FSCK ended at Sun Mar 01 20:17:57 CST 2015 in 608 milliseconds

The filesystem under path '/' is HEALTHY
(1) If dfs.replication in hdfs-site.xml is set to 3 while only 2 datanodes actually exist, fsck reports errors like the following:
/hbase/Mar0109_webpage/59ad1be6884739c29d0624d1d31a56d9/il/43e6cd4dc61b49e2a57adf0c63921c09: Under replicated blk_-4711857142889323098_6221. Target Replicas is 3 but found 2 replica(s).
Note: dfs.replication was originally 3; after one datanode was taken offline, dfs.replication was changed to 2, but files that had already been created still carry a recorded replication factor of 3. This produces the error above and accounts for the "Under-replicated blocks: 648 (37.565216 %)" line. A fix is sketched below.
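One way to clear such warnings, sketched here on the assumption that replication 2 is what you want for all existing files, is to lower their recorded replication factor to match the new setting:

# Recursively set replication factor 2 on everything under /;
# -w waits until each file actually reaches the target replication.
hadoop fs -setrep -R -w 2 /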
(2) fsck can also be used to see which blocks make up a file and where each block is stored:
[jediael@master conf]$ hadoop fsck /hbase/Feb2621_webpage/c23aa183c7cb86af27f15d4c2aee2795/s/30bee5fb620b4cd184412c69f70d24a7 -files -blocks -racks
FSCK started by jediael from /10.171.29.191 for path /hbase/Feb2621_webpage/c23aa183c7cb86af27f15d4c2aee2795/s/30bee5fb620b4cd184412c69f70d24a7 at Sun Mar 01 20:39:35 CST 2015
/hbase/Feb2621_webpage/c23aa183c7cb86af27f15d4c2aee2795/s/30bee5fb620b4cd184412c69f70d24a7 21507169 bytes, 1 block(s):  Under replicated blk_7117944555454804881_3655. Target Replicas is 3 but found 2 replica(s).
0. blk_7117944555454804881_3655 len=21507169 repl=2 [/default-rack/10.171.94.155:50010, /default-rack/10.251.0.197:50010]

Status: HEALTHY
 Total size:    21507169 B
 Total dirs:    0
 Total files:   1
 Total blocks (validated):      1 (avg. block size 21507169 B)
 Minimally replicated blocks:   1 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       1 (100.0 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    2
 Average block replication:     2.0
 Corrupt blocks:                0
 Missing replicas:              1 (50.0 %)
 Number of data-nodes:          2
 Number of racks:               1
FSCK ended at Sun Mar 01 20:39:35 CST 2015 in 0 milliseconds

The filesystem under path '/hbase/Feb2621_webpage/c23aa183c7cb86af27f15d4c2aee2795/s/30bee5fb620b4cd184412c69f70d24a7' is HEALTHY
The usage of this command is as follows:
[jediael@master ~]$ hadoop fsck -files
Usage: DFSck <path> [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks]]]
        <path>          start checking from this path
        -move           move corrupted files to /lost+found
        -delete         delete corrupted files
        -files          print out files being checked
        -openforwrite   print out files opened for write
        -blocks         print out block report
        -locations      print out locations for every block
        -racks          print out network topology for data-node locations
        By default fsck ignores files opened for write, use -openforwrite to report such files. They are usually tagged CORRUPT or HEALTHY depending on their block allocation status

Generic options supported are
-conf <configuration file>     specify an application configuration file
-D <property=value>            use value for given property
-fs <local|namenode:port>      specify a namenode
-jt <local|jobtracker:port>    specify a job tracker
-files <comma separated list of files>    specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>    specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]
See Hadoop: The Definitive Guide (p. 376) for a detailed explanation.