After setting up the Hadoop pseudo-distributed platform (covered in an earlier post), the next step is an HBase pseudo-distributed setup. With the Hadoop environment already in place, installing HBase is straightforward.
I. Installing HBase
1. Download the latest HBase release (1.2.3 at the time of writing) from the official site. To skip the compile-and-install step, I grabbed the prebuilt tarball hbase-1.2.3-bin.tar.gz, which is ready to use once extracted. (If this download link is slow, try one of the other mirrors listed on the site.)
[hadoop@master tar]$ tar -xzf hbase-1.2.3-bin.tar.gz
[hadoop@master tar]$ mv hbase-1.2.3 /usr/local/hadoop/hbase
[hadoop@master tar]$ cd /usr/local/hadoop/hbase/
[hadoop@master hbase]$ ./bin/hbase version
HBase 1.2.3
Source code repository git://kalashnikov.att.net/Users/stack/checkouts/hbase.git revision=bd63744624a26dc3350137b564fe746df7a721a4
Compiled by stack on Mon Aug 29 15:13:42 PDT 2016
From source with checksum 0ca49367ef6c3a680888bbc4f1485d18
If the command above prints the version information as expected, the installation succeeded. Next, configure the environment variables.
2. Configure the environment variables
Edit ~/.bashrc and append the following to the PATH entry:
:$HADOOP_HOME/hbase/bin
~/.bashrc then contains:
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin:$HADOOP_HOME/hbase/bin
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
[hadoop@master hadoop]$ source ~/.bashrc
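If you script this setup, the PATH addition can be made idempotent so re-running it never duplicates the entry. A minimal sketch: it appends the HBase bin directory as a separate export line (equivalent in effect to editing the PATH line in place), and it operates on a scratch copy here; point RC at ~/.bashrc to apply it for real.

```shell
# Work on a scratch copy; set RC to ~/.bashrc to edit the real file.
RC="$(mktemp)"
printf 'export HADOOP_HOME=/usr/local/hadoop\n' > "$RC"

LINE='export PATH=$PATH:$HADOOP_HOME/hbase/bin'
# Append only if this exact line is not present yet (idempotent).
grep -qxF "$LINE" "$RC" || printf '%s\n' "$LINE" >> "$RC"
grep -qxF "$LINE" "$RC" || printf '%s\n' "$LINE" >> "$RC"  # second run changes nothing

grep -c 'hbase/bin' "$RC"   # prints 1
```
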
II. HBase in Standalone Mode
1. Edit the configuration file hbase/conf/hbase-env.sh
# export JAVA_HOME=/usr/java/jdk1.6.0/
# change to:
export JAVA_HOME=/usr/local/java/

# export HBASE_MANAGES_ZK=true
# uncomment, so HBase manages its own ZooKeeper:
export HBASE_MANAGES_ZK=true

# add the line below (sshd on this machine listens on port 322):
export HBASE_SSH_OPTS="-p 322"
2. Edit the configuration file hbase/conf/hbase-site.xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>file:/usr/local/hadoop/tmp/hbase/hbase-tmp</value>
  </property>
</configuration>
3. Start HBase
[hadoop@master hbase]$ start-hbase.sh
starting master, logging to /usr/local/hadoop/hbase/bin/../logs/hbase-hadoop-master-master.out
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
jps now shows an additional HMaster process:
[hadoop@master hbase]$ jps
12178 ResourceManager
11540 NameNode
4277 Jps
11943 SecondaryNameNode
12312 NodeManager
11707 DataNode
3933 HMaster
4. Use the HBase shell
[hadoop@master hbase]$ hbase shell
2016-11-07 10:11:02,187 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.3, rbd63744624a26dc3350137b564fe746df7a721a4, Mon Aug 29 15:13:42 PDT 2016
hbase(main):001:0> status
1 active master, 0 backup masters, 1 servers, 0 dead, 2.0000 average load
hbase(main):002:0> exit
Running the HBase shell without starting HBase first results in an error.
5. Stop HBase
[hadoop@master hbase]$ stop-hbase.sh
stopping hbase......................
III. HBase in Pseudo-Distributed Mode
Pseudo-distributed mode differs from standalone mode mainly in its configuration files.
1. Edit the configuration file hbase/conf/hbase-env.sh
# export JAVA_HOME=/usr/java/jdk1.6.0/
# change to:
export JAVA_HOME=/usr/local/java/

# export HBASE_MANAGES_ZK=true
# uncomment, so HBase manages its own ZooKeeper:
export HBASE_MANAGES_ZK=true

# export HBASE_CLASSPATH=
# change to, so HBase can find the Hadoop configuration:
export HBASE_CLASSPATH=/usr/local/hadoop/etc/hadoop/

# add the line below (sshd on this machine listens on port 322):
export HBASE_SSH_OPTS="-p 322"
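These three substitutions plus the appended line can also be applied non-interactively with sed. A sketch, assuming the commented-out defaults look as they do in the stock hbase-env.sh; it runs against a scratch copy, so set HBASE_ENV to hbase/conf/hbase-env.sh to edit the real file.

```shell
# Scratch copy that mimics the stock commented-out defaults;
# set HBASE_ENV to /usr/local/hadoop/hbase/conf/hbase-env.sh for real.
HBASE_ENV="$(mktemp)"
cat > "$HBASE_ENV" <<'EOF'
# export JAVA_HOME=/usr/java/jdk1.6.0/
# export HBASE_MANAGES_ZK=true
# export HBASE_CLASSPATH=
EOF

# Uncomment and rewrite each setting in place.
sed -i \
  -e 's|^# export JAVA_HOME=.*|export JAVA_HOME=/usr/local/java/|' \
  -e 's|^# export HBASE_MANAGES_ZK=.*|export HBASE_MANAGES_ZK=true|' \
  -e 's|^# export HBASE_CLASSPATH=.*|export HBASE_CLASSPATH=/usr/local/hadoop/etc/hadoop/|' \
  "$HBASE_ENV"

# Append the custom SSH port setting.
echo 'export HBASE_SSH_OPTS="-p 322"' >> "$HBASE_ENV"

grep '^export' "$HBASE_ENV"   # shows the four active settings
```
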
The ZooKeeper bundled with HBase is sufficient here; a standalone ZooKeeper is only worth running in a fully distributed deployment.
2. Edit the configuration file hbase/conf/hbase-site.xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://10.1.2.108:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>
Note that pointing hbase.rootdir at an HDFS path like this presumes the Hadoop platform itself is pseudo-distributed, with a single NameNode; the host and port (10.1.2.108:9000) must be the NameNode address configured as fs.defaultFS in core-site.xml.
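A common slip here is an hbase.rootdir whose host:port does not match fs.defaultFS, in which case HBase tries to write to a filesystem HDFS is not serving. A quick consistency check, sketched with inline sample values standing in for the real configs (on a live node you could pull the first one with `hdfs getconf -confKey fs.defaultFS`):

```shell
# Sample values; substitute the real fs.defaultFS and hbase.rootdir.
defaultfs='hdfs://10.1.2.108:9000'
rootdir='hdfs://10.1.2.108:9000/hbase'

# Strip the scheme, then keep everything before the first slash.
fs_host="${defaultfs#hdfs://}"
root_host="${rootdir#hdfs://}"; root_host="${root_host%%/*}"

if [ "$fs_host" = "$root_host" ]; then
  echo "OK: hbase.rootdir points at the NameNode ($root_host)"
else
  echo "MISMATCH: $root_host vs $fs_host"
fi
```
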
3. Start HBase
[hadoop@master hbase]$ start-hbase.sh
localhost: starting zookeeper, logging to /usr/local/hadoop/hbase/bin/../logs/hbase-hadoop-zookeeper-master.out
master running as process 3933. Stop it first.
starting regionserver, logging to /usr/local/hadoop/hbase/bin/../logs/hbase-hadoop-1-regionserver-master.out

(The "master running as process 3933. Stop it first." line means the HMaster started earlier in standalone mode was still running, so start-hbase.sh reused it.)
jps now also shows HMaster and HRegionServer:
[hadoop@master hbase]$ jps
7312 Jps
12178 ResourceManager
11540 NameNode
11943 SecondaryNameNode
12312 NodeManager
11707 DataNode
3933 HMaster
7151 HRegionServer
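For repeated checks, the jps listing can be verified with a small script instead of eyeballing it. A sketch: the sample here mimics the output above, so drop the sample and capture `jps` directly on a live node; the daemon list is just the set this tutorial expects.

```shell
# Sample jps output standing in for `jps_out="$(jps)"` on a live node.
jps_out="$(cat <<'EOF'
7312 Jps
12178 ResourceManager
11540 NameNode
11943 SecondaryNameNode
12312 NodeManager
11707 DataNode
3933 HMaster
7151 HRegionServer
EOF
)"

# Collect any expected daemon missing from the listing.
missing=""
for d in NameNode DataNode HMaster HRegionServer; do
  echo "$jps_out" | grep -qw "$d" || missing="$missing $d"
done

if [ -z "$missing" ]; then
  echo "all daemons up"
else
  echo "missing:$missing"
fi
```
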
4. Use the HBase shell
[hadoop@master hbase]$ hbase shell
2016-11-07 10:35:05,262 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.3, rbd63744624a26dc3350137b564fe746df7a721a4, Mon Aug 29 15:13:42 PDT 2016
1) Check the cluster status and version
hbase(main):001:0> status
1 active master, 0 backup masters, 1 servers, 0 dead, 1.0000 average load
hbase(main):002:0> version
1.2.3, rbd63744624a26dc3350137b564fe746df7a721a4, Mon Aug 29 15:13:42 PDT 2016
2) Create a user table with three column families
hbase(main):003:0> create 'user','user_id','address','info'
0 row(s) in 2.3570 seconds
=> Hbase::Table - user
3) Create another table, then list all tables
hbase(main):005:0> create 'tmp', 't1', 't2'
0 row(s) in 1.2320 seconds
=> Hbase::Table - tmp
hbase(main):006:0> list
TABLE
tmp
user
2 row(s) in 0.0100 seconds
=> ["tmp", "user"]
hbase(main):007:0>
4) Describe a table
hbase(main):008:0> describe 'user'
Table user is ENABLED
user
COLUMN FAMILIES DESCRIPTION
{NAME => 'address', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
{NAME => 'info', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
{NAME => 'user_id', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
3 row(s) in 0.2060 seconds
hbase(main):009:0>
5) Drop a table (it must be disabled first)
hbase(main):010:0> disable 'tmp'
0 row(s) in 2.2580 seconds
hbase(main):011:0> drop 'tmp'
0 row(s) in 1.2560 seconds
hbase(main):012:0>
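The interactive steps above can also be batched: `hbase shell` accepts a file of commands as an argument. The sketch below only builds such a file (the table, row, and column names are made-up examples, and the put/get/scan commands extend the ones shown above); the commented line shows how it would be run on a node where HBase is started.

```shell
# Build a command file for the HBase shell.
CMDS="$(mktemp)"
cat > "$CMDS" <<'EOF'
create 'tmp2', 't1'
put 'tmp2', 'row1', 't1:greeting', 'hello'
get 'tmp2', 'row1'
scan 'tmp2'
disable 'tmp2'
drop 'tmp2'
exit
EOF

# hbase shell "$CMDS"   # uncomment on a node where HBase is running

wc -l < "$CMDS"
```
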
5. Stop HBase
[hadoop@master hbase]$ stop-hbase.sh
stopping hbase......................
localhost: no zookeeper to stop because no pid file /tmp/hbase-hadoop-zookeeper.pid
When shutting down the whole stack, the order is: stop HBase first, then YARN, then HDFS.
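That ordering can be captured in a small helper script. A sketch: the stop scripts are the standard ones shipped with HBase and Hadoop and are assumed to be on PATH (on this box via the exports in ~/.bashrc); each is skipped gracefully if not found, so the snippet is safe to run anywhere.

```shell
#!/bin/sh
# Shut the stack down in the reverse order of startup:
# HBase first, then YARN, then HDFS.
order="stop-hbase.sh stop-yarn.sh stop-dfs.sh"
for s in $order; do
  if command -v "$s" >/dev/null 2>&1; then
    echo "stopping via $s"
    "$s"
  else
    echo "$s not on PATH, skipping"
  fi
done
```
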
6. Web UIs
You can browse HBase's storage directory from the HDFS web UI at http://10.1.2.108:50070,
or visit the HBase master UI directly at http://10.1.2.108:60010/master.jsp. (Note: from HBase 1.0 onward the master info server defaults to port 16010, so if 60010 does not respond, try 16010 or set hbase.master.info.port explicitly.)
This is an original article; when reposting, please include a link to the original: http://www.cnblogs.com/lxmhhy/p/6026047.html
For questions and discussion, join QQ group 180214441. Thanks for your cooperation.