Flume Components and Monitoring Big Data Platform Status via Commands

Source: https://www.cnblogs.com/liuyaling/archive/2023/04/24/17349385.html


Experiment 1: Flume Component Installation and Configuration

1. Download and Extract Flume

The Flume component installation package can be downloaded from the official archive at https://archive.apache.org/dist/flume/1.6.0/
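If the package is not already present on the node, it can be fetched directly with wget (a minimal sketch, assuming the master node has Internet access):

[root@master ~]# wget https://archive.apache.org/dist/flume/1.6.0/apache-flume-1.6.0-bin.tar.gz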

[root@master ~]# ls
anaconda-ks.cfg               jdk-8u152-linux-x64.tar.gz
apache-flume-1.6.0-bin.tar.gz mysql
apache-hive-2.0.0-bin.tar.gz   mysql-connector-java-5.1.46.jar
derby.log                     sqoop-1.4.7.bin__hadoop-2.6.0.tar.gz
hadoop-2.7.1.tar.gz           zookeeper-3.4.8.tar.gz
hbase-1.2.1-bin.tar.gz
#As the root user, extract the Flume package into /usr/local/src and rename the extracted directory to flume.
[root@master ~]# tar xf apache-flume-1.6.0-bin.tar.gz -C /usr/local/src/
[root@master ~]# cd /usr/local/src/
[root@master src]# ls
apache-flume-1.6.0-bin hadoop hbase hive jdk sqoop zookeeper
[root@master src]# mv apache-flume-1.6.0-bin flume
[root@master src]# ls
flume hadoop hbase hive jdk sqoop zookeeper

2. Flume Component Deployment

Step 1: As the root user, set the Flume environment variables and make them take effect for all users.

[root@master ~]# vim /etc/profile.d/flume.sh
export FLUME_HOME=/usr/local/src/flume
export PATH=${FLUME_HOME}/bin:$PATH
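To apply the new variables in the current shell without logging out, the profile script can be sourced (a quick check; new login shells pick the variables up automatically):

[root@master ~]# source /etc/profile.d/flume.sh
[root@master ~]# echo $FLUME_HOME
/usr/local/src/flume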

Step 2: Modify the relevant Flume configuration files.

First, switch to the hadoop user and change the working directory to Flume's conf folder.

[root@master ~]# chown -R hadoop.hadoop /usr/local/src/
[root@master ~]# su - hadoop
Last login: Fri Apr 14 16:31:48 CST 2023 on pts/1
[hadoop@master ~]$ ls
derby.log input student.java zookeeper.out
[hadoop@master ~]$ cd /usr/local/src/flume/conf/
[hadoop@master conf]$ ls
flume-conf.properties.template flume-env.sh.template
flume-env.ps1.template         log4j.properties

Copy flume-env.sh.template and rename the copy to flume-env.sh.

[hadoop@master conf]$ cp flume-env.sh.template flume-env.sh
[hadoop@master conf]$ ls
flume-conf.properties.template flume-env.sh           log4j.properties
flume-env.ps1.template         flume-env.sh.template

Step 3: Edit and configure the flume-env.sh file.

Uncomment the JAVA_HOME variable and set it to the JDK installation path.

[hadoop@master conf]$ vi flume-env.sh
export JAVA_HOME=/usr/local/src/jdk
#export HBASE_CLASSPATH=/usr/local/src/hadoop/etc/hadoop

Verify the installation with the flume-ng version command; if it reports Flume version 1.6.0, the installation succeeded.

[hadoop@master conf]$ flume-ng version
Error: Could not find or load main class org.apache.flume.tools.GetJavaProperty
Flume 1.6.0
Source code repository: https://git-wip-us.apache.org/repos/asf/flume.git
Revision: 2561a23240a71ba20bf288c7c2cda88f443c2080
Compiled by hshreedharan on Mon May 11 11:15:44 PDT 2015
From source with checksum b29e416802ce9ece3269d34233baf43f
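Although the version prints correctly, the "Could not find or load main class org.apache.flume.tools.GetJavaProperty" warning appears because flume-ng also pulls in the HBase client configuration. A commonly cited workaround (an assumption, not part of the original steps; verify it suits your environment) is to comment out HBASE_CLASSPATH in HBase's hbase-env.sh:

[hadoop@master conf]$ vi /usr/local/src/hbase/conf/hbase-env.sh
#export HBASE_CLASSPATH=/usr/local/src/hadoop/etc/hadoop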

3. Sending and Receiving Data with Flume

Use Flume to transfer server data into HDFS (in this walkthrough, the Hadoop log directory serves as the source).

Step 1: Create the xxx.conf file in the Flume installation directory.

[hadoop@master flume]$ vi xxx.conf
a1.sources=r1
a1.sinks=k1
a1.channels=c1
a1.sources.r1.type=spooldir
a1.sources.r1.spoolDir=/usr/local/src/hadoop/logs
a1.sources.r1.fileHeader=true
a1.sinks.k1.type=hdfs
a1.sinks.k1.hdfs.path=hdfs://master:9000/tmp/flume
a1.sinks.k1.hdfs.rollSize=1048760
a1.sinks.k1.hdfs.rollCount=0
a1.sinks.k1.hdfs.rollInterval=900
a1.sinks.k1.hdfs.useLocalTimeStamp=true
a1.channels.c1.type=file
a1.channels.c1.capacity=1000
a1.channels.c1.transactionCapacity=100
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
[hadoop@master flume]$ ls /usr/local/src/hadoop/
bin etc     lib     LICENSE.txt NOTICE.txt sbin   tmp
dfs include libexec logs         README.txt share
[hadoop@master flume]$ ls /usr/local/src/hadoop/logs/
hadoop-hadoop-namenode-master.example.com.log
hadoop-hadoop-namenode-master.example.com.out
hadoop-hadoop-namenode-master.example.com.out.1
hadoop-hadoop-namenode-master.example.com.out.2
hadoop-hadoop-namenode-master.example.com.out.3
hadoop-hadoop-namenode-master.example.com.out.4
hadoop-hadoop-namenode-master.example.com.out.5
hadoop-hadoop-secondarynamenode-master.example.com.log
hadoop-hadoop-secondarynamenode-master.example.com.out
hadoop-hadoop-secondarynamenode-master.example.com.out.1
hadoop-hadoop-secondarynamenode-master.example.com.out.2
hadoop-hadoop-secondarynamenode-master.example.com.out.3
hadoop-hadoop-secondarynamenode-master.example.com.out.4
hadoop-hadoop-secondarynamenode-master.example.com.out.5
SecurityAuth-hadoop.audit
yarn-hadoop-resourcemanager-master.example.com.log
yarn-hadoop-resourcemanager-master.example.com.out
yarn-hadoop-resourcemanager-master.example.com.out.1
yarn-hadoop-resourcemanager-master.example.com.out.2
yarn-hadoop-resourcemanager-master.example.com.out.3
yarn-hadoop-resourcemanager-master.example.com.out.4
yarn-hadoop-resourcemanager-master.example.com.out.5

Step 2: Use the flume-ng agent command to load the xxx.conf configuration and start Flume to transfer data.

[hadoop@master flume]$ flume-ng agent --conf-file xxx.conf --name a1
Warning: No configuration directory set! Use --conf <dir> to override.
Info: Including Hadoop libraries found via (/usr/local/src/hadoop/bin/hadoop) for HDFS access
Info: Excluding /usr/local/src/hadoop/share/hadoop/common/lib/slf4j-api-1.7.10.jar from classpath
Info: Excluding /usr/local/src/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar from classpath
Info: Including HBASE libraries found via (/usr/local/src/hbase/bin/hbase) for HBASE access
Info: Excluding /usr/local/src/hbase/lib/slf4j-api-1.7.7.jar from classpath
Info: Excluding /usr/local/src/hbase/lib/slf4j-log4j12-1.7.5.jar from classpath
Info: Excluding /usr/local/src/hadoop/share/hadoop/common/lib/slf4j-api-1.7.10.jar from classpath
Info: Excluding /usr/local/src/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar from classpath
Info: Including Hive libraries found via (/usr/local/src/hive) for Hive access
...
23/04/21 16:02:35 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SOURCE, name: r1. src.append.accepted == 0
23/04/21 16:02:35 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SOURCE, name: r1. src.append.received == 0
23/04/21 16:02:35 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SOURCE, name: r1. src.events.accepted == 17
23/04/21 16:02:35 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SOURCE, name: r1. src.events.received == 17
23/04/21 16:02:35 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SOURCE, name: r1. src.open-connection.count == 0
23/04/21 16:02:35 INFO source.SpoolDirectorySource: SpoolDir source r1 stopped. Metrics: SOURCE:r1{src.events.accepted=17, src.open-connection.count=0, src.append.received=0, src.append-batch.received=1, src.append-batch.accepted=1, src.append.accepted=0, src.events.received=17}

Press Ctrl+C to stop the Flume transfer.
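If the agent needs to keep running after the terminal closes, it can instead be started in the background (a sketch, not part of the original steps; the log file name is arbitrary):

[hadoop@master flume]$ nohup flume-ng agent --conf conf --conf-file xxx.conf --name a1 > flume-agent.log 2>&1 &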

#The Hadoop daemons on all nodes must be running first
[hadoop@master flume]$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: starting namenode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-namenode-master.example.com.out
192.168.88.201: starting datanode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-datanode-slave2.example.com.out
192.168.88.200: starting datanode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-datanode-slave1.example.com.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-secondarynamenode-master.example.com.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/src/hadoop/logs/yarn-hadoop-resourcemanager-master.example.com.out
192.168.88.200: starting nodemanager, logging to /usr/local/src/hadoop/logs/yarn-hadoop-nodemanager-slave1.example.com.out
192.168.88.201: starting nodemanager, logging to /usr/local/src/hadoop/logs/yarn-hadoop-nodemanager-slave2.example.com.out
[hadoop@master flume]$ ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 192.168.88.101:9000 *:*
LISTEN 0 128 *:50090 *:*
LISTEN 0 128 *:50070 *:*
LISTEN 0 128 *:22 *:*
LISTEN 0 128 ::ffff:192.168.88.101:8030 :::*
LISTEN 0 128 ::ffff:192.168.88.101:8031 :::*
LISTEN 0 128 ::ffff:192.168.88.101:8032 :::*
LISTEN 0 128 ::ffff:192.168.88.101:8033 :::*
LISTEN 0 80 :::3306 :::*
LISTEN 0 128 :::22 :::*
LISTEN 0 128 ::ffff:192.168.88.101:8088 :::*

Step 3: Check the files Flume transferred to HDFS; if data files appear under the /tmp/flume directory on HDFS, the transfer succeeded.

[hadoop@master flume]$ hdfs dfs -ls /
Found 5 items
drwxr-xr-x - hadoop supergroup 0 2023-04-07 15:20 /hbase
drwxr-xr-x - hadoop supergroup 0 2023-03-17 17:33 /input
drwxr-xr-x - hadoop supergroup 0 2023-03-17 18:45 /output
drwx------ - hadoop supergroup 0 2023-04-21 16:02 /tmp
drwxr-xr-x - hadoop supergroup 0 2023-04-14 20:48 /user
[hadoop@master flume]$ hdfs dfs -ls /tmp/flume
Found 72 items
-rw-r--r-- 2 hadoop supergroup 1560 2023-04-21 16:02 /tmp/flume/FlumeData.1682064143693
-rw-r--r-- 2 hadoop supergroup 1398 2023-04-21 16:02 /tmp/flume/FlumeData.1682064143694
-rw-r--r-- 2 hadoop supergroup 1456 2023-04-21 16:02 /tmp/flume/FlumeData.1682064143695
-rw-r--r-- 2 hadoop supergroup 1398 2023-04-21 16:02 /tmp/flume/FlumeData.1682064143696
-rw-r--r-- 2 hadoop supergroup 1403 2023-04-21 16:02 /tmp/flume/FlumeData.1682064143697
-rw-r--r-- 2 hadoop supergroup 1434 2023-04-21 16:02 /tmp/flume/FlumeData.1682064143698
-rw-r--r-- 2 hadoop supergroup 1383 2023-04-21 16:02 /tmp/flume/FlumeData.1682064143699
...
-rw-r--r-- 2 hadoop supergroup 1508 2023-04-21 16:02 /tmp/flume/FlumeData.1682064143760
-rw-r--r-- 2 hadoop supergroup 1361 2023-04-21 16:02 /tmp/flume/FlumeData.1682064143761
-rw-r--r-- 2 hadoop supergroup 1359 2023-04-21 16:02 /tmp/flume/FlumeData.1682064143762
-rw-r--r-- 2 hadoop supergroup 1502 2023-04-21 16:02 /tmp/flume/FlumeData.1682064143763
-rw-r--r-- 2 hadoop supergroup 1399 2023-04-21 16:02 /tmp/flume/FlumeData.1682064143764
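To confirm that the files contain actual log content, one of them can be printed (a quick check; the exact file names will differ between runs):

[hadoop@master flume]$ hdfs dfs -cat /tmp/flume/FlumeData.1682064143693 | head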

Experiment 2: Monitoring Big Data Platform Status via Commands

1. Checking Platform Status via Commands

Step 1: View Linux system information (uname -a)

[root@master ~]# uname -a
Linux master.example.com 3.10.0-862.el7.x86_64 #1 SMP Fri Apr 20 16:44:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

The output shows that the node's hostname is master.example.com and the kernel release is 3.10.0-862.el7.x86_64.

Step 2: View disk information

(1) View all partitions (fdisk -l)

[root@master ~]# fdisk -l

Disk /dev/sda: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000af885

Device Boot Start End Blocks Id System
/dev/sda1 * 2048 2099199 1048576 83 Linux
/dev/sda2 2099200 209715199 103808000 8e Linux LVM

Disk /dev/mapper/centos-root: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/centos-swap: 8455 MB, 8455716864 bytes, 16515072 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/centos-home: 44.1 GB, 44149243904 bytes, 86228992 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

The output shows the disk capacity is 107.4 GB.

(2) View all swap partitions (swapon -s)

[root@master ~]# swapon -s
Filename Type Size Used Priority
/dev/dm-1 partition 8257532 0 -1

The output shows the swap partition size is 8257532 KB (about 8 GB).

(3) View file system usage (df -h)

[root@master ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 50G 4.6G 46G 10% /
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 12M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/mapper/centos-home 42G 36M 42G 1% /home
/dev/sda1 1014M 142M 873M 14% /boot
tmpfs 378M 0 378M 0% /run/user/0

The output shows that the "/" mount point has a capacity of 50G, of which 4.6G is used.

Step 3: View network IP addresses (ip a)

[root@master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:20:cd:03 brd ff:ff:ff:ff:ff:ff
inet 192.168.88.101/24 brd 192.168.88.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::75a3:9da2:ede2:5be7/64 scope link noprefixroute
valid_lft forever preferred_lft forever
inet6 fe80::1c1b:d0e3:f01a:7c11/64 scope link tentative noprefixroute dadfailed
valid_lft forever preferred_lft forever

The output shows that ens33 has IP address 192.168.88.101 with netmask 255.255.255.0, and the loopback interface has address 127.0.0.1 with netmask 255.0.0.0.

Step 4: View all listening ports (netstat -lntp)

[root@master ~]# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 908/sshd
tcp6 0 0 :::3306 :::* LISTEN 1053/mysqld
tcp6 0 0 :::22 :::* LISTEN 908/sshd

The output shows that ports 22 and 3306 are listening.

Step 5: View all established connections (netstat -antp)

[root@master ~]# netstat -antp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 908/sshd
tcp 0 36 192.168.88.101:22 192.168.88.1:61450 ESTABLISHED 1407/sshd: root@pts
tcp6 0 0 :::3306 :::* LISTEN 1053/mysqld
tcp6 0 0 :::22 :::* LISTEN 908/sshd

The output shows one established connection: the SSH session on local port 22 from 192.168.88.1.

Step 6: Display process status in real time (top); this command shows each process's CPU and memory usage.

[Figure: top output showing per-process CPU and memory usage]
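To capture the same information non-interactively, for example in a monitoring script, top can be run in batch mode (a sketch):

[root@master ~]# top -b -n 1 | head -n 20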

Step 7: View CPU information (cat /proc/cpuinfo)

[root@master ~]# cat /proc/cpuinfo 
processor : 0
vendor_id : AuthenticAMD
cpu family : 25
model : 80
model name : AMD Ryzen 7 5800U with Radeon Graphics
stepping : 0
cpu MHz : 1896.438
cache size : 512 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 2
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 16
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc art rep_good nopl tsc_reliable nonstop_tsc extd_apicid eagerfpu pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext retpoline_amd vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero ibpb ibrs arat pku ospke overflow_recov succor
bogomips : 3792.87
TLB size : 2560 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 45 bits physical, 48 bits virtual
power management:

processor : 1
vendor_id : AuthenticAMD
cpu family : 25
model : 80
model name : AMD Ryzen 7 5800U with Radeon Graphics
stepping : 0
cpu MHz : 1896.438
cache size : 512 KB
physical id : 0
siblings : 2
core id : 1
cpu cores : 2
apicid : 1
initial apicid : 1
fpu : yes
fpu_exception : yes
cpuid level : 16
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc art rep_good nopl tsc_reliable nonstop_tsc extd_apicid eagerfpu pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext retpoline_amd vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero ibpb ibrs arat pku ospke overflow_recov succor
bogomips : 3792.87
TLB size : 2560 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 45 bits physical, 48 bits virtual

Step 8: View memory information (cat /proc/meminfo); this shows total memory, free memory, and related details.

[root@master ~]# cat /proc/meminfo 
MemTotal: 3863564 kB
MemFree: 2971196 kB
MemAvailable: 3301828 kB
Buffers: 2120 kB
Cached: 529608 kB
SwapCached: 0 kB
Active: 626584 kB
Inactive: 130828 kB
Active(anon): 226348 kB
Inactive(anon): 11152 kB
Active(file): 400236 kB
Inactive(file): 119676 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 8257532 kB
SwapFree: 8257532 kB
Dirty: 0 kB
Writeback: 0 kB
AnonPages: 225716 kB
Mapped: 28960 kB
Shmem: 11816 kB
Slab: 59428 kB
SReclaimable: 30680 kB
SUnreclaim: 28748 kB
KernelStack: 4432 kB
PageTables: 3664 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 10189312 kB
Committed_AS: 773684 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 185376 kB
VmallocChunk: 34359310332 kB
HardwareCorrupted: 0 kB
AnonHugePages: 180224 kB
CmaTotal: 0 kB
CmaFree: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 73536 kB
DirectMap2M: 3072000 kB
DirectMap1G: 3145728 kB
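A more compact summary of the same figures can be obtained with free (a quick alternative; the -h flag prints human-readable units):

[root@master ~]# free -h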

2. Checking Hadoop Status via Commands

Step 1: Switch to the hadoop user

[root@master ~]# su - hadoop
Last login: Fri Apr 21 15:26:05 CST 2023 on pts/0
Last failed login: Fri Apr 21 16:02:08 CST 2023 from slave1 on ssh:notty
There were 4 failed login attempts since the last successful login.

If the current user is root, switch to the hadoop user before performing the following operations.

Step 2: Change to the Hadoop installation directory

[hadoop@master ~]$ cd /usr/local/src/hadoop/

Step 3: Start Hadoop

[hadoop@master ~]$ start-all.sh 
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: starting namenode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-namenode-master.example.com.out
192.168.88.201: starting datanode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-datanode-slave2.example.com.out
192.168.88.200: starting datanode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-datanode-slave1.example.com.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-secondarynamenode-master.example.com.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/src/hadoop/logs/yarn-hadoop-resourcemanager-master.example.com.out
192.168.88.200: starting nodemanager, logging to /usr/local/src/hadoop/logs/yarn-hadoop-nodemanager-slave1.example.com.out
192.168.88.201: starting nodemanager, logging to /usr/local/src/hadoop/logs/yarn-hadoop-nodemanager-slave2.example.com.out

Step 4: Stop Hadoop

[hadoop@master hadoop]$ stop-all.sh 
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [master]
master: stopping namenode
192.168.88.201: stopping datanode
192.168.88.200: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
stopping yarn daemons
stopping resourcemanager
192.168.88.200: stopping nodemanager
192.168.88.201: stopping nodemanager
no proxyserver to stop

Experiment 3: Monitoring Big Data Platform Resource Status via Commands

1. Checking YARN Status via Commands

Step 1: Make sure the working directory is /usr/local/src/hadoop

[hadoop@master hadoop]$ cd /usr/local/src/hadoop/

Step 2: Back in the terminal, start ZooKeeper on every node and then run start-all.sh on the master host

#Start ZooKeeper on the master node
[hadoop@master hadoop]$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/src/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

#Start ZooKeeper on the slave1 node
[root@slave1 ~]# zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/src/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

#Start ZooKeeper on the slave2 node
[root@slave2 ~]# zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/src/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

#Start Hadoop on the master node
[hadoop@master hadoop]$ start-all.sh

Step 3: Run the jps command; if the ResourceManager process is running on master (the NodeManager processes run on the slave nodes), YARN has started.

[hadoop@master hadoop]$ jps

The result is as follows, indicating that YARN is running.

[hadoop@master hadoop]$ jps
2642 QuorumPeerMain
2994 SecondaryNameNode
3154 ResourceManager
3413 Jps
2795 NameNode

2. Checking HDFS Status via Commands

Step 1: Directory operations

Change to the Hadoop directory by running cd /usr/local/src/hadoop

[hadoop@master hadoop]$ cd /usr/local/src/hadoop/

List the HDFS root directory

[hadoop@master hadoop]$  ./bin/hdfs dfs -ls /
Found 5 items
drwxr-xr-x - hadoop supergroup 0 2023-04-07 15:20 /hbase
drwxr-xr-x - hadoop supergroup 0 2023-03-17 17:33 /input
drwxr-xr-x - hadoop supergroup 0 2023-03-17 18:45 /output
drwx------ - hadoop supergroup 0 2023-04-21 16:02 /tmp
drwxr-xr-x - hadoop supergroup 0 2023-04-14 20:48 /user

Step 2: View the HDFS report by running bin/hdfs dfsadmin -report

[hadoop@master hadoop]$ bin/hdfs dfsadmin -report
Configured Capacity: 107321753600 (99.95 GB)
Present Capacity: 102855995392 (95.79 GB)
DFS Remaining: 102852079616 (95.79 GB)
DFS Used: 3915776 (3.73 MB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (2):

Name: 192.168.88.201:50010 (slave2)
Hostname: slave2
Decommission Status : Normal
Configured Capacity: 53660876800 (49.98 GB)
DFS Used: 1957888 (1.87 MB)
Non DFS Used: 2232373248 (2.08 GB)
DFS Remaining: 51426545664 (47.89 GB)
DFS Used%: 0.00%
DFS Remaining%: 95.84%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sun Apr 23 21:56:46 CST 2023


Name: 192.168.88.200:50010 (slave1)
Hostname: slave1
Decommission Status : Normal
Configured Capacity: 53660876800 (49.98 GB)
DFS Used: 1957888 (1.87 MB)
Non DFS Used: 2233384960 (2.08 GB)
DFS Remaining: 51425533952 (47.89 GB)
DFS Used%: 0.00%
DFS Remaining%: 95.83%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sun Apr 23 21:56:46 CST 2023

Step 3: Check HDFS space usage by running hdfs dfs -df

[hadoop@master hadoop]$ hdfs dfs -df /
Filesystem Size Used Available Use%
hdfs://master:9000 107321753600 3915776 102852079616 0%
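For human-readable sizes, the same command accepts the -h flag (a quick variant of the step above):

[hadoop@master hadoop]$ hdfs dfs -df -h /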

3. Checking HBase Status via Commands

Step 1: Start HBase

Change to the HBase installation directory /usr/local/src/hbase:

[hadoop@master hadoop]$ cd /usr/local/src/hbase/
[hadoop@master hbase]$ hbase version
HBase 1.2.1
Source code repository git://asf-dev/home/busbey/projects/hbase revision=8d8a7107dc4ccbf36a92f64675dc60392f85c015
Compiled by busbey on Wed Mar 30 11:19:21 CDT 2016
From source with checksum f4bb4a14bb4e0b72b46f729dae98a772

The output shows HBase 1.2.1, confirming that the installed HBase version is 1.2.1.

If HBase is not running, start it with the start-hbase.sh command.
[hadoop@master hbase]$ start-hbase.sh
master: starting zookeeper, logging to /usr/local/src/hbase/logs/hbase-hadoop-zookeeper-master.example.com.out
slave1: starting zookeeper, logging to /usr/local/src/hbase/logs/hbase-hadoop-zookeeper-slave1.example.com.out
slave2: starting zookeeper, logging to /usr/local/src/hbase/logs/hbase-hadoop-zookeeper-slave2.example.com.out
starting master, logging to /usr/local/src/hbase/logs/hbase-hadoop-master-master.example.com.out
slave2: starting regionserver, logging to /usr/local/src/hbase/logs/hbase-hadoop-regionserver-slave2.example.com.out
slave1: starting regionserver, logging to /usr/local/src/hbase/logs/hbase-hadoop-regionserver-slave1.example.com.out

Step 2: View HBase version information

Run hbase shell to enter the HBase interactive shell

[hadoop@master hbase]$ hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/src/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/src/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.1, r8d8a7107dc4ccbf36a92f64675dc60392f85c015, Wed Mar 30 11:19:21 CDT 2016

hbase(main):001:0>

Type version to query the HBase version

hbase(main):001:0> version
1.2.1, r8d8a7107dc4ccbf36a92f64675dc60392f85c015, Wed Mar 30 11:19:21 CDT 2016

The output shows the HBase version is 1.2.1.

Step 3: Query the HBase status by running the status command in the HBase shell

hbase(main):002:0> status
1 active master, 0 backup masters, 2 servers, 0 dead, 1.0000 average load

The result shows 1 active master, 0 backup masters, 2 region servers, 0 dead servers, and an average load of 1.0000 (average number of regions per server).

We can also query a simplified view of the HBase status by running status 'simple'

hbase(main):003:0> status 'simple'
active master: master:16000 1682258321433
0 backup masters
2 live servers
slave1:16020 1682258322908
requestsPerSecond=0.0, numberOfOnlineRegions=2, usedHeapMB=19, maxHeapMB=440, numberOfStores=2, numberOfStorefiles=0, storefileUncompressedSizeMB=0, storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0, readRequestsCount=7, writeRequestsCount=4, rootIndexSizeKB=0 totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, coprocessors=[MultiRowMutationEndpoint]
slave2:16020 1682258322896
requestsPerSecond=0.0, numberOfOnlineRegions=0, usedHeapMB=11, maxHeapMB=440, numberOfStores=0, numberOfStorefiles=0, storefileUncompressedSizeMB=0, storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0, readRequestsCount=0, writeRequestsCount=0, rootIndexSizeKB=0, totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, coprocessors=[]
0 dead servers
Aggregate load: 0, regions: 2

This shows more detail about the master, slave1, and slave2 hosts, such as service ports and request counters.

For more ways to query HBase status, run help 'status'

hbase(main):004:0> help 'status'
Show cluster status. Can be 'summary', 'simple', 'detailed', or 'replication'. The
default is 'summary'. Examples:

hbase> status
hbase> status 'simple'
hbase> status 'summary'
hbase> status 'detailed'
hbase> status 'replication'
hbase> status 'replication', 'source'
hbase> status 'replication', 'sink'

hbase(main):005:0> quit
[hadoop@master hbase]$

The output lists all variants of the status command.

Step 4: Stop the HBase service

To stop the HBase service, run stop-hbase.sh.

[hadoop@master hbase]$ stop-hbase.sh 
stopping hbase.................
slave1: no zookeeper to stop because no pid file /tmp/hbase-hadoop-zookeeper.pid
slave2: no zookeeper to stop because no pid file /tmp/hbase-hadoop-zookeeper.pid
master: no zookeeper to stop because no pid file /tmp/hbase-hadoop-zookeeper.pid

When no error is reported and the $ prompt returns, the HBase service has stopped.

4. Checking Hive Status via Commands

Step 1: Start Hive

Change to the /usr/local/src/hive directory, type hive, and press Enter.

[hadoop@master hbase]$ cd /usr/local/src/hive/
[hadoop@master hive]$ hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/src/hive/lib/hive-jdbc-2.0.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/src/hive/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/src/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in jar:file:/usr/local/src/hive/lib/hive-common-2.0.0.jar!/hive-log4j2.properties
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive>

When the hive> prompt appears, Hive has started successfully and you are in the Hive shell.

Step 2: Basic Hive commands

Note: Hive statements on the command line must end with a semicolon.

(1) List the databases

hive> show databases;
OK
default
sample
Time taken: 0.973 seconds, Fetched: 2 row(s)

The output shows the default database along with the previously created sample database.

(2) List all tables in the default database

hive> use default;
OK
Time taken: 0.02 seconds
hive> show tables;
OK
test
Time taken: 2.201 seconds, Fetched: 1 row(s)

The output shows that the default database currently contains one table, test.

(3) Create a table stu with an integer id column and a string name column

hive> create table stu(id int,name string);
OK
Time taken: 0.432 seconds

(4) Insert a row into stu with id 001 and name liuyaling

hive> insert into stu values (001,"liuyaling");
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = hadoop_20230423222915_a95e9891-fdf5-4739-a63e-fcadecc85e28
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1682258121749_0001, Tracking URL = http://master:8088/proxy/application_1682258121749_0001/
Kill Command = /usr/local/src/hadoop/bin/hadoop job -kill job_1682258121749_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2023-04-23 22:31:36,420 Stage-1 map = 0%, reduce = 0%
2023-04-23 22:31:43,892 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 2.72 sec
MapReduce Total cumulative CPU time: 2 seconds 720 msec
Ended Job = job_1682258121749_0001
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to: hdfs://master:9000/user/hive/warehouse/stu/.hive-staging_hive_2023-04-23_22-30-32_985_529079703757687911-1/-ext-10000
Loading data to table default.stu
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Cumulative CPU: 2.72 sec HDFS Read: 4135 HDFS Write: 79 SUCCESS
Total MapReduce CPU Time Spent: 2 seconds 720 msec
OK
Time taken: 72.401 seconds

Following the same procedure, insert two more rows with ids 1002 and 1003 and names yanhaoxiang and tnt, as sketched below.
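The two additional insert statements look like this (each one launches its own MapReduce job, whose verbose output is omitted here):

hive> insert into stu values (1002,"yanhaoxiang");
hive> insert into stu values (1003,"tnt");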

(5) View the tables after inserting data

hive> show tables;
OK
stu
test
values__tmp__table__1
values__tmp__table__2
Time taken: 0.026 seconds, Fetched: 4 row(s)

(6) View the structure of table stu

hive> desc stu;
OK
id int
name string
Time taken: 0.041 seconds, Fetched: 2 row(s)

(7) View the contents of table stu

hive> select * from stu;
OK
1 liuyaling
1002 yanhaoxiang
1002 yanhaoxiang
1003 tnt
Time taken: 0.118 seconds, Fetched: 4 row(s)

Step 3: View the file systems and command history from the Hive command line

(1) View the local file system by running ! ls /usr/local/src;

hive> ! ls /usr/local/src;
flume
hadoop
hbase
hive
jdk
sqoop
zookeeper

(2) View the HDFS file system by running dfs -ls /;

hive> dfs -ls /;
Found 5 items
drwxr-xr-x - hadoop supergroup 0 2023-04-23 21:58 /hbase
drwxr-xr-x - hadoop supergroup 0 2023-03-17 17:33 /input
drwxr-xr-x - hadoop supergroup 0 2023-03-17 18:45 /output
drwx------ - hadoop supergroup 0 2023-04-21 16:02 /tmp
drwxr-xr-x - hadoop supergroup 0 2023-04-14 20:48 /user

hive> exit;

(3) View all commands previously entered in Hive

Go to the hadoop user's home directory /home/hadoop and inspect the .hivehistory file.

[hadoop@master hive]$ cd /home/hadoop/
[hadoop@master ~]$ cat .hivehistory
show databases;
create database sample;
show databases;
use sample;
create table student(number string,name string);
exit
show databases;
exit;
use sample;
select * from student;
sqoop export --connect "jdbc:mysql://master:3306/sample?useUnicode=true&characterEncoding=utf-8" --username root --password Password@123! --table student --input-fields-terminated-by '|' --export-dir /user/hive/warehouse/sample.db/student/*
sqoop export --connect "jdbc:mysql://master:3306/sample?useUnicode=true&characterEncoding=utf-8" --username root --password Password@123! --table student --input-fields-terminated-by '|' --export-dir /user/hive/warehouse/sample.db/student/*;
sqoop export --connect "jdbc:mysql://master3306/sample?useUnicode=true&characterEncoding=utf-8" --username root --password Password@123! --table student --input-fields-terminated-by '|' --export-dir /user/hive/warehouse/sample.db/student/*
sqoop export --connect "jdbc:mysql://master3306/sample?useUnicode=true&characterEncoding=utf-8" --username root --password Password@123! --table student --input-fields-terminated-by '|' --export-dir /user/hive/warehouse/sample.db/student/*;
sqoop export --connect "jdbc:mysql://master3306/sample?useUnicode=true&characterEncoding=utf-8" --username root --password Password@123! --table student --input-fields-terminated-by '|' --export-dir /user/hive/warehouse/sample.db/student/*
exit
show databases;
use sample;
show tables;
use default;
show tables;
quit
'
select*from default.test;
quit;
show databases;
use default;
show tables;
insert into stu values (001,"liuyaling");
create table stu(id int,name string);
insert into stu values (001,"liuyaling");
insert into stu values (1002,"yanhaoxiang");
hive
insert into stu values (1003,"tnt");
quit
exit
exit;
quit;
show tables;
insert into stu values (1002,"yanhaoxiang");
insert into stu values (1003,"tnt");
show tables;
desc stu;
select * from stu;
! ls /usr/local/src;
dfs -ls /;
exit;

The output shows every command previously run in the Hive command line (including mistyped ones), which is helpful for maintenance and troubleshooting.

Experiment 4: Monitoring Big Data Platform Service Status via Commands

1. Checking ZooKeeper Status via Commands

Step 1: Check the ZooKeeper status by running zkServer.sh status; the result is shown below

[hadoop@master ~]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/src/zookeeper/bin/../conf/zoo.cfg
Mode: follower

In the result above, Mode: follower means this node is a ZooKeeper follower.

Step 2: Check the running processes

QuorumPeerMain is the startup entry class of a ZooKeeper cluster node; it loads the configuration and launches the QuorumPeer thread.

Run the jps command to check the processes.

[hadoop@master ~]$ jps
2642 QuorumPeerMain
2994 SecondaryNameNode
3154 ResourceManager
5400 Jps
2795 NameNode

The QuorumPeerMain process is now running.

Step 3: After the ZooKeeper service has started successfully, run zkCli.sh to connect to the ZooKeeper service.

[hadoop@master ~]$ zkCli.sh 
Connecting to localhost:2181
2023-04-23 22:39:17,093 [myid:] - INFO [main:Environment@100] - Client environment:zookeeper.version=3.4.8--1, built on 02/06/2016 03:18 GMT
2023-04-23 22:39:17,096 [myid:] - INFO [main:Environment@100] - Client environment:host.name=master
2023-04-23 22:39:17,096 [myid:] - INFO [main:Environment@100] - Client environment:java.version=1.8.0_152
2023-04-23 22:39:17,097 [myid:] - INFO [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
2023-04-23 22:39:17,097 [myid:] - INFO [main:Environment@100] - Client environment:java.home=/usr/local/src/jdk/jre
2023-04-23 22:39:17,097 [myid:] - INFO [main:Environment@100] - Client environment:java.class.path=/usr/local/src/zookeeper/bin/../build/classes:/usr/local/src/zookeeper/bin/../build/lib/*.jar:/usr/local/src/zookeeper/bin/../lib/slf4j-log4j12-1.6.1.jar:/usr/local/src/zookeeper/bin/../lib/slf4j-api-1.6.1.jar:/usr/local/src/zookeeper/bin/../lib/netty-3.7.0.Final.jar:/usr/local/src/zookeeper/bin/../lib/log4j-1.2.16.jar:/usr/local/src/zookeeper/bin/../lib/jline-0.9.94.jar:/usr/local/src/zookeeper/bin/../zookeeper-3.4.8.jar:/usr/local/src/zookeeper/bin/../src/java/lib/*.jar:/usr/local/src/zookeeper/bin/../conf:/usr/local/src/sqoop/lib:
2023-04-23 22:39:17,097 [myid:] - INFO [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2023-04-23 22:39:17,097 [myid:] - INFO [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
2023-04-23 22:39:17,097 [myid:] - INFO [main:Environment@100] - Client environment:java.compiler=<NA>
2023-04-23 22:39:17,097 [myid:] - INFO [main:Environment@100] - Client environment:os.name=Linux
2023-04-23 22:39:17,097 [myid:] - INFO [main:Environment@100] - Client environment:os.arch=amd64
2023-04-23 22:39:17,097 [myid:] - INFO [main:Environment@100] - Client environment:os.version=3.10.0-862.el7.x86_64
2023-04-23 22:39:17,097 [myid:] - INFO [main:Environment@100] - Client environment:user.name=hadoop
2023-04-23 22:39:17,097 [myid:] - INFO [main:Environment@100] - Client environment:user.home=/home/hadoop
2023-04-23 22:39:17,097 [myid:] - INFO [main:Environment@100] - Client environment:user.dir=/home/hadoop
2023-04-23 22:39:17,099 [myid:] - INFO [main:ZooKeeper@438] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@69d0a921
Welcome to ZooKeeper!
2023-04-23 22:39:17,122 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1032] - Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2023-04-23 22:39:17,208 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@876] - Socket connection established to localhost/0:0:0:0:0:0:0:1:2181, initiating session
2023-04-23 22:39:17,223 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1299] - Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x187ae88b8390000, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0]

Step 4: Use a watch to monitor the /hbase znode; whenever the contents of /hbase change, a notification is delivered. Set the watch by running get /hbase 1.

[zk: localhost:2181(CONNECTED) 0] get /hbase 1

cZxid = 0x500000002
ctime = Sun Apr 23 21:58:42 CST 2023
mZxid = 0x500000002
mtime = Sun Apr 23 21:58:42 CST 2023
pZxid = 0x500000062
cversion = 18
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 0
numChildren = 14

[zk: localhost:2181(CONNECTED) 1] quit
Quitting...
2023-04-23 22:40:16,816 [myid:] - INFO [main:ZooKeeper@684] - Session: 0x187ae88b8390001 closed
2023-04-23 22:40:16,817 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@519] - EventThread shut down for session: 0x187ae88b8390001
[hadoop@master ~]$

As the output shows, dataVersion is currently 0; if set /hbase value-update were then executed, the data version would change from 0 to 1 and the watch would report the change, showing that /hbase is being monitored.
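A sketch of what that follow-up interaction would look like (not executed in the transcript above; the notification follows zkCli's standard watch output format):

[zk: localhost:2181(CONNECTED) 1] set /hbase value-update

WATCHER::

WatchedEvent state:SyncConnected type:NodeDataChanged path:/hbase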

2. Checking Sqoop Status via Commands

Step 1: Query the Sqoop version number to verify that Sqoop works.

First change to the /usr/local/src/sqoop directory and run ./bin/sqoop-version

[hadoop@master ~]$ cd /usr/local/src/sqoop/
[hadoop@master sqoop]$ ./bin/sqoop-version
Warning: /usr/local/src/sqoop/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/local/src/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
23/04/23 22:40:59 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7
Sqoop 1.4.7
git commit id 2328971411f57f0cb683dfb79d19d4d19d185dd8
Compiled by maugli on Thu Dec 21 15:59:58 STD 2017

The output shows Sqoop 1.4.7, meaning the Sqoop version is 1.4.7 and it runs successfully.

Step 2: Test whether Sqoop can connect to the database

Change to the Sqoop directory and run bin/sqoop list-databases --connect jdbc:mysql://master:3306/ --username root --password Password@123!, where "master:3306" is the database host name and port.

[hadoop@master sqoop]$  bin/sqoop list-databases --connect jdbc:mysql://master:3306/ --username root --password Password@123!
Warning: /usr/local/src/sqoop/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/local/src/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
23/04/23 22:42:16 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7
23/04/23 22:42:16 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
23/04/23 22:42:16 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
Sun Apr 23 22:42:16 CST 2023 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
information_schema
hive
mysql
performance_schema
sample
sys

The output shows that Sqoop can connect to MySQL and list all database instances on the master host, such as information_schema, hive, mysql, performance_schema, sample, and sys.

Step 3: Run sqoop help; seeing the following output confirms that Sqoop works.

[hadoop@master sqoop]$ sqoop help
Warning: /usr/local/src/sqoop/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/local/src/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
23/04/23 22:42:37 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7
usage: sqoop COMMAND [ARGS]

Available commands:
codegen Generate code to interact with database records
create-hive-table Import a table definition into Hive
eval Evaluate a SQL statement and display the results
export Export an HDFS directory to a database table
help List available commands
import Import a table from a database to HDFS
import-all-tables Import tables from a database to HDFS
import-mainframe Import datasets from a mainframe server to HDFS
job Work with saved jobs
list-databases List available databases on a server
list-tables List available tables in a database
merge Merge results of incremental imports
metastore Run a standalone Sqoop metastore
version Display version information

See 'sqoop help COMMAND' for information on a specific command.

The output lists Sqoop's common commands and their functions, as summarized in the table below.

No.  Command            Function
1    import             Import data into the cluster
2    export             Export data from the cluster
3    codegen            Generate code to interact with database records
4    create-hive-table  Create a Hive table
5    eval               View the result of a SQL statement
6    import-all-tables  Import all tables of a database into HDFS
7    job                Create a saved job
8    list-databases     List all database names
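As a usage example, list-tables can be run against the sample database referenced earlier (a sketch; the credentials follow the connection string used above):

[hadoop@master sqoop]$ bin/sqoop list-tables --connect jdbc:mysql://master:3306/sample --username root --password Password@123!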