Compared with Apache Hadoop, whose usability is poor, the commercial Hadoop distributions offer better performance and ease of use, for example Cloudera, Hortonworks, MapR, and the domestic Transwarp (星環). Below we take CDH (Cloudera Distribution Hadoop) for a quick test drive.
First, download the pre-built virtual machine image from the Cloudera website at https://www.cloudera.com/downloads/quickstart_vms/5-13.html, unpack it, and open it in your VM software. The official recommendation is at least 8 GB of RAM and 2 CPUs; since my laptop has enough headroom, I started it with 8 GB of RAM and 8 CPUs instead. All of the VM's usernames and passwords are cloudera.
Open the browser inside the VM and go to http://quickstart.cloudera/#/
Click Get Started to begin the tutorial.
Tutorial Exercise 1: Import and query relational data
Use the Sqoop tool to import the MySQL data into HDFS:
[cloudera@quickstart ~]$ sqoop import-all-tables \
    > -m 1 \
    > --connect jdbc:mysql://quickstart:3306/retail_db \
    > --username=retail_dba \
    > --password=cloudera \
    > --compression-codec=snappy \
    > --as-parquetfile \
    > --warehouse-dir=/user/hive/warehouse \
    > --hive-import
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
19/04/29 18:31:46 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.13.0
19/04/29 18:31:46 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
19/04/29 18:31:46 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
19/04/29 18:31:46 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
19/04/29 18:31:46 WARN tool.BaseSqoopTool: It seems that you're doing hive import directly into default
(many more lines suppressed)
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=87
        CPU time spent (ms)=3690
        Physical memory (bytes) snapshot=443174912
        Virtual memory (bytes) snapshot=1616969728
        Total committed heap usage (bytes)=352845824
    File Input Format Counters
        Bytes Read=0
    File Output Format Counters
        Bytes Written=0
19/04/29 18:38:27 INFO mapreduce.ImportJobBase: Transferred 46.1328 KB in 85.1717 seconds (554.6442 bytes/sec)
19/04/29 18:38:27 INFO mapreduce.ImportJobBase: Retrieved 1345 records.
[cloudera@quickstart ~]$ hadoop fs -ls /user/hive/warehouse/
Found 6 items
drwxrwxrwx   - cloudera supergroup          0 2019-04-29 18:32 /user/hive/warehouse/categories
drwxrwxrwx   - cloudera supergroup          0 2019-04-29 18:33 /user/hive/warehouse/customers
drwxrwxrwx   - cloudera supergroup          0 2019-04-29 18:34 /user/hive/warehouse/departments
drwxrwxrwx   - cloudera supergroup          0 2019-04-29 18:35 /user/hive/warehouse/order_items
drwxrwxrwx   - cloudera supergroup          0 2019-04-29 18:36 /user/hive/warehouse/orders
drwxrwxrwx   - cloudera supergroup          0 2019-04-29 18:38 /user/hive/warehouse/products
[cloudera@quickstart ~]$ hadoop fs -ls /user/hive/warehouse/categories/
Found 3 items
drwxr-xr-x   - cloudera supergroup          0 2019-04-29 18:31 /user/hive/warehouse/categories/.metadata
drwxr-xr-x   - cloudera supergroup          0 2019-04-29 18:32 /user/hive/warehouse/categories/.signals
-rw-r--r--   1 cloudera supergroup       1957 2019-04-29 18:32 /user/hive/warehouse/categories/6e701a22-4f74-4623-abd1-965077105fd3.parquet
[cloudera@quickstart ~]$
Then open Hue at http://quickstart.cloudera:8888/ and query the imported tables there (run invalidate metadata; first, which refreshes Impala's metadata cache so it can see the newly created tables).
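As a quick sanity check in the Impala query editor, something like the following can be run. This is only a sketch: the column names (product_category_id, order_item_product_id, and so on) are assumed from the retail_db sample schema that ships with the QuickStart VM, so adjust them if your schema differs.

-- refresh Impala's metadata cache so it sees the tables Sqoop just created
invalidate metadata;

-- example: top product categories by number of items ordered
-- (column names assumed from the retail_db sample schema)
SELECT c.category_name,
       COUNT(oi.order_item_quantity) AS order_count
FROM order_items oi
JOIN products   p ON oi.order_item_product_id = p.product_id
JOIN categories c ON p.product_category_id    = c.category_id
GROUP BY c.category_name
ORDER BY order_count DESC
LIMIT 10;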
Tutorial Exercise 2: Load the access-log data into HDFS as an external table and query it
Create the tables through Hive:
CREATE EXTERNAL TABLE intermediate_access_logs (
    ip STRING,
    date STRING,
    method STRING,
    url STRING,
    http_version STRING,
    code1 STRING,
    code2 STRING,
    dash STRING,
    user_agent STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
    'input.regex' = '([^ ]*) - - \\[([^\\]]*)\\] "([^\ ]*) ([^\ ]*) ([^\ ]*)" (\\d*) (\\d*) "([^"]*)" "([^"]*)"',
    'output.format.string' = "%1$$s %2$$s %3$$s %4$$s %5$$s %6$$s %7$$s %8$$s %9$$s")
LOCATION '/user/hive/warehouse/original_access_logs';

CREATE EXTERNAL TABLE tokenized_access_logs (
    ip STRING,
    date STRING,
    method STRING,
    url STRING,
    http_version STRING,
    code1 STRING,
    code2 STRING,
    dash STRING,
    user_agent STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/user/hive/warehouse/tokenized_access_logs';

ADD JAR /usr/lib/hive/lib/hive-contrib.jar;

INSERT OVERWRITE TABLE tokenized_access_logs SELECT * FROM intermediate_access_logs;
After refreshing the metadata in Impala, query the table.
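In Impala this looks roughly like the query below; it follows the tutorial's "most visited product pages" idea, and the '%/product/%' pattern is only an assumption about how product URLs appear in the sample log.

-- make Impala aware of the tables that were created through Hive
invalidate metadata;

-- example: which product pages are requested most often
SELECT url, COUNT(*) AS hits
FROM tokenized_access_logs
WHERE url LIKE '%/product/%'
GROUP BY url
ORDER BY hits DESC
LIMIT 10;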
Tutorial Exercise 3: Relationship (association) analysis with Spark
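The exercise itself runs a Spark job; to give a feel for what it computes, here is the same "products bought together" idea expressed as a SQL sketch against the retail_db tables. The column names are assumptions, and this is not the tutorial's actual Spark code.

-- sketch: count pairs of products that appear in the same order
-- (assumes retail_db column names; the tutorial does this in Spark, not SQL)
SELECT a.order_item_product_id AS product_a,
       b.order_item_product_id AS product_b,
       COUNT(*)                AS times_bought_together
FROM order_items a
JOIN order_items b
  ON a.order_item_order_id = b.order_item_order_id
 AND a.order_item_product_id < b.order_item_product_id
GROUP BY a.order_item_product_id, b.order_item_product_id
ORDER BY times_bought_together DESC
LIMIT 10;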
Tutorial Exercise 4: Collect logs with Flume and build a full-text index with Solr
Tutorial Exercise 5: Visualization
Tutorial is over!