Spark SQL can act as a distributed query engine through its JDBC/ODBC or command-line interface. In this mode, users and applications can interact with Spark SQL directly to run SQL queries, without writing any code.
Spark SQL provides two ways to run SQL:
- by running the Thrift Server
- by running the Spark SQL command line directly
Running via the Thrift Server
1. Start the Hive metastore first:
nohup hive --service metastore &
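Before moving on, it can help to confirm the metastore is actually listening. A quick check, assuming the default metastore port 9083 and a system with `ss` available (use `netstat -anp` on older distributions):

```shell
# Check that the Hive metastore is listening on its default port (9083).
ss -lntp | grep 9083
```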
2. Add the following configuration to hdfs-site.xml:
<property>
    <name>fs.hdfs.impl.disable.cache</name>
    <value>true</value>
</property>
This disables the caching of HDFS FileSystem instances, so that sessions do not share a cached handle (a shared handle can lead to "Filesystem closed" errors when one session closes it).
3. Start the Thrift Server:
[root@node1 sbin]# pwd
/export/servers/spark-2.2.0-bin-hadoop2.6/sbin
[root@node1 sbin]# ./start-thriftserver.sh --master local[*]
starting org.apache.spark.sql.hive.thriftserver.HiveThriftServer2, logging to /export/servers/spark-2.2.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-node1.out
The default port is 10000.
Note: the start-thriftserver.sh script accepts all of spark-submit's command-line options.
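For example, the usual spark-submit flags for master URL and resources can be passed through, along with `--hiveconf` for server settings. A sketch, where the cluster URL, resource sizes, and port are illustrative values only:

```shell
# Illustrative: run the Thrift Server on a standalone cluster with
# custom resources and a non-default port (all values are examples).
./start-thriftserver.sh \
  --master spark://node1:7077 \
  --executor-memory 2g \
  --total-executor-cores 4 \
  --hiveconf hive.server2.thrift.port=10001
```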
4. Connect to the Thrift Server with beeline:
[root@node1 bin]# ./beeline
Beeline version 1.2.1.spark2 by Apache Hive
beeline> !connect jdbc:hive2://node1:10000
Connecting to jdbc:hive2://node1:10000
Enter username for jdbc:hive2://node1:10000: root
Enter password for jdbc:hive2://node1:10000:
20/02/01 22:26:41 INFO jdbc.Utils: Supplied authorities: node1:10000
20/02/01 22:26:41 INFO jdbc.Utils: Resolved authority: node1:10000
20/02/01 22:26:41 INFO jdbc.HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://node1:10000
Connected to: Spark SQL (version 2.2.0)
Driver: Hive JDBC (version 1.2.1.spark2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://node1:10000> show databases;
+---------------+--+
| databaseName |
+---------------+--+
| default |
| demo |
| job_analysis |
| test |
+---------------+--+
4 rows selected (0.629 seconds)
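Besides the interactive session above, beeline can also run a query non-interactively, which is convenient in scripts. A minimal sketch using beeline's standard flags (the username and query are examples):

```shell
# Run a single query against the Thrift Server without an interactive session.
# -u = JDBC URL, -n = username, -e = query string.
./beeline -u jdbc:hive2://node1:10000 -n root -e "show databases;"
```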