Setting Up a Single-Machine Spark Cluster

Source: http://www.cnblogs.com/ivictor/archive/2016/01/18/5135792.html



1. Create a user

# useradd spark

# passwd spark

 

2. Download the software

JDK, Scala, SBT, Maven

The versions are as follows:

JDK jdk-7u79-linux-x64.gz

Scala scala-2.10.5.tgz

SBT sbt-0.13.7.zip

Maven apache-maven-3.2.5-bin.tar.gz

Note: if you just want a working Spark environment, JDK and Scala are enough; SBT and Maven are only needed for compiling the source code later.

 

3. Extract the files above and configure environment variables

# cd /usr/local/

# tar xvf /root/jdk-7u79-linux-x64.gz

# tar xvf /root/scala-2.10.5.tgz

# tar xvf /root/apache-maven-3.2.5-bin.tar.gz

# unzip /root/sbt-0.13.7.zip
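At this point /usr/local should contain the four directories referenced below (pre-existing system directories omitted); note that the sbt zip is assumed to unpack into a top-level sbt directory, which is what the SBT_HOME setting expects:

# ls /usr/local

apache-maven-3.2.5  jdk1.7.0_79  sbt  scala-2.10.5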

Edit the environment variable configuration file:

# vim /etc/profile

export JAVA_HOME=/usr/local/jdk1.7.0_79
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export SCALA_HOME=/usr/local/scala-2.10.5
export MAVEN_HOME=/usr/local/apache-maven-3.2.5
export SBT_HOME=/usr/local/sbt
export PATH=$PATH:$JAVA_HOME/bin:$SCALA_HOME/bin:$MAVEN_HOME/bin:$SBT_HOME/bin

Apply the configuration:

# source /etc/profile

Check whether the environment variables took effect:

# java -version

java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)

# scala -version

Scala code runner version 2.10.5 -- Copyright 2002-2013, LAMP/EPFL

# mvn -version

Apache Maven 3.2.5 (12a6b3acb947671f09b81f49094c53f426d8cea1; 2014-12-15T01:29:23+08:00)
Maven home: /usr/local/apache-maven-3.2.5
Java version: 1.7.0_79, vendor: Oracle Corporation
Java home: /usr/local/jdk1.7.0_79/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-229.el7.x86_64", arch: "amd64", family: "unix"

# sbt --version

sbt launcher version 0.13.7

 

4. Hostname binding

[root@spark01 ~]# vim /etc/hosts

192.168.244.147 spark01
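A quick check that the name resolves:

[root@spark01 ~]# ping -c 1 spark01

PING spark01 (192.168.244.147) 56(84) bytes of data.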

 

5. Configure Spark

Switch to the spark user.

Download Hadoop and Spark; you can use wget:

spark-1.4.0 http://d3kbcqa49mib13.cloudfront.net/spark-1.4.0-bin-hadoop2.6.tgz

Hadoop http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
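With wget, the downloads look like this:

[spark@spark01 ~]$ wget http://d3kbcqa49mib13.cloudfront.net/spark-1.4.0-bin-hadoop2.6.tgz

[spark@spark01 ~]$ wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz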

Extract the downloaded archives and configure the environment variables.
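Run from the spark user's home directory, the extraction mirrors step 3:

[spark@spark01 ~]$ tar xvf spark-1.4.0-bin-hadoop2.6.tgz

[spark@spark01 ~]$ tar xvf hadoop-2.6.0.tar.gz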

Edit the spark user's environment variable configuration file:

[spark@spark01 ~]$ vim .bash_profile

export SPARK_HOME=$HOME/spark-1.4.0-bin-hadoop2.6
export HADOOP_HOME=$HOME/hadoop-2.6.0
export HADOOP_CONF_DIR=$HOME/hadoop-2.6.0/etc/hadoop
export PATH=$PATH:$SPARK_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

Apply the configuration:

[spark@spark01 ~]$ source .bash_profile
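As in step 3, it is worth checking that the new paths resolve; hadoop version should report 2.6.0, and spark-submit --version prints the Spark 1.4.0 banner:

[spark@spark01 ~]$ hadoop version

Hadoop 2.6.0

[spark@spark01 ~]$ spark-submit --version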

Edit the Spark configuration file:

[spark@spark01 ~]$ cd spark-1.4.0-bin-hadoop2.6/conf/

[spark@spark01 conf]$ cp spark-env.sh.template spark-env.sh

[spark@spark01 conf]$ vim spark-env.sh

Add the following at the end:

export SCALA_HOME=/usr/local/scala-2.10.5
export SPARK_MASTER_IP=spark01
export SPARK_WORKER_MEMORY=1500m
export JAVA_HOME=/usr/local/jdk1.7.0_79

If you can spare the memory, set SPARK_WORKER_MEMORY somewhat higher; my VM has only 2 GB of RAM, so I gave it 1500m.

 

Configure slaves

[spark@spark01 conf]$ cp slaves.template slaves

[spark@spark01 conf]$ vim slaves

Change localhost to spark01.
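Ignoring any comment lines, the file should now list just the one worker host:

[spark@spark01 conf]$ grep -v '^#' slaves

spark01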

 

Start the master

[spark@spark01 spark-1.4.0-bin-hadoop2.6]$ sbin/start-master.sh

starting org.apache.spark.deploy.master.Master, logging to /home/spark/spark-1.4.0-bin-hadoop2.6/sbin/../logs/spark-spark-org.apache.spark.deploy.master.Master-1-spark01.out

 

Check the contents of that log:

[spark@spark01 spark-1.4.0-bin-hadoop2.6]$ cd logs/

[spark@spark01 logs]$ cat spark-spark-org.apache.spark.deploy.master.Master-1-spark01.out

Spark Command: /usr/local/jdk1.7.0_79/bin/java -cp /home/spark/spark-1.4.0-bin-hadoop2.6/sbin/../conf/:/home/spark/spark-1.4.0-bin-hadoop2.6/lib/spark-assembly-1.4.0-hadoop2.6.0.jar:/home/spark/spark-1.4.0-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar:/home/spark/spark-1.4.0-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar:/home/spark/spark-1.4.0-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar:/home/spark/hadoop-2.6.0/etc/hadoop/ -Xms512m -Xmx512m -XX:MaxPermSize=128m org.apache.spark.deploy.master.Master --ip spark01 --port 7077 --webui-port 8080
========================================
16/01/16 15:12:30 INFO master.Master: Registered signal handlers for [TERM, HUP, INT]
16/01/16 15:12:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/01/16 15:12:32 INFO spark.SecurityManager: Changing view acls to: spark
16/01/16 15:12:32 INFO spark.SecurityManager: Changing modify acls to: spark
16/01/16 15:12:32 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(spark); users with modify permissions: Set(spark)
16/01/16 15:12:33 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/01/16 15:12:33 INFO Remoting: Starting remoting
16/01/16 15:12:33 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkMaster@spark01:7077]
16/01/16 15:12:33 INFO util.Utils: Successfully started service 'sparkMaster' on port 7077.
16/01/16 15:12:34 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/01/16 15:12:34 INFO server.AbstractConnector: Started SelectChannelConnector@spark01:6066
16/01/16 15:12:34 INFO util.Utils: Successfully started service on port 6066.
16/01/16 15:12:34 INFO rest.StandaloneRestServer: Started REST server for submitting applications on port 6066
16/01/16 15:12:34 INFO master.Master: Starting Spark master at spark://spark01:7077
16/01/16 15:12:34 INFO master.Master: Running Spark version 1.4.0
16/01/16 15:12:34 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/01/16 15:12:34 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:8080
16/01/16 15:12:34 INFO util.Utils: Successfully started service 'MasterUI' on port 8080.
16/01/16 15:12:34 INFO ui.MasterWebUI: Started MasterWebUI at http://192.168.244.147:8080
16/01/16 15:12:34 INFO master.Master: I have been elected leader! New state: ALIVE

 

The log also shows that the master started normally.

Next, take a look at the master's web UI, which defaults to port 8080.
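On a headless VM, curl can confirm the UI is responding (assuming curl is installed):

[spark@spark01 ~]$ curl -s -o /dev/null -w "%{http_code}\n" http://spark01:8080

200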

Start the worker

[spark@spark01 spark-1.4.0-bin-hadoop2.6]$ sbin/start-slaves.sh spark://spark01:7077

spark01: Warning: Permanently added 'spark01,192.168.244.147' (ECDSA) to the list of known hosts.
spark@spark01's password:
spark01: starting org.apache.spark.deploy.worker.Worker, logging to /home/spark/spark-1.4.0-bin-hadoop2.6/sbin/../logs/spark-spark-org.apache.spark.deploy.worker.Worker-1-spark01.out

Enter the spark user's password for spark01.

You can confirm from the log that the worker started properly; there is a lot of output, so it is not reproduced here in full.

[spark@spark01 spark-1.4.0-bin-hadoop2.6]$ cd logs/

[spark@spark01 logs]$ cat spark-spark-org.apache.spark.deploy.worker.Worker-1-spark01.out
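A quicker sanity check is jps from the JDK, which should list both daemons (the PIDs below are illustrative):

[spark@spark01 logs]$ jps

2528 Master

2870 Worker

3034 Jps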

 

Start the Spark shell

[spark@spark01 spark-1.4.0-bin-hadoop2.6]$ bin/spark-shell --master spark://spark01:7077

16/01/16 15:33:17 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/01/16 15:33:18 INFO spark.SecurityManager: Changing view acls to: spark
16/01/16 15:33:18 INFO spark.SecurityManager: Changing modify acls to: spark
16/01/16 15:33:18 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(spark); users with modify permissions: Set(spark)
16/01/16 15:33:18 INFO spark.HttpServer: Starting HTTP Server
16/01/16 15:33:18 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/01/16 15:33:18 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:42300
16/01/16 15:33:18 INFO util.Utils: Successfully started service 'HTTP class server' on port 42300.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.4.0
      /_/

Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_79)
Type in expressions to have them evaluated.
Type :help for more information.
16/01/16 15:33:30 INFO spark.SparkContext: Running Spark version 1.4.0
16/01/16 15:33:30 INFO spark.SecurityManager: Changing view acls to: spark
16/01/16 15:33:30 INFO spark.SecurityManager: Changing modify acls to: spark
16/01/16 15:33:30 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(spark); users with modify permissions: Set(spark)
16/01/16 15:33:31 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/01/16 15:33:31 INFO Remoting: Starting remoting
16/01/16 15:33:31 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://[email protected]:43850]
16/01/16 15:33:31 INFO util.Utils: Successfully started service 'sparkDriver' on port 43850.
16/01/16 15:33:31 INFO spark.SparkEnv: Registering MapOutputTracker
16/01/16 15:33:31 INFO spark.SparkEnv: Registering BlockManagerMaster
16/01/16 15:33:31 INFO storage.DiskBlockManager: Created local directory at /tmp/spark-7b7bd4bd-ff20-4e3d-a354-61a4ca7c4b2f/blockmgr-0e855210-3609-4204-b5e3-151e0c096c15
16/01/16 15:33:31 INFO storage.MemoryStore: MemoryStore started with capacity 265.4 MB
16/01/16 15:33:31 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-7b7bd4bd-ff20-4e3d-a354-61a4ca7c4b2f/httpd-56ac16d2-dd82-41cb-99d7-4d11ef36b42e
16/01/16 15:33:31 INFO spark.HttpServer: Starting HTTP Server
16/01/16 15:33:31 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/01/16 15:33:31 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:47633
16/01/16 15:33:31 INFO util.Utils: Successfully started service 'HTTP file server' on port 47633.
16/01/16 15:33:31 INFO spark.SparkEnv: Registering OutputCommitCoordinator
16/01/16 15:33:31 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/01/16 15:33:31 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
16/01/16 15:33:31 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
16/01/16 15:33:31 INFO ui.SparkUI: Started SparkUI at http://192.168.244.147:4040
16/01/16 15:33:32 INFO client.AppClient$ClientActor: Connecting to master akka.tcp://sparkMaster@spark01:7077/user/Master...
16/01/16 15:33:33 INFO cluster.SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20160116153332-0000
16/01/16 15:33:33 INFO client.AppClient$ClientActor: Executor added: app-20160116153332-0000/0 on worker-20160116152314-192.168.244.147-58914 (192.168.244.147:58914) with 2 cores
16/01/16 15:33:33 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20160116153332-0000/0 on hostPort 192.168.244.147:58914 with 2 cores, 512.0 MB RAM
16/01/16 15:33:33 INFO client.AppClient$ClientActor: Executor updated: app-20160116153332-0000/0 is now LOADING
16/01/16 15:33:33 INFO client.AppClient$ClientActor: Executor updated: app-20160116153332-0000/0 is now RUNNING
16/01/16 15:33:34 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 33146.
16/01/16 15:33:34 INFO netty.NettyBlockTransferService: Server created on 33146
16/01/16 15:33:34 INFO storage.BlockManagerMaster: Trying to register BlockManager
16/01/16 15:33:34 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.244.147:33146 with 265.4 MB RAM, BlockManagerId(driver, 192.168.244.147, 33146)
16/01/16 15:33:34 INFO storage.BlockManagerMaster: Registered BlockManager
16/01/16 15:33:34 INFO cluster.SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
16/01/16 15:33:34 INFO repl.SparkILoop: Created spark context..
Spark context available as sc.
16/01/16 15:33:38 INFO hive.HiveContext: Initializing execution hive, version 0.13.1
16/01/16 15:33:43 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/01/16 15:33:43 INFO metastore.ObjectStore: ObjectStore, initialize called
16/01/16 15:33:44 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
16/01/16 15:33:44 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
16/01/16 15:33:44 INFO cluster.SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://[email protected]:46741/user/Executor#-2043358626]) with ID 0
16/01/16 15:33:44 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/01/16 15:33:45 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.244.147:33017 with 265.4 MB RAM, BlockManagerId(0, 192.168.244.147, 33017)
16/01/16 15:33:46 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/01/16 15:33:48 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
16/01/16 15:33:48 INFO metastore.MetaStoreDirectSql: MySQL check failed, assuming we are not on mysql: Lexical error at line 1, column 5.  Encountered: "@" (64), after : "".
16/01/16 15:33:52 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/01/16 15:33:52 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/01/16 15:33:54 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/01/16 15:33:54 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/01/16 15:33:54 INFO metastore.ObjectStore: Initialized ObjectStore
16/01/16 15:33:54 WARN metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 0.13.1aa
16/01/16 15:33:55 INFO metastore.HiveMetaStore: Added admin role in metastore
16/01/16 15:33:55 INFO metastore.HiveMetaStore: Added public role in metastore
16/01/16 15:33:56 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
16/01/16 15:33:56 INFO session.SessionState: No Tez session required at this point. hive.execution.engine=mr.
16/01/16 15:33:56 INFO repl.SparkILoop: Created sql context (with Hive support)..
SQL context available as sqlContext.

scala>

With the Spark shell open, you can write a simple program, say hello to the world:

scala> println("helloworld")
helloworld
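println runs entirely in the driver, so it proves little about the cluster itself. A one-liner that actually ships work to the executor (the res number may differ in your session):

scala> sc.parallelize(1 to 1000).reduce(_ + _)

res1: Int = 500500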

 

Look at the Spark web UI again: it now shows the worker under Workers and the shell under Running Applications.

 

At this point, the pseudo-distributed Spark environment is complete.

A few points to note:

1. Maven and SBT above are not required; they are only for compiling the source code later. If all you want is a Spark environment, there is no need to download them.

2. This pseudo-distributed setup is the basis of a full cluster: only a handful of changes are needed before copying it to the slave nodes. Space being limited, that is left for a later post.

 

