1. First install the Scala plugin: File -> Settings -> Plugins, search for the Scala plugin and click Install;
2. File -> New Project -> Maven: create a new Maven project and fill in the GroupId and ArtifactId;
3. Edit pom.xml and add the dependencies the project needs:
<properties>
    <scala.version>2.10.5</scala.version>
    <hadoop.version>2.6.5</hadoop.version>
</properties>

<repositories>
    <repository>
        <id>scala-tools.org</id>
        <name>Scala-Tools Maven2 Repository</name>
        <url>http://scala-tools.org/repo-releases</url>
    </repository>
</repositories>

<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.10</artifactId>
        <version>1.6.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.10</artifactId>
        <version>1.6.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_2.10</artifactId>
        <version>1.6.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
</dependencies>
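These dependencies are enough for the steps below, where the jar is built through IDEA's Artifacts. If you also want mvn package to compile the Scala sources on its own, a Scala compiler plugin has to be declared in the <build> section. The following is only a minimal sketch, assuming the commonly used scala-maven-plugin; the version number is illustrative:

<build>
    <plugins>
        <plugin>
            <groupId>net.alchim31.maven</groupId>
            <artifactId>scala-maven-plugin</artifactId>
            <version>3.2.2</version>
            <executions>
                <execution>
                    <goals>
                        <!-- compile Scala sources during the standard Maven compile phases -->
                        <goal>compile</goal>
                        <goal>testCompile</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>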
4. File -> Project Structure -> Libraries: add a Scala SDK whose version matches the Scala version of your Spark runtime environment:
5. File -> Project Structure -> Modules: create a scala folder under src/main/ and mark it as a source folder;
6. Create a new Scala file SparkPi under the scala folder:
import scala.math.random
import org.apache.spark._

object SparkPi {
  def main(args: Array[String]) {
    // Point the driver at the standalone cluster and ship the application jar to the executors.
    val conf = new SparkConf()
      .setAppName("Spark Pi")
      .setMaster("spark://master:7077")
      .setJars(Seq("E:\\Intellij\\Projects\\SparkExample\\SparkExample.jar"))
    val spark = new SparkContext(conf)
    val slices = if (args.length > 0) args(0).toInt else 2
    println("Time:" + spark.startTime)
    val n = math.min(1000L * slices, Int.MaxValue).toInt // avoid overflow
    // Monte Carlo estimate: sample points in the unit square and count those inside the unit circle.
    val count = spark.parallelize(1 until n, slices).map { i =>
      val x = random * 2 - 1
      val y = random * 2 - 1
      if (x * x + y * y < 1) 1 else 0
    }.reduce(_ + _)
    println("Pi is roughly " + 4.0 * count / n)
    spark.stop()
  }
}
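If you only want to debug the logic inside IDEA, without first packaging a jar or connecting to the cluster, the same program can be run with a local master. The following is a minimal sketch, not part of the original steps; SparkPiLocal is a hypothetical object name, and since everything runs in the IDE process, setJars is not needed:

import scala.math.random
import org.apache.spark.{SparkConf, SparkContext}

object SparkPiLocal {
  def main(args: Array[String]): Unit = {
    // local[*] runs Spark inside the current JVM using all available cores,
    // so no cluster and no application jar are required for quick debugging.
    val conf = new SparkConf().setAppName("Spark Pi (local)").setMaster("local[*]")
    val sc = new SparkContext(conf)
    val slices = if (args.length > 0) args(0).toInt else 2
    val n = math.min(1000L * slices, Int.MaxValue).toInt // avoid overflow
    val count = sc.parallelize(1 until n, slices).map { _ =>
      val x = random * 2 - 1
      val y = random * 2 - 1
      if (x * x + y * y < 1) 1 else 0
    }.reduce(_ + _)
    println("Pi is roughly " + 4.0 * count / n)
    sc.stop()
  }
}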
7. File -> Project Structure -> Artifacts: create a new Jar -> From modules with dependencies..., and choose the Main Class:
Set the Output directory and remove the unnecessary jars:
8. Build -> Build Artifacts... to generate the jar, then run SparkPi. Note that the path passed to setJars must match the jar produced in the Output directory, since that is the jar shipped to the cluster executors. If everything is configured correctly, the run succeeds and the console prints a line such as "Pi is roughly ...".