Win7 + Eclipse: setting up a Spark Java 1.8 build environment, with a JavaRDD hello-world example
[Study notes]
Create a plain Java project in Eclipse Oxygen, then add spark-assembly-1.6.1-hadoop2.6.0.jar to the project's build path; that is all the setup needed.
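Before writing a real job, it can help to confirm that the assembly jar is actually on the build path. The snippet below is a minimal sanity-check sketch (the class name EnvCheck is only an illustration, not part of the original article): it starts a local SparkContext, prints the Spark version, and shuts down.

package com;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

// Hypothetical helper class: verifies the Spark jar is on the classpath
// and that a local SparkContext can start.
public class EnvCheck {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("envCheck").setMaster("local");
        JavaSparkContext sc = new JavaSparkContext(conf);
        System.out.println("Spark version: " + sc.version());
        sc.stop();
    }
}

If this prints "Spark version: 1.6.1" without a ClassNotFoundException, the environment is ready for the example below.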
package com;

import java.util.Arrays;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaDoubleRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;

public class CollectTest {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("testCollect").setMaster("local");
        JavaSparkContext sc = new JavaSparkContext(conf);

        List<Double> list = Arrays.asList(1.0, 4.0, 3.0, 7.0, 5.0);
        // Distribute the local list into an RDD with 2 partitions.
        JavaDoubleRDD doubleRdd = sc.parallelizeDoubles(list, 2);

        /* Note the two type parameters of Function<Double, Double>: the first is the
           input type of call(), the second is its return type. */
        JavaRDD<Double> mapRdd = doubleRdd.map(new Function<Double, Double>() {
            @Override
            public Double call(Double in) throws Exception {
                return in + 2;
            }
        });

        // collect() pulls all elements of the RDD back to the driver.
        List<Double> douList = mapRdd.collect();
        for (Double d : douList) {
            System.out.println("d:" + d);
        }

        sc.stop();
    }
}
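Since the title targets Java 1.8, the same map can also be written with a lambda instead of an anonymous inner class: org.apache.spark.api.java.function.Function has a single abstract method call(), so it works as a functional interface. The class below is a sketch with a hypothetical name (CollectTestLambda), doing exactly the same work as CollectTest.

package com;

import java.util.Arrays;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaDoubleRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

// Hypothetical class name; same logic as CollectTest, but with a Java 8 lambda.
public class CollectTestLambda {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("testCollectLambda").setMaster("local");
        JavaSparkContext sc = new JavaSparkContext(conf);

        List<Double> list = Arrays.asList(1.0, 4.0, 3.0, 7.0, 5.0);
        JavaDoubleRDD doubleRdd = sc.parallelizeDoubles(list, 2);

        // The lambda replaces the anonymous Function<Double, Double>; the compiler
        // infers the input and return types from map()'s signature.
        JavaRDD<Double> mapRdd = doubleRdd.map(in -> in + 2);

        for (Double d : mapRdd.collect()) {
            System.out.println("d:" + d);
        }

        sc.stop();
    }
}

Because collect() returns the elements of a parallelized collection in their original order, both versions should print d:3.0, d:6.0, d:5.0, d:9.0 and d:7.0.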
Reprinted from the original article: https://blog.csdn.net/qq_44596980/article/details/93384494