Running in Spark client mode, spark-submit failed with the following error:
User: hadoop
Name: Spark Pi
Application Type: SPARK
Application Tags:
YarnApplicationState: FAILED
FinalStatus Reported by AM: FAILED
Started: 16-May-2017 10:03:02
Elapsed: 14sec
Tracking URL: History
Diagnostics: Application application_1494900155967_0001 failed 2 times due to AM Container for appattempt_1494900155967_0001_000002 exited with exitCode: -103
For more detailed output, check application tracking page:http://master:8088/proxy/application_1494900155967_0001/Then, click on links to logs of each attempt.
Diagnostics: Container [pid=6263,containerID=container_1494900155967_0001_02_000001] is running beyond virtual memory limits. Current usage: 107.3 MB of 1 GB physical memory used; 2.2 GB of 2.1 GB virtual memory used. Killing container.
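For context, a submit command along the following lines matches the setup described here. Only the client deploy mode and the 1 GB executor memory are stated in this post; the jar path and the trailing argument are placeholders, not taken from the original run:

spark-submit \
  --master yarn \
  --deploy-mode client \
  --class org.apache.spark.examples.SparkPi \
  --executor-memory 1G \
  /path/to/spark-examples.jar 100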
This means the container was using 2.2 GB of virtual memory, but the virtual memory limit was only 2.1 GB, so YARN killed the container.
My spark.executor.memory is set to 1G, so the container gets 1 GB of physical memory. YARN's default virtual-to-physical memory ratio is 2.1, which gives a virtual memory limit of 2.1 GB, less than the 2.2 GB the container needed. The fix is to raise the virtual-to-physical memory ratio by adding the following property to yarn-site.xml:
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>2.5</value>
</property>
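As an aside not covered in the original post, another common workaround is to turn off the NodeManager's virtual memory check entirely in yarn-site.xml. This avoids the kill but also removes the safeguard, so raising the ratio is usually the safer choice:

<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>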
Then restart YARN (a minimal restart sketch follows below). With the ratio raised to 2.5, the container gets 2.5 GB of virtual memory and the job no longer fails.
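A minimal way to restart YARN, assuming the standard Hadoop sbin scripts and that HADOOP_HOME points at the Hadoop installation:

$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/start-yarn.sh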