
Deploying a Spark 2.2 Cluster (on YARN Mode)

Welcome to my GitHub

https://github.com/zq2599/blog_demos

Contents: a categorized collection of original articles with companion source code, covering Java, Docker, K8S, DevOps, and more

Machine planning

This walkthrough uses three CentOS 7 machines, with the following roles:

IP address       hostname   Roles
192.168.119.163  node0      NameNode, ResourceManager, HistoryServer, Master
192.168.119.164  node1      DataNode, NodeManager, Worker
192.168.119.165  node2      DataNode, NodeManager, Worker, SecondaryNameNode

Points to note:

  1. Spark's Master lives on the same machine as HDFS's NameNode and YARN's ResourceManager;
  2. Spark's Workers live on the same machines as HDFS's DataNodes and YARN's NodeManagers;

Deploy and start the Hadoop cluster first

Deploying a Spark 2.2 cluster in on-YARN mode presupposes a working Hadoop cluster; please follow 《Linux部署hadoop2.7.7集群》 (Deploying a Hadoop 2.7.7 Cluster on Linux) to deploy and start the Hadoop cluster successfully first;

Deploy the Spark cluster

  1. This walkthrough first deploys a standalone-mode Spark cluster, then turns it into on-YARN mode with a few configuration changes;
  2. For the standalone-mode Spark cluster deployment, please refer to the article 《 》; note that the Spark cluster's Master must be the same machine as the Hadoop cluster's NameNode, each Worker must share a machine with a DataNode, and it is recommended to deploy both Spark and Hadoop under the same account;

Modify the configuration

If you have already deployed the Hadoop cluster and the (standalone-mode) Spark cluster, only two settings remain:

  1. Assuming the hadoop-2.7.7 folder lives under /home/hadoop/, open Spark's spark-env.sh file and append the following line at the end:
export HADOOP_CONF_DIR=/home/hadoop/hadoop-2.7.7/etc/hadoop
  2. Open hadoop-2.7.7/etc/hadoop/yarn-site.xml and add the following two child nodes inside the configuration node. Without these settings, YARN may kill Spark tasks at submission time, producing a "Failed to send RPC xxxxxx" exception:
<property>
  <name>yarn.nodemanager.pmem-check-enabled</name>
  <value>false</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>

This walkthrough involves three machines in total; make sure the above configuration is applied on every one of them, for example with the sync sketch below;
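If all three machines share the same directory layout and account (as this walkthrough assumes), one way to propagate the changes from node0 is a sketch like the following. It assumes passwordless ssh between the nodes, which the original article does not cover:

# Sketch: copy the modified config files from node0 to the two workers.
# Assumes identical home-directory layout on all nodes and passwordless ssh.
for host in node1 node2; do
  scp ~/hadoop-2.7.7/etc/hadoop/yarn-site.xml "${host}":~/hadoop-2.7.7/etc/hadoop/
  scp ~/spark-2.3.2-bin-hadoop2.7/conf/spark-env.sh "${host}":~/spark-2.3.2-bin-hadoop2.7/conf/
done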

Start Hadoop and Spark

Both Hadoop and Spark are deployed under the current account's home directory, so the start commands, in order, are:

~/hadoop-2.7.7/sbin/start-dfs.sh \
&& ~/hadoop-2.7.7/sbin/start-yarn.sh \
&& ~/hadoop-2.7.7/sbin/mr-jobhistory-daemon.sh start historyserver \
&& ~/spark-2.3.2-bin-hadoop2.7/sbin/start-all.sh
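Once the scripts return, a quick sanity check is to run jps on each machine and compare the running daemons against the planning table above (exact process names can vary slightly across versions):

# List the running Java daemons on the current machine.
# Expected on node0:       NameNode, ResourceManager, JobHistoryServer, Master
# Expected on node1/node2: DataNode, NodeManager, Worker
#                          (node2 additionally runs SecondaryNameNode)
jps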

Verify Spark

  1. Create a directory in HDFS to hold the input file:
~/hadoop-2.7.7/bin/hdfs dfs -mkdir /input
  2. Prepare a txt file (here, GoneWiththeWind.txt) and upload it to the /input directory in HDFS:
~/hadoop-2.7.7/bin/hdfs dfs -put ~/GoneWiththeWind.txt /input
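Before running the job, you can confirm the upload with the standard HDFS listing command:

# Confirm the file landed in HDFS.
~/hadoop-2.7.7/bin/hdfs dfs -ls /input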
  3. Start spark-shell in client mode:
~/spark-2.3.2-bin-hadoop2.7/bin/spark-shell --master yarn --deploy-mode client

The following output indicates a successful start:

2019-02-09 10:13:09 WARN  NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
2019-02-09 10:13:15 WARN  Client:66 - Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
Spark context Web UI available at http://node0:4040
Spark context available as 'sc' (master = yarn, app id = application_1549678248927_0001).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.3.2
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_191)
Type in expressions to have them evaluated.
Type :help for more information.

scala>
  4. Enter the following to count the occurrences of each word in the previously uploaded txt file and print the top ten:
sc.textFile("hdfs://node0:8020/input/GoneWiththeWind.txt").flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _).sortBy(_._2,false).take(10).foreach(println)

The console output is shown below; the job completed successfully:

scala> sc.textFile("hdfs://node0:8020/input/GoneWiththeWind.txt").flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _).sortBy(_._2,false).take(10).foreach(println)
(the,18264)
(and,14150)
(to,10020)
(of,8615)
(a,7571)
(her,7086)
(she,6217)
(was,5912)
(in,5751)
(had,4502)
  5. View the YARN application information in the web UI (figure: YARN web console):
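The ResourceManager web UI is normally at http://node0:8088 (Hadoop's default port); the same information is also available from the command line:

# List YARN applications, including finished ones, from any node.
~/hadoop-2.7.7/bin/yarn application -list -appStates ALL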

Submitting a Java job

If you develop in Java, build your application into a jar and run the following command to submit the job to YARN in client mode:

~/spark-2.3.2-bin-hadoop2.7/bin/spark-submit \
--master yarn \
--deploy-mode client \
--class com.bolingcavalry.sparkwordcount.WordCount \
--executor-memory 512m \
--total-executor-cores 2 \
~/jars/sparkwordcount-1.0-SNAPSHOT.jar \
192.168.119.163 \
8020 \
GoneWiththeWind.txt

The last three arguments are runtime parameters for the WordCount class; for details on the application, see 《》
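For reference, the same jar can also be submitted in cluster mode, where the driver runs inside a YARN container rather than on the client. Below is a minimal sketch with the same assumed jar and arguments as above; note that on YARN, executor sizing is normally controlled with --num-executors and --executor-cores (--total-executor-cores applies to standalone and Mesos modes):

# Sketch: the same job in cluster mode. The driver runs in a YARN container,
# so its console output goes to the YARN application logs instead of this
# terminal (retrieve them later with: yarn logs -applicationId <appId>).
~/spark-2.3.2-bin-hadoop2.7/bin/spark-submit \
--master yarn \
--deploy-mode cluster \
--class com.bolingcavalry.sparkwordcount.WordCount \
--executor-memory 512m \
--num-executors 2 \
~/jars/sparkwordcount-1.0-SNAPSHOT.jar \
192.168.119.163 \
8020 \
GoneWiththeWind.txt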

Stop Hadoop and Spark

To stop the Hadoop and Spark services, the commands, in order, are:

~/spark-2.3.2-bin-hadoop2.7/sbin/stop-all.sh \
&& ~/hadoop-2.7.7/sbin/mr-jobhistory-daemon.sh stop historyserver \
&& ~/hadoop-2.7.7/sbin/stop-yarn.sh \
&& ~/hadoop-2.7.7/sbin/stop-dfs.sh

With that, the deployment and verification of the Spark-on-YARN cluster is complete; I hope it serves as a useful reference.