
A One-Stop Guide to Hands-On HDFS, MapReduce, and YARN Parameter Tuning


1

Hadoop Small File Optimization


1. Drawbacks of small files in Hadoop:

Every file on HDFS has corresponding metadata on the NameNode, roughly 150 bytes per entry. When there are many small files, this metadata multiplies: it eats up a large share of NameNode memory, and the sheer number of metadata entries slows down address lookups.
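To make the 150-byte figure concrete, here is a quick back-of-the-envelope check (the 10-million-file count is an illustrative assumption, not from the original):

FILES=10000000                              # hypothetical number of small files
echo "$((FILES * 150 / 1024 / 1024)) MB"    # ≈ 1430 MB of NameNode heap for metadata alone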

Too many small files also hurt MapReduce: they generate too many input splits, which in turn launch too many MapTasks. Each MapTask then processes so little data that its processing time is shorter than its startup time, wasting resources.

2. Solutions for Hadoop small files

1) Merge small files or small batches of data into large files before uploading them to HDFS, i.e. at the data source (ingestion side)

2) Hadoop Archive (storage side)

An efficient archiving tool for packing small files into HDFS blocks: it bundles many small files into a single HAR file, thereby reducing NameNode memory usage.
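A minimal sketch of archiving a directory of small files (the /input and /output paths are assumptions for illustration):

# pack everything under /input into a single input.har stored in /output
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop archive -archiveName input.har -p /input /output
# the archive can still be read transparently through the har:// scheme
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -ls har:///output/input.har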

3) CombineTextInputFormat (computation side)

CombineTextInputFormat merges multiple small files into a single split, or a small number of splits, during the input-splitting phase.
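For instance (illustrative values, not from the original), calling job.setInputFormatClass(CombineTextInputFormat.class) and CombineTextInputFormat.setMaxInputSplitSize(job, 4194304) in the job driver caps virtual splits at 4 MB, so ten 1 MB files produce about three splits instead of ten.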

4) Enable uber mode to reuse the JVM (computation side)

By default every Task starts its own JVM. If each task processes only a small amount of data, we can let the multiple tasks of the same job run in a single JVM instead of paying the JVM startup cost per task.

(1) With uber mode disabled, upload several small files to /input and run the wordcount program

[atguigu@hadoop102 hadoop-3.1.3]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount /input /output2


(2) Watch the console output

2021-02-14 16:13:50,607 INFO mapreduce.Job: Job job_1613281510851_0002 running in uber mode : false


(3) Observe http://hadoop103:8088/cluster


(4) Enable uber mode by adding the following to mapred-site.xml

<!-- Enable uber mode; disabled by default -->
<property>
  <name>mapreduce.job.ubertask.enable</name>
  <value>true</value>
</property>
<!-- Maximum number of MapTasks allowed in uber mode; may be lowered -->
<property>
  <name>mapreduce.job.ubertask.maxmaps</name>
  <value>9</value>
</property>
<!-- Maximum number of ReduceTasks allowed in uber mode; may be lowered -->
<property>
  <name>mapreduce.job.ubertask.maxreduces</name>
  <value>1</value>
</property>
<!-- Maximum input size allowed in uber mode; defaults to the value of dfs.blocksize and may be lowered -->
<property>
  <name>mapreduce.job.ubertask.maxbytes</name>
  <value></value>
</property>
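Note that these thresholds must all hold at once: a job runs as an uber task only if its map count, reduce count, and input size are each at or below mapreduce.job.ubertask.maxmaps, mapreduce.job.ubertask.maxreduces, and mapreduce.job.ubertask.maxbytes.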


(5) Distribute the configuration

[atguigu@hadoop102 hadoop]$ xsync mapred-site.xml


(6) Run the wordcount program again

[atguigu@hadoop102 hadoop-3.1.3]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount /input /output2


(7) Watch the console output

2021-02-14 16:28:36,198 INFO mapreduce.Job: Job job_1613281510851_0003 running in uber mode : true


(8) Observe http://hadoop103:8088/cluster



2

Benchmarking MapReduce Compute Performance


Benchmark MapReduce with the Sort example program

Note: avoid running this on a virtual machine with 150 GB of disk or less.

(1) Use RandomWriter to generate random data: each node runs 10 Map tasks, and each Map writes roughly 1 GB of random binary data

[atguigu@hadoop102 mapreduce]$ hadoop jar /opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar randomwriter random-data
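Before sorting, the amount of generated data can be checked with a standard HDFS command (random-data is the output directory from the step above):

[atguigu@hadoop102 mapreduce]$ hadoop fs -du -s -h random-data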


(2) Run the Sort program

[atguigu@hadoop102 mapreduce]$ hadoop jar /opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar sort random-data sorted-data


(3) Verify that the data is actually sorted

[atguigu@hadoop102 mapreduce]$ hadoop jar /opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.1.3-tests.jar testmapredsort -sortInput random-data -sortOutput sorted-data
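If the data is correctly ordered, the validator job ends with a success message (roughly "SUCCESS! Validated the MapReduce framework's 'sort' successfully."); otherwise it fails with an error.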

3

An Enterprise Development Scenario


1. Requirements

(1) Requirement: count the occurrences of each word in 1 GB of data. Cluster: 3 servers, each with 4 GB of RAM, a 4-core CPU, and 4 threads.

(2) Analysis:

1 GB / 128 MB = 8 MapTasks, plus 1 ReduceTask and 1 MrAppMaster, i.e. 10 tasks in total.

On average, 10 tasks / 3 nodes ≈ 3 tasks per node (distributed as 4 / 3 / 3).

2. HDFS parameter tuning

(1) Modify hadoop-env.sh

export HDFS_NAMENODE_OPTS="-Dhadoop.security.logger=INFO,RFAS -Xmx1024m"
export HDFS_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS -Xmx1024m"
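Capping both daemon heaps at 1 GB (-Xmx1024m) is sized for the 4 GB machines in this scenario, so the HDFS daemons do not starve the YARN containers configured below.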


(2) Modify hdfs-site.xml

<!-- The NameNode RPC handler thread pool; the default is 10 -->
<property>
  <name>dfs.namenode.handler.count</name>
  <value>21</value>
</property>
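The value 21 follows a common sizing rule of thumb of 20 × ln(cluster size); for the 3 DataNodes here, a quick check (assuming python3 is installed):

[atguigu@hadoop102 hadoop]$ python3 -c 'import math; print(int(20 * math.log(3)))'
21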


(3) Modify core-site.xml

<!-- Keep deleted files in the trash for 60 minutes -->
<property>
  <name>fs.trash.interval</name>
  <value>60</value>
</property>
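With the trash enabled, deletion and recovery work roughly as follows (the file name is illustrative; HDFS keeps trash under /user/<username>/.Trash/Current):

# the file is moved to the trash instead of being deleted outright
[atguigu@hadoop102 hadoop]$ hadoop fs -rm /input/word.txt
# within 60 minutes it can be restored by moving it back
[atguigu@hadoop102 hadoop]$ hadoop fs -mv /user/atguigu/.Trash/Current/input/word.txt /input/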


(4) Distribute the configuration

[atguigu@hadoop102 hadoop]$ xsync hadoop-env.sh hdfs-site.xml core-site.xml


3. MapReduce parameter tuning

(1) Modify mapred-site.xml

<!-- Circular buffer size for map-side sorting; the default is 100 MB -->
<property>
  <name>mapreduce.task.io.sort.mb</name>
  <value>100</value>
</property>
<!-- Spill threshold of the circular buffer; the default is 0.8 -->
<property>
  <name>mapreduce.map.sort.spill.percent</name>
  <value>0.80</value>
</property>
<!-- Number of spill files merged at once; the default is 10 -->
<property>
  <name>mapreduce.task.io.sort.factor</name>
  <value>10</value>
</property>
<!-- MapTask memory; the default is 1 GB, and the MapTask heap (mapreduce.map.java.opts) defaults to the same size -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>-1</value>
  <description>The amount of memory to request from the scheduler for each map task. If this is not specified or is non-positive, it is inferred from mapreduce.map.java.opts and mapreduce.job.heap.memory-mb.ratio. If java-opts are also not specified, we set it to 1024.</description>
</property>
<!-- MapTask CPU vcores; the default is 1 -->
<property>
  <name>mapreduce.map.cpu.vcores</name>
  <value>1</value>
</property>
<!-- Number of retries after a MapTask failure; the default is 4 -->
<property>
  <name>mapreduce.map.maxattempts</name>
  <value>4</value>
</property>
<!-- Number of parallel copies each Reduce uses to fetch map output; the default is 5 -->
<property>
  <name>mapreduce.reduce.shuffle.parallelcopies</name>
  <value>5</value>
</property>
<!-- Fraction of Reduce memory used for the shuffle buffer; the default is 0.7 -->
<property>
  <name>mapreduce.reduce.shuffle.input.buffer.percent</name>
  <value>0.70</value>
</property>
<!-- Buffer fill ratio at which data starts spilling to disk; the default is 0.66 -->
<property>
  <name>mapreduce.reduce.shuffle.merge.percent</name>
  <value>0.66</value>
</property>
<!-- ReduceTask memory; the default is 1 GB, and the ReduceTask heap (mapreduce.reduce.java.opts) defaults to the same size -->
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>-1</value>
  <description>The amount of memory to request from the scheduler for each reduce task. If this is not specified or is non-positive, it is inferred from mapreduce.reduce.java.opts and mapreduce.job.heap.memory-mb.ratio. If java-opts are also not specified, we set it to 1024.</description>
</property>
<!-- ReduceTask CPU vcores; the default is 1, raised to 2 here -->
<property>
  <name>mapreduce.reduce.cpu.vcores</name>
  <value>2</value>
</property>
<!-- Number of retries after a ReduceTask failure; the default is 4 -->
<property>
  <name>mapreduce.reduce.maxattempts</name>
  <value>4</value>
</property>
<!-- Fraction of MapTasks that must finish before resources are requested for ReduceTasks; the default is 0.05 -->
<property>
  <name>mapreduce.job.reduce.slowstart.completedmaps</name>
  <value>0.05</value>
</property>
<!-- A task that reads no data within this timeout (default 600000 ms, i.e. 10 minutes) is forcibly terminated -->
<property>
  <name>mapreduce.task.timeout</name>
  <value>600000</value>
</property>


(2) Distribute the configuration

[atguigu@hadoop102 hadoop]$ xsync mapred-site.xml


4. YARN parameter tuning

(1) Modify yarn-site.xml as follows:

<!-- Choose the scheduler; the Capacity Scheduler is the default -->
<property>
  <description>The class to use as the resource scheduler.</description>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>
<!-- Threads the ResourceManager uses to handle scheduler requests; the default is 50. Increase it if more than 50 tasks are submitted concurrently, but never beyond 3 nodes * 4 threads = 12 threads (in practice at most 8, leaving capacity for other processes) -->
<property>
  <description>Number of threads to handle scheduler interface.</description>
  <name>yarn.resourcemanager.scheduler.client.thread-count</name>
  <value>8</value>
</property>
<!-- Whether YARN auto-detects hardware for its configuration; the default is false. Configure manually if the node runs many other applications; auto-detection is fine on dedicated nodes -->
<property>
  <description>Enable auto-detection of node capabilities such as memory and CPU.</description>
  <name>yarn.nodemanager.resource.detect-hardware-capabilities</name>
  <value>false</value>
</property>
<!-- Whether to count logical processors (hyper-threads) as cores; the default is false, i.e. use physical cores -->
<property>
  <description>Flag to determine if logical processors (such as hyperthreads) should be counted as cores. Only applicable on Linux when yarn.nodemanager.resource.cpu-vcores is set to -1 and yarn.nodemanager.resource.detect-hardware-capabilities is true.</description>
  <name>yarn.nodemanager.resource.count-logical-processors-as-cores</name>
  <value>false</value>
</property>
<!-- Multiplier for converting physical cores to vcores; the default is 1.0 -->
<property>
  <description>Multiplier to determine how to convert physical cores to vcores. This value is used if yarn.nodemanager.resource.cpu-vcores is set to -1 (which implies auto-calculate vcores) and yarn.nodemanager.resource.detect-hardware-capabilities is set to true. The number of vcores will be calculated as number of CPUs * multiplier.</description>
  <name>yarn.nodemanager.resource.pcores-vcores-multiplier</name>
  <value>1.0</value>
</property>
<!-- Memory available to the NodeManager; the default is 8 GB, lowered to 4 GB here -->
<property>
  <description>Amount of physical memory, in MB, that can be allocated for containers. If set to -1 and yarn.nodemanager.resource.detect-hardware-capabilities is true, it is automatically calculated (in case of Windows and Linux). In other cases, the default is 8192 MB.</description>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>4096</value>
</property>
<!-- NodeManager CPU vcores; the default is 8 when not auto-detected from hardware, lowered to 4 here -->
<property>
  <description>Number of vcores that can be allocated for containers. This is used by the RM scheduler when allocating resources for containers. This is not used to limit the number of CPUs used by YARN containers. If it is set to -1 and yarn.nodemanager.resource.detect-hardware-capabilities is true, it is automatically determined from the hardware in case of Windows and Linux. In other cases, number of vcores is 8 by default.</description>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>4</value>
</property>
<!-- Minimum container memory; the default is 1 GB -->
<property>
  <description>The minimum allocation for every container request at the RM in MBs. Memory requests lower than this will be set to the value of this property. Additionally, a node manager that is configured to have less memory than this value will be shut down by the resource manager.</description>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>
</property>
<!-- Maximum container memory; the default is 8 GB, lowered to 2 GB here -->
<property>
  <description>The maximum allocation for every container request at the RM in MBs. Memory requests higher than this will throw an InvalidResourceRequestException.</description>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>2048</value>
</property>
<!-- Minimum container vcores; the default is 1 -->
<property>
  <description>The minimum allocation for every container request at the RM in terms of virtual CPU cores. Requests lower than this will be set to the value of this property. Additionally, a node manager that is configured to have fewer virtual cores than this value will be shut down by the resource manager.</description>
  <name>yarn.scheduler.minimum-allocation-vcores</name>
  <value>1</value>
</property>
<!-- Maximum container vcores; the default is 4, lowered to 2 here -->
<property>
  <description>The maximum allocation for every container request at the RM in terms of virtual CPU cores. Requests higher than this will throw an InvalidResourceRequestException.</description>
  <name>yarn.scheduler.maximum-allocation-vcores</name>
  <value>2</value>
</property>
<!-- Virtual memory check; enabled by default, disabled here -->
<property>
  <description>Whether virtual memory limits will be enforced for containers.</description>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
<!-- Ratio of virtual memory to physical memory; the default is 2.1 -->
<property>
  <description>Ratio between virtual memory to physical memory when setting memory limits for containers. Container allocations are expressed in terms of physical memory, and virtual memory usage is allowed to exceed this allocation by this ratio.</description>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>2.1</value>
</property>
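A sanity check on the numbers above: with 4 GB / 4 vcores per NodeManager and containers capped at 2 GB / 2 vcores, each node can host at least two full-sized containers. After restarting YARN, the resources each NodeManager registered can be confirmed with a standard CLI call (also visible at http://hadoop103:8088/cluster/nodes):

[atguigu@hadoop102 hadoop]$ yarn node -list -all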


(2) Distribute the configuration

[atguigu@hadoop102 hadoop]$ xsync yarn-site.xml


5. Run the program

(1) Restart the cluster

[atguigu@hadoop102 hadoop-3.1.3]$ sbin/stop-yarn.sh
[atguigu@hadoop103 hadoop-3.1.3]$ sbin/start-yarn.sh


(2) Run the WordCount program

[atguigu@hadoop102 hadoop-3.1.3]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount /input /output


(3) Observe the YARN application page

http://hadoop103:8088/cluster/apps


