(Repost) The difference between groupByKey and reduceByKey

2024-05-24 10:32


[Reposted from: https://blog.csdn.net/ZMC921/article/details/75098903]

 

Copyright notice: this is an original article by the blogger, licensed under CC 4.0 BY-SA. Please include the original source link and this notice when reposting.

Original link: https://blog.csdn.net/ZMC921/article/details/75098903

1. Both operations go through a shuffle. groupByKey does not combine anything before the shuffle; the records are shuffled as-is. reduceByKey combines records on the map side before the shuffle, which reduces the amount of data sent over shuffle I/O, so it is somewhat more efficient.

Example:

import org.apache.log4j.{Level, Logger}
import org.apache.spark.{SparkConf, SparkContext}

object GroupyKeyAndReduceByKeyDemo {
  def main(args: Array[String]): Unit = {
    Logger.getLogger("org").setLevel(Level.WARN)
    val config = new SparkConf().setAppName("GroupyKeyAndReduceByKeyDemo").setMaster("local")
    val sc = new SparkContext(config)
    val arr = Array("val config", "val arr")
    val socketDS = sc.parallelize(arr).flatMap(_.split(" ")).map((_, 1))
    // Difference between groupByKey and reduceByKey:
    // both go through a shuffle; groupByKey does not combine before the shuffle
    // and shuffles the records as-is, while reduceByKey combines before the
    // shuffle, which reduces shuffle I/O and makes it a bit more efficient.
    socketDS.groupByKey()
      .map(tuple => (tuple._1, tuple._2.sum))
      .foreach(x => println(x._1 + " " + x._2))
    println("----------------------")
    socketDS.reduceByKey(_ + _)
      .foreach(x => println(x._1 + " " + x._2))
    sc.stop()
  }
}
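For the input above the words are val, config, val and arr, so both jobs should print the same counts (val 2, config 1, arr 1, possibly in a different order); the only difference is how the shuffle is performed.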

2. groupByKey has three overloads

Looking at the source, groupByKey() delegates to groupByKey(defaultPartitioner(self)):

/**
 * Group the values for each key in the RDD into a single sequence. Hash-partitions the
 * resulting RDD with the existing partitioner/parallelism level. The ordering of elements
 * within each group is not guaranteed, and may even differ each time the resulting RDD is
 * evaluated.
 *
 * @note This operation may be very expensive. If you are grouping in order to perform an
 * aggregation (such as a sum or average) over each key, using `PairRDDFunctions.aggregateByKey`
 * or `PairRDDFunctions.reduceByKey` will provide much better performance.
 */
def groupByKey(): RDD[(K, Iterable[V])] = self.withScope {
  groupByKey(defaultPartitioner(self))
}

Looking at the source, groupByKey(numPartitions: Int) delegates to groupByKey(new HashPartitioner(numPartitions)):

/**
 * Group the values for each key in the RDD into a single sequence. Hash-partitions the
 * resulting RDD into `numPartitions` partitions. The ordering of elements within
 * each group is not guaranteed, and may even differ each time the resulting RDD is evaluated.
 *
 * @note This operation may be very expensive. If you are grouping in order to perform an
 * aggregation (such as a sum or average) over each key, using `PairRDDFunctions.aggregateByKey`
 * or `PairRDDFunctions.reduceByKey` will provide much better performance.
 *
 * @note As currently implemented, groupByKey must be able to hold all the key-value pairs for any
 * key in memory. If a key has too many values, it can result in an `OutOfMemoryError`.
 */
def groupByKey(numPartitions: Int): RDD[(K, Iterable[V])] = self.withScope {
  groupByKey(new HashPartitioner(numPartitions))
}

In fact, both of the above end up calling groupByKey(partitioner: Partitioner):

/**
 * Group the values for each key in the RDD into a single sequence. Allows controlling the
 * partitioning of the resulting key-value pair RDD by passing a Partitioner.
 * The ordering of elements within each group is not guaranteed, and may even differ
 * each time the resulting RDD is evaluated.
 *
 * @note This operation may be very expensive. If you are grouping in order to perform an
 * aggregation (such as a sum or average) over each key, using `PairRDDFunctions.aggregateByKey`
 * or `PairRDDFunctions.reduceByKey` will provide much better performance.
 *
 * @note As currently implemented, groupByKey must be able to hold all the key-value pairs for any
 * key in memory. If a key has too many values, it can result in an `OutOfMemoryError`.
 */
def groupByKey(partitioner: Partitioner): RDD[(K, Iterable[V])] = self.withScope {
  // groupByKey shouldn't use map side combine because map side combine does not
  // reduce the amount of data shuffled and requires all map side data be inserted
  // into a hash table, leading to more objects in the old gen.
  val createCombiner = (v: V) => CompactBuffer(v)
  val mergeValue = (buf: CompactBuffer[V], v: V) => buf += v
  val mergeCombiners = (c1: CompactBuffer[V], c2: CompactBuffer[V]) => c1 ++= c2
  val bufs = combineByKeyWithClassTag[CompactBuffer[V]](
    createCombiner, mergeValue, mergeCombiners, partitioner, mapSideCombine = false)
  bufs.asInstanceOf[RDD[(K, Iterable[V])]]
}

And groupByKey(partitioner: Partitioner) is in turn implemented with combineByKeyWithClassTag, so groupByKey ultimately comes down to combineByKeyWithClassTag; the overloads differ only in how they call it.
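As a small illustration (a sketch only; the pairs RDD and the partition count of 4 are made up for the example, and sc is the SparkContext from the demo above), the three overloads differ only in how the partitioner is supplied:

import org.apache.spark.HashPartitioner

val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))

val g1 = pairs.groupByKey()                        // defaultPartitioner(self)
val g2 = pairs.groupByKey(4)                       // new HashPartitioner(4)
val g3 = pairs.groupByKey(new HashPartitioner(4))  // explicit partitioner
// all three are RDD[(String, Iterable[Int])]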

 

3. reduceByKey likewise has three overloads

 
/**
 * Merge the values for each key using an associative and commutative reduce function. This will
 * also perform the merging locally on each mapper before sending results to a reducer, similarly
 * to a "combiner" in MapReduce. Output will be hash-partitioned with the existing partitioner/
 * parallelism level.
 */
def reduceByKey(func: (V, V) => V): RDD[(K, V)] = self.withScope {
  reduceByKey(defaultPartitioner(self), func)
}

/**
 * Merge the values for each key using an associative and commutative reduce function. This will
 * also perform the merging locally on each mapper before sending results to a reducer, similarly
 * to a "combiner" in MapReduce. Output will be hash-partitioned with numPartitions partitions.
 */
def reduceByKey(func: (V, V) => V, numPartitions: Int): RDD[(K, V)] = self.withScope {
  reduceByKey(new HashPartitioner(numPartitions), func)
}
 
/**
 * Merge the values for each key using an associative and commutative reduce function. This will
 * also perform the merging locally on each mapper before sending results to a reducer, similarly
 * to a "combiner" in MapReduce.
 */
def reduceByKey(partitioner: Partitioner, func: (V, V) => V): RDD[(K, V)] = self.withScope {
  combineByKeyWithClassTag[V]((v: V) => v, func, func, partitioner)
}

Looking at these three reduceByKey overloads, the first two are implemented in terms of the last one, and the last one is in turn implemented with combineByKeyWithClassTag.
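Again as a sketch (reusing the made-up pairs RDD from above), the three overloads are called like this:

import org.apache.spark.HashPartitioner

val r1 = pairs.reduceByKey(_ + _)                          // defaultPartitioner(self)
val r2 = pairs.reduceByKey(_ + _, 4)                       // new HashPartitioner(4)
val r3 = pairs.reduceByKey(new HashPartitioner(4), _ + _)  // explicit partitioner
// all three are RDD[(String, Int)]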

 

### How groupByKey is implemented

combineByKeyWithClassTag[CompactBuffer[V]](createCombiner, mergeValue, mergeCombiners, partitioner, mapSideCombine = false)

### How reduceByKey is implemented

combineByKeyWithClassTag[V]((v: V) => v, func, func, partitioner)

Comparing the two, groupByKey explicitly passes mapSideCombine = false, so nothing is combined on the map side, i.e. before the shuffle. reduceByKey does not pass this parameter at all.

Does that mean reduceByKey combines by default?

 

4. Next, let's take a closer look at combineByKeyWithClassTag

/**
 * :: Experimental ::
 * Generic function to combine the elements for each key using a custom set of aggregation
 * functions. Turns an RDD[(K, V)] into a result of type RDD[(K, C)], for a "combined type" C
 *
 * Users provide three functions:
 *
 * - `createCombiner`, which turns a V into a C (e.g., creates a one-element list)
 * - `mergeValue`, to merge a V into a C (e.g., adds it to the end of a list)
 * - `mergeCombiners`, to combine two C's into a single one.
 *
 * In addition, users can control the partitioning of the output RDD, and whether to perform
 * map-side aggregation (if a mapper can produce multiple items with the same key).
 *
 * @note V and C can be different -- for example, one might group an RDD of type
 * (Int, Int) into an RDD of type (Int, Seq[Int]).
 */
@Experimental
def combineByKeyWithClassTag[C](
    createCombiner: V => C,
    mergeValue: (C, V) => C,
    mergeCombiners: (C, C) => C,
    partitioner: Partitioner,
    mapSideCombine: Boolean = true,
    serializer: Serializer = null)(implicit ct: ClassTag[C]): RDD[(K, C)] = self.withScope {
  require(mergeCombiners != null, "mergeCombiners must be defined") // required as of Spark 0.9.0
  if (keyClass.isArray) {
    if (mapSideCombine) {
      throw new SparkException("Cannot use map-side combining with array keys.")
    }
    if (partitioner.isInstanceOf[HashPartitioner]) {
      throw new SparkException("HashPartitioner cannot partition array keys.")
    }
  }
  val aggregator = new Aggregator[K, V, C](
    self.context.clean(createCombiner),
    self.context.clean(mergeValue),
    self.context.clean(mergeCombiners))
  if (self.partitioner == Some(partitioner)) {
    self.mapPartitions(iter => {
      val context = TaskContext.get()
      new InterruptibleIterator(context, aggregator.combineValuesByKey(iter, context))
    }, preservesPartitioning = true)
  } else {
    new ShuffledRDD[K, V, C](self, partitioner)
      .setSerializer(serializer)
      .setAggregator(aggregator)
      .setMapSideCombine(mapSideCombine)
  }
}

Looking at combineByKeyWithClassTag, mapSideCombine defaults to true, so reduceByKey does combine on the map side, i.e. before the shuffle. Because some records have already been merged, the shuffle spills write less data to disk, which reduces disk I/O; this is why reduceByKey is faster.
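To make the conclusion concrete, here is a rough sketch using the public combineByKey overload (which simply forwards to combineByKeyWithClassTag); socketDS is the word-count RDD from the demo above, and the partition count of 2 is arbitrary:

import org.apache.spark.HashPartitioner

// Roughly what reduceByKey(_ + _) does: mapSideCombine is left at its
// default of true, so partial sums are computed before the shuffle.
val likeReduceByKey = socketDS.combineByKey[Int](
  (v: Int) => v,
  (acc: Int, v: Int) => acc + v,
  (a: Int, b: Int) => a + b,
  new HashPartitioner(2),
  mapSideCombine = true)

// Roughly what groupByKey() does: mapSideCombine = false, so every
// (word, 1) record is written to the shuffle files unchanged.
val likeGroupByKey = socketDS.combineByKey[Vector[Int]](
  (v: Int) => Vector(v),
  (buf: Vector[Int], v: Int) => buf :+ v,
  (b1: Vector[Int], b2: Vector[Int]) => b1 ++ b2,
  new HashPartitioner(2),
  mapSideCombine = false)

The practical takeaway is the one already stated in the @note of the quoted source: if you only need a per-key aggregation (a sum, a count, an average), prefer reduceByKey or aggregateByKey over groupByKey followed by a manual aggregation.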



