Spark Problem 12: kryoserializer shuffle buffer too small, causing overflow

2024-06-02 15:58

This post describes Spark Problem 12: the kryoserializer shuffle buffer is too small and overflows. Hopefully it offers a useful reference for anyone hitting the same issue.

 

More code available at: https://github.com/xubo245/SparkLearning

Spark ecosystem, Alluxio learning. Versions: alluxio (tachyon) 0.7.1, spark-1.5.2, hadoop-2.6.0

1. Problem description

1.1

Running cs-bwamem fails with a serialization overflow during the shuffle. The job writes the resulting SAM file to the local filesystem and the file is large, while the default buffer cap is:

spark.kryoserializer.buffer.max 64m

The actual serialized result exceeds 2 GB, so the buffer is not large enough. The Spark web UI shows 14.5 GB of input, and the failing operation is a collect().
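
The usual first step is to raise the Kryo buffer limits. Below is a minimal sketch (not taken from cs-bwamem; the app name is made up) using the standard Spark 1.5.x configuration keys. Note that spark.kryoserializer.buffer.max must stay below 2048m, so no single serialized object can exceed roughly 2 GB no matter how high it is set.

import org.apache.spark.{SparkConf, SparkContext}

// Minimal sketch: enable Kryo and raise its buffer limits.
object KryoBufferConfig {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("kryo-buffer-sketch") // hypothetical app name
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .set("spark.kryoserializer.buffer", "64m")       // initial per-task buffer
      .set("spark.kryoserializer.buffer.max", "2047m") // growth cap; must be < 2048m
    val sc = new SparkContext(conf)
    // ... job code ...
    sc.stop()
  }
}

The same settings can also be passed at submit time, e.g. --conf spark.kryoserializer.buffer.max=2047m on the spark-submit line in the script below, which avoids recompiling.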

2. Running the script

hadoop@Master:~/disk2/xubo/project/alignment/cs-bwamem$ cat csbwamemAlignP1Test10sam.sh 
#for t in 1 2 3 4 5 6 7 8 9 10 15 20 30 40 50 60 70 80 90 100 120 140 160 180 200 300 400 500 600 700 800 900 1000 2000 3000 4000 5000 6000 7000 8000 9000 10000
#for t in 1 10 50 100 400 1000 5000 10000
#for t in {90..400..10}
for t in 100
do
for k in {7..9}
do 
#for j in 10000 100000 1000000 10000000
for j in 10000000
do
for i in 50
do
echo $i
echo $j
echo 't'$t
echo 'k'$k
#fq='g38L'$i'c'$j'Nhs20Paired'$k'.fq'
#fq0='g38L'$i'c'$j'Nhs20Paired*.fastq'
#fq1='/xubo/alignment/sparkBWA/g38L'$i'c'$j'Nhs20Paired1.fastq'
#fq2='/xubo/alignment/sparkBWA/g38L'$i'c'$j'Nhs20Paired2.fastq'
#out='g38L'$i'c'$j'Nhs20Paired12.sam'
#out='/xubo/project/alignment/cs-bwamem/input/fastq/newg38L'$i'c'$j'Nhs20Paired12P64bn200000000t'$t'k'$k'sbatch.adam'
out='/home/hadoop/disk2/xubo/project/alignment/cs-bwamem/newg38L'$i'c'$j'Nhs20Paired12P64bn200000000.sam'
file='/xubo/project/alignment/cs-bwamem/input/fastq/newg38L'$i'c'$j'Nhs20Paired12P64bn200000000.fastq'
#out='/xubo/project/alignment/cs-bwamem/input/fastq/newg38L'$i'c'$j'Nhs20Paired12P1k'$k'.adam'
#file='/xubo/project/alignment/cs-bwamem/input/fastq/newg38L'$i'c'$j'Nhs20Paired12P1.fastq'
echo $file
echo $out
spark-submit --class cs.ucla.edu.bwaspark.BWAMEMSpark --total-executor-cores 20 --executor-cores 2 --executor-memory 20G \
--master spark://219.219.220.149:7077 /home/hadoop/disk2/xubo/tools/cloud-scale-bwamem-0.2.2/target/cloud-scale-bwamem-0.2.2-assembly.jar \
cs-bwamem -bfn 1 -bPSW 1 -sbatch $t -bPSWJNI 1  -oChoice 1 -oPath $out -localRef 1 \
-jniPath /home/hadoop/disk2/xubo/tools/cloud-scale-bwamem-0.2.2/target/jniNative.so \
-isSWExtBatched 1  1 \
/home/hadoop/disk2/xubo/ref/GRCH38L1Index/GRCH38chr1L3556522.fasta  $file
#spark-submit --executor-memory 6g --class cs.ucla.edu.bwaspark.BWAMEMSpark --total-executor-cores 20 --master spark://219.219.220.149:7077  --conf spark.driver.host=219.219.220.149 --conf spark.driver.cores=4 --conf spark.driver.maxResultSize=6g --conf spark.storage.memoryFraction=0.7  --conf spark.akka.threads=2 --conf spark.akka.frameSize=1024 /home/hadoop/xubo/tools/cloud-scale-bwamem-0.2.1/target/cloud-scale-bwamem-0.2.0-assembly.jar merge hdfs://219.219.220.149:9000 $file $out
#/xubo/alignment/sparkBWA/GRCH38chr1L3556522N10L50paired1.fastq /xubo/alignment/sparkBWA/GRCH38chr1L3556522N10L50paired2.fastq \
#/xubo/alignment/output/sparkBWA/datatestLocalGRCH38chr1L3556522N10L50paired12YarnMaster
done
done
done
done
#--master spark://219.219.220.149:7077 /home/hadoop/disk2/xubo/tools/cloud-scale-bwamem-0.2.1/target/cloud-scale-bwamem-0.2.0-assembly.jar \
#--master spark://219.219.220.149:7077 /curr/pengwei/github/cloud-scale-bwamem/target/cloud-scale-bwamem-0.2.0-assembly.jar \

3. Run log:

hadoop@Master:~/disk2/xubo/project/alignment/cs-bwamem$ ./csbwamemAlignP1Test10sam.sh > csbwamemAlignP1Test10samtime201702281645.txt
[Stage 2:>                                                        (0 + 16) / 64]17/02/28 16:57:37 ERROR TaskSetManager: Task 9 in stage 2.0 failed 4 times; aborting job
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 9 in stage 2.0 failed 4 times, most recent failure: Lost task 9.3 in stage 2.0 (TID 180, Mcnode4): org.apache.spark.SparkException: Kryo serialization failed: Buffer overflow. Available: 0, required: 109. To avoid this, increase spark.kryoserializer.buffer.max value.
    at org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:263)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:240)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1271)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1270)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1270)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1496)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1458)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1447)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1824)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1837)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1850)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1921)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:909)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:908)
    at cs.ucla.edu.bwaspark.FastMap$.memPairEndMapping(FastMap.scala:397)
    at cs.ucla.edu.bwaspark.FastMap$.memMain(FastMap.scala:144)
    at cs.ucla.edu.bwaspark.BWAMEMSpark$.main(BWAMEMSpark.scala:318)
    at cs.ucla.edu.bwaspark.BWAMEMSpark.main(BWAMEMSpark.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:674)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.spark.SparkException: Kryo serialization failed: Buffer overflow. Available: 0, required: 109. To avoid this, increase spark.kryoserializer.buffer.max value.
    at org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:263)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:240)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
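
The trace shows the overflow happens while serializing task results for RDD.collect (called from FastMap.scala:397): each result passes through a single Kryo buffer on its way back to the driver, and that buffer is hard-capped below 2 GB. For output this large, an alternative is to write the records from the executors instead of collecting them to the driver. The sketch below uses assumed names and an assumed output path; it is not the actual cs-bwamem code path.

import org.apache.spark.{SparkConf, SparkContext}

// Sketch: write large output from executors instead of collect()-ing it.
object WriteFromExecutors {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("write-from-executors-sketch"))
    // samLines stands in for the per-read SAM strings the aligner produces.
    val samLines = sc.parallelize(Seq("read1\t0\tchr1\t...", "read2\t0\tchr1\t..."))
    // Each partition is written as its own part-file, so no single Kryo
    // buffer ever has to hold the whole result.
    samLines.saveAsTextFile("hdfs://219.219.220.149:9000/xubo/output/sam") // assumed output path
    sc.stop()
  }
}

The part-files can then be merged into one SAM file with hdfs dfs -getmerge. If the result must be collected, spark.driver.maxResultSize usually needs raising alongside the Kryo buffer.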

