A Spark SQL optimization story: SparkException: Job aborted / spark.yarn.executor.memoryOverhead

This post walks through one Spark SQL optimization: a job that failed with SparkException: Job aborted and the hint to raise spark.yarn.executor.memoryOverhead. Hopefully it serves as a useful reference for anyone who runs into the same problem.

Problem background

One day a Spark SQL job failed with the following error:
org.apache.spark.SparkException: Job aborted.
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:198)
    at org.apache.spark.sql.hive.execution.SaveAsHiveFile$class.saveAsHiveFile(SaveAsHiveFile.scala:86)
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.saveAsHiveFile(InsertIntoHiveTable.scala:66)
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.processInsert(InsertIntoHiveTable.scala:195)
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.run(InsertIntoHiveTable.scala:99)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:115)
    at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:194)
    at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:194)
    at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3370)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:80)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:127)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:75)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$withAction(Dataset.scala:3369)
    at org.apache.spark.sql.Dataset.<init>(Dataset.scala:194)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:79)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:643)
    at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:62)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:371)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:274)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:845)
    at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 6 in stage 16.0 failed 4 times, most recent failure: Lost task 6.3 in stage 16.0 (TID 478, idc-sql-dms-13, executor 40): ExecutorLostFailure (executor 40 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 11.8 GB of 11 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1925)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1913)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1912)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1912)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:948)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:948)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:948)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2146)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2095)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2084)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:759)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:167)
    ... 35 more

The log makes the cause fairly clear: the task exceeded its memory limit ("Container killed by YARN for exceeding memory limits."). The first instinct should be to check whether the SQL itself can be optimized; simply throwing more memory at the job is a last resort.
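For completeness, the brute-force option that the error message itself suggests is to raise the executor memory overhead before the application starts, e.g. in spark-defaults.conf or via --conf on the spark-sql / spark-submit command line. A minimal sketch with illustrative values only, assuming Spark 2.x on YARN where the property is still named spark.yarn.executor.memoryOverhead (newer releases renamed it spark.executor.memoryOverhead):

# illustrative values, not a recommendation; must be set at submit time,
# a SET statement inside the running session will not resize executors
spark.executor.memory                10g
spark.yarn.executor.memoryOverhead   2048

In this case, though, the SQL itself turned out to be the real problem, so let's look at that first.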

Solution

First, the original SQL:

SELECT a.name, a.age, b.alias
FROM a
LEFT JOIN (
    SELECT id, concat_ws(',', COLLECT_LIST(alias)) AS alias
    FROM bb
    GROUP BY id
) b ON a.id = b.id   -- join key assumed; the original snippet omits the ON clause

The logic is simple, so the problem most likely lies in the concat_ws(',', COLLECT_LIST(alias)) expression: when table b is aggregated by id, an id with a huge number of rows forces all of them to be loaded into a single in-memory list (COLLECT_LIST), which exhausts the executor's memory.
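Before rewriting anything, a quick way to test this guess is to measure how many rows each id feeds into the aggregation and how many of them are duplicates. A rough diagnostic sketch, reusing the bb table and alias column from the snippet above (this query is not part of the original post):

-- how many alias rows per id, and how many of them are distinct
SELECT id,
       COUNT(*)              AS alias_rows,
       COUNT(DISTINCT alias) AS distinct_aliases
FROM bb
GROUP BY id
ORDER BY alias_rows DESC
LIMIT 20;

If alias_rows is much larger than distinct_aliases for the heaviest ids, deduplicating before COLLECT_LIST will shrink the per-group list dramatically.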

The fix is therefore to remove the duplicates first and only then call concat_ws(',', COLLECT_LIST(alias)). The optimized SQL:

SELECT a.name, a.age, b.alias
FROM a
LEFT JOIN (
    SELECT id, concat_ws(',', COLLECT_LIST(alias)) AS alias
    FROM (
        SELECT id, alias
        FROM bb
        GROUP BY id, alias       -- deduplicate (id, alias) pairs first
    ) dedup
    GROUP BY id
) b ON a.id = b.id               -- join key assumed, as above

There is an even simpler variant: use COLLECT_SET, which deduplicates as it aggregates, instead of COLLECT_LIST:

SELECT a.name, a.age, b.alias
FROM a
LEFT JOIN (
    SELECT id, concat_ws(',', COLLECT_SET(alias)) AS alias
    FROM bb
    GROUP BY id
) b ON a.id = b.id               -- join key assumed, as above
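To make the difference between the two aggregates concrete, here is a tiny self-contained Spark SQL example; the inline VALUES table is made up for illustration, and note that COLLECT_SET does not guarantee element order:

SELECT id,
       concat_ws(',', COLLECT_LIST(alias)) AS with_list,  -- keeps duplicates, e.g. 'x,x,y'
       concat_ws(',', COLLECT_SET(alias))  AS with_set    -- deduplicates, e.g. 'x,y' (order not guaranteed)
FROM VALUES (1, 'x'), (1, 'x'), (1, 'y') AS t(id, alias)
GROUP BY id;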

Ha, problem solved!

That wraps up this Spark SQL optimization story (SparkException: Job aborted / spark.yarn.executor.memoryOverhead); I hope it helps anyone who hits the same error.



http://www.chinasem.cn/article/296570
