apache spark - "java.io.IOException: Class not found" on long-running Streaming application


I am getting the exception below on a long-running Spark Streaming application. The exception can occur after a few minutes, or it may not happen for days. The input data is pretty consistent.

I have seen this JIRA ticket, but I don't think it is the same issue: that one is a java.lang.IllegalArgumentException, while this is a java.io.IOException: Class not found.

My application streams data and writes it out as Parquet using Spark SQL.
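For context, the relevant part of the app looks roughly like the sketch below. This is a minimal reconstruction, not my actual code: the socket source, the Record case class, and the output path are placeholders, but the filter/map/foreachRDD shape matches the FilteredDStream and MappedDStream frames in the stack trace further down.

    import org.apache.spark.SparkConf
    import org.apache.spark.sql.SQLContext
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    // Placeholder schema for the incoming records
    case class Record(value: String)

    object StreamToParquet {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("StreamToParquet")
        val ssc = new StreamingContext(conf, Seconds(60))

        // Hypothetical source; the real app's input is not shown here
        val lines = ssc.socketTextStream("localhost", 9999)

        // DStream transformations: these create the FilteredDStream and
        // MappedDStream that appear in the stack trace
        val records = lines.filter(_.nonEmpty).map(Record(_))

        // Write each batch out as Parquet via Spark SQL
        records.foreachRDD { rdd =>
          val sqlContext = SQLContext.getOrCreate(rdd.sparkContext)
          import sqlContext.implicits._
          rdd.toDF().write.mode("append").parquet("/data/output") // placeholder path
        }

        ssc.start()
        ssc.awaitTermination()
      }
    }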

I am using Spark 1.5.2. Any ideas?

28-01-2016 09:36:00 ERROR JobScheduler:96 - Error generating jobs for time 1453973760000 ms
java.io.IOException: Class not found
        at com.esotericsoftware.reflectasm.shaded.org.objectweb.asm.ClassReader.a(Unknown Source)
        at com.esotericsoftware.reflectasm.shaded.org.objectweb.asm.ClassReader.<init>(Unknown Source)
        at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:40)
        at org.apache.spark.util.ClosureCleaner$.getInnerClosureClasses(ClosureCleaner.scala:81)
        at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:187)
        at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:122)
        at org.apache.spark.SparkContext.clean(SparkContext.scala:2032)
        at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:318)
        at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:317)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
        at org.apache.spark.rdd.RDD.map(RDD.scala:317)
        at org.apache.spark.streaming.dstream.MappedDStream$$anonfun$compute$1.apply(MappedDStream.scala:35)
        at org.apache.spark.streaming.dstream.MappedDStream$$anonfun$compute$1.apply(MappedDStream.scala:35)
        at scala.Option.map(Option.scala:145)
        at org.apache.spark.streaming.dstream.MappedDStream.compute(MappedDStream.scala:35)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:350)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:350)
        at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:349)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:349)
        at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:399)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:344)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:342)
        at scala.Option.orElse(Option.scala:257)
        at org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:339)
        at org.apache.spark.streaming.dstream.FilteredDStream.compute(FilteredDStream.scala:35)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:350)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:350)
        at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:349)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:349)
        at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:399)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:344)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:342)
        at scala.Option.orElse(Option.scala:257)
        at org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:339)
        at org.apache.spark.streaming.dstream.MappedDStream.compute(MappedDStream.scala:35)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:350)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:350)
        at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:349)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:349)
        at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:399)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:344)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:342)
        at scala.Option.orElse(Option.scala:257)
        at org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:339)
        at org.apache.spark.streaming.dstream.ForEachDStream.generateJob(ForEachDStream.scala:38)
        at org.apache.spark.streaming.DStreamGraph$$anonfun$1.apply(DStreamGraph.scala:120)
        at org.apache.spark.streaming.DStreamGraph$$anonfun$1.apply(DStreamGraph.scala:120)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:251)
        at scala.collection.AbstractTraversable.flatMap(Traversable.scala:105)
        at org.apache.spark.streaming.DStreamGraph.generateJobs(DStreamGraph.scala:120)
        at org.apache.spark.streaming.scheduler.JobGenerator$$anonfun$2.apply(JobGenerator.scala:247)
        at org.apache.spark.streaming.scheduler.JobGenerator$$anonfun$2.apply(JobGenerator.scala:245)
        at scala.util.Try$.apply(Try.scala:161)
        at org.apache.spark.streaming.scheduler.JobGenerator.generateJobs(JobGenerator.scala:245)
        at org.apache.spark.streaming.scheduler.JobGenerator.org$apache$spark$streaming$scheduler$JobGenerator$$processEvent(JobGenerator.scala:181)
        at org.apache.spark.streaming.scheduler.JobGenerator$$anon$1.onReceive(JobGenerator.scala:87)
        at org.apache.spark.streaming.scheduler.JobGenerator$$anon$1.onReceive(JobGenerator.scala:86)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

I am going to post an answer to my own question. I think this happens when you start a streaming app and then remove or replace the jar file that was used in spark-submit. The running JVM of the Spark driver application tries to load classes from a jar file that is no longer there, or that has been replaced.

I don't know for sure that this is true, but since I see interest in this question, I thought I would post my current thinking.
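To illustrate the mechanism I suspect: when Spark Streaming generates the jobs for each batch, the driver re-reads the bytecode of your closures through the classloader (that is the ClosureCleaner/ClassReader part of the trace). The sketch below is my own simplified illustration of that lookup, not Spark's actual source: if the jar behind the classloader has been deleted or replaced on disk, the resource lookup returns null, which matches the "java.io.IOException: Class not found" in the trace.

    import java.io.{ByteArrayOutputStream, IOException}

    // Simplified illustration of the suspected failure mode: asking the
    // classloader for a class's bytecode, as the driver does when cleaning
    // closures. The Class object is still alive in the JVM, but if the jar
    // it came from is gone from disk, the resource lookup returns null.
    def readClassBytes(cls: Class[_]): Array[Byte] = {
      val resourceName = cls.getName.replace('.', '/') + ".class"
      val stream = cls.getClassLoader.getResourceAsStream(resourceName)
      if (stream == null) {
        throw new IOException("Class not found") // same message as in the trace
      }
      try {
        val out = new ByteArrayOutputStream()
        val buf = new Array[Byte](4096)
        var n = stream.read(buf)
        while (n != -1) {
          out.write(buf, 0, n)
          n = stream.read(buf)
        }
        out.toByteArray
      } finally {
        stream.close()
      }
    }

If this theory is right, the practical workaround is to avoid overwriting the jar passed to spark-submit while the application is running: deploy new builds under a new file name, or stop the app before replacing the jar.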

