I am trying to build a web API for my Apache Spark jobs using the sparkjava.com framework. My code is:
@Override
public void init() {
    get("/hello",
            (req, res) -> {
                String sourcePath = "hdfs://spark:54310/input/*";

                SparkConf conf = new SparkConf().setAppName("LineCount");
                conf.setJars(new String[] { "/home/sam/resin-4.0.42/webapps/test.war" });
                File configFile = new File("config.properties");

                String sparkURI = "spark://hamrah:7077";

                conf.setMaster(sparkURI);
                conf.set("spark.driver.allowMultipleContexts", "true");
                JavaSparkContext sc = new JavaSparkContext(conf);

                @SuppressWarnings("resource")
                JavaRDD<String> log = sc.textFile(sourcePath);

                JavaRDD<String> lines = log.filter(x -> {
                    return true;
                });

                return lines.count();
            });
}
If I remove the lambda expression, or run it from a plain jar rather than from inside a web service (i.e. a servlet), it runs without any error. But using the lambda expression inside the servlet results in this exception:
15/01/28 10:36:33 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, hamrah): java.lang.ClassCastException: cannot assign instance of java.lang.invoke.SerializedLambda to field org.apache.spark.api.java.JavaRDD$$anonfun$filter$1.f$1 of type org.apache.spark.api.java.function.Function in instance of org.apache.spark.api.java.JavaRDD$$anonfun$filter$1
at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2089)
at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1261)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1999)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:87)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:57)
at org.apache.spark.scheduler.Task.run(Task.scala:56)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
P.S.: I tried Jersey and sparkjava with Jetty, Tomcat, and Resin, and every combination gave me the same result.
What you have here is a follow-up error which masks the original error.
When lambda instances are serialized, they use writeReplace to dissolve their JRE-specific implementation into a persistent form, a SerializedLambda instance. When the SerializedLambda instance has been restored, its readResolve method will be invoked to reconstitute the appropriate lambda instance. As the documentation says, it does so by invoking a special method of the class which defined the original lambda (see also this answer). The important point is that the original class is needed, and that is exactly what is missing in your case.
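That mechanism can be sketched with plain JDK serialization, independent of Spark (class and method names below are illustrative, not taken from the question):

```java
import java.io.*;

public class LambdaRoundTrip {
    // A serializable functional interface, so the lambda below is serializable.
    interface SerRunnable extends Runnable, Serializable {}

    static boolean roundTrip() throws Exception {
        SerRunnable r = () -> {};
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            // writeReplace substitutes a java.lang.invoke.SerializedLambda
            oos.writeObject(r);
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            // readResolve asks the defining class (LambdaRoundTrip) to
            // re-create the lambda; this succeeds here only because that
            // class is on the classpath of the deserializing JVM.
            Runnable restored = (Runnable) ois.readObject();
            restored.run();
            return true;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip());
    }
}
```

In the same JVM the round trip works; the failure mode described in the question appears only when the defining class is absent on the deserializing side, as reproduced further below.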
But there is a peculiar behavior of ObjectInputStream: when it encounters an exception, it does not bail out immediately. It records the exception and continues the process, marking all objects currently being read, and therefore depending on the erroneous object, as erroneous as well. Only at the end of the process does it throw the original exception it encountered. What makes it so strange is that it also keeps trying to set the fields of these objects. But when you look at the method ObjectInputStream.readOrdinaryObject, line 1806:
…
    if (obj != null &&
        handles.lookupException(passHandle) == null &&
        desc.hasReadResolveMethod())
    {
        Object rep = desc.invokeReadResolve(obj);
        if (unshared && rep.getClass().isArray()) {
            rep = cloneArray(rep);
        }
        if (rep != obj) {
            handles.setObject(passHandle, obj = rep);
        }
    }

    return obj;
}
you can see that it does not call the readResolve method when lookupException reports a non-null exception. But when that substitution has not happened, it is not a good idea to keep trying to set the field values of the referrer; yet that is exactly what happens here, hence producing the ClassCastException.
You can reproduce the problem easily:
// Holder.java
import java.io.Serializable;

public class Holder implements Serializable {
    Runnable r;
}

// Defining.java
import java.io.Serializable;

public class Defining {
    public static Holder get() {
        final Holder holder = new Holder();
        holder.r = (Runnable & Serializable) () -> {};
        return holder;
    }
}

// Writing.java
import java.io.*;

public class Writing {
    static final File f = new File(System.getProperty("java.io.tmpdir"), "x.ser");

    public static void main(String... arg) throws IOException {
        try (FileOutputStream os = new FileOutputStream(f);
             ObjectOutputStream oos = new ObjectOutputStream(os)) {
            oos.writeObject(Defining.get());
        }
        System.out.println("written to " + f);
    }
}

// Reading.java
import java.io.*;

public class Reading {
    static final File f = new File(System.getProperty("java.io.tmpdir"), "x.ser");

    public static void main(String... arg) throws IOException, ClassNotFoundException {
        try (FileInputStream is = new FileInputStream(f);
             ObjectInputStream ois = new ObjectInputStream(is)) {
            Holder h = (Holder) ois.readObject();
            System.out.println(h.r);
            h.r.run();
        }
        System.out.println("read from " + f);
    }
}
Compile these four classes and run Writing. Then delete the class file Defining.class and run Reading. You will get a
Exception in thread "main" java.lang.ClassCastException: cannot assign instance of java.lang.invoke.SerializedLambda to field test.Holder.r of type java.lang.Runnable in instance of test.Holder
at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2089)
at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1261)
(tested with 1.8.0_20)
The bottom line is that you may forget about this serialization issue once you understand what is happening. All you have to do to solve your problem is to make sure that the class which defined the lambda expression is also available in the runtime where the lambda is deserialized.
Example for a Spark job running directly from an IDE (spark-submit distributes the jar by default):
SparkConf sconf = new SparkConf()
        .set("spark.eventLog.dir", "hdfs://nn:8020/user/spark/applicationHistory")
        .set("spark.eventLog.enabled", "true")
        .setJars(new String[]{"/path/to/jar/with/your/class.jar"})
        .setMaster("spark://spark.standalone.uri:7077");
I suppose your problem is failed auto-boxing. In the code
x -> {
    return true;
}
you pass a (String -> boolean) lambda (it is a Predicate<String>), while the filter method takes a (String -> Boolean) lambda (it is a Function<String, Boolean>). So I suggest you change the code to
x -> {
    return Boolean.TRUE;
}
Please include more details in your question. Output from uname -a and java -version is appreciated. Provide an sscce if possible.
I had the same error, and I replaced the lambda with an inner class; then it worked. I do not really understand why, and reproducing this error was extremely difficult (we had one server which exhibited the behavior, and nowhere else).
Causes serialization problems (using lambdas, causes the SerializedLambda error):

this.variable = () -> { ..... }

yields java.lang.ClassCastException: cannot assign instance of java.lang.invoke.SerializedLambda to field MyObject.val$variable

Works:

this.variable = new MyInterface() {
    public void myMethod() {
        .....
    }
};
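A plausible reason why the anonymous class works, sketched with plain JDK serialization (class names here are illustrative): an anonymous class is written to the stream as an ordinary instance of its own named class file (Outer$1), so deserialization only needs that class file on the classpath, whereas a serializable lambda goes through SerializedLambda and additionally needs the defining class's synthetic deserialization hook.

```java
import java.io.*;

public class AnonymousRoundTrip {
    interface SerRunnable extends Runnable, Serializable {}

    // Serializes an anonymous-class Runnable and reports the class name of
    // the restored instance.
    static String restoredName() throws Exception {
        Runnable anon = new SerRunnable() {
            @Override
            public void run() { }
        };
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            // Written as an instance of AnonymousRoundTrip$1,
            // not as a java.lang.invoke.SerializedLambda.
            oos.writeObject(anon);
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            return ois.readObject().getClass().getName();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(restoredName());
    }
}
```

Here the restored object's class is the anonymous class itself (AnonymousRoundTrip$1), which is exactly the class file that ships inside the deployed war.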
You can maybe more simply replace your Java 8 lambda with a spark.scala.Function.

Replace

output = rdds.map(x -> this.function(x)).collect()

with an explicit implementation of the Function interface instead of the lambda.
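The replacement snippet appears to have been lost from this answer. Below is a sketch of what the anonymous-class version might look like; the Function interface, the element type, and the function method are stand-ins so the example runs without Spark on the classpath (the real interface in recent Spark versions is org.apache.spark.api.java.function.Function):

```java
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

public class AnonymousMapSketch {
    // Stand-in for Spark's Function interface (hypothetical simplification).
    interface Function<T, R> extends Serializable {
        R call(T t) throws Exception;
    }

    // Placeholder for the this.function(x) the answer refers to.
    static double function(double x) {
        return x * 2;
    }

    // The lambda-free version of rdds.map(x -> this.function(x)).collect():
    // the anonymous class serializes as an ordinary named class
    // (AnonymousMapSketch$1), so only its class file must be shipped.
    static List<Double> map(List<Double> rdds) throws Exception {
        Function<Double, Double> f = new Function<Double, Double>() {
            @Override
            public Double call(Double x) {
                return function(x);
            }
        };
        List<Double> output = new ArrayList<>();
        for (Double x : rdds) {
            output.add(f.call(x));
        }
        return output;
    }
}
```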