ClassCastException when converting an RDD to a DataFrame in Spark Streaming


Hi guys, I have the following problem. I'm using Apache Spark Streaming v1.6.0 with Java to consume messages from IBM MQ. I wrote a custom receiver for MQ, but now I need to convert the RDDs of the JavaDStream into a DataFrame. To do that, I iterate over the JavaDStream with foreachRDD and define the schema for the DataFrame, but when I run the job the first message throws the following exception:

java.lang.ClassCastException: org.apache.spark.rdd.BlockRDDPartition cannot be cast to org.apache.spark.rdd.ParallelCollectionPartition
    at org.apache.spark.rdd.ParallelCollectionRDD.compute(ParallelCollectionRDD.scala:102)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
19/03/28 12:53:26 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.ClassCastException: org.apache.spark.rdd.BlockRDDPartition cannot be cast to org.apache.spark.rdd.ParallelCollectionPartition
    at org.apache.spark.rdd.ParallelCollectionRDD.compute(ParallelCollectionRDD.scala:102)
    ... (same stack trace as above)

After that, the job runs fine. It only happens with the first batch when I start the job, even when there are no messages in MQ.

Here is my CustomMQReceiver:

public class CustomMQReceiver extends Receiver<String> {

    // Connection constants (HOST, PORT, QMGR, CHANNEL, APP_USER, APP_PASSWORD,
    // QUEUE_NAME) are defined elsewhere in the class.
    private MQQueueConnection qCon;
    private MQMessageConsumer consumer;

    public CustomMQReceiver() {

        super(StorageLevel.MEMORY_ONLY_2());

    }

    @Override
    public void onStart() {

        new Thread() {
            @Override
            public void run() {
                try {
                    initConnection();
                    receive();
                } catch (JMSException ex) {
                    ex.printStackTrace();
                }
            }
        }.start();

    }

    @Override
    public void onStop() {

    }

    private void receive() {

        System.out.print("Started receiving messages from MQ");

        try {

            Message receivedMessage = null;

            while (!isStopped() && (receivedMessage = consumer.receiveNoWait()) != null) {

                String userInput = convertStreamToString(receivedMessage);
                System.out.println("Received data :'" + userInput + "'");
                store(userInput);
            }

            stop("No More Messages To read !");
            qCon.close();
            System.out.println("Queue Connection is Closed");

        } catch (Exception e) {
            e.printStackTrace();
            restart("Trying to connect again");
        } catch (Throwable t) {

            restart("Error receiving data", t);
        }

    }

    public void initConnection() throws JMSException {

        MQQueueConnectionFactory conFactory = new MQQueueConnectionFactory();
        conFactory.setHostName(HOST);
        conFactory.setPort(PORT);
        conFactory.setIntProperty(WMQConstants.WMQ_CONNECTION_MODE, WMQConstants.WMQ_CM_CLIENT);
        conFactory.setQueueManager(QMGR);
        conFactory.setChannel(CHANNEL);
        conFactory.setBooleanProperty(WMQConstants.USER_AUTHENTICATION_MQCSP, true);
        conFactory.setStringProperty(WMQConstants.USERID, APP_USER);
        conFactory.setStringProperty(WMQConstants.PASSWORD, APP_PASSWORD);

        qCon = (MQQueueConnection) conFactory.createConnection();
        MQQueueSession qSession = (MQQueueSession) qCon.createQueueSession(false, 1);
        MQQueue queue = (MQQueue) qSession.createQueue(QUEUE_NAME);
        consumer = (MQMessageConsumer) qSession.createConsumer(queue);
        qCon.start();

    }

    @Override
    public StorageLevel storageLevel() {
        return StorageLevel.MEMORY_ONLY_2();
    }

    private static String convertStreamToString(final Message jmsMsg) throws Exception {

        String stringMessage = "";
        JMSTextMessage msg = (JMSTextMessage) jmsMsg;
        stringMessage = msg.getText();

        return stringMessage;
    }
}

Here is my Spark code:

SparkConf sparkConf = new SparkConf()
                    .setAppName("MQStreaming")
                    .set("spark.driver.allowMultipleContexts", "true")
                    .setMaster("local[*]");

            JavaSparkContext jsc = new JavaSparkContext(sparkConf);
            final SQLContext sqlContext = new SQLContext(jsc);
            JavaStreamingContext ssc = new JavaStreamingContext(sparkConf, new Duration(Long.parseLong(propertiesConf.getProperty("duration"))));

            JavaDStream<String> customReceiverStream = ssc.receiverStream(new CustomMQReceiver());

            customReceiverStream.foreachRDD(new VoidFunction<JavaRDD<String>>() {

                @Override
                public void call(JavaRDD<String> rdd) throws Exception {

                    JavaRDD<Row> rddRow = rdd.map(new Function<String, Row>() {

                        @Override
                        public Row call(String v1) throws Exception {

                            return RowFactory.create(v1);

                        }

                    });

                    try {

                        StructType schema = new StructType(new StructField[]{
                            new StructField("trama", DataTypes.StringType, true, Metadata.empty())
                        });

                        DataFrame frame = sqlContext.createDataFrame(rddRow, schema);

                        if (frame.count() > 0) {
                            // This is where the first batch throws the exception
                            frame.show();
                            frame.write().mode(SaveMode.Append).json("file:///C:/tmp/");

                        }

                    } catch (Exception ex) {

                        System.out.println(" INFO " + ex.getMessage());

                    }

                }

            });

            ssc.start();
            ssc.awaitTermination();

I can't change the Spark version because this job will run on an old Cloudera cluster with Spark 1.6. I don't know if I'm doing something wrong or if it's just a bug. Help!

java apache-spark streaming spark-streaming
1 Answer

I solved my own problem. The exception was caused by how I created the SQLContext: because of spark.driver.allowMultipleContexts, the separate JavaSparkContext and the JavaStreamingContext each ended up with their own SparkContext, so the SQLContext was bound to a different context than the one producing the stream's RDDs. The correct way is to create the sqlContext from the JavaStreamingContext:

//JavaStreamingContext jsc = ...
SQLContext sqlContext = new SQLContext(jsc.sparkContext());
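
For completeness, here is a minimal sketch of how the driver setup from the question could look after the fix, so that a single SparkContext backs both the streaming and SQL layers. The 5-second batch interval is just a placeholder (the original reads it from a properties file); everything else follows the code in the question:

SparkConf sparkConf = new SparkConf()
        .setAppName("MQStreaming")
        .setMaster("local[*]");

// One JavaSparkContext, reused by the streaming context instead of creating a second context
JavaSparkContext jsc = new JavaSparkContext(sparkConf);
JavaStreamingContext ssc = new JavaStreamingContext(jsc, new Duration(5000));

// The SQLContext now shares the SparkContext that backs the receiver's RDDs
final SQLContext sqlContext = new SQLContext(ssc.sparkContext());

JavaDStream<String> customReceiverStream = ssc.receiverStream(new CustomMQReceiver());
// ... foreachRDD / createDataFrame logic unchanged from the question

ssc.start();
ssc.awaitTermination();

With a single context like this, the spark.driver.allowMultipleContexts setting is no longer needed.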