Using JDBC in a Spark application


I wrote a Spark application that bulk-loads a Phoenix table. Everything worked fine for a few weeks, but for the last few days I have been getting duplicate rows, which turned out to be caused by wrong table statistics. A possible workaround is to delete and regenerate the statistics for this table.

So I need to open a JDBC connection to the Phoenix database and run the statements that delete and recreate the statistics.

Because this has to happen right after the new data has been written through Spark, I also want to create and use this JDBC connection inside my Spark job, after the bulk load of the table has finished.

To do that, I added the following method and call it in my Java code between dataframe.save() and sparkContext.close() (a sketch of that surrounding call order follows the method below):

private static void updatePhoenixTableStatistics(String phoenixTableName) {
        try {
            Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
            System.out.println("Connecting to database..");
            Connection conn = DriverManager.getConnection("jdbc:phoenix:my-server.net:2181:/hbase-unsecure");
            System.out.println("Creating statement...");
            Statement st = conn.createStatement();

            st.executeUpdate("DELETE FROM SYSTEM.STATS WHERE physical_name='" + phoenixTableName + "'");
            System.out.println("Successfully deleted statistics data... Now refreshing it.");

            st.executeUpdate("UPDATE STATISTICS " + phoenixTableName + " ALL");
            System.out.println("Successfully refreshed statistics data.");

            st.close();
            conn.close();

            System.out.println("Connection closed.");
        } catch (Exception e) {
            System.out.println("Unable to update table statistics - Skipping this step!");
            e.printStackTrace();
        }
    }
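
For context, here is a minimal sketch of how such a method sits in the job: the DataFrame write happens first, then the statistics refresh over JDBC, and the Spark session is closed last. The class name, input format, table name, zkUrl, and write options below are placeholders and assumptions, not taken from the original job:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class PhoenixBulkLoadSketch {

    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("phoenix-bulk-load-sketch")
                .getOrCreate();

        // Hypothetical input: whatever produces the DataFrame to be bulk-loaded.
        Dataset<Row> df = spark.read().parquet(args[0]);

        // Bulk-load into Phoenix via the phoenix-spark data source
        // ("table" and "zkUrl" are the usual phoenix-spark write options).
        df.write()
          .format("org.apache.phoenix.spark")
          .mode(SaveMode.Overwrite)
          .option("table", args[1])
          .option("zkUrl", "my-server.net:2181:/hbase-unsecure")
          .save();

        // Only after the save has finished, refresh the table statistics
        // over plain JDBC, then shut the Spark session down.
        updatePhoenixTableStatistics(args[1]);

        spark.close();
    }

    private static void updatePhoenixTableStatistics(String phoenixTableName) {
        // method from the question goes here
    }
}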

The problem is that ever since I added this method, I always get the following exception at the end of my Spark job:

Bulk-Load: DataFrame.save() completed - Import finished successfully!
Updating Table Statistics:
Connecting to database..
Creating statement...
Successfully deleted statistics data... Now refreshing it.
Successfully refreshed statistics data.
Connection closed.
Exception in thread "Thread-31" java.lang.RuntimeException: java.io.FileNotFoundException: /tmp/spark-e5b01508-0f84-4702-9684-4f6ceac803f9/gk-journal-importer-phoenix-0.0.3h.jar (No such file or directory)
        at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2794)
        at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2646)
        at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2518)
        at org.apache.hadoop.conf.Configuration.get(Configuration.java:1065)
        at org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1119)
        at org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1520)
        at org.apache.hadoop.hbase.HBaseConfiguration.checkDefaultsVersion(HBaseConfiguration.java:68)
        at org.apache.hadoop.hbase.HBaseConfiguration.addHbaseResources(HBaseConfiguration.java:82)
        at org.apache.hadoop.hbase.HBaseConfiguration.create(HBaseConfiguration.java:97)
        at org.apache.phoenix.query.ConfigurationFactory$ConfigurationFactoryImpl$1.call(ConfigurationFactory.java:49)
        at org.apache.phoenix.query.ConfigurationFactory$ConfigurationFactoryImpl$1.call(ConfigurationFactory.java:46)
        at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
        at org.apache.phoenix.util.PhoenixContextExecutor.callWithoutPropagation(PhoenixContextExecutor.java:93)
        at org.apache.phoenix.query.ConfigurationFactory$ConfigurationFactoryImpl.getConfiguration(ConfigurationFactory.java:46)
        at org.apache.phoenix.jdbc.PhoenixDriver$1.run(PhoenixDriver.java:88)
Caused by: java.io.FileNotFoundException: /tmp/spark-e5b01508-0f84-4702-9684-4f6ceac803f9/gk-journal-importer-phoenix-0.0.3h.jar (No such file or directory)
        at java.util.zip.ZipFile.open(Native Method)
        at java.util.zip.ZipFile.<init>(ZipFile.java:225)
        at java.util.zip.ZipFile.<init>(ZipFile.java:155)
        at java.util.jar.JarFile.<init>(JarFile.java:166)
        at java.util.jar.JarFile.<init>(JarFile.java:103)
        at sun.net.www.protocol.jar.URLJarFile.<init>(URLJarFile.java:93)
        at sun.net.www.protocol.jar.URLJarFile.getJarFile(URLJarFile.java:69)
        at sun.net.www.protocol.jar.JarFileFactory.get(JarFileFactory.java:99)
        at sun.net.www.protocol.jar.JarURLConnection.connect(JarURLConnection.java:122)
        at sun.net.www.protocol.jar.JarURLConnection.getInputStream(JarURLConnection.java:152)
        at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2612)
        at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2693)
        ... 14 more

Does anyone know about this issue and can help? How can I use JDBC inside a Spark job? Or is there another way to do this?

I'm running HDP 2.6.5 with Spark 2.3 and Phoenix 4.7 installed. Thanks for your help!

apache-spark jdbc hortonworks-data-platform phoenix hdp
1 Answer

I found the solution to my problem: the jar I exported had the phoenix-spark2 and phoenix-client dependencies bundled inside it.

I changed these dependencies to provided scope, since they are already present in the cluster's HDP installation:

<dependency>
    <groupId>org.apache.phoenix</groupId>
    <artifactId>phoenix-spark2</artifactId>
    <version>4.7.0.2.6.5.0-292</version>
    <scope>provided</scope>                          <!-- this did it, now have to add --jars to spark-submit -->
</dependency>
<dependency>
    <groupId>org.apache.phoenix</groupId>
    <artifactId>phoenix-core</artifactId>
    <version>4.7.0.2.6.5.0-292</version>
    <scope>provided</scope>                          <!-- this did it, now have to add --jars to spark-submit -->
</dependency>

Now I launch my Spark job with the --jars option and reference these dependencies there. It works fine in yarn-client mode now.

spark-submit --class spark.dataimport.SparkImportApp --master yarn --deploy-mode client --jars /usr/hdp/current/phoenix-client/phoenix-spark2.jar,/usr/hdp/current/phoenix-client/phoenix-client.jar hdfs:/user/test/gk-journal-importer-phoenix-0.0.3h.jar <some parameters for the main method>

PS: In yarn-cluster mode the application had always worked, even with the fat jar that bundled the dependencies.
