Unable to apply the gpfdist protocol when using Spark

Problem description

I am trying to read data from Greenplum into HDFS using Spark. To do so I am using the jar file greenplum-spark_2.11-1.6.0.jar.

I invoke spark.read as follows:

val yearDF = spark.read
  .format("io.pivotal.greenplum.spark.GreenplumRelationProvider")
  .option("url", "jdbc:postgresql://1.2.3.166:5432/finance?ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory")
  .option("server.port", "8020")
  .option("dbtable", "tablename")
  .option("dbschema", "schema")
  .option("user", "123415")
  .option("password", "etl_123")
  .option("partitionColumn", "je_id")
  .option("partitions", 3)
  .load()
  .where("period_year=2017 and period_num=12 and source_system_name='SSS'")
  .select(splitSeq map col: _*)
  .withColumn("flagCol", lit(0))

yearDF.write.format("csv").save("hdfs://dev/apps/hive/warehouse/header_test_data/")

When I run the code above, I get the following exception:

Exception in thread "qtp1438055710-505" java.lang.OutOfMemoryError: GC overhead limit exceeded
19/03/05 12:29:08 WARN QueuedThreadPool:
java.lang.OutOfMemoryError: GC overhead limit exceeded
19/03/05 12:29:08 WARN QueuedThreadPool: Unexpected thread death: org.eclipse.jetty.util.thread.QueuedThreadPool$3@16273740 in qtp1438055710{STARTED,8<=103<=200,i=19,q=0}
19/03/05 12:36:03 ERROR Executor: Exception in task 0.0 in stage 4.0 (TID 8)
org.postgresql.util.PSQLException: ERROR: error when writing data to gpfdist http://1.2.3.8:8020/spark_6ca7d983d07129f2_db5510e67a8a6f78_driver_370, quit after 2 tries (url_curl.c:584)  (seg7 ip-1-3-3-196.ec2.internal:40003 pid=4062) (cdbdisp.c:1322)
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2310)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2023)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:217)
    at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:421)
    at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:318)
    at org.postgresql.jdbc.PgStatement.executeUpdate(PgStatement.java:294)
    at com.zaxxer.hikari.pool.ProxyStatement.executeUpdate(ProxyStatement.java:120)
    at com.zaxxer.hikari.pool.HikariProxyStatement.executeUpdate(HikariProxyStatement.java)
    at io.pivotal.greenplum.spark.jdbc.Jdbc$$anonfun$2.apply(Jdbc.scala:81)
    at io.pivotal.greenplum.spark.jdbc.Jdbc$$anonfun$2.apply(Jdbc.scala:79)
    at resource.AbstractManagedResource$$anonfun$5.apply(AbstractManagedResource.scala:88)
    at scala.util.control.Exception$Catch$$anonfun$either$1.apply(Exception.scala:125)
    at scala.util.control.Exception$Catch$$anonfun$either$1.apply(Exception.scala:125)
    at scala.util.control.Exception$Catch.apply(Exception.scala:103)
    at scala.util.control.Exception$Catch.either(Exception.scala:125)
    at resource.AbstractManagedResource.acquireFor(AbstractManagedResource.scala:88)
    at resource.ManagedResourceOperations$class.apply(ManagedResourceOperations.scala:26)
    at resource.AbstractManagedResource.apply(AbstractManagedResource.scala:50)
    at resource.DeferredExtractableManagedResource$$anonfun$tried$1.apply(AbstractManagedResource.scala:33)
    at scala.util.Try$.apply(Try.scala:192)
    at resource.DeferredExtractableManagedResource.tried(AbstractManagedResource.scala:33)
    at io.pivotal.greenplum.spark.jdbc.Jdbc$.copyTable(Jdbc.scala:83)
    at io.pivotal.greenplum.spark.externaltable.GreenplumRowIterator.liftedTree1$1(GreenplumRowIterator.scala:105)
    at io.pivotal.greenplum.spark.externaltable.GreenplumRowIterator.<init>(GreenplumRowIterator.scala:104)
    at io.pivotal.greenplum.spark.GreenplumRDD.compute(GreenplumRDD.scala:49)

I followed the steps exactly as they describe them in the official documentation.

Earlier I used the jar greenplum.jar, which worked fine but was slow, since it pulls the data through the Greenplum master. The jar greenplum-spark_2.11-1.6.0.jar is a connector jar that uses the gpfdist protocol to pull the data into HDFS.
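For reference, a minimal sketch of what that earlier, master-only read might look like with Spark's built-in JDBC source (connection details reused from above; the qualified table name is an assumption):

// Single JDBC connection through the Greenplum master: no gpfdist involved,
// which is why it works, but all rows funnel through one connection.
val jdbcDF = spark.read
  .format("jdbc")
  .option("url", "jdbc:postgresql://1.2.3.166:5432/finance?ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory")
  .option("dbtable", "schema.tablename")
  .option("user", "123415")
  .option("password", "etl_123")
  .load()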

In addition, the IP address changes in the exception message. You can see that the IP 1.2.3.166:5432 becomes 1.2.3.8:8020 (seg7 ip-1-3-3-196.ec2.internal:40003 pid=4062).

With the same number of executors and the same executor memory, I can retrieve the data using greenplum.jar. But keeping everything else identical and only changing the jar to greenplum-spark_2.11-1.6.0.jar, I run into this exception. I have been trying to resolve the issue, but I cannot understand this behavior at all. Could anyone tell me how to fix this problem?

apache-spark greenplum
2 Answers
0 votes

The Greenplum-Spark connector is designed to parallelize data transfer between Greenplum segments and Spark workers. To take full advantage of that parallel transfer, you must provide enough memory and enough Spark workers to speed it up. Otherwise, you can use greenplum.jar, which loads data from HDFS into the Greenplum database via the single Greenplum master over a single JDBC connection. Loading through the single Greenplum master is significantly slower.

A few things to check:
- Depending on the number of Greenplum segments, do you have enough Spark workers/executors to receive or send data between the Spark and Greenplum clusters?
- How much memory is allocated to your Spark workers/executors? Refer to the "Tuning Spark" documentation.

Given the message "java.lang.OutOfMemoryError: GC overhead limit exceeded" in your error log, I can assume that your Spark workers/executors do not have enough memory. You still need to tune your Spark workers so that they can parallelize the data load into HDFS; a sketch of that tuning follows below.
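A minimal sketch of raising those knobs when building the session, assuming the app name and the specific values are placeholders rather than recommendations (the same settings can also be passed to spark-submit as --num-executors, --executor-cores and --executor-memory):

import org.apache.spark.sql.SparkSession

// Sketch only: scale the executor count with the number of Greenplum segments
// and give each executor enough heap to avoid GC-overhead errors.
val spark = SparkSession.builder()
  .appName("greenplum-to-hdfs")              // placeholder app name
  .config("spark.executor.instances", "8")   // assumed value; match segment count
  .config("spark.executor.cores", "4")
  .config("spark.executor.memory", "8g")     // takes effect only before executors launch
  .getOrCreate()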


0 votes

Could you increase the number of partitions? Depending on the size of the table, you may need more. You can try raising the partition count to 30 and see whether you still run into the out-of-memory problem:

val yearDF = spark.read
  .format("io.pivotal.greenplum.spark.GreenplumRelationProvider")
  .option("url", "jdbc:postgresql://1.2.3.166:5432/finance?ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory")
  .option("server.port", "8020")
  .option("dbtable", "tablename")
  .option("dbschema", "schema")
  .option("user", "123415")
  .option("password", "etl_123")
  .option("partitionColumn", "je_id")
  .option("partitions", 30)
  .load()
  .where("period_year=2017 and period_num=12 and source_system_name='SSS'")
  .select(splitSeq map col: _*)
  .withColumn("flagCol", lit(0))

yearDF.write.format("csv").save("hdfs://dev/apps/hive/warehouse/header_test_data/")
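A quick way to confirm the setting took effect is to check how many partitions the resulting DataFrame has (standard Spark API):

// Should print 30 if the connector honored the "partitions" option
println(yearDF.rdd.getNumPartitions)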