Reading AWS DynamoDB records from Apache Spark always returns an empty dataset


I was following this article, and I want to read data from DynamoDB in my Spark job.

The problem is that the dataset I read from DynamoDB is always empty.

I know this because the following statement:

System.out.println("Citations count: " + citations.count());
prints zero, but the table itself is not empty: it contains 12,801 items, which I verified in the AWS console.
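
For reference, here is a minimal sketch of how that count could also be checked programmatically with the DynamoDB SDK already on the classpath (this is just an illustration, assuming the same table name and region that the job configuration below uses; note that ItemCount is an approximate value that DynamoDB refreshes only periodically):

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;

public class TableCountCheck {

    public static void main(String[] args) {
        // Client in the same region as the table (us-east-1, per the job configuration below)
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
                .withRegion("us-east-1")
                .build();
        // DescribeTable returns an approximate item count, refreshed by DynamoDB roughly every six hours
        long itemCount = client.describeTable("Covid19Citation").getTable().getItemCount();
        System.out.println("Approximate item count: " + itemCount);
    }
}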

Here is my code:

import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.lcarvalho.sparkddb.config.SparkConfiguration;
import java.util.Arrays;
import java.util.Map;
import org.apache.hadoop.dynamodb.DynamoDBItemWritable;
import org.apache.hadoop.dynamodb.read.DynamoDBInputFormat;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.log4j.Level;
import org.apache.log4j.LogManager;
import org.apache.log4j.Logger;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class Covid19CitationsWordCount {

    private static final String APP_NAME = "Covid19CitationsWordCount";
    private static Logger LOGGER = LogManager.getLogger(Covid19CitationsWordCount.class);

    public static void main(String[] args) throws Exception {

        Logger.getLogger("org").setLevel(Level.ERROR);

        // Building the Spark session
        JavaSparkContext sparkContext = SparkConfiguration.buildSparkContext(APP_NAME);
        JobConf jobConf = SparkConfiguration.buildJobConf(sparkContext, Boolean.FALSE);
        // Building a RDD pointing to the DynamoDB table
        JavaPairRDD<Text, DynamoDBItemWritable> citations =
            sparkContext.hadoopRDD(jobConf, DynamoDBInputFormat.class, Text.class, DynamoDBItemWritable.class);
        System.out.println("Citations count: " + citations.count());

        // Filtering citations published in 2020
        JavaPairRDD<Text, DynamoDBItemWritable> filteredCitations = citations.filter(citation -> {
            DynamoDBItemWritable item = citation._2();
            Map<String, AttributeValue> attributes = item.getItem();
            return attributes.get("publishedYear") == null ? Boolean.FALSE : attributes.get("publishedYear").getN().equals("2020");
        });
        System.out.println("Filtered citations count: " + filteredCitations.count());

        // Building a RDD with the citations titles
        JavaRDD<String> citationsTitles = filteredCitations.map(citation -> {
            DynamoDBItemWritable item = citation._2();
            Map<String, AttributeValue> attributes = item.getItem();
            return attributes.get("title").getS();
        });

        // Building a RDD with the citations titles words
        JavaRDD<String> citationsTitlesWords = citationsTitles.flatMap(citationTitle -> Arrays
            .asList(citationTitle.toString().split(" ")).iterator());
        System.out.println("Citations titles word count: " + citationsTitlesWords.count());

        // Counting the number of times each word was used
        Map<String, Long> wordCounts = citationsTitlesWords.countByValue();
        System.out.println("Citations titles distinct word count: " + wordCounts.size());

        for (String word : wordCounts.keySet()) {
            System.out.println("Word: " + word + " - Count: " + wordCounts.get(word));
        }
    }
}

And here is the SparkConfiguration class I am using:

import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import org.apache.hadoop.mapred.JobConf;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class SparkConfiguration {

    public static JobConf buildJobConf(JavaSparkContext javaSparkContext, final boolean useDefaultAWSCredentials) {

        final JobConf jobConf = new JobConf(javaSparkContext.hadoopConfiguration());
        jobConf.set("dynamodb.servicename", "dynamodb");
        jobConf.set("dynamodb.input.tableName", "Covid19Citation");
        jobConf.set("dynamodb.output.tableName", "Covid19Citation");
        jobConf.set("dynamodb.endpoint", "dynamodb.us-east-1.amazonaws.com");
        jobConf.set("dynamodb.regionid", "us-east-1");
        jobConf.set("mapred.output.format.class", "org.apache.hadoop.dynamodb.write.DynamoDBOutputFormat");
        jobConf.set("mapred.input.format.class", "org.apache.hadoop.dynamodb.read.DynamoDBInputFormat");

        if (useDefaultAWSCredentials) {
            DefaultAWSCredentialsProviderChain defaultAWSCredentialsProviderChain = new DefaultAWSCredentialsProviderChain();
            jobConf.set("dynamodb.awsAccessKeyId", defaultAWSCredentialsProviderChain.getCredentials().getAWSAccessKeyId());
            jobConf.set("dynamodb.awsSecretAccessKey", defaultAWSCredentialsProviderChain.getCredentials().getAWSSecretKey());
        }
        return jobConf;
    }

    public static JavaSparkContext buildSparkContext(String application) throws ClassNotFoundException {
        SparkConf conf = new SparkConf()
                .setAppName(application)
                .registerKryoClasses(new Class<?>[]{
                        Class.forName("org.apache.hadoop.io.Text"),
                        Class.forName("org.apache.hadoop.dynamodb.DynamoDBItemWritable")
                });
        return new JavaSparkContext(conf);
    }

    public static JavaSparkContext buildLocalSparkContext(String application) throws ClassNotFoundException {
        SparkConf conf = new SparkConf()
                .setMaster("local[2]")
                .setAppName(application)
                .registerKryoClasses(new Class<?>[]{
                    Class.forName("org.apache.hadoop.io.Text"),
                    Class.forName("org.apache.hadoop.dynamodb.DynamoDBItemWritable")
                });
        return new JavaSparkContext(conf);
    }
}

What I have tried so far:

  1. I double-checked the table name; it is correct.
  2. I checked the permissions of the user whose awsAccessKeyId and awsSecretAccessKey I am providing; it has full administrator access.

I have no idea how to fix this. Any help would be greatly appreciated.
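
One diagnostic I could add right after building the RDD (a sketch only, not part of the original job) is to print how many partitions, i.e. input splits, Spark actually created for the table, and to try pulling a single raw record:

        // If the table is being read at all, the input format should report at least one split
        System.out.println("Number of partitions: " + citations.getNumPartitions());
        // Try to fetch one raw record directly, bypassing count()
        System.out.println("First record (if any): " + citations.take(1));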

Note: the Spark job completes without any errors, and the stdout log looks like this:

Citations count: 0
Filtered citations count: 0
Citations titles word count: 0
Citations titles distinct word count: 0

And here is the stderr log:

23/06/11 22:07:00 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
23/06/11 22:07:02 WARN DependencyUtils: Skip remote jar s3://sitemapgenerationscheduledjob/jars/Covid19CitationsWordCount-1.0.jar.
23/06/11 22:07:02 INFO DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at ip-172-31-11-132.ec2.internal/172.31.11.132:8032
23/06/11 22:07:03 INFO Configuration: resource-types.xml not found
23/06/11 22:07:03 INFO ResourceUtils: Unable to find 'resource-types.xml'.
23/06/11 22:07:03 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (12288 MB per container)
23/06/11 22:07:03 INFO Client: Will allocate AM container, with 2432 MB memory including 384 MB overhead
23/06/11 22:07:03 INFO Client: Setting up container launch context for our AM
23/06/11 22:07:03 INFO Client: Setting up the launch environment for our AM container
23/06/11 22:07:03 INFO Client: Preparing resources for our AM container
23/06/11 22:07:03 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
23/06/11 22:07:06 INFO Client: Uploading resource file:/mnt/tmp/spark-478c0486-9bbc-413f-896a-13839434207e/__spark_libs__4221092772806684590.zip -> hdfs://ip-172-31-11-132.ec2.internal:8020/user/hadoop/.sparkStaging/application_1686521110686_0001/__spark_libs__4221092772806684590.zip
23/06/11 22:07:07 INFO Client: Uploading resource s3://sitemapgenerationscheduledjob/jars/Covid19CitationsWordCount-1.0.jar -> hdfs://ip-172-31-11-132.ec2.internal:8020/user/hadoop/.sparkStaging/application_1686521110686_0001/Covid19CitationsWordCount-1.0.jar
23/06/11 22:07:07 INFO S3NativeFileSystem: Opening 's3://sitemapgenerationscheduledjob/jars/Covid19CitationsWordCount-1.0.jar' for reading
23/06/11 22:07:09 INFO Client: Uploading resource file:/etc/hudi/conf.dist/hudi-defaults.conf -> hdfs://ip-172-31-11-132.ec2.internal:8020/user/hadoop/.sparkStaging/application_1686521110686_0001/hudi-defaults.conf
23/06/11 22:07:10 INFO Client: Uploading resource file:/mnt/tmp/spark-478c0486-9bbc-413f-896a-13839434207e/__spark_conf__4224384365241794246.zip -> hdfs://ip-172-31-11-132.ec2.internal:8020/user/hadoop/.sparkStaging/application_1686521110686_0001/__spark_conf__.zip
23/06/11 22:07:10 INFO SecurityManager: Changing view acls to: hadoop
23/06/11 22:07:10 INFO SecurityManager: Changing modify acls to: hadoop
23/06/11 22:07:10 INFO SecurityManager: Changing view acls groups to: 
23/06/11 22:07:10 INFO SecurityManager: Changing modify acls groups to: 
23/06/11 22:07:10 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(hadoop); groups with view permissions: Set(); users  with modify permissions: Set(hadoop); groups with modify permissions: Set()
23/06/11 22:07:10 INFO Client: Submitting application application_1686521110686_0001 to ResourceManager
23/06/11 22:07:10 INFO YarnClientImpl: Submitted application application_1686521110686_0001
23/06/11 22:07:11 INFO Client: Application report for application_1686521110686_0001 (state: ACCEPTED)
23/06/11 22:07:11 INFO Client: 
     client token: N/A
     diagnostics: AM container is launched, waiting for AM container to Register with RM
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1686521230409
     final status: UNDEFINED
     tracking URL: http://ip-172-31-11-132.ec2.internal:20888/proxy/application_1686521110686_0001/
     user: hadoop
23/06/11 22:07:12 INFO Client: Application report for application_1686521110686_0001 (state: ACCEPTED)
23/06/11 22:07:13 INFO Client: Application report for application_1686521110686_0001 (state: ACCEPTED)
23/06/11 22:07:14 INFO Client: Application report for application_1686521110686_0001 (state: ACCEPTED)
23/06/11 22:07:15 INFO Client: Application report for application_1686521110686_0001 (state: ACCEPTED)
23/06/11 22:07:16 INFO Client: Application report for application_1686521110686_0001 (state: ACCEPTED)
23/06/11 22:07:17 INFO Client: Application report for application_1686521110686_0001 (state: ACCEPTED)
23/06/11 22:07:18 INFO Client: Application report for application_1686521110686_0001 (state: ACCEPTED)
23/06/11 22:07:19 INFO Client: Application report for application_1686521110686_0001 (state: RUNNING)
23/06/11 22:07:19 INFO Client: 
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: ip-172-31-5-161.ec2.internal
     ApplicationMaster RPC port: 41187
     queue: default
     start time: 1686521230409
     final status: UNDEFINED
     tracking URL: http://ip-172-31-11-132.ec2.internal:20888/proxy/application_1686521110686_0001/
     user: hadoop
23/06/11 22:07:20 INFO Client: Application report for application_1686521110686_0001 (state: RUNNING)
23/06/11 22:07:21 INFO Client: Application report for application_1686521110686_0001 (state: FINISHED)
23/06/11 22:07:21 INFO Client: 
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: ip-172-31-5-161.ec2.internal
     ApplicationMaster RPC port: 41187
     queue: default
     start time: 1686521230409
     final status: SUCCEEDED
     tracking URL: http://ip-172-31-11-132.ec2.internal:20888/proxy/application_1686521110686_0001/
     user: hadoop
23/06/11 22:07:21 INFO ShutdownHookManager: Shutdown hook called
23/06/11 22:07:22 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-478c0486-9bbc-413f-896a-13839434207e
23/06/11 22:07:22 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-bdd3e76a-21a7-4b42-a853-3523f5233ace
Command exiting with ret '0'
apache-spark amazon-dynamodb amazon-emr
1 Answer

I was running into the same problem, until today when I found this in the Amazon EMR release notes:

https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-680-release.html

Under known issues, they state:

When you use the DynamoDB connector with Spark on Amazon EMR versions 6.6.0, 6.7.0, and 6.8.0, all reads from your table return an empty result, even though the input split references non-empty data. This is because Spark 3.2.0 sets spark.hadoopRDD.ignoreEmptySplits to true by default. As a workaround, explicitly set spark.hadoopRDD.ignoreEmptySplits to false. Amazon EMR release 6.9.0 fixes this issue.

If you are on EMR, check your release version and set that configuration value.
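
For example, a minimal sketch of applying that workaround to the buildSparkContext method from the question (everything else unchanged):

    public static JavaSparkContext buildSparkContext(String application) throws ClassNotFoundException {
        SparkConf conf = new SparkConf()
                .setAppName(application)
                // Workaround for the EMR 6.6.0 - 6.8.0 known issue: keep input splits that
                // Spark 3.2.0 would otherwise treat as empty and silently discard
                .set("spark.hadoopRDD.ignoreEmptySplits", "false")
                .registerKryoClasses(new Class<?>[]{
                        Class.forName("org.apache.hadoop.io.Text"),
                        Class.forName("org.apache.hadoop.dynamodb.DynamoDBItemWritable")
                });
        return new JavaSparkContext(conf);
    }

The same setting can also be passed at submit time with --conf spark.hadoopRDD.ignoreEmptySplits=false, or you can move to EMR 6.9.0 or later, where the issue is fixed.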
