Reading from multiple S3 buckets in the same region with Spark


I am trying to read files from multiple S3 buckets.

Originally the buckets were supposed to be in different regions, but it looks like that is not possible.

So I have now copied the second bucket into the same region as the first bucket being read, which is also the region where the Spark job runs.
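(As an aside, per-bucket configuration in the Hadoop S3A connector (Hadoop 2.8+) can make cross-region reads possible. A minimal sketch, assuming s3a:// paths rather than EMR's default EMRFS s3:// scheme, and using the hypothetical bucket names and regions from this question:)

val crossRegionConf = new SparkConf()
  // point each bucket at its own region's S3 endpoint (names/regions are assumptions)
  .set("spark.hadoop.fs.s3a.bucket.someBucket.endpoint", "s3.eu-west-1.amazonaws.com")
  .set("spark.hadoop.fs.s3a.bucket.someBucket2.endpoint", "s3.us-east-1.amazonaws.com")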

SparkSession setup:

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

val sparkConf = new SparkConf()
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .registerKryoClasses(Array(classOf[Event]))

SparkSession.builder
  .appName("Merge application")
  .config(sparkConf)
  .getOrCreate()

The function that parses the events, called with an implicit SQLContext created from the SparkSession:

import scala.util.Try

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SQLContext

// Reads gzipped JSON from the given path and maps each record to an Event
private def parseEvents(bucketPath: String, service: String)(
    implicit sqlContext: SQLContext
): Try[RDD[Event]] =
  Try(
    sqlContext.read
      .option("codec", "org.apache.hadoop.io.compress.GzipCodec")
      .json(bucketPath)
      .toJSON
      .rdd
      .map(buildEvent(_, bucketPath, service).get)
  )

Main flow:

for {
      bucketOnePath               <- buildBucketPath(config.bucketOne.name)
      _                           <- log(s"Reading events from $bucketOnePath")
      bucketOneEvents: RDD[Event] <- parseEvents(bucketOnePath, config.service)
      _                           <- log(s"Enriching events from $bucketOnePath with originating region data")
      bucketOneEventsWithRegion: RDD[Event] <- enrichEventsWithRegion(
        bucketOneEvents,
        config.bucketOne.region
      )

      bucketTwoPath               <- buildBucketPath(config.bucketTwo.name)
      _                           <- log(s"Reading events from $bucketTwoPath")
      bucketTwoEvents: RDD[Event] <- parseEvents(config.bucketTwo.name, config.service)
      _                           <- log(s"Enriching events from $bucketTwoPath with originating region data")
      bucketTwoEventsWithRegion: RDD[Event] <- enrichEventsWithRegion(
        bucketTwoEvents,
        config.bucketTwo.region
      )

      _                        <- log("Merging events")
      mergedEvents: RDD[Event] <- merge(bucketOneEventsWithRegion, bucketTwoEventsWithRegion)
      if mergedEvents.isEmpty() == false
      _ <- log("Grouping merged events by partition key")
      mergedEventsByPartitionKey: RDD[(EventsPartitionKey, Iterable[Event])] <- eventsByPartitionKey(
        mergedEvents
      )

      _ <- log(s"Storing merged events to ${config.outputBucket.name}")
      _ <- store(config.outputBucket.name, config.service, mergedEventsByPartitionKey)
    } yield ()
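For reference, buildBucketPath is not shown in the question. Judging from the stdout output further down, it presumably wraps the bucket name in a recursive glob; a hypothetical reconstruction:

// Hypothetical: inferred from the logged path s3://someBucket/*/*/*/*/*.gz;
// the real implementation is not part of the question
private def buildBucketPath(bucketName: String): Try[String] =
  Try(s"s3://$bucketName/*/*/*/*/*.gz")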

The error I get in the logs (the actual bucket names have been changed, but the real names do exist):

19/04/09 13:10:20 INFO SparkContext: Created broadcast 4 from rdd at MergeApp.scala:141
19/04/09 13:10:21 INFO FileSourceScanExec: Planning scan with bin packing, max size: 134217728 bytes, open cost is considered as scanning 4194304 bytes.
org.apache.spark.sql.AnalysisException: Path does not exist: hdfs:someBucket2

My stdout logging shows how far the main flow gets before failing:

Reading events from s3://someBucket/*/*/*/*/*.gz
Enriching events from s3://someBucket/*/*/*/*/*.gz with originating region data
Reading events from s3://someBucket2/*/*/*/*/*.gz
Merge failed: Path does not exist: hdfs://someBucket2

Strangely, the first read always works, no matter which bucket I choose. But the second read always fails, no matter which bucket it is. This tells me there is nothing wrong with the buckets themselves, but that something odd happens when reading from multiple S3 buckets.

I can only find threads about reading multiple files from a single S3 bucket, not multiple files from multiple S3 buckets.

Any ideas?

apache-spark amazon-s3 amazon-emr
1 Answer

You are missing the s3:// prefix on the someBucket2 path, so Spark tries (by default) to find it in HDFS. Note that the second read passes config.bucketTwo.name to parseEvents instead of the bucketTwoPath built just above it.
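In other words, the fix is a one-line change in the for-comprehension, reusing the bucketTwoPath that was already built:

// Before: passes the raw bucket name, which has no s3:// scheme,
// so Spark resolves it against the default filesystem (HDFS)
bucketTwoEvents: RDD[Event] <- parseEvents(config.bucketTwo.name, config.service)

// After: pass the built path, e.g. s3://someBucket2/*/*/*/*/*.gz
bucketTwoEvents: RDD[Event] <- parseEvents(bucketTwoPath, config.service)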
