Alpakka S3 connection issue

Problem description

I am trying to use Alpakka S3 to connect to a MinIO instance for storing files, but I have been running into problems since upgrading the library from version 1.1.2 to 2.0.0.

Here is a simple service class with just two methods that try to create a bucket. I tried two approaches: first, loading the Alpakka settings from the local configuration file (application.conf in my case), and second, creating the settings directly via S3Ext.

Both approaches fail, and I am not sure what the problem is. Judging from the errors, the settings do not seem to be loaded correctly, but I do not know what I am doing wrong.

What I am using:

  • Play Framework 2.8.1
  • Scala 2.13.2
  • akka-stream-alpakka-s3 2.0.0

Here is the service class:

package services

import akka.actor.ActorSystem
import akka.stream.alpakka.s3._
import akka.stream.alpakka.s3.scaladsl.S3
import akka.stream.scaladsl.Sink
import akka.stream.{Attributes, Materializer}
import javax.inject.{Inject, Singleton}
import software.amazon.awssdk.auth.credentials.{AwsBasicCredentials, AwsCredentials, AwsCredentialsProvider}
import software.amazon.awssdk.regions.Region
import software.amazon.awssdk.regions.providers.AwsRegionProvider

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

@Singleton
class AlpakkaS3PlaygroundService @Inject()(
    materializer: Materializer,
    system: ActorSystem,
) {

  def makeBucket(bucketName: String): Future[String] = {
    S3.makeBucket(bucketName)(materializer) map { _ =>
      "bucket created"
    }
  }

  def makeBucket2(bucketName: String): Future[String] = {

    val s3Host      = "http://localhost:9000"
    val s3AccessKey = "access_key"
    val s3SecretKey = "secret_key"
    val s3Region    = "eu-central-1"

    val credentialsProvider = new AwsCredentialsProvider {
      override def resolveCredentials(): AwsCredentials = AwsBasicCredentials.create(s3AccessKey, s3SecretKey)
    }

    val regionProvider = new AwsRegionProvider {
      override def getRegion: Region = Region.of(s3Region)
    }

    val settings: S3Settings = S3Ext(system).settings
      .withEndpointUrl(s3Host)
      .withBufferType(MemoryBufferType)
      .withCredentialsProvider(credentialsProvider)
      .withListBucketApiVersion(ApiVersion.ListBucketVersion2)
      .withS3RegionProvider(regionProvider)

    val attributes: Attributes = S3Attributes.settings(settings)

    S3.makeBucketSource(bucketName)
      .withAttributes(attributes)
      .runWith(Sink.head)(materializer) map { _ =>
      "bucket created"
    }
  }
}

The configuration in application.conf looks like this:

akka.stream.alpakka.s3 {
  aws {
    credentials {
      provider = static
      access-key-id = "access_key"
      secret-access-key = "secret_key"
    }
    region {
      provider = static
      default-region = "eu-central-1"
    }
  }
  endpoint-url = "http://localhost:9000"
}

When I use the first method of the service (makeBucket(...)), I see this error:

SdkClientException: Unable to load region from any of the providers in the chain software.amazon.awssdk.regions.providers.DefaultAwsRegionProviderChain@34cb16dc: 
[software.amazon.awssdk.regions.providers.SystemSettingsRegionProvider@804e08b: Unable to load region from system settings. Region must be specified either via environment variable (AWS_REGION) or  system property (aws.region)., software.amazon.awssdk.regions.providers.AwsProfileRegionProvider@4d5f4b4d: No region provided in profile: default, software.amazon.awssdk.regions.providers.InstanceProfileRegionProvider@557feb58: Unable to contact EC2 metadata service.]

The error message is fairly clear and I understand what is going wrong, but I just do not know what to do about it, since I specified the settings exactly as outlined in the documentation. Any ideas?

In the second method of the service (makeBucket2(...)) I tried to set the S3 settings explicitly, but that does not seem to work either. The error looks like this:

play.api.http.HttpErrorHandlerExceptions$$anon$1: Execution exception[[S3Exception: 404 page not found
]]
    at play.api.http.HttpErrorHandlerExceptions$.throwableToUsefulException(HttpErrorHandler.scala:335)
    at play.api.http.DefaultHttpErrorHandler.onServerError(HttpErrorHandler.scala:253)
    at play.core.server.AkkaHttpServer$$anonfun$2.applyOrElse(AkkaHttpServer.scala:424)
    at play.core.server.AkkaHttpServer$$anonfun$2.applyOrElse(AkkaHttpServer.scala:420)
    at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:453)
    at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
    at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:92)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94)
    at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:92)
    at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:47)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:47)
    at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
    at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
    at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
    at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
    at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:177)
Caused by: akka.stream.alpakka.s3.S3Exception: 404 page not found

Here the defined settings do not seem to be taken into account at all, since the service apparently cannot be reached. This is actually the approach I used in an older version of the software, with akka-stream-alpakka-s3 1.1.2, and there it worked as expected.

Of course I want to do more with Alpakka S3 than just create buckets, but I kept this example minimal to showcase and outline my problem. I assume that once this is solved, all the other methods Alpakka provides will work as well.

I have gone through the documentation several times but still cannot resolve this, so I hope someone here can help me.


scala amazon-s3 playframework akka alpakka
1 Answer

Since at least version 2.0.0, the configuration path for Alpakka S3 is alpakka.s3 instead of akka.stream.alpakka.s3:

alpakka.s3 {
  aws {
    credentials {
      provider = static
      access-key-id = "access_key"
      secret-access-key = "secret_key"
    }
    region {
      provider = static
      default-region = "eu-central-1"
    }
  }
  endpoint-url = "http://localhost:9000"
}
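With the section renamed, the config-based approach (makeBucket(...)) should pick up the endpoint, credentials and region automatically. If you prefer to load the settings explicitly and apply them per stream, a minimal sketch could look like the following; it assumes the S3Settings(Config) factory and the implicit Materializer-from-ActorSystem conversion available with Akka 2.6, so treat it as an illustration rather than a drop-in replacement:

import akka.Done
import akka.actor.ActorSystem
import akka.stream.alpakka.s3.scaladsl.S3
import akka.stream.alpakka.s3.{S3Attributes, S3Settings}
import akka.stream.scaladsl.Sink

import scala.concurrent.Future

// Sketch: read the renamed "alpakka.s3" section from application.conf
// and attach it to a single stream via attributes, as in makeBucket2 above.
def makeBucketFromConfig(bucketName: String)(implicit system: ActorSystem): Future[Done] = {
  val settings = S3Settings(system.settings.config.getConfig("alpakka.s3"))
  S3.makeBucketSource(bucketName)
    .withAttributes(S3Attributes.settings(settings))
    .runWith(Sink.head)
}

This keeps the endpoint and credentials in one place (application.conf) while still allowing per-stream overrides through S3Attributes, which is the same mechanism your second method already uses.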