Why do I get:

AmazonS3Exception: Forbidden; Request: HEAD https://my-bucket.s3.us-east-1.amazonaws.com some_folder/part-00001-tid-4355744669774358191-8bcd5132-03de-4047-83a5-51757d8a717a-21-1-c000.csv

when I try to write to S3 from Apache Spark/Databricks? Reading from this bucket (and others) works fine; only writes fail.
The S3 permissions look like this:
{
    "Statement": [
        {
            "Action": [
                "s3:ListBucket",
                "s3:GetObject"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::some-bucket",
                "arn:aws:s3:::my-bucket"
            ],
            "Sid": ""
        },
        {
            "Action": [
                "s3:PutObjectAcl",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::my-bucket/*",
            "Sid": ""
        }
    ],
    "Version": "2012-10-17"
}
How can I fix this?
All s3:*Object actions should have /* at the end of the Resource ARN, because they operate on objects rather than the bucket itself. A bare bucket ARN (without /*) only matches bucket-level actions such as s3:ListBucket. In your policy, s3:GetObject is granted only on the bucket ARNs, so the HEAD request Spark issues before writing (which is authorized as a GetObject) is denied.
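A corrected first statement might look like the following sketch: keep s3:ListBucket on the bucket ARNs and move s3:GetObject to the object-level /* ARNs (the bucket names are taken from your policy; adjust as needed):

```json
{
    "Statement": [
        {
            "Action": "s3:ListBucket",
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::some-bucket",
                "arn:aws:s3:::my-bucket"
            ],
            "Sid": ""
        },
        {
            "Action": "s3:GetObject",
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::some-bucket/*",
                "arn:aws:s3:::my-bucket/*"
            ],
            "Sid": ""
        }
    ]
}
```

This splits the statement because the two actions have different resource scopes: ListBucket is evaluated against the bucket, while GetObject is evaluated against individual object keys.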