I'm connected to my instance and I want to upload files generated by my Python script directly to S3. I've tried this:
import boto
s3 = boto.connect_s3()
bucket = s3.get_bucket('alexandrabucket')
from boto.s3.key import Key
key = bucket.new_key('s0').set_contents_from_string('some content')
But this creates a new key s0 with the content "some content", whereas I want to upload the directory s0 to mybucket.
I also had a look at s3put, but I didn't manage to get what I want.
The boto library itself doesn't have anything that will upload an entire directory for you. You can write your own code to traverse the directory using os.walk or similar, and upload each individual file with boto.
There is a command-line utility in boto called s3put that can handle this, or you could use the AWS CLI tool, which has a lot of functionality and allows you to upload entire directories or even sync an S3 bucket with a local directory, and vice versa.
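With the AWS CLI, for example, uploading a whole directory is a one-liner (the bucket name mybucket and local path s0/ below are placeholders for your own):

```shell
# Recursively copy a local directory into the bucket
aws s3 cp s0/ s3://mybucket/s0/ --recursive

# Or keep the bucket prefix in sync with the local directory
# (only new and changed files are uploaded)
aws s3 sync s0/ s3://mybucket/s0/
```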
The following function can be used to upload a directory to S3 via boto3:
import os
import boto3

s3C = boto3.client('s3')

def uploadDirectory(path, bucketname):
    for root, dirs, files in os.walk(path):
        for file in files:
            s3C.upload_file(os.path.join(root, file), bucketname, file)
Provide the path to the directory and the bucket name as inputs. The files are placed directly in the bucket. Alter the last argument of the upload_file() call to place them under "directories".
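Note that the function above uses the bare file name as the key, so two files with the same name in different subdirectories will overwrite each other in the bucket. One way to keep the directory structure is to build each key from the file's path relative to the uploaded root. A minimal sketch of just that key computation (the s3_key_pairs name and the optional prefix argument are mine; each resulting pair would then go to the same client.upload_file(Filename, Bucket, Key) call as above):

```python
import os

def s3_key_pairs(path, prefix=""):
    """Yield (local_path, s3_key) pairs for every file under path,
    with each key mirroring the file's position relative to path."""
    for root, _dirs, files in os.walk(path):
        for name in files:
            local_path = os.path.join(root, name)
            # Relative path, with OS separators normalised to '/' for S3
            rel = os.path.relpath(local_path, path).replace(os.sep, "/")
            yield local_path, (prefix + "/" + rel if prefix else rel)
```

For example, s3_key_pairs('/tmp/s0', prefix='s0') maps /tmp/s0/sub/a.txt to the key s0/sub/a.txt.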
You can do the following:
import os
import boto3

s3_resource = boto3.resource("s3", region_name="us-east-1")

def upload_objects():
    try:
        bucket_name = "S3_Bucket_Name"  # s3 bucket name
        root_path = 'D:/sample/'  # local folder for upload

        my_bucket = s3_resource.Bucket(bucket_name)

        for path, subdirs, files in os.walk(root_path):
            path = path.replace("\\", "/")
            directory_name = path.replace(root_path, "")
            for file in files:
                # Avoid a leading '/' in the key for files at the top level
                key = directory_name + '/' + file if directory_name else file
                my_bucket.upload_file(os.path.join(path, file), key)

    except Exception as err:
        print(err)

if __name__ == '__main__':
    upload_objects()
To read the files from the folder, we can use:
import boto
from boto.s3.key import Key

keyId = 'YOUR_AWS_ACCESS_KEY_ID'
sKeyId = 'YOUR_AWS_SECRET_ACCESS_KEY'  # secret access key, not the access key ID
bucketName = 'your_bucket_name'

conn = boto.connect_s3(keyId, sKeyId)
bucket = conn.get_bucket(bucketName)

for key in bucket.list():
    print(">>>>>" + key.name)
    pathV = key.name.split('/')
    if pathV[0] == "data":
        if pathV[1] != "":
            srcFileName = key.name
            filename = key.name.split('/')[1]
            destFileName = "model/data/" + filename
            k = Key(bucket, srcFileName)
            k.get_contents_to_filename(destFileName)
    elif pathV[0] == "nlu_data":
        if pathV[1] != "":
            srcFileName = key.name
            filename = key.name.split('/')[1]
            destFileName = "model/nlu_data/" + filename
            k = Key(bucket, srcFileName)
            k.get_contents_to_filename(destFileName)
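The two branches above differ only in the prefix they match and the destination folder, so that key-to-destination logic can be factored into one small function. A sketch under the same assumptions (the data/nlu_data prefixes and the model/... destinations come from the code above; dest_for_key is a name of my own, and the download itself would still be k.get_contents_to_filename(...)):

```python
def dest_for_key(key_name, mapping):
    """Return the local destination path for an S3 key, or None if its
    top-level prefix is not in mapping or the key is a bare prefix."""
    parts = key_name.split('/')
    if len(parts) < 2 or parts[1] == "":
        return None  # no file part after the prefix, e.g. "data/" or "data"
    folder = mapping.get(parts[0])
    if folder is None:
        return None  # prefix we are not interested in
    return folder + "/" + parts[1]

# Prefix-to-folder mapping taken from the two branches above
mapping = {"data": "model/data", "nlu_data": "model/nlu_data"}
```

Keys whose prefix isn't in the mapping, or bare "folder" keys like data/, return None and can simply be skipped in the loop.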