How to delete or purge old files on S3?

0 votes · 6 answers

Is there an existing solution to delete any files older than x days?

amazon-s3 timestamp delete-file s3cmd purge
6 Answers
95 votes

Amazon recently introduced Object Expiration:

Amazon S3 Announces Object Expiration

Amazon S3 announced a new feature, Object Expiration, that allows you to schedule removal of your objects after a predefined time period. Using Object Expiration to schedule periodic removal of objects eliminates the need for you to identify objects for deletion and submit delete requests to Amazon S3.

You can define Object Expiration rules for a set of objects in your bucket. Each Object Expiration rule lets you specify a prefix and an expiration period in days. The prefix field (e.g. logs/) identifies the objects subject to the expiration rule, and the expiration period specifies the number of days from creation date (i.e. age) after which objects should be removed. Once objects are past their expiration date, they are queued for deletion. You will not be billed for storage of objects on or after their expiration date.
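
If you would rather set this up programmatically than in the console, here is a minimal boto3 sketch that creates an equivalent expiration rule; the bucket name, rule ID, logs/ prefix, and 30-day period are all placeholder values:

import boto3

s3 = boto3.client("s3")

# Expire objects under the logs/ prefix 30 days after creation.
# Bucket name, rule ID, prefix, and period are example values.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-old-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Expiration": {"Days": 30},
            }
        ]
    },
)

Once the rule is in place, S3 evaluates it roughly once a day; expired objects are queued for deletion and storage charges stop, as the announcement above describes.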


5 votes

Here is some information on how to do it...

http://docs.amazonwebservices.com/AmazonS3/latest/dev/ObjectExpiration.html

Hope this helps.


4 votes

Here is how to implement it with a CloudFormation template:

  JenkinsArtifactsBucket:
    Type: "AWS::S3::Bucket"
    Properties:
      BucketName: !Sub "jenkins-artifacts"
      LifecycleConfiguration:
        Rules:
          - Id: "remove-old-artifacts"
            ExpirationInDays: 3
            NoncurrentVersionExpirationInDays: 3
            Status: Enabled

This creates a lifecycle rule as explained by @Ravi Bhatt.

Read more about it here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-bucket-lifecycleconfig-rule.html

How object lifecycle management works: https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
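
After the stack deploys, you can read the configuration back to confirm the rule is attached. A small boto3 check, assuming the bucket name from the template above:

import boto3

s3 = boto3.client("s3")

# Read back the lifecycle configuration CloudFormation attached to the bucket.
response = s3.get_bucket_lifecycle_configuration(Bucket="jenkins-artifacts")
for rule in response["Rules"]:
    print(rule["ID"], rule["Status"], rule.get("Expiration"))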


2 votes

You can use AWS S3 lifecycle rules to expire files and delete them. All you have to do is select the bucket, click the "Add lifecycle rule" button, and configure it; AWS takes care of the rest for you.

You can refer to Joe's blog post below for step-by-step instructions; it is actually quite simple:

https://www.joe0.com/2017/05/24/amazon-s3-how-to-delete-files-older-than-x-days/

Hope that helps!


1 vote

Here is a Python script that deletes files older than N days:

from boto3 import client
from botocore.exceptions import ClientError
from datetime import datetime, timezone
import argparse

if __name__ == '__main__':

    parser = argparse.ArgumentParser()
    
    parser.add_argument('--access_key_id', required=True)
    parser.add_argument('--secret_access_key', required=True)
    parser.add_argument('--delete_after_retention_days', required=False, default=15)
    parser.add_argument('--bucket', required=True)
    parser.add_argument('--prefix', required=False, default="")
    parser.add_argument('--endpoint', required=True)

    args = parser.parse_args()

    access_key_id = args.access_key_id
    secret_access_key = args.secret_access_key
    delete_after_retention_days = int(args.delete_after_retention_days)
    bucket = args.bucket
    prefix = args.prefix
    endpoint = args.endpoint

    # get current date
    today = datetime.now(timezone.utc)

    # create an S3 client (the endpoint below points at Wasabi, but any
    # S3-compatible endpoint works); boto3 expects the aws_-prefixed kwargs
    s3_client = client(
        's3',
        endpoint_url=endpoint,
        aws_access_key_id=access_key_id,
        aws_secret_access_key=secret_access_key)

    try:
        # list all the buckets under the account
        list_buckets = s3_client.list_buckets()
    except ClientError:
        # invalid access keys
        raise Exception("Invalid Access or Secret key")

    # create a paginator for all objects.
    object_response_paginator = s3_client.get_paginator('list_object_versions')
    if len(prefix) > 0:
        operation_parameters = {'Bucket': bucket,
                                'Prefix': prefix}
    else:
        operation_parameters = {'Bucket': bucket}

    # instantiate temp variables.
    delete_list = []
    count_current = 0
    count_non_current = 0

    print("$ Paginating bucket " + bucket)
    for object_response_itr in object_response_paginator.paginate(**operation_parameters):
        # 'Versions' is absent when the bucket (or prefix) holds no objects
        for version in object_response_itr.get('Versions', []):
            if version["IsLatest"]:
                count_current += 1
            else:
                count_non_current += 1
            if (today - version['LastModified']).days > delete_after_retention_days:
                delete_list.append({'Key': version['Key'], 'VersionId': version['VersionId']})

    # print objects count
    print("-" * 20)
    print("$ Before deleting objects")
    print("$ current objects: " + str(count_current))
    print("$ non-current objects: " + str(count_non_current))
    print("-" * 20)

    # delete objects 1000 at a time
    print("$ Deleting objects from bucket " + bucket)
    for i in range(0, len(delete_list), 1000):
        response = s3_client.delete_objects(
            Bucket=bucket,
            Delete={
                'Objects': delete_list[i:i + 1000],
                'Quiet': True
            }
        )
        print(response)

    # reset counts
    count_current = 0
    count_non_current = 0

    # paginate and recount
    print("$ Paginating bucket " + bucket)
    for object_response_itr in object_response_paginator.paginate(**operation_parameters):
        for version in object_response_itr.get('Versions', []):
            if version["IsLatest"]:
                count_current += 1
            else:
                count_non_current += 1

    # print objects count
    print("-" * 20)
    print("$ After deleting objects")
    print("$ current objects: " + str(count_current))
    print("$ non-current objects: " + str(count_non_current))
    print("-" * 20)
    print("$ task complete")

Here is how I run it:

python s3_cleanup.py --access_key_id="access-key-here" --secret_access_key="secret-key-here" --endpoint="https://s3.us-west-1.wasabisys.com" --bucket="ondemand-downloads" --prefix="" --delete_after_retention_days=5

If you only want to delete files under a specific folder, pass the folder path via the prefix argument, as in the example below.
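
For example, to clean up only objects under a (hypothetical) logs/ folder:

python s3_cleanup.py --access_key_id="access-key-here" --secret_access_key="secret-key-here" --endpoint="https://s3.us-west-1.wasabisys.com" --bucket="ondemand-downloads" --prefix="logs/" --delete_after_retention_days=5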


0 votes

You can use the following PowerShell script to delete objects older than x days.

[CmdletBinding()]
Param(
  [Parameter(Mandatory=$True)]
  [string]$BUCKET_NAME,             #Name of the Bucket

  [Parameter(Mandatory=$True)]
  [string]$OBJ_PATH,                #Key prefix of s3 object (directory path)

  [Parameter(Mandatory=$True)]
  [int]$EXPIRY_DAYS                #Number of days after which objects should be deleted
)

$CURRENT_DATE = Get-Date
$OBJECTS = Get-S3Object $BUCKET_NAME -KeyPrefix $OBJ_PATH
Foreach($OBJ in $OBJECTS){
    IF($OBJ.key -ne $OBJ_PATH){
        # delete only objects that are OLDER than the retention window
        IF(($CURRENT_DATE - $OBJ.LastModified).Days -gt $EXPIRY_DAYS){
            Write-Host "Deleting Object= " $OBJ.key
            Remove-S3Object -BucketName $BUCKET_NAME -Key $OBJ.Key -Force
        }
    }
}