AWS Lambda Serverless Framework deployment error

I am trying to deploy my Lambda functions to AWS Lambda with the Serverless Framework using this command:

serverless deploy --stage dev --region eu-central-1

Here is my serverless.yml file:

service: sensor-processor-v3


plugins:
  - serverless-webpack
  # - serverless-websockets-plugin

custom:
  secrets: ${file(secrets.yml):${self:provider.stage}}
  accessLogOnStage:
    dev: true
    prod: true
  nodeEnv:
    dev: development
    prod: production
  mqArn:
    dev:
    prod:

provider:
  name: aws
  runtime: nodejs18.x
  stage: ${opt:stage, 'dev'}
  region: eu-central-1
  logs:
    accessLogging: ${self:custom.accessLogOnStage.${self:provider.stage}}
    executionLogging: ${self:custom.accessLogOnStage.${self:provider.stage}}

  logRetentionInDays: 14
  memorySize: 128
  timeout: 30
  endpointType: REGIONAL
  environment:
    STAGE: ${self:provider.stage}
    NODE_ENV: ${self:custom.nodeEnv.${self:provider.stage}}
    REDIS_HOST_RW: !GetAtt RedisCluster.PrimaryEndPoint.Address
    REDIS_HOST_RO: !GetAtt RedisCluster.ReaderEndPoint.Address
    REDIS_PORT: !GetAtt RedisCluster.PrimaryEndPoint.Port
    SNIPEIT_INSTANCE_URL: ${self:custom.secrets.SNIPEIT_INSTANCE_URL}
    SNIPEIT_API_TOKEN: ${self:custom.secrets.SNIPEIT_API_TOKEN}
  apiGateway:
    apiKeySelectionExpression:
    apiKeySourceType: AUTHORIZER
    apiKeys:
      - ${self:service}-${self:provider.stage}
  iam:
    role:
      statements:
        - Effect: Allow
          Action:
            - ec2:CreateNetworkInterface
            - ec2:DescribeNetworkInterfaces
            - ec2:DeleteNetworkInterface
          Resource: "*"
        - Effect: Allow
          Action:
            - "dynamodb:PutItem"
            - "dynamodb:Query"
          Resource: { Fn::GetAtt: [ theThingsNetwork, Arn ] }
        - Effect: Allow
          Action:
            - "dynamodb:PutItem"
            - "dynamodb:Query"
          Resource: { Fn::GetAtt: [ loriotTable, Arn ] }
        - Effect: Allow
          Action:
            - firehose:DeleteDeliveryStream
            - firehose:PutRecord
            - firehose:PutRecordBatch
            - firehose:UpdateDestination
          Resource: '*'
        - Effect: Allow
          Action: lambda:InvokeFunction
          Resource: '*'
        - Effect: Allow
          Action:
            - s3:GetObject
            - s3:ListBucket
            - s3:PutObject
          Resource:
            - arn:aws:s3:::sensor-processor-v3-prod
            - arn:aws:s3:::sensor-processor-v3-prod/*
            - arn:aws:s3:::sensor-processor-v3-dev
            - arn:aws:s3:::sensor-processor-v3-dev/*
            - arn:aws:s3:::datawarehouse-redshift-dev
            - arn:aws:s3:::datawarehouse-redshift-dev/*
            - arn:aws:s3:::datawarehouse-redshift
            - arn:aws:s3:::datawarehouse-redshift/*

package:
  patterns:
    - '!README.md'
    - '!tools/rename-script.js'
    - '!secrets*'

functions:
  authorizer:
    handler: src/authorizer.handler
    memorySize: 128
    environment:
      STAGE: ${self:provider.stage}
      API_KEY_ALLOW: ${self:custom.secrets.API_KEY_ALLOW}
      USAGE_API_KEY: ${self:custom.secrets.USAGE_API_KEY}
  ibasxSend:
    handler: src/ibasxSend.ibasxSend
    memorySize: 256
    environment:
      IBASX_CLIENT_TOKEN: ${self:custom.secrets.IBASX_CLIENT_TOKEN}
      IBASX_CLIENT_URL: ${self:custom.secrets.IBASX_CLIENT_URL}
      NODE_TLS_REJECT_UNAUTHORIZED: 0
  processIbasxPayload:
    handler: src/processIbasxPayload.processor
    memorySize: 384
    timeout: 20
    environment:
      STAGE: ${self:provider.stage}
      LORIOT_DB: { Ref: loriotTable }
      OCCUPANCY_STREAM_NAME: { Ref: firehose }
      ATMOSPHERIC_STREAM_NAME: { Ref: AtmosphericFirehose }
      PEOPLE_STREAM_NAME: { Ref: PeopleFirehose }
      IBASX_CLIENT_TOKEN: ${self:custom.secrets.IBASX_CLIENT_TOKEN}
      IBASX_CLIENT_URL: ${self:custom.secrets.IBASX_CLIENT_URL}
      IBASX_DATA_SYNC_DB_NAME: ${self:custom.secrets.IBASX_DATA_SYNC_DB_NAME}
      IBASX_DATA_SYNC_DB_USER: ${self:custom.secrets.IBASX_DATA_SYNC_DB_USER}
      IBASX_DATA_SYNC_DB_PASSWORD: ${self:custom.secrets.IBASX_DATA_SYNC_DB_PASSWORD}
      IBASX_DATA_SYNC_DB_HOST: ${self:custom.secrets.IBASX_DATA_SYNC_DB_HOST}
      IBASX_DATA_SYNC_DB_PORT: ${self:custom.secrets.IBASX_DATA_SYNC_DB_PORT}
      REDSHIFT_DB_NAME: ${self:custom.secrets.REDSHIFT_DB_NAME}
      REDSHIFT_DB_USER: ${self:custom.secrets.REDSHIFT_DB_USER}
      REDSHIFT_DB_PASSWORD: ${self:custom.secrets.REDSHIFT_DB_PASSWORD}
      REDSHIFT_DB_HOST: ${self:custom.secrets.REDSHIFT_DB_HOST}
      REDSHIFT_DB_PORT: ${self:custom.secrets.REDSHIFT_DB_PORT}
      CLIMATE_DB_NAME: "${self:custom.secrets.CLIMATE_DB_NAME}"
      CLIMATE_DB_HOST: "${self:custom.secrets.CLIMATE_DB_HOST}"
      CLIMATE_DB_USER: "${self:custom.secrets.CLIMATE_DB_USER}"
      CLIMATE_DB_PASSWORD: "${self:custom.secrets.CLIMATE_DB_PASSWORD}"
      CLIMATE_DB_PORT: "${self:custom.secrets.CLIMATE_DB_PORT}"
      FEATURES: snipeId
    vpc:
      securityGroupIds:
        - sg-0d7ec27d8c3e59a5f
      subnetIds:
        - subnet-093295e049fd0b192
        - subnet-0b4dd59bec892f1b5
        - subnet-0ba4e03f8d83d5cd4
  loriotConnector:
    handler: src/loriotConnector.connector
    memorySize: 384
    timeout: 20
    environment:
      STAGE: ${self:provider.stage}
      LORIOT_DB: { Ref: loriotTable }
      OCCUPANCY_STREAM_NAME: { Ref: firehose }
      ATMOSPHERIC_STREAM_NAME: { Ref: AtmosphericFirehose }
      PEOPLE_STREAM_NAME: { Ref: PeopleFirehose }
      IBASX_CLIENT_TOKEN: ${self:custom.secrets.IBASX_CLIENT_TOKEN}
      IBASX_CLIENT_URL: ${self:custom.secrets.IBASX_CLIENT_URL}
      REDSHIFT_DB_NAME: ${self:custom.secrets.REDSHIFT_DB_NAME}
      REDSHIFT_DB_USER: ${self:custom.secrets.REDSHIFT_DB_USER}
      REDSHIFT_DB_PASSWORD: ${self:custom.secrets.REDSHIFT_DB_PASSWORD}
      REDSHIFT_DB_HOST: ${self:custom.secrets.REDSHIFT_DB_HOST}
      REDSHIFT_DB_PORT: ${self:custom.secrets.REDSHIFT_DB_PORT}
      CLIMATE_DB_NAME: "${self:custom.secrets.CLIMATE_DB_NAME}"
      CLIMATE_DB_HOST: "${self:custom.secrets.CLIMATE_DB_HOST}"
      CLIMATE_DB_USER: "${self:custom.secrets.CLIMATE_DB_USER}"
      CLIMATE_DB_PASSWORD: "${self:custom.secrets.CLIMATE_DB_PASSWORD}"
      CLIMATE_DB_PORT: "${self:custom.secrets.CLIMATE_DB_PORT}"
    vpc:
      securityGroupIds:
        - sg-0d7ec27d8c3e59a5f
      subnetIds:
        - subnet-093295e049fd0b192
        - subnet-0b4dd59bec892f1b5
        - subnet-0ba4e03f8d83d5cd4
    events:
      - http:
          path: loriot/uplink
          method: post
          # private: true
          authorizer:
            type: TOKEN
            name: authorizer
            identitySource: method.request.header.Authorization
  ibasxDiagnostics:
    handler: src/ibasxDiagnostics.diagnostics
    memorySize: 256
    timeout: 60
    vpc:
      securityGroupIds:
        - sg-0d7ec27d8c3e59a5f
      subnetIds:
        - subnet-093295e049fd0b192
        - subnet-0b4dd59bec892f1b5
        - subnet-0ba4e03f8d83d5cd4
  importDataFromS3:
    handler: src/importDataFromS3.importFn
    memorySize: 512
    timeout: 300
    environment:
      REDSHIFT_DB_NAME: ${self:custom.secrets.REDSHIFT_DB_NAME}
      REDSHIFT_DB_USER: ${self:custom.secrets.REDSHIFT_DB_USER}
      REDSHIFT_DB_PASSWORD: ${self:custom.secrets.REDSHIFT_DB_PASSWORD}
      REDSHIFT_DB_HOST: ${self:custom.secrets.REDSHIFT_DB_HOST}
      REDSHIFT_DB_PORT: ${self:custom.secrets.REDSHIFT_DB_PORT}
  qrcodeSync:
    handler: src/qrcodeSync.sync
    memorySize: 256
    timeout: 30
    vpc:
      securityGroupIds:
        - sg-0d7ec27d8c3e59a5f
      subnetIds:
        - subnet-093295e049fd0b192
        - subnet-0b4dd59bec892f1b5
        - subnet-0ba4e03f8d83d5cd4
    environment:
      REDSHIFT_CLUSTER_TYPE: ${self:custom.secrets.REDSHIFT_CLUSTER_TYPE}
      REDSHIFT_DB_NAME: ${self:custom.secrets.REDSHIFT_DB_NAME}
      REDSHIFT_DB_USER: ${self:custom.secrets.REDSHIFT_DB_USER}
      REDSHIFT_DB_PASSWORD: ${self:custom.secrets.REDSHIFT_DB_PASSWORD}
      REDSHIFT_DB_HOST: ${self:custom.secrets.REDSHIFT_DB_HOST}
      REDSHIFT_DB_PORT: ${self:custom.secrets.REDSHIFT_DB_PORT}
      ASSETS_DB_NAME: "${self:custom.secrets.ASSETS_DB_NAME}"
      ASSETS_DB_HOST: "${self:custom.secrets.ASSETS_DB_HOST}"
      ASSETS_DB_USER: "${self:custom.secrets.ASSETS_DB_USER}"
      ASSETS_DB_PASSWORD: "${self:custom.secrets.ASSETS_DB_PASSWORD}"
      ASSETS_DB_PORT: "${self:custom.secrets.ASSETS_DB_PORT}"
    events:
      - schedule: rate(5 minutes)
  # deduplicator:
  #   handler: src/deduplicator.deduplicate
  #   memorySize: 512
  #   environment:
  #     REDSHIFT_CLUSTER_TYPE: ${self:custom.secrets.REDSHIFT_CLUSTER_TYPE}
  #     REDSHIFT_DB_NAME: ${self:custom.secrets.REDSHIFT_DB_NAME}
  #     REDSHIFT_DB_USER: ${self:custom.secrets.REDSHIFT_DB_USER}
  #     REDSHIFT_DB_PASSWORD: ${self:custom.secrets.REDSHIFT_DB_PASSWORD}
  #     REDSHIFT_DB_HOST: ${self:custom.secrets.REDSHIFT_DB_HOST}
  #     REDSHIFT_DB_PORT: ${self:custom.secrets.REDSHIFT_DB_PORT}
  #   # events:
  #     # - schedule: rate(5 minutes)
  websocketMessage:
    handler: src/websocketConnector.onMessage
    memorySize: 256
    events:
      - websocket:
          route: '$default'
    environment:
      LORIOT_DB: { Ref: loriotTable }
      OCCUPANCY_STREAM_NAME: { Ref: firehose }
      ATMOSPHERIC_STREAM_NAME: { Ref: AtmosphericFirehose }
      PEOPLE_STREAM_NAME: { Ref: PeopleFirehose }
      IBASX_CLIENT_TOKEN: ${self:custom.secrets.IBASX_CLIENT_TOKEN}
      IBASX_CLIENT_URL: ${self:custom.secrets.IBASX_CLIENT_URL}
  wsAuthorizer:
    handler: src/authorizer.handler
    memorySize: 128
    environment:
      STAGE: ${self:provider.stage}
      API_KEY_ALLOW: ${self:custom.secrets.WS_API_KEY_ALLOW}
      USAGE_API_KEY: ${self:custom.secrets.USAGE_API_KEY}
  websocketConnect:
    handler: src/websocketConnect.connect
    environment:
      IBASX_CLIENT_TOKEN: ${self:custom.secrets.IBASX_CLIENT_TOKEN}
      IBASX_CLIENT_URL: ${self:custom.secrets.IBASX_CLIENT_URL}
    events:
      - websocket:
          route: $connect
          # routeKey: '\$default'
          authorizer:
            name: wsAuthorizer
            identitySource:
              - route.request.header.Authorization
      - websocket:
          route: $disconnect
  wifiConnector:
    handler: src/wifi.connector
    memorySize: 384
    vpc:
      securityGroupIds:
        - sg-0d7ec27d8c3e59a5f
      subnetIds:
        - subnet-093295e049fd0b192
        - subnet-0b4dd59bec892f1b5
        - subnet-0ba4e03f8d83d5cd4
    events:
      - http:
          path: wifi/uplink
          method: post
          authorizer:
            type: TOKEN
            name: authorizer
            identitySource: method.request.header.Authorization
    environment:
      STAGE: ${self:provider.stage}
      LORIOT_DB: { Ref: loriotTable }
      OCCUPANCY_STREAM_NAME: { Ref: firehose }
      ATMOSPHERIC_STREAM_NAME: { Ref: AtmosphericFirehose }
      PEOPLE_STREAM_NAME: { Ref: PeopleFirehose }
      IBASX_CLIENT_TOKEN: ${self:custom.secrets.IBASX_CLIENT_TOKEN}
      IBASX_CLIENT_URL: ${self:custom.secrets.IBASX_CLIENT_URL}
      REDSHIFT_DB_NAME: ${self:custom.secrets.REDSHIFT_DB_NAME}
      REDSHIFT_DB_USER: ${self:custom.secrets.REDSHIFT_DB_USER}
      REDSHIFT_DB_PASSWORD: ${self:custom.secrets.REDSHIFT_DB_PASSWORD}
      REDSHIFT_DB_HOST: ${self:custom.secrets.REDSHIFT_DB_HOST}
      REDSHIFT_DB_PORT: ${self:custom.secrets.REDSHIFT_DB_PORT}
      CLIMATE_DB_NAME: "${self:custom.secrets.CLIMATE_DB_NAME}"
      CLIMATE_DB_HOST: "${self:custom.secrets.CLIMATE_DB_HOST}"
      CLIMATE_DB_USER: "${self:custom.secrets.CLIMATE_DB_USER}"
      CLIMATE_DB_PASSWORD: "${self:custom.secrets.CLIMATE_DB_PASSWORD}"
      CLIMATE_DB_PORT: "${self:custom.secrets.CLIMATE_DB_PORT}"
  missingPowerBIData:
    handler: src/missingPowerBIData.update
    memorySize: 256
    timeout: 600
    environment:
      STAGE: ${self:provider.stage}
      CLIMATE_DB_NAME: "${self:custom.secrets.CLIMATE_DB_NAME}"
      CLIMATE_DB_HOST: "${self:custom.secrets.CLIMATE_DB_HOST}"
      CLIMATE_DB_USER: "${self:custom.secrets.CLIMATE_DB_USER}"
      CLIMATE_DB_PASSWORD: "${self:custom.secrets.CLIMATE_DB_PASSWORD}"
      CLIMATE_DB_PORT: "${self:custom.secrets.CLIMATE_DB_PORT}"
  kinesisEtl:
    timeout: 60
    handler: src/kinesisTransformer.kinesisTransformer
    environment:
      TZ: "Greenwich"
      ROUND_PERIOD: 360000 # 6 minutes
      ADMIN_DB_NAME: "${self:custom.secrets.ADMIN_DB_NAME}"
      ADMIN_DB_HOST: "${self:custom.secrets.ADMIN_DB_HOST}"
      ADMIN_DB_USER: "${self:custom.secrets.ADMIN_DB_USER}"
      ADMIN_DB_PASSWORD: "${self:custom.secrets.ADMIN_DB_PASSWORD}"
      ADMIN_DB_PORT: "${self:custom.secrets.ADMIN_DB_PORT}"
      ASSETS_DB_NAME: "${self:custom.secrets.ASSETS_DB_NAME}"
      ASSETS_DB_HOST: "${self:custom.secrets.ASSETS_DB_HOST}"
      ASSETS_DB_USER: "${self:custom.secrets.ASSETS_DB_USER}"
      ASSETS_DB_PASSWORD: "${self:custom.secrets.ASSETS_DB_PASSWORD}"
      ASSETS_DB_PORT: "${self:custom.secrets.ASSETS_DB_PORT}"
  atmosphericEtl:
    timeout: 60
    handler: src/atmosphericTransformer.atmosphericTransformer
    environment:
      TZ: "Greenwich"
      ADMIN_DB_NAME: "${self:custom.secrets.ADMIN_DB_NAME}"
      ADMIN_DB_HOST: "${self:custom.secrets.ADMIN_DB_HOST}"
      ADMIN_DB_USER: "${self:custom.secrets.ADMIN_DB_USER}"
      ADMIN_DB_PASSWORD: "${self:custom.secrets.ADMIN_DB_PASSWORD}"
      ADMIN_DB_PORT: "${self:custom.secrets.ADMIN_DB_PORT}"
  peopleEtl:
    timeout: 60
    handler: src/peopleTransformer.peopleTransformer
    environment:
      ROUND_PERIOD: 360000 # 6 minutes
      TZ: "Greenwich"
      ADMIN_DB_NAME: "${self:custom.secrets.ADMIN_DB_NAME}"
      ADMIN_DB_HOST: "${self:custom.secrets.ADMIN_DB_HOST}"
      ADMIN_DB_USER: "${self:custom.secrets.ADMIN_DB_USER}"
      ADMIN_DB_PASSWORD: "${self:custom.secrets.ADMIN_DB_PASSWORD}"
      ADMIN_DB_PORT: "${self:custom.secrets.ADMIN_DB_PORT}"
  updateSensorSlot:
    timeout: 60
    handler: src/updateSensorSlot.updateSensorSlot
    environment:
      ADMIN_DB_NAME: "${self:custom.secrets.ADMIN_DB_NAME}"
      ADMIN_DB_HOST: "${self:custom.secrets.ADMIN_DB_HOST}"
      ADMIN_DB_USER: "${self:custom.secrets.ADMIN_DB_USER}"
      ADMIN_DB_PASSWORD: "${self:custom.secrets.ADMIN_DB_PASSWORD}"
      ADMIN_DB_PORT: "${self:custom.secrets.ADMIN_DB_PORT}"
      REDSHIFT_CLUSTER_TYPE: ${self:custom.secrets.REDSHIFT_CLUSTER_TYPE}
      REDSHIFT_DB_NAME: ${self:custom.secrets.REDSHIFT_DB_NAME}
      REDSHIFT_DB_USER: ${self:custom.secrets.REDSHIFT_DB_USER}
      REDSHIFT_DB_PASSWORD: ${self:custom.secrets.REDSHIFT_DB_PASSWORD}
      REDSHIFT_DB_HOST: ${self:custom.secrets.REDSHIFT_DB_HOST}
      REDSHIFT_DB_PORT: ${self:custom.secrets.REDSHIFT_DB_PORT}

resources: ${file(resources.yml)}

And here is the resources.yml file:


---
Resources:
    firehoseRole:
      Type: AWS::IAM::Role
      Properties:
        RoleName: ${self:service}-${self:provider.stage}-FirehoseToS3Role
        AssumeRolePolicyDocument:
          Statement:
          - Effect: Allow
            Principal:
              Service:
              - firehose.amazonaws.com
            Action:
            - sts:AssumeRole
        Policies:
        - PolicyName: FirehoseToS3Policy
          PolicyDocument:
            Statement:
              - Effect: Allow
                Action:
                  - s3:AbortMultipartUpload
                  - s3:GetBucketLocation
                  - s3:GetObject
                  - s3:ListBucket
                  - s3:ListBucketMultipartUploads
                  - s3:PutObject
                Resource: '*'
        - PolicyName: FirehoseLogsPolicy
          PolicyDocument:
            Statement:
              - Effect: Allow
                Action:
                  - logs:CreateLogStream
                  - glue:GetTableVersions
                  - logs:CreateLogGroup
                  - logs:PutLogEvents
                Resource: '*'
        - PolicyName: FirehoseLambdaPolicy
          PolicyDocument:
            Statement:
              - Effect: Allow
                Action:
                  - lambda:InvokeFunction
                  - lambda:GetFunctionConfiguration
                  - kinesis:GetShardIterator
                  - kinesis:GetRecords
                  - kinesis:DescribeStream
                Resource: '*'
    serverlessKinesisFirehoseBucket:
      Type: AWS::S3::Bucket
      DeletionPolicy: Retain
      Properties:
        BucketName: "${self:service}-${self:provider.stage}"
        LifecycleConfiguration:
          Rules:
            - Status: Enabled
              ExpirationInDays: 90
    theThingsNetwork:
      Type:                    "AWS::DynamoDB::Table"
      Properties:
        TableName:             "${self:custom.secrets.TTN_DB}"
        PointInTimeRecoverySpecification:
          PointInTimeRecoveryEnabled: true
        AttributeDefinitions:
        - AttributeName:       "device"
          AttributeType:       "S"
        - AttributeName:       "timestamp"
          AttributeType:       "S"
        KeySchema:
        - AttributeName:       "device"
          KeyType:             "HASH"
        - AttributeName:       "timestamp"
          KeyType:             "RANGE"
        BillingMode: PAY_PER_REQUEST
    loriotTable:
      Type:                    "AWS::DynamoDB::Table"
      Properties:
        TableName:             "${self:custom.secrets.LORIOT_DB}"
        PointInTimeRecoverySpecification:
          PointInTimeRecoveryEnabled: true
        AttributeDefinitions:
        - AttributeName:       "device"
          AttributeType:       "S"
        - AttributeName:       "timestamp"
          AttributeType:       "S"
        KeySchema:
        - AttributeName:       "device"
          KeyType:             "HASH"
        - AttributeName:       "timestamp"
          KeyType:             "RANGE"
        BillingMode: PAY_PER_REQUEST
    processed:
      Type: "AWS::Redshift::Cluster"
      Properties:
        AutomatedSnapshotRetentionPeriod: "${self:custom.secrets.REDSHIFT_SNAPSHOT_RETENTION_PERIOD}"
        AllowVersionUpgrade:   true
        ClusterIdentifier:     "${self:custom.secrets.REDSHIFT_IDENTIFIER}"
        ClusterType:           "${self:custom.secrets.REDSHIFT_CLUSTER_TYPE}"
        DBName:                "${self:custom.secrets.REDSHIFT_DB_NAME}"
        MasterUsername:        "${self:custom.secrets.REDSHIFT_DB_USER}"
        MasterUserPassword:    "${self:custom.secrets.REDSHIFT_DB_PASSWORD}"
        Port:                  "${self:custom.secrets.REDSHIFT_DB_PORT}"
        NodeType:              "${self:custom.secrets.REDSHIFT_NODE_TYPE}"
        PubliclyAccessible:    true
        VpcSecurityGroupIds:   "${self:custom.secrets.REDSHIFT_SECURITY_GROUP_IDS}"
        ElasticIp:  "${self:custom.secrets.REDSHIFT_EIP}"
        ClusterSubnetGroupName: "${self:custom.secrets.REDSHIFT_SUBNET_GROUP}"
        # ClusterParameterGroupName: "${self:custom.secrets.REDSHIFT_PARAMETER_GROUP}"

    LogGroup:
      Type: AWS::Logs::LogGroup
      Properties:
        LogGroupName: ${self:service}-${self:provider.stage}-kinesis
        RetentionInDays: 30
    OccupancyS3LogStream:
      Type: AWS::Logs::LogStream
      Properties:
        LogGroupName: { Ref: LogGroup }
        LogStreamName: OccupancyS3LogStream
    OccupancyRedshiftLogStream:
      Type: AWS::Logs::LogStream
      Properties:
        LogGroupName: { Ref: LogGroup }
        LogStreamName: OccupancyRedshiftLogStream
    AtmosphericS3LogStream:
      Type: AWS::Logs::LogStream
      Properties:
        LogGroupName: { Ref: LogGroup }
        LogStreamName: AtmosphericS3LogStream
    AtmosphericRedshiftLogStream:
      Type: AWS::Logs::LogStream
      Properties:
        LogGroupName: { Ref: LogGroup }
        LogStreamName: AtmosphericRedshiftLogStream

    firehose:
      Type: AWS::KinesisFirehose::DeliveryStream
      Properties:
        DeliveryStreamName: ${self:service}-${self:provider.stage}
        DeliveryStreamType: DirectPut
        RedshiftDestinationConfiguration:
          ClusterJDBCURL: jdbc:redshift://${self:custom.secrets.REDSHIFT_IDENTIFIER}.copw8j1hahrq.eu-central-1.redshift.amazonaws.com:${self:custom.secrets.REDSHIFT_DB_PORT}/${self:custom.secrets.REDSHIFT_DB_NAME}
          CopyCommand:
            CopyOptions: "json 'auto' dateformat 'auto' timeformat 'auto'"
            DataTableName: "processed_data"
          Password: "${self:custom.secrets.REDSHIFT_DB_PASSWORD}"
          Username: "${self:custom.secrets.REDSHIFT_DB_USER}"
          RoleARN: { Fn::GetAtt: [ firehoseRole, Arn ] }
          CloudWatchLoggingOptions:
            Enabled: true
            LogGroupName: { Ref: LogGroup }
            LogStreamName: { Ref: OccupancyRedshiftLogStream }
          S3Configuration:
            BucketARN: { Fn::GetAtt: [ serverlessKinesisFirehoseBucket, Arn ] }
            BufferingHints:
              IntervalInSeconds: 60
              SizeInMBs: 1
            CompressionFormat: UNCOMPRESSED
            CloudWatchLoggingOptions:
              Enabled: true
              LogGroupName: { Ref: LogGroup }
              LogStreamName: { Ref: OccupancyS3LogStream }
            RoleARN: { Fn::GetAtt: [ firehoseRole, Arn ] }
          ProcessingConfiguration:
            Enabled: true
            Processors:
              - Parameters:
                - ParameterName: LambdaArn
                  ParameterValue: { Fn::GetAtt: [ KinesisEtlLambdaFunction, Arn ] }
                - ParameterName: BufferIntervalInSeconds
                  ParameterValue: "60"
                - ParameterName: BufferSizeInMBs
                  ParameterValue: "1"
                - ParameterName: NumberOfRetries
                  ParameterValue: "2"
                Type: Lambda
    AtmosphericFirehose:
      Type: AWS::KinesisFirehose::DeliveryStream
      Properties:
        DeliveryStreamName: ${self:service}-${self:provider.stage}-atmospheric
        DeliveryStreamType: DirectPut
        RedshiftDestinationConfiguration:
          ClusterJDBCURL: jdbc:redshift://${self:custom.secrets.REDSHIFT_IDENTIFIER}.copw8j1hahrq.eu-central-1.redshift.amazonaws.com:${self:custom.secrets.REDSHIFT_DB_PORT}/${self:custom.secrets.REDSHIFT_DB_NAME}
          CopyCommand:
            CopyOptions: "json 'auto' dateformat 'auto' timeformat 'auto'"
            DataTableName: "atmospheric_data"
          Password: "${self:custom.secrets.REDSHIFT_DB_PASSWORD}"
          Username: "${self:custom.secrets.REDSHIFT_DB_USER}"
          RoleARN: { Fn::GetAtt: [ firehoseRole, Arn ] }
          CloudWatchLoggingOptions:
            Enabled: true
            LogGroupName: { Ref: LogGroup }
            LogStreamName: { Ref: AtmosphericRedshiftLogStream }
          S3Configuration:
            BucketARN: { Fn::GetAtt: [ serverlessKinesisFirehoseBucket, Arn ] }
            Prefix: atmospheric/
            BufferingHints:
              IntervalInSeconds: 60
              SizeInMBs: 1
            CompressionFormat: UNCOMPRESSED
            CloudWatchLoggingOptions:
              Enabled: true
              LogGroupName: { Ref: LogGroup }
              LogStreamName: { Ref: AtmosphericS3LogStream }
            RoleARN: { Fn::GetAtt: [ firehoseRole, Arn ] }
          ProcessingConfiguration:
            Enabled: true
            Processors:
              - Parameters:
                - ParameterName: LambdaArn
                  ParameterValue: { Fn::GetAtt: [ AtmosphericEtlLambdaFunction, Arn ] }
                - ParameterName: BufferIntervalInSeconds
                  ParameterValue: "60"
                - ParameterName: BufferSizeInMBs
                  ParameterValue: "1"
                - ParameterName: NumberOfRetries
                  ParameterValue: "2"
                Type: Lambda
    PeopleFirehose:
      Type: AWS::KinesisFirehose::DeliveryStream
      Properties:
        DeliveryStreamName: ${self:service}-${self:provider.stage}-people
        DeliveryStreamType: DirectPut
        RedshiftDestinationConfiguration:
          ClusterJDBCURL: jdbc:redshift://${self:custom.secrets.REDSHIFT_IDENTIFIER}.copw8j1hahrq.eu-central-1.redshift.amazonaws.com:${self:custom.secrets.REDSHIFT_DB_PORT}/${self:custom.secrets.REDSHIFT_DB_NAME}
          CopyCommand:
            CopyOptions: "json 'auto' dateformat 'auto' timeformat 'auto'"
            DataTableName: "people_data"
          Password: "${self:custom.secrets.REDSHIFT_DB_PASSWORD}"
          Username: "${self:custom.secrets.REDSHIFT_DB_USER}"
          RoleARN: { Fn::GetAtt: [ firehoseRole, Arn ] }
          CloudWatchLoggingOptions:
            Enabled: true
            LogGroupName: { Ref: LogGroup }
            LogStreamName: { Ref: AtmosphericRedshiftLogStream }
          S3Configuration:
            BucketARN: { Fn::GetAtt: [ serverlessKinesisFirehoseBucket, Arn ] }
            Prefix: people/
            BufferingHints:
              IntervalInSeconds: 60
              SizeInMBs: 1
            CompressionFormat: UNCOMPRESSED
            CloudWatchLoggingOptions:
              Enabled: true
              LogGroupName: { Ref: LogGroup }
              LogStreamName: { Ref: AtmosphericS3LogStream }
            RoleARN: { Fn::GetAtt: [ firehoseRole, Arn ] }
          ProcessingConfiguration:
            Enabled: true
            Processors:
              - Parameters:
                - ParameterName: LambdaArn
                  ParameterValue: { Fn::GetAtt: [ PeopleEtlLambdaFunction, Arn ] }
                - ParameterName: BufferIntervalInSeconds
                  ParameterValue: "60"
                - ParameterName: BufferSizeInMBs
                  ParameterValue: "1"
                - ParameterName: NumberOfRetries
                  ParameterValue: "2"
                Type: Lambda
    RedisCluster:
      Type: 'AWS::ElastiCache::ReplicationGroup'
      Properties:
        AutoMinorVersionUpgrade: true
        ReplicationGroupId: "${self:custom.secrets.REDIS_CACHE_CLUSTER_NAME}"
        ReplicationGroupDescription: "${self:custom.secrets.REDIS_CACHE_CLUSTER_NAME}"
        CacheNodeType: cache.t4g.micro
        Engine: redis
        ReplicasPerNodeGroup: 3
        NumNodeGroups: 1
        EngineVersion: '7.0'
        MultiAZEnabled: true
        AutomaticFailoverEnabled: true
        PreferredMaintenanceWindow: 'sat:01:45-sat:04:45'
        SnapshotRetentionLimit: 4
        SnapshotWindow: '00:30-01:30'
        CacheSubnetGroupName: mm-vpc-cache
        SecurityGroupIds:
          - sg-07663c145bf3feb84
          - sg-0d7ec27d8c3e59a5f

Deployment fails with the error message:

Error: CREATE_FAILED: serverlessKinesisFirehoseBucket (AWS::S3::Bucket) sensor-processor-v3-dev already exists

I investigated the issue and found that S3 bucket names must be globally unique across all AWS regions.

  • Question 1: I haven't changed anything about the S3 bucket. Do I still have to declare it? I don't want to rename my existing S3 bucket; I want to keep using it as it is.

I adjusted the AWS S3 bucket to get around that error while testing. However, deployment still fails, this time with the error message:

Error: CREATE_FAILED: RedisCluster (AWS::ElastiCache::ReplicationGroup) Cache subnet group 'mm-vpc-cache' does not exist. (Service: AmazonElastiCache; Status Code: 400; Error Code: CacheSubnetGroupNotFoundFault; Request ID: 2cbfadb2-8086-4ce8-ae61-1d75dcaaa1aa; Proxy: null)

  • Question 2: Is my resources.yml file outdated and out of sync with my AWS resources?
amazon-s3 aws-lambda aws-cloudformation serverless-framework amazon-vpc
1 Answer

Question 1: You need to update the serverlessKinesisFirehoseBucket resource in your resources.yml file. Instead of creating a new bucket, reference your existing S3 bucket by its ARN or name, depending on what the BucketARN property in the RedshiftDestinationConfiguration section of the Firehose delivery stream configuration requires.
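
A minimal sketch of that change, assuming the existing bucket is the sensor-processor-v3-dev / sensor-processor-v3-prod bucket named in your error message, and that you delete the serverlessKinesisFirehoseBucket resource so CloudFormation no longer tries to create it:

# resources.yml (sketch): with serverlessKinesisFirehoseBucket removed,
# each delivery stream points at the existing bucket via a hardcoded ARN
firehose:
  Type: AWS::KinesisFirehose::DeliveryStream
  Properties:
    DeliveryStreamName: ${self:service}-${self:provider.stage}
    DeliveryStreamType: DirectPut
    RedshiftDestinationConfiguration:
      # ...everything else unchanged...
      S3Configuration:
        # was: BucketARN: { Fn::GetAtt: [ serverlessKinesisFirehoseBucket, Arn ] }
        BucketARN: arn:aws:s3:::sensor-processor-v3-${self:provider.stage}
        # ...buffering, logging and RoleARN unchanged...

The same substitution applies to AtmosphericFirehose and PeopleFirehose, since all three streams write to the same bucket.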

Regarding the RedisCluster error: there appears to be a problem with the cache subnet group. Make sure a subnet group named 'mm-vpc-cache' actually exists in your AWS account, in the region you are deploying to. Double-check the subnet group name and confirm it is spelled correctly. If the subnet group does not exist, you will need to create it with the appropriate subnets.
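
You can check this from the CLI. A quick sketch, assuming your credentials point at the same account and region as the deployment:

aws elasticache describe-cache-subnet-groups \
  --cache-subnet-group-name mm-vpc-cache \
  --region eu-central-1

# If that returns CacheSubnetGroupNotFoundFault, create the group.
# The subnet IDs below are copied from the Lambda vpc config in your
# serverless.yml; substitute whichever subnets the cluster should use.
aws elasticache create-cache-subnet-group \
  --cache-subnet-group-name mm-vpc-cache \
  --cache-subnet-group-description "Cache subnets for sensor-processor-v3" \
  --subnet-ids subnet-093295e049fd0b192 subnet-0b4dd59bec892f1b5 subnet-0ba4e03f8d83d5cd4 \
  --region eu-central-1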

Question 2: You need to make sure the definitions in resources.yml match the resources that actually exist in your AWS environment. If you have modified resources directly in the AWS console or through other means, resources.yml may no longer reflect those changes.

To resolve this, compare the definitions in your resources.yml file against the resources in the AWS console and note any discrepancies. If you have made changes outside the Serverless Framework, update resources.yml accordingly so it reflects the correct configuration.
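
CloudFormation's drift detection can automate part of that comparison for resources the stack already manages. A sketch, assuming the stack has the Serverless Framework's default name of service-stage, i.e. sensor-processor-v3-dev:

aws cloudformation detect-stack-drift \
  --stack-name sensor-processor-v3-dev --region eu-central-1

# once detection completes, list only the resources that have drifted
aws cloudformation describe-stack-resource-drifts \
  --stack-name sensor-processor-v3-dev \
  --stack-resource-drift-status-filters MODIFIED DELETED \
  --region eu-central-1

Keep in mind that drift detection only inspects resources the stack created; something created by hand, like a missing subnet group, will not show up there at all.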

Finally, the error message Cache subnet group 'mm-vpc-cache' does not exist points at the same issue with the referenced cache subnet group: make sure the subnet group mm-vpc-cache exists in your AWS environment and is correctly defined in (or referenced from) your resources.yml file.
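
If you would rather have the stack own the subnet group than create it by hand, you could define it in resources.yml. A sketch (the subnet IDs are copied from your Lambda vpc config and should be replaced with the subnets the cluster actually belongs in):

RedisSubnetGroup:
  Type: AWS::ElastiCache::SubnetGroup
  Properties:
    CacheSubnetGroupName: mm-vpc-cache
    Description: Subnet group for the sensor-processor Redis cluster
    SubnetIds:
      - subnet-093295e049fd0b192
      - subnet-0b4dd59bec892f1b5
      - subnet-0ba4e03f8d83d5cd4

Then reference it from the cluster so CloudFormation creates them in the right order:

RedisCluster:
  Type: AWS::ElastiCache::ReplicationGroup
  Properties:
    # ...everything else unchanged...
    CacheSubnetGroupName: { Ref: RedisSubnetGroup }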
