Adding VPC endpoint interfaces to an NLB target group in Terraform


I want to create a VPC with a given CIDR (for example 10.0.0.0/21) in a given region (for example eu-central-1). For each Availability Zone in that region I want to create a private subnet whose CIDR is computed from the main VPC CIDR. Each of these private subnets should host one or more VPC endpoint interfaces, depending on how many entries are in the endpoints map. For every entry in the endpoints map I want to create an NLB (Network Load Balancer) whose target group contains the IPs of the corresponding VPC endpoint interfaces. Finally, an endpoint service should be created for each NLB, matching its entry in the endpoints map. So essentially, each entry in the endpoints map results in a distinct set of VPC endpoint interfaces, a corresponding NLB whose target group contains those interfaces, and an endpoint service pointing to that NLB.

Below is my Terraform code.

This is main.tf:
module "vpc" {
  source              = "./modules/vpc"
  environment         = var.ENVIRONMENT
  vpc_cidr            = var.vpc_cidr
  private_subnet_cidr = local.subnet_cidr
  subnet_az           = data.aws_availability_zones.az.names
  endpoints           = var.endpoints
  nlb_name          = "NLB"
  target_group_name = "NLB-TG"
}
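
The local.subnet_cidr value and the data.aws_availability_zones.az data source referenced above are not shown in the question. As a minimal sketch for context (the subnet sizing and names here are assumptions, not the asker's actual code), they could be derived from the VPC CIDR like this:

data "aws_availability_zones" "az" {
  state = "available"
}

locals {
  # One subnet per availability zone, carved out of the VPC CIDR.
  # newbits = 3 turns a /21 into /24 subnets; adjust to your sizing.
  subnet_cidr = [
    for i in range(length(data.aws_availability_zones.az.names)) :
    cidrsubnet(var.vpc_cidr, 3, i)
  ]
}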

These are the variables I pass in (the endpoints variable is the main focus here, as it is needed further down):

variable "vpc_cidr" {
  description = "CIDR for VPC to be deployed"
  type        = string
  default ="20.0.0.0/21"
}

variable "aws_region" {
  description = "AWS region to deploy resources in"
  type        = string
  default = "eu-central-1"
}

variable "endpoints" {
  description = "Details for endpoint creation"
  type = map(object({
    service_name = string
    suffix       = string
    dns_name     = string
  }))
  default = {
    "first" = {
      service_name = "com.amazonaws.vpce.eu-central-1.service_name"
      suffix       = "custom"
      dns_name     = "customtest.endpoints.test.trusted.key.in"
    },
    "second": {
      "service_name" : "com2......",
      "suffix" : "main2",
      "dns_name" : "accountid2main2.endpoints.test2.trusted.sxk.in"
    }
  }
}

And finally, my vpc.tf file inside the modules folder:

data "aws_region" "current" {}

locals {
  eni_keys = [
    for k, v in aws_vpc_endpoint.vpc_endpoint_interface : [
      for i, eni_id in v.network_interface_ids : {
        key = "${k}-${i}"
        eni = eni_id
      }
    ]
  ]
}
# after flatten(), the locals above look something like this:
# [
#   { key = "first-0",  eni = "eni-abc" },
#   { key = "first-1",  eni = "eni-def" },
#   { key = "second-0", eni = "eni-xyz" }
# ]

resource "aws_vpc" "vpc" {
  cidr_block           = var.vpc_cidr
  enable_dns_hostnames = true

  tags = {
    "Name"                    =  "vpc-${var.environment}"
    "owner"                   =   var.owner 
    "environment"             =   var.environment
    "data:classification" =   var.classification 
    "iac-type"            =   var.iac_type 
    "iac-stack-name"      =   var.stack_name
  }
}

resource "aws_default_security_group" "default" {
  vpc_id = aws_vpc.vpc.id
  tags = {
    "Name"                    = "sg-vpc_default"
    "owner"                   =   var.owner 
    "environment"             =   var.environment
    "data:classification" =   var.classification 
    "iac-type"            =   var.iac_type 
    "iac-stack-name"      =   var.stack_name
  }
}


resource "aws_subnet" "private_subnets_zone" {
  count             = length(var.private_subnet_cidr) > 0 ? length(var.private_subnet_cidr) : 0
  vpc_id            = aws_vpc.vpc.id
  cidr_block        = element(var.private_subnet_cidr, count.index)
  availability_zone = element(var.subnet_az, count.index)
  tags = {
    "Name"                    =  "subnet-private-${element(var.subnet_az, count.index)}-${var.environment}"
    "owner"                   =   var.owner 
    "environment"             =   var.environment
    "data:classification" =   var.classification 
    "iac-type"            =   var.iac_type 
    "iac-stack-name"      =   var.stack_name
  }
}


resource "aws_route_table" "private_route_table" {
  count  = length(var.private_subnet_cidr) > 0 ? length(var.private_subnet_cidr) : 0
  vpc_id = aws_vpc.vpc.id
  tags = {
    "Name"                    =  "rt-private-${element(var.subnet_az, count.index)}-${var.environment}"
    "owner"                   =   var.owner 
    "environment"             =   var.environment
    "data:classification" =   var.classification 
    "iac-type"            =   var.iac_type 
    "iac-stack-name"      =   var.stack_name
  }
}


resource "aws_route_table_association" "private_rt_assoc" {
  count          = length(var.private_subnet_cidr) > 0 ? length(var.private_subnet_cidr) : 0
  subnet_id      = element(aws_subnet.private_subnets_zone.*.id, count.index)
  route_table_id = element(aws_route_table.private_route_table.*.id, count.index)
}

resource "aws_security_group" "security_group" {
  description = "Security group for all vpc endpoints"
  vpc_id      =  aws_vpc.vpc.id
  name        =  "sg-vpc_endpoint"

  tags = {
    "Name"                    =  "sg-vpc_endpoint"
    "owner"                   =   var.owner 
    "environment"             =   var.environment
    "data:classification" =   var.classification 
    "iac-type"            =   var.iac_type 
    "iac-stack-name"      =   var.stack_name
  }
  lifecycle  {
    create_before_destroy=true
 }
}

resource "aws_security_group" "security_group_service" {
  description = "Security group for Services to access all vpc endpoints"
  vpc_id      =  aws_vpc.vpc.id
  name        =  "sg-vpc_endpoint-access"

  tags = {
    "Name"                    =  "sg-vpc_endpoint-access"
    "owner"                   =   var.owner 
    "environment"             =   var.environment
    "data:classification" =   var.classification 
    "iac-type"            =   var.iac_type 
    "iac-stack-name"      =   var.stack_name
  }
  lifecycle  {
    create_before_destroy=true
 }
}

resource "aws_security_group_rule" "security_group_rule" {
  type                       =  "ingress"
  from_port                  =  443
  to_port                    =  443
  protocol                   =  "tcp"
  description                =  "Allow traffic from sg-vpc_endpoint-access security group"
  source_security_group_id   =  aws_security_group.security_group_service.id
  security_group_id          =  aws_security_group.security_group.id
}

resource "aws_security_group_rule" "security_group_rule_service_sg" {
  type                       =  "egress"
  from_port                  =  443
  to_port                    =  443
  protocol                   =  "tcp"
  description                =  "Allow traffic to interface vpc endpoints"
  source_security_group_id   =  aws_security_group.security_group.id
  security_group_id          =  aws_security_group.security_group_service.id
}


resource "aws_vpc_endpoint" "vpc_endpoint_interface" {
  for_each = var.endpoints
  vpc_id              =   aws_vpc.vpc.id
  service_name        =   each.value.service_name
  vpc_endpoint_type   =  "Interface"
  security_group_ids  =  [aws_security_group.security_group.id]
  depends_on          =  [aws_security_group.security_group]
  private_dns_enabled =  true
  subnet_ids          =  aws_subnet.private_subnets_zone.*.id
  tags = {
    "Name"                    = "${each.value.service_name}-interface-${data.aws_region.current.name}-${var.environment}"
    "owner"                   =   var.owner 
    "environment"             =   var.environment
    "data:classification" =   var.classification 
    "iac-type"            =   var.iac_type 
    "iac-stack-name"      =   var.stack_name
  }
}

data "aws_network_interface" "vpc_endpoint_nics" {
  for_each = { for item in flatten(local.eni_keys) : item.key => item }
  id = each.value.eni
}

# the data source above reads each of those ENIs, keyed the same way:
# { "first-0" = <eni-abc details>, "first-1" = <eni-def details>, "second-0" = <eni-xyz details> }

resource "aws_lb" "customer_nlb" {
  for_each = var.endpoints
  name               = "${each.key}-${var.nlb_name}"
  internal           = true
  load_balancer_type = "network"
  subnets            = aws_subnet.private_subnets_zone.*.id
  # enable_deletion_protection = true
}


resource "aws_lb_target_group" "customer_target_group" {
  for_each = var.endpoints
  name        = "${each.key}-${var.target_group_name}"
  vpc_id      = aws_vpc.vpc.id
  protocol    = "TCP"
  port        = "443"
  target_type = "ip"

  health_check {
    enabled             = true
    interval            = 30
    path                = "/"
    port                = "traffic-port"
    healthy_threshold   = 5
    unhealthy_threshold = 2
    timeout             = 10
    protocol            = "HTTPS"
    matcher             = "200-399"
  }
}

resource "aws_lb_listener" "customer_nlb_listener" {
  for_each = var.endpoints
  load_balancer_arn = aws_lb.customer_nlb[each.key].arn
  port              = "443"
  protocol          = "TCP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.customer_target_group[each.key].arn
  }
}

resource "aws_lb_target_group_attachment" "customer_target_group_attachment" {
  for_each = { for item in flatten(local.eni_keys) : item.key => item }
  target_group_arn = aws_lb_target_group.customer_target_group[element(split("-", each.key), 0)].arn
  target_id        = data.aws_network_interface.vpc_endpoint_nics[each.key].private_ip
  port             = 443
}

resource "aws_vpc_endpoint_service" "endpoint_service" {
  for_each                   = var.endpoints
  acceptance_required        = false
  private_dns_name           = each.value.dns_name
  network_load_balancer_arns = [aws_lb.customer_nlb[each.key].arn]
  tags = {
    "Name"                    =  "vpc-endpoint-service-${var.environment}"
    "owner"                   =   var.owner
    "environment"             =   var.environment
    "data:classification" =   var.classification
    "iac-type"            =   var.iac_type
    "iac-stack-name"      =   var.stack_name
  }
}

So the problem is that when I run terraform plan or apply, it gives me this error:

Plan: 22 to add, 0 to change, 0 to destroy.
╷
│ Error: Invalid for_each argument
│
│   on modules/vpc/vpc.tf line 160, in data "aws_network_interface" "vpc_endpoint_nics":
│  160:   for_each = { for item in flatten(local.eni_keys) : item.key => item }
│     ├────────────────
│     │ local.eni_keys is tuple with 1 element
│
│ The "for_each" map includes keys derived from resource attributes that cannot be determined until apply, and so Terraform cannot determine the full set of keys that will identify the instances of this resource.
│
│ When working with unknown values in for_each, it's better to define the map keys statically in your configuration and place apply-time results only in the map values.
│
│ Alternatively, you could use the -target planning option to first apply only the resources that the for_each value depends on, and then apply a second time to fully converge.

However, if I comment out the locals, data, and target_group_attachment sections and deploy the infrastructure, and then uncomment those items for a second apply, the VPC endpoint interfaces are added to the NLB target groups.

How can I deploy this solution fully with a single terraform plan or apply,

while keeping the main requirement:

Each entry in the endpoints map results in a distinct set of VPC endpoint interfaces, a corresponding NLB whose target group contains those interfaces, and an endpoint service pointing to that NLB.

amazon-web-services terraform vpc nlb vpc-endpoint
1 Answer

However, if I comment out the locals, data, and target_group_attachment sections and deploy the infrastructure, and then uncomment those items for a second apply, the VPC endpoint interfaces are added to the NLB target groups.

If you want to keep your current code, that is exactly what you should do. It is the correct way to solve your problem.
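
Instead of commenting blocks out, you can get the same two-phase deployment with the -target option that the error message itself suggests. Given that the module is called "vpc" in main.tf, something along these lines should work:

terraform apply -target='module.vpc.aws_vpc_endpoint.vpc_endpoint_interface'
terraform apply

The first apply creates only the endpoint interfaces (and their dependencies), so the second, full apply can resolve the ENI-derived for_each keys.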

How can I deploy this solution fully with a single terraform plan or apply?

That is not possible without restructuring your code. The error clearly states that your for_each depends on values that "cannot be determined until apply". In Terraform, every for_each key must be known at plan time.
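
If you do decide to restructure, the error's hint about defining "the map keys statically" means building the keys only from plan-time values (the endpoints map keys and the subnet count) and leaving apply-time attributes in the values. A rough sketch of that direction follows; it assumes an interface endpoint creates exactly one ENI per subnet, and the ordering of the network_interface_ids set is not guaranteed, so treat it as an illustration of the pattern rather than a drop-in fix:

locals {
  # Keys are built purely from plan-time values:
  # endpoint map keys x subnet indices.
  attachment_map = {
    for pair in setproduct(keys(var.endpoints), range(length(var.private_subnet_cidr))) :
    "${pair[0]}-${pair[1]}" => {
      endpoint_key = pair[0]
      eni_index    = pair[1]
    }
  }
}

data "aws_network_interface" "vpc_endpoint_nics" {
  for_each = local.attachment_map
  # The ENI id is only known at apply time, but that is fine here because it
  # is a map value, not a key; Terraform defers reading the data source to apply.
  id = tolist(aws_vpc_endpoint.vpc_endpoint_interface[each.value.endpoint_key].network_interface_ids)[each.value.eni_index]
}

resource "aws_lb_target_group_attachment" "customer_target_group_attachment" {
  for_each         = local.attachment_map
  target_group_arn = aws_lb_target_group.customer_target_group[each.value.endpoint_key].arn
  target_id        = data.aws_network_interface.vpc_endpoint_nics[each.key].private_ip
  port             = 443
}

Because every key in local.attachment_map is known at plan time, the plan no longer fails; the unknown ENI IDs and private IPs only appear as values and are resolved during apply.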
