AKS LoadBalancer external IP stuck in <pending>

Problem description · Votes: 0 · Answers: 3

I created a Kubernetes cluster in Azure using the Terraform below. The Azure Kubernetes Service cluster is created together with an Application Gateway, because I want to use the Application Gateway as the ingress controller.

Note: I can also see that a load balancer with a public IP is still being created. I did not expect a load balancer to be created.

And the system-assigned identity has Contributor access on the load balancer.

However, the external IP is stuck in the "pending" state.

data "azurerm_subnet" "aks-subnet" {
  name                 = "aks-subnet"
  virtual_network_name = "np-dat-spoke-vnet"
  resource_group_name  = "ipz12-dat-np-connect-rg"
}

data "azurerm_subnet" "appgateway-subnet" {
  name                 = "appgateway-subnet"
  virtual_network_name = "np-dat-spoke-vnet"
  resource_group_name  = "ipz12-dat-np-connect-rg"
}

locals {
  backend_address_pool_name      = "appgateway-beap"
  frontend_port_name             = "appgateway-feport"
  frontend_ip_configuration_name = "appgateway-feip"
  http_setting_name              = "appgateway-be-htst"
  listener_name                  = "appgateway-httplstn"
  request_routing_rule_name      = "appgateway-rqrt"
  app_gateway_subnet_name        = "appgateway-subnet"
}

# Create the Container Registry
module "container_registry" {
  source                        = "./modules/container_registry"
  count                         = var.enable_container_registry == true ? 1 : 0
  #name_override                = "crprjxhubprodwestus3001"
  app_or_service_name           = "prjx"                                                             # var.app_or_service_name
  subscription_type             = var.subscription_type                                               # "hub" 
  environment                   = var.environment                                                     # "prod"
  location                      = var.location                                                        # "westus3"
  instance_number               = var.instance_number                                                 # "001"
  tags                          = var.tags              
  resource_group_name           = module.resource_group_container_registry[0].name                    # "rg-cr-hub-prod-westus3-001"
  sku                           = "Premium"
  admin_enabled                 = false
  public_network_access_enabled = false
}

# Create Resource Group for Kubernetes Cluster
module "resource_group_kubernetes_cluster" {
  source                  = "./modules/resource_group"
  count                   = var.enable_kubernetes == true ? 1 : 0
  #name_override          = "rg-aks-spoke-dev-westus3-001"
  app_or_service_name     = "aks"                                   # var.app_or_service_name
  subscription_type       = var.subscription_type                   # "spoke"   
  environment             = var.environment                         # "dev"    
  location                = var.location                            # "westus3"
  instance_number         = var.instance_number                     # "001"    
  tags                    = var.tags
}

# Application Gateway Public Ip 
resource "azurerm_public_ip" "test" {
  name                = "publicIp1"
  location            = var.location
  resource_group_name = module.resource_group_kubernetes_cluster[0].name
  allocation_method   = "Static"
  sku                 = "Standard"
}

resource "azurerm_user_assigned_identity" "identity_uami" {
  location            = var.location
  name                = "appgw-uami"
  resource_group_name = module.resource_group_kubernetes_cluster[0].name
}

resource "azurerm_application_gateway" "network" {
  name                = var.app_gateway_name
  resource_group_name = module.resource_group_kubernetes_cluster[0].name
  location            = var.location

  sku {
    name     = var.app_gateway_sku
    tier     = "Standard_v2"
    capacity = 2
  }

  identity {
    type = "UserAssigned"
    identity_ids = [
      azurerm_user_assigned_identity.identity_uami.id
    ]
  }

  gateway_ip_configuration {
    name      = "appGatewayIpConfig"
    subnet_id = data.azurerm_subnet.appgateway-subnet.id
  }

  frontend_port {
    name = local.frontend_port_name
    port = 80
  }

  frontend_port {
    name = "httpsPort"
    port = 443
  }

  frontend_ip_configuration {
    name                 = local.frontend_ip_configuration_name
    public_ip_address_id = azurerm_public_ip.test.id
  }

  backend_address_pool {
    name = local.backend_address_pool_name
  }

  backend_http_settings {
    name                  = local.http_setting_name
    cookie_based_affinity = "Disabled"
    port                  = 80
    protocol              = "Http"
    request_timeout       = 1
  }

  http_listener {
    name                           = local.listener_name
    frontend_ip_configuration_name = local.frontend_ip_configuration_name
    frontend_port_name             = local.frontend_port_name
    protocol                       = "Http"
  }

  request_routing_rule {
    name                       = local.request_routing_rule_name
    rule_type                  = "Basic"
    http_listener_name         = local.listener_name
    backend_address_pool_name  = local.backend_address_pool_name
    backend_http_settings_name = local.http_setting_name
    priority                   = 100
  }

  tags = var.tags

  depends_on = [azurerm_public_ip.test]

  lifecycle {
    ignore_changes = [
      backend_address_pool,
      backend_http_settings,
      request_routing_rule,
      http_listener,
      probe,
      tags,
      frontend_port
    ]
  }
}

# Create the Azure Kubernetes Service (AKS) Cluster
resource "azurerm_kubernetes_cluster" "kubernetes_cluster" {
  count                         = var.enable_kubernetes == true ? 1 : 0
  name                          = "aks-prjx-${var.subscription_type}-${var.environment}-${var.location}-${var.instance_number}"    
  location                      = var.location
  resource_group_name           = module.resource_group_kubernetes_cluster[0].name  # "rg-aks-spoke-dev-westus3-001"
  dns_prefix                    = "dns-aks-prjx-${var.subscription_type}-${var.environment}-${var.location}-${var.instance_number}" #"dns-prjxcluster"
  private_cluster_enabled       = false
  local_account_disabled        = true

  default_node_pool {
    name                        = "npprjx${var.subscription_type}" #"prjxsyspool" # NOTE: "name must start with a lowercase letter, have max length of 12, and only have characters a-z0-9."
    vm_size                     = "Standard_B2ms"
    vnet_subnet_id              = data.azurerm_subnet.aks-subnet.id
    # zones                     = ["1", "2", "3"]
    enable_auto_scaling         = true
    max_count                   = 3
    min_count                   = 1
    # node_count                = 3
    os_disk_size_gb             = 50
    type                        = "VirtualMachineScaleSets"
    enable_node_public_ip       = false
    enable_host_encryption      = false

    node_labels = {
      "node_pool_type"          = "npprjx${var.subscription_type}"
      "node_pool_os"            = "linux"
      "environment"             = "${var.environment}"
      "app"                     = "prjx_${var.subscription_type}_app"
    }
    tags = var.tags
  }

  ingress_application_gateway {
    gateway_id = azurerm_application_gateway.network.id
  }

  # Enable Azure AD role-based access control (RBAC) on the cluster
  azure_active_directory_role_based_access_control { 
    managed                     = true
    admin_group_object_ids      = var.active_directory_role_based_access_control_admin_group_object_ids
    azure_rbac_enabled          = true #false
  }

  network_profile {
    network_plugin              = "azure"
    network_policy              = "azure"
    outbound_type               = "userDefinedRouting"
  }

  # service_principal {
    # client_id                   = var.client_id
    # client_secret               = var.client_secret
  # }

  identity {
    type = "SystemAssigned"
  }  

  oms_agent {
    log_analytics_workspace_id  = module.log_analytics_workspace[0].id
  }

  timeouts {
    create = "20m"
    delete = "20m"
  }

  depends_on = [
    azurerm_application_gateway.network
  ]
}

# Get the AKS SystemAssigned Identity
data "azurerm_user_assigned_identity" "aks-identity" {
  name                = "${azurerm_kubernetes_cluster.kubernetes_cluster[0].name}-agentpool"
  resource_group_name = "MC_${module.resource_group_kubernetes_cluster[0].name}_aks-prjx-spoke-dev-eastus-001_eastus"

  depends_on          = [module.resource_group_kubernetes_cluster]  
}

# Provide ACR Pull permission to AKS SystemAssigned Identity
resource "azurerm_role_assignment" "acrpull_role" {
  scope                            = module.container_registry[0].id
  role_definition_name             = "AcrPull"
  principal_id                     = data.azurerm_user_assigned_identity.aks-identity.principal_id
  skip_service_principal_aad_check = true

  depends_on                       = [
    data.azurerm_user_assigned_identity.aks-identity
  ]
}

resource "azurerm_role_assignment" "aks_id_network_contributor_subnet" {
  scope                = data.azurerm_subnet.aks-subnet.id
  role_definition_name = "Network Contributor"
  principal_id         = data.azurerm_user_assigned_identity.aks-identity.principal_id

  depends_on = [data.azurerm_user_assigned_identity.aks-identity]
}

resource "azurerm_role_assignment" "aks_id_contributor_agw" {
  scope                = data.azurerm_subnet.appgateway-subnet.id
  role_definition_name = "Network Contributor"
  principal_id         = data.azurerm_user_assigned_identity.aks-identity.principal_id

  depends_on = [data.azurerm_user_assigned_identity.aks-identity]
}

resource "azurerm_role_assignment" "aks_ingressid_contributor_on_agw" {
  scope                            = azurerm_application_gateway.network.id
  role_definition_name             = "Contributor"
  principal_id                     = azurerm_kubernetes_cluster.kubernetes_cluster[0].ingress_application_gateway[0].ingress_application_gateway_identity[0].object_id
  depends_on                       = [azurerm_application_gateway.network,azurerm_kubernetes_cluster.kubernetes_cluster]
  skip_service_principal_aad_check = true
}

resource "azurerm_role_assignment" "aks_ingressid_contributor_on_uami" {
  scope                            = azurerm_user_assigned_identity.identity_uami.id
  role_definition_name             = "Contributor"
  principal_id                     = azurerm_kubernetes_cluster.kubernetes_cluster[0].ingress_application_gateway[0].ingress_application_gateway_identity[0].object_id
  depends_on                       = [azurerm_application_gateway.network,azurerm_kubernetes_cluster.kubernetes_cluster]
  skip_service_principal_aad_check = true
}

resource "azurerm_role_assignment" "uami_contributor_on_agw" {
  scope                            = azurerm_application_gateway.network.id
  role_definition_name             = "Contributor"
  principal_id                     = azurerm_user_assigned_identity.identity_uami.principal_id
  depends_on                       = [azurerm_application_gateway.network,azurerm_user_assigned_identity.identity_uami]
  skip_service_principal_aad_check = true
}
Tags: azure, azure-aks
3 Answers

2
votes

I followed the steps below to create an Azure Kubernetes Service cluster with an Application Gateway:

Follow the documentation to create the Kubernetes Service cluster with the Application Gateway using Terraform.

After the AKS cluster has been provisioned, verify it in the Azure portal:

Azure portal > search for Kubernetes services > select your Kubernetes cluster > Networking.

We tried deploying a sample application and exposing it to the internet with a Service of type LoadBalancer.

After running the code, the resources were created successfully.

Note: a load balancer is created whenever you define type: LoadBalancer in the Service manifest file.

apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: testapp

If the load balancer's public IP is stuck in the pending state, it is usually one of the following causes:

  1. Verify that you have configured a valid service principal. If the service principal has expired, the cluster cannot create the load balancer and the service's external IP remains pending.
  2. Make sure you have enough quota to provision a public IP for the external IP.
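
A third common cause, given the question's setup, is that the cluster identity cannot use a pre-created static public IP. One way to rule this out is to point the Service at that IP explicitly. A minimal sketch, assuming the static IP from the question's `azurerm_public_ip.test` and that the cluster identity has Network Contributor on the IP's resource group (the resource-group name and IP address below are placeholders, not values from the question):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: test-service
  annotations:
    # Tell the Azure cloud provider which resource group holds the
    # pre-created public IP (placeholder name; replace with your own).
    service.beta.kubernetes.io/azure-load-balancer-resource-group: rg-aks-spoke-dev-westus3-001
spec:
  type: LoadBalancer
  # Static IP of the pre-created azurerm_public_ip (placeholder value).
  loadBalancerIP: 20.0.0.1
  ports:
  - port: 80
  selector:
    app: testapp
```

If the IP then still stays pending, the events on the Service usually name the missing permission, which points at the role assignments the other answer describes.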

2
votes

I was missing a permission and fixed it as described below:

# Get the AKS SystemAssigned Identity
data "azuread_service_principal" "aks-sp" {
  display_name  = azurerm_kubernetes_cluster.kubernetes_cluster[0].name
}

resource "azurerm_role_assignment" "akssp_network_contributor_subnet" {
  scope                = data.azurerm_subnet.aks-subnet.id
  role_definition_name = "Network Contributor"
  principal_id         = data.azuread_service_principal.aks-sp.object_id

  depends_on = [data.azuread_service_principal.aks-sp]
}

0
votes

If your Kubernetes Service's system node pool fails to provision, recreate it: first create a backup system node pool (as a temporary measure), then delete the failed node pool, then create the primary node pool again (for example, by re-running Terraform, or manually). If provisioning succeeds this time, delete the newly created backup node pool.
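
The temporary backup pool described above can be sketched in Terraform. A sketch, assuming the azurerm provider and the `azurerm_kubernetes_cluster.kubernetes_cluster` resource from the question; the pool name `npbackup` is hypothetical:

```hcl
# Temporary backup system node pool, so the cluster keeps a healthy
# system pool while the failed primary pool is deleted and recreated.
resource "azurerm_kubernetes_cluster_node_pool" "backup_system" {
  name                  = "npbackup" # hypothetical; lowercase, max 12 chars, a-z0-9
  kubernetes_cluster_id = azurerm_kubernetes_cluster.kubernetes_cluster[0].id
  mode                  = "System"   # system pools may host critical system pods
  vm_size               = "Standard_B2ms"
  node_count            = 1
}
```

Once the primary pool provisions successfully again, remove this resource from the configuration and apply once more to delete the backup pool.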


