I created a k8s cluster, and now I need to add a second disk to my nodes.
Here is my node group configuration:

resource "aws_eks_node_group" "node_group" {
  cluster_name    = aws_eks_cluster.pm_eks.name
  node_group_name = "${var.appConfig_subdomain}-NodeGroup"
  node_role_arn   = aws_iam_role.node-group-iam-role.arn
  subnet_ids = [
    aws_subnet.node-a_subnet.id,
    aws_subnet.node-b_subnet.id,
    aws_subnet.node-c_subnet.id
  ]
  instance_types = [var.aws_node_size]
  disk_size      = var.aws_node_diskSize

  scaling_config {
    desired_size = 3
    max_size     = 5
    min_size     = 3
  }

  remote_access {
    ec2_ssh_key = aws_key_pair.nodegroup-kp.key_name
    source_security_group_ids = [
      aws_security_group.allow_dmz.id
    ]
  }

  provisioner "local-exec" {
    command = "echo '${tls_private_key.nodegroup-kp.private_key_pem}' > ./${var.appConfig_subdomain}-${var.appConfig_env}-nodegroup.pem"
  }

  # Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
  # Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
  depends_on = [
    aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly,
  ]
}
Please help me.
Thanks in advance.
Regards
Why do you need to add a second disk to your nodes? That is not the ideal approach in Kubernetes. Instead, you should look at PersistentVolumeClaims and PersistentVolumes: Kubernetes provisions a disk for the respective Pod automatically and attaches it to whichever node the Pod is running on. Adding disks directly to the worker nodes is not recommended.
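To illustrate, a PersistentVolumeClaim can be declared from the same Terraform codebase. This is a minimal sketch, assuming the Kubernetes provider is configured against your EKS cluster and an EBS-backed default StorageClass exists (the standard gp2 class, or gp3 via the EBS CSI driver); the name "app-data" and the 20Gi size are illustrative, not taken from your config:

```hcl
# Hypothetical PVC: dynamic provisioning creates an EBS volume and
# attaches it to whichever node the consuming Pod is scheduled on.
resource "kubernetes_persistent_volume_claim" "app_data" {
  metadata {
    name = "app-data"
  }

  spec {
    # EBS volumes can only be mounted by a single node at a time.
    access_modes = ["ReadWriteOnce"]

    resources {
      requests = {
        storage = "20Gi"
      }
    }
  }
}
```

A Pod (or Deployment) then references the claim by name under `volumes` / `persistentVolumeClaim`, and Kubernetes handles attaching and mounting the disk; if the Pod is rescheduled to another node, the volume follows it, which is exactly what a disk baked into the node group cannot do.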