Kubernetes dynamic PV with AWS stuck in Pending


I am creating a Redis cluster on Kubernetes with AWS gp2 persistent volumes, using redis-cluster.yml.

I created a StorageClass, following the documentation, for dynamic persistent volume provisioning.

This is my StorageClass definition:

  kind: StorageClass
  apiVersion: storage.k8s.io/v1
  metadata:
    name: aws-gp2
  provisioner: kubernetes.io/aws-ebs
  parameters:
    type: gp2
    zones: us-west-2a, us-west-2b, us-west-2c
    fsType: ext4
  reclaimPolicy: Retain
  allowVolumeExpansion: true

When I try to create the cluster, the volume creation gets stuck in Pending status.

Checking, I found this in the PVC description:

  $ kubectl -n staging describe pvc data-redis-cluster-0
  Name:          data-redis-cluster-0
  Namespace:     staging
  StorageClass:
  Status:        Pending
  Volume:
  Labels:        app=redis-cluster
  Annotations:   <none>
  Finalizers:    [kubernetes.io/pvc-protection]
  Capacity:
  Access Modes:
  Events:
    Type    Reason         Age                From                         Message
    ----    ------         ----               ----                         -------
    Normal  FailedBinding  13s (x11 over 2m)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
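Note the empty StorageClass field in the output above. One way to confirm whether the cluster has a default StorageClass at all (this diagnostic is not part of the original post) is to list the classes; the default one, if any, is shown with "(default)" next to its name:

```shell
# List all StorageClasses; a default class is marked "(default)" after its name
kubectl get storageclass
```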

And the events:

  $ kubectl -n staging get events
  LAST SEEN  FIRST SEEN  COUNT  NAME                                   KIND                   SUBOBJECT  TYPE     REASON            SOURCE                       MESSAGE
  10s        10s         1      redis-cluster.15816c6dc1d6c03a         StatefulSet                       Normal   SuccessfulCreate  statefulset-controller       create Claim data-redis-cluster-0 Pod redis-cluster-0 in StatefulSet redis-cluster success
  10s        10s         1      redis-cluster.15816c6dc2226fe0         StatefulSet                       Normal   SuccessfulCreate  statefulset-controller       create Pod redis-cluster-0 in StatefulSet redis-cluster successful
  8s         10s         3      data-redis-cluster-0.15816c6dc1dfd0cb  PersistentVolumeClaim             Normal   FailedBinding     persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
  3s         10s         5      redis-cluster-0.15816c6dc229258d       Pod                               Warning  FailedScheduling  default-scheduler            pod has unbound PersistentVolumeClaims (repeated 4 times)

Can someone point out what is wrong here?

kubernetes amazon-eks kubernetes-pvc aws-eks

1 Answer

Since the cluster has no default StorageClass, I had to add storageClassName: aws-gp2 to the volumeClaimTemplates, which solved the issue.
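An alternative that addresses the same root cause (this is not from the original answer): mark aws-gp2 as the cluster's default StorageClass, so PVCs that omit storageClassName bind to it automatically. Kubernetes uses the storageclass.kubernetes.io/is-default-class annotation for this:

```shell
# Alternative fix: make aws-gp2 the default StorageClass for the cluster,
# so claims without an explicit storageClassName use it
kubectl patch storageclass aws-gp2 -p \
  '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```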

Like this, in the StatefulSet's volumeClaimTemplates section.
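The original snippet was not preserved; a minimal sketch of such a volumeClaimTemplates entry, assuming the claim name data (matching the PVC data-redis-cluster-0 above) and an illustrative 1Gi request:

```yaml
volumeClaimTemplates:
- metadata:
    name: data                  # matches the PVC prefix in data-redis-cluster-0
  spec:
    accessModes: [ "ReadWriteOnce" ]
    storageClassName: aws-gp2   # reference the StorageClass explicitly
    resources:
      requests:
        storage: 1Gi            # assumed size; the original value was not preserved
```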