CephFS cannot be mounted on Kubernetes

Problem description

I set up a Ceph cluster following the official documentation and mounted CephFS manually with the sudo mount -t command, then checked the status of my Ceph cluster - no problems there. Now I am trying to mount my CephFS on Kubernetes, but when I run the kubectl create command the pod gets stuck in ContainerCreating because the volume cannot be mounted. I have looked at many related questions/solutions online, but nothing has worked.
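For reference, the manual mount and health check I ran looked roughly like this (the mount point and admin key are placeholders; the monitor address is the one that also appears in the logs below):

# check cluster health first
sudo ceph -s

# kernel-client mount of the CephFS root (replace <admin-key> with the real admin key)
sudo mkdir -p /mnt/cephfs
sudo mount -t ceph 172.31.15.110:6789:/ /mnt/cephfs -o name=admin,secret=<admin-key>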

For reference, I am following this guide: https://medium.com/velotio-perspectives/an-innovators-guide-to-kubernetes-storage-using-ceph-a4b919f4e469

My setup consists of 5 AWS instances, as follows:

Node 1: Ceph mon

Node 2: OSD1 + MDS

Node 3: OSD2 + K8s master

Node 4: OSD3 + K8s Worker1

Node 5: CephFS + K8s Worker2

Is it okay to stack K8s on the same instances as Ceph? I am fairly sure it is allowed, but if not, please let me know.

This is the error/warning from the kubectl describe pod output:

Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /root/userone/kubelet/pods/bbf28924-3639-11ea-879d-0a6b51accf30/volumes/kubernetes.io~cephfs/pvc-4777686c-3639-11ea-879d-0a6b51accf30 --scope -- mount -t ceph -o name=kubernetes-dynamic-user-4d05a2df-3639-11ea-b2d3-5a4147fda646,secret=AQC4whxeqQ9ZERADD2nUgxxOktLE1OIGXThBmw== 172.31.15.110:6789:/pvc-volumes/kubernetes/kubernetes-dynamic-pvc-4d05a269-3639-11ea-b2d3-5a4147fda646 /root/userone/kubelet/pods/bbf28924-3639-11ea-879d-0a6b51accf30/volumes/kubernetes.io~cephfs/pvc-4777686c-3639-11ea-879d-0a6b51accf30
Output: Running scope as unit run-2382233.scope.
couldn't finalize options: -34
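
One debugging step I can think of (not from the guide) is to run the same mount by hand on the K8s worker the pod was scheduled on; -34 looks like -ERANGE coming back from the mount.ceph helper, so reproducing it outside of kubelet would at least show whether Kubernetes is involved at all:

# run on the worker node; monitor address and CephFS path copied from the kubelet log above,
# secret replaced with a placeholder
sudo mkdir -p /mnt/pvc-test
sudo mount -t ceph 172.31.15.110:6789:/pvc-volumes/kubernetes/kubernetes-dynamic-pvc-4d05a269-3639-11ea-b2d3-5a4147fda646 /mnt/pvc-test \
  -o name=kubernetes-dynamic-user-4d05a2df-3639-11ea-b2d3-5a4147fda646,secret=<key-from-provisioner-secret>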

These are my .yaml files:

Provisioner:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-provisioner-dt
  namespace: test-dt
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update", "create"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns","coredns"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-provisioner-dt
  namespace: test-dt
subjects:
  - kind: ServiceAccount
    name: test-provisioner-dt
    namespace: test-dt
roleRef:
  kind: ClusterRole
  name: test-provisioner-dt
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: test-provisioner-dt
  namespace: test-dt
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
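(For completeness: the ClusterRoleBinding and Role above assume a ServiceAccount named test-provisioner-dt and a matching RoleBinding. I have not pasted those here; a minimal sketch of the two objects, using the same names as above, would be:)

apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-provisioner-dt
  namespace: test-dt
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: test-provisioner-dt
  namespace: test-dt
subjects:
  - kind: ServiceAccount
    name: test-provisioner-dt
    namespace: test-dt
roleRef:
  kind: Role
  name: test-provisioner-dt
  apiGroup: rbac.authorization.k8s.io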

StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: postgres-pv
  namespace: test-dt
provisioner: ceph.com/cephfs
parameters:
  monitors: 172.31.15.110:6789
  adminId: admin
  adminSecretName: ceph-secret-admin-dt
  adminSecretNamespace: test-dt
  claimRoot: /pvc-volumes
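
(The adminSecretName above points at a Secret holding the Ceph admin key. As far as I understand the external cephfs provisioner, it expects that key under the data field named key; a sketch of the Secret, with the real key replaced by a placeholder, would be:)

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-admin-dt
  namespace: test-dt
type: Opaque
stringData:
  # output of: sudo ceph auth get-key client.admin  (placeholder below, not my real key)
  key: <admin-key>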

PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  namespace: test-dt
spec:
  storageClassName: postgres-pv
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
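
(For reference, a minimal test pod that consumes this claim and reproduces the ContainerCreating hang would look something like this; the pod name and image are only illustrative:)

apiVersion: v1
kind: Pod
metadata:
  name: cephfs-test-pod
  namespace: test-dt
spec:
  containers:
    - name: test
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: postgres-pvc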

The output of kubectl get pv and kubectl get pvc shows the volume is bound and claimed, with no errors. The provisioner pod logs all show success / no errors.

Please help!

kubernetes ceph cephfs
1 Answer

Did we ever find a solution to this problem?
