Privately hosted Kubernetes storage?

Problem description · Votes: 1 · Answers: 2

I'm looking for a storage solution for Kubernetes so that I can use my UnRaid server as storage for my Kubernetes cluster. Has anyone done something like this?

Any help would be greatly appreciated.

Thanks, Jamie

kubernetes storage
2 Answers
2 votes

Probably the only way to do this is with an NFS Volume. This link gives you an idea of how to set up an Unraid NFS share.

Then you can follow the Kubernetes example on how to use an NFS volume in a Pod.

Basically, your Unraid server will have an IP address, and you can then mount the volume/path in your Pod using that IP address. For example:

kind: Pod
apiVersion: v1
metadata:
  name: pod-using-nfs
spec:
  # Add the server as an NFS volume for the pod
  volumes:
    - name: nfs-volume
      nfs: 
        # URL for the NFS server
        server: 10.108.211.244 # Change this!
        path: /

  # In this container, we'll mount the NFS volume
  # and write the date to a file inside it.
  containers:
    - name: app
      image: alpine

      # Mount the NFS volume in the container
      volumeMounts:
        - name: nfs-volume
          mountPath: /var/nfs

      # Write to a file inside our NFS
      command: ["/bin/sh"]
      args: ["-c", "while true; do date >> /var/nfs/dates.txt; sleep 5; done"]

If you prefer, you can also use a PersistentVolume and PersistentVolumeClaim. For example:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.108.211.244 # Change this!
    path: "/"

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 10Gi
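
If more than one PersistentVolume in the cluster could satisfy this claim, you can pin the claim to the PV above explicitly. This is a minimal sketch of the same PVC with spec.volumeName added, referencing the PV named "nfs" from the example above:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  # Bind this claim directly to the PV named "nfs" defined above
  volumeName: nfs
  resources:
    requests:
      storage: 10Gi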

Then use it in a Deployment or Pod definition:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-busybox
spec:
  replicas: 1
  selector:
    matchLabels:
      name: nfs-busybox
  template:
    metadata:
      labels:
        name: nfs-busybox
    spec:
      containers:
      - image: busybox
        imagePullPolicy: Always
        name: busybox
        volumeMounts:
          # name must match the volume name below
          - name: my-pvc-nfs
            mountPath: "/mnt"
      volumes:
      - name: my-pvc-nfs
        persistentVolumeClaim:
          claimName: nfs

0 votes

You can use Ceph. I use it and it has helped me a lot. You can build a cluster out of your storage and define replication. With Ceph you can also use incremental backups and snapshots.
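
If you go the Ceph route, one common way to consume it from Kubernetes is through a StorageClass backed by the Ceph CSI driver, so PVCs are provisioned dynamically on a replicated pool. The sketch below assumes Ceph is deployed with the Rook operator in the rook-ceph namespace; the class name rook-ceph-block and the pool name replicapool are assumptions, not part of the original answer:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block            # assumed class name
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph             # namespace of the Rook/Ceph cluster (assumed)
  pool: replicapool                # replicated RBD pool (assumed)
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
---
# A claim using the class above; Ceph provisions the block volume on demand.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-block-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: rook-ceph-block
  resources:
    requests:
      storage: 10Gi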
