Elasticsearch High Availability Setup in Kubernetes

Problem Description

We want to set up a highly available Elasticsearch cluster in Kubernetes. We would like to deploy the following objects and be able to scale them independently:

  1. Master pods
  2. Data pods
  3. Client pods

If you have implemented such a setup, please share your recommendations, preferably based on open-source tooling.

elasticsearch kubernetes high-availability
1 Answer

Please find below some pointers for the suggested architecture:

  1. Elasticsearch master nodes do not need persistent storage, so use a Deployment to manage them, and a Service for discovery and load balancing between the masters.

Use a ConfigMap to manage their settings. Something like this:

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-discovery
  labels:
    component: elasticsearch
    role: master
    version: v6.5.0  # or whatever version you require
spec:
  selector:
    component: elasticsearch
    role: master
    version: v6.5.0
  ports:
    - name: transport
      port: 9300  # no need to expose port 9200, as master nodes don't need it
      protocol: TCP
  clusterIP: None
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: elasticsearch-master-configmap
data:
  elasticsearch.yml: |
    # these should get you going
    # if you want more fine-grained control, feel free to add other ES settings
    cluster.name: "${CLUSTER_NAME}"
    node.name: "${NODE_NAME}"

    network.host: 0.0.0.0

    # (number_of_master_eligible_nodes / 2) + 1 = 2 with 3 masters
    discovery.zen.minimum_master_nodes: 2
    discovery.zen.ping.unicast.hosts: ${DISCOVERY_SERVICE}

    node.master: true
    node.data: false
    node.ingest: false
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch-master
  labels:
    component: elasticsearch
    role: master
    version: v6.5.0
spec:
  replicas: 3  # 3 is the recommended minimum
  selector:
    matchLabels:
      component: elasticsearch
      role: master
      version: v6.5.0
  template:
    metadata:
      labels:
        component: elasticsearch
        role: master
        version: v6.5.0
    spec:
      affinity:
        # you can also add node affinity in case you have a specific node pool
        podAntiAffinity:
          # make sure 2 ES processes don't end up on the same machine
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: component
                    operator: In
                    values:
                      - elasticsearch
                  - key: role
                    operator: In
                    values:
                      - master
              topologyKey: kubernetes.io/hostname
      initContainers:
        # just basic ES environment configuration
        - name: init-sysctl
          image: busybox:1.27.2
          command:
            - sysctl
            - -w
            - vm.max_map_count=262144
          securityContext:
            privileged: true
      containers:
        - name: elasticsearch-master
          image: your-elasticsearch-image  # your preferred Elasticsearch image
          imagePullPolicy: Always
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: CLUSTER_NAME
              value: elasticsearch-cluster
            - name: DISCOVERY_SERVICE
              value: elasticsearch-discovery
            - name: ES_JAVA_OPTS
              value: "-Xms256m -Xmx256m"  # or more, if you want
          ports:
            - name: tcp-transport
              containerPort: 9300
          volumeMounts:
            - name: configmap
              mountPath: /etc/elasticsearch/elasticsearch.yml
              subPath: elasticsearch.yml
            - name: storage
              mountPath: /usr/share/elasticsearch/data
      volumes:
        - name: configmap
          configMap:
            name: elasticsearch-master-configmap
        - emptyDir:
            medium: ""
          name: storage

Client (coordinating-only) nodes can be deployed in much the same way, so I won't repeat the full manifests here; a short sketch of the part that differs follows below.
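
For reference, this is a minimal sketch of the client-node settings, assuming the same Deployment/Service/ConfigMap pattern shown above for the masters (the name elasticsearch-client-configmap is just an illustrative choice):

apiVersion: v1
kind: ConfigMap
metadata:
  name: elasticsearch-client-configmap
data:
  elasticsearch.yml: |
    cluster.name: "${CLUSTER_NAME}"
    node.name: "${NODE_NAME}"

    network.host: 0.0.0.0

    # (number_of_master_eligible_nodes / 2) + 1 = 2 with 3 masters
    discovery.zen.minimum_master_nodes: 2
    discovery.zen.ping.unicast.hosts: ${DISCOVERY_SERVICE}

    # coordinating-only nodes: not master-eligible, hold no data
    node.master: false
    node.data: false
    node.ingest: false

The Deployment itself would mirror the master one, except that the container also exposes containerPort 9200, and the Service in front of it is what your applications talk to.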

  2. Data nodes are a bit special: they need persistent storage, so you have to use a StatefulSet. Use PersistentVolumeClaims to provision the disks for these pods. I would do something like this:

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  labels:
    component: elasticsearch
    role: data
    version: v6.5.0
spec:
  ports:
    - name: http
      port: 9200 # in this example, data nodes are being used as client nodes
    - port: 9300
      name: transport
  selector:
    component: elasticsearch
    role: data
    version: v6.5.0
  type: ClusterIP
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: elasticsearch-data-configmap
data:
  elasticsearch.yml: |
    cluster.name: "${CLUSTER_NAME}"
    node.name: "${NODE_NAME}"

    network.host: 0.0.0.0

    # (number_of_master_eligible_nodes / 2) + 1 = 2 with 3 masters
    discovery.zen.minimum_master_nodes: 2
    discovery.zen.ping.unicast.hosts: ${DISCOVERY_SERVICE}

    node.master: false
    node.data: true
    node.ingest: false
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-data
  labels:
    component: elasticsearch
    role: data
    version: v6.5.0
spec:
  serviceName: elasticsearch
  replicas: 1 # choose the appropriate number
  selector:
    matchLabels:
      component: elasticsearch
      role: data
      version: v6.5.0
  template:
    metadata:
      labels:
        component: elasticsearch
        role: data
        version: v6.5.0
    spec:
      affinity:
        # again, I recommend using nodeAffinity
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: component
                    operator: In
                    values:
                      - elasticsearch
                  - key: role
                    operator: In
                    values:
                      - data
              topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 180
      initContainers:
        - name: init-sysctl
          image: busybox:1.27.2
          command:
            - sysctl
            - -w
            - vm.max_map_count=262144
          securityContext:
            privileged: true
      containers:
        - name: elasticsearch-production-container
          image: your-elasticsearch-image  # the same image that you use for the master nodes
          imagePullPolicy: Always
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: CLUSTER_NAME
              value: elasticsearch-cluster
            - name: DISCOVERY_SERVICE
              value: elasticsearch-discovery
            - name: ES_JAVA_OPTS
              value: "-Xms31g -Xmx31g"  # keep the heap below 32 GB so compressed object pointers stay enabled
          ports:
            - name: http
              containerPort: 9200
            - name: tcp-transport
              containerPort: 9300
          volumeMounts:
            - name: configmap
              mountPath: /etc/elasticsearch/elasticsearch.yml
              subPath: elasticsearch.yml
            - name: elasticsearch-node-pvc
              mountPath: /usr/share/elasticsearch/data
          readinessProbe:
            httpGet:
              path: /_cluster/health?local=true
              port: 9200
            initialDelaySeconds: 15
          livenessProbe:
            exec:
              command:
                - /usr/bin/pgrep
                - -x
                - "java"
            initialDelaySeconds: 15
          resources:
            requests:
              # adjust these as per your needs
              memory: "32Gi"
              cpu: "11"
      volumes:
        - name: configmap
          configMap:
            name: elasticsearch-data-configmap
  volumeClaimTemplates:
    - metadata:
        name: elasticsearch-node-pvc
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: # this is dependent on your K8s environment
        resources:
          requests:
            storage: 350Gi # choose the desired storage size for each ES data node
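
Once everything is applied, a quick sanity check is to query the elasticsearch Service defined above and confirm that all nodes joined the cluster. A throwaway Job like the sketch below works (the Job name is illustrative and curlimages/curl is an assumed helper image; any image with curl will do):

apiVersion: batch/v1
kind: Job
metadata:
  name: elasticsearch-smoke-test
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: check
          image: curlimages/curl:8.5.0  # assumed helper image
          command:
            - sh
            - -c
            # list the nodes that joined and print the overall cluster health
            - >
              curl -sf 'http://elasticsearch:9200/_cat/nodes?v' &&
              curl -sf 'http://elasticsearch:9200/_cluster/health?pretty'

One line per pod in _cat/nodes and a non-red cluster status tell you that discovery and the anti-affinity placement are working as intended.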

Hope this helps!
