Flink task managers fail to mount volume "hadoop-config-volume" with the Flink Kubernetes Operator

Problem description (0 votes, 3 answers)

I am developing an application using Flink Kubernetes Operator version 1.1.0, and the spawned task manager pods fail with the following error messages:

MountVolume.SetUp failed for volume "hadoop-config-volume" : configmap "hadoop-config-name" not found

Unable to attach or mount volumes: unmounted volumes=[hadoop-config-volume], unattached volumes=[hadoop-xml hadoop-config-volume flink-config-volume flink-token-kk558]: timed out waiting for the condition

My flink app.yaml:

      apiVersion: flink.apache.org/v1beta1
      kind: FlinkDeployment
      metadata:
        name: "${DEPLOYMENT_NAME}"
        namespace: data
      spec:
        flinkConfiguration:
          taskmanager.numberOfTaskSlots: "2"
        flinkVersion: v1_15
        image: "${IMAGE_TAG}"
        imagePullPolicy: Always
        job:
          jarURI: local:///opt/flink/opt/executable.jar
          parallelism: 2
          state: running
          upgradeMode: stateless
        jobManager:
          resource:
            cpu: 1
            memory: 1024m
        podTemplate:
          apiVersion: v1
          kind: Pod
          metadata:
            namespace: bigdata
          spec:
            containers:
              - name: flink-main-container
                env:
                  - name: HADOOP_CONF_DIR
                    value: /hadoop/conf
                envFrom:
                  - configMapRef:
                      name: data-connection
                volumeMounts:
                  - mountPath: /hadoop/conf
                    name: hadoop-xml
            imagePullSecrets:
              - name: registry
            serviceAccount: flink
            volumes:
              - configMap:
                  name: hadoop-conf
                name: hadoop-xml
        serviceAccount: flink
        taskManager:
          resource:
            cpu: 2
            memory: 5000m

From the documentation, I believe hadoop-config-name is an internal ConfigMap created by Flink to ship the HDFS configuration to the task managers. I have already mounted my own ConfigMap (containing "core-site.xml" and "hdfs-site.xml") at the $HADOOP_CONF_DIR directory.
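For reference, my hadoop-conf ConfigMap looks roughly like this (a sketch; the actual XML bodies are specific to my cluster, and the ConfigMap has to live in the namespace the pods run in):

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: hadoop-conf
        namespace: data
      data:
        core-site.xml: |
          <configuration>
            <!-- e.g. fs.defaultFS pointing at the HDFS namenode -->
          </configuration>
        hdfs-site.xml: |
          <configuration>
            <!-- HDFS client settings -->
          </configuration>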

Is this a Flink bug, or is there something wrong with my setup?

kubernetes hadoop apache-flink configmap
3 Answers

0 votes

For anyone facing the same issue: I fixed it by changing HADOOP_CONF_DIR -> HADOOP_CLASSPATH!
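Concretely, in the pod template from the question only the variable name changes, something like this (a sketch; I am assuming the value can stay as-is for your image):

      env:
        - name: HADOOP_CLASSPATH   # was HADOOP_CONF_DIR; with this name Flink no longer tries to mount its internal hadoop-config ConfigMap
          value: /hadoop/conf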


0 votes

Flink detects the HADOOP_CONF_DIR environment variable and, if it is set, creates a Hadoop conf ConfigMap for the pods (see https://github.com/apache/flink/blob/2851fac9c4c052876c80440b6b0b637603de06ea/flink-kubernetes/src/main/java/org/apache/flink/kubernetes/kubeclient/decorators/HadoopConfMountDecorator.java#L86).

My guess is that you are hitting the error because the operator process itself cannot access HADOOP_CONF_DIR, so the ConfigMap the volume points at is never created.
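For context, when that decorator does run it rewrites the pod spec to add roughly the following (my sketch, reconstructed from the linked source; the ConfigMap name follows the hadoop-config-<cluster-id> pattern, and the mount path is, as far as I know, Flink's built-in default):

      volumes:
        - name: hadoop-config-volume            # the volume from the error message
          configMap:
            name: hadoop-config-<cluster-id>    # created by Flink only when it can read HADOOP_CONF_DIR itself
      containers:
        - name: flink-main-container
          volumeMounts:
            - name: hadoop-config-volume
              mountPath: /opt/hadoop/conf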


0 votes

Since Flink 1.17.0, if you set the Flink config option kubernetes.decorator.hadoop-conf-mount.enabled to false, you can use the HADOOP_CONF_DIR environment variable without any problem.
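Applied to the manifest from the question, that would look something like this (assuming an image with Flink 1.17+):

      spec:
        flinkVersion: v1_17
        flinkConfiguration:
          taskmanager.numberOfTaskSlots: "2"
          kubernetes.decorator.hadoop-conf-mount.enabled: "false"   # keep HADOOP_CONF_DIR, skip the internal mount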

Links: documentation / pull request
