How do I enable log collection only with the Datadog Operator?


I'm trying to help a customer enable only log collection for their Kubernetes setup. Our documentation shows how to do this with the Helm chart, but for the Operator I'm not sure where it belongs in their deployment files. The customer's deployment file is:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: new-fenestra
  namespace: kelvin-application
spec:
  selector:
    matchLabels:
      app: new-fenestra
  template:
    metadata:
      labels:
        app: new-fenestra
        app.kubernetes.io/instance: new-fenestra
      annotations:
        # datadog container autodiscovery
        ad.datadoghq.com/new-fenestra.logs: '[{"source": "new-fenestra", "service": "new-fenestra"}]'
    spec:
      # kubernetes_service_account.kube_sa
      serviceAccountName: app-cluster-ksa
      containers:
      - name: new-fenestra
        image: us-central1-docker.pkg.dev/kelvin-application-replacewithenvironment/aurelian/new-fenestra:image-tag-replacement
        command:
        - /app/entry
        - "start"
        ports:
        - containerPort: 4000
        resources:
          limits:
            memory: "1Gi"
            cpu: "500m"
        env:
        - name: MIX_ENV
          value: "prod"
        - name: URL
          valueFrom:
            secretKeyRef:
              name: new-fenestra-secrets
              key: host-url

      - name: cloud-sql-proxy
        image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.1.0
        resources:
          limits:
            memory: "512Mi"
            cpu: "250m"
        args:
        - "--private-ip"
        - "--structured-logs"
        - "--port=5432"
        - "kelvin-application-replacewithenvironment:us-central1:aurelianvpc"
        securityContext:
          runAsNonRoot: true

      - name: otel-collector
        image: otel/opentelemetry-collector-contrib:0.71.0
        resources:
          limits:
            memory: "256Mi"
            cpu: "250m"
        args:
        - --config=/conf/collector.yaml
        env:
        - name: DATADOG_API_KEY
          valueFrom:
            secretKeyRef:
              name: new-fenestra-secrets
              key: datadog-api-key
        volumeMounts:
        - mountPath: /conf
          name: otel-collector-config
      volumes:
      - configMap:
          items:
          - key: collector.yaml
            path: collector.yaml
          name: otel-collector-config
        name: otel-collector-config
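The `collector.yaml` mounted at `/conf` comes from the `otel-collector-config` ConfigMap, which is not shown in the question. For context, a minimal sketch of what that ConfigMap might contain, assuming an OTLP receiver feeding the Datadog exporter (the receiver, endpoint, and pipeline details here are assumptions, not the customer's actual config):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-collector-config
  namespace: kelvin-application
data:
  collector.yaml: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
    exporters:
      datadog:
        api:
          site: us5.datadoghq.com
          # expanded from the DATADOG_API_KEY env var set on the container
          key: ${DATADOG_API_KEY}
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [datadog]
```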

Here is the Agent deployment file, with the environment variables our documentation recommends:

kind: DatadogAgent
apiVersion: datadoghq.com/v2alpha1
metadata:
  name: datadog-agent
  namespace: kelvin-application
spec:
  global:
    site: us5.datadoghq.com
    credentials:
      apiSecret:
        secretName: datadog-secret
        keyName: api-key
      appSecret:
        secretName: datadog-secret
        keyName: app-key
  override:
    clusterAgent:
      image:
        name: gcr.io/datadoghq/cluster-agent:latest
    nodeAgent:
      image:
        name: gcr.io/datadoghq/agent:latest
      env:
        - name: DD_ENABLE_PAYLOADS_EVENTS
          value: "false"
        - name: DD_ENABLE_PAYLOADS_SERIES
          value: "false"
        - name: DD_ENABLE_PAYLOADS_SERVICE_CHECKS
          value: "false"
        - name: DD_ENABLE_PAYLOADS_SKETCHES
          value: "false"

  features:
    logCollection:
      enabled: true
      containerCollectAll: false

I tried adding the environment variables to the Agent file.

1 Answer

Please try the updated manifest below to fix your issue (hopefully!):

kind: DatadogAgent
apiVersion: datadoghq.com/v2alpha1
metadata:
  name: datadog-agent
  namespace: kelvin-application
spec:
  global:
    site: us5.datadoghq.com
    credentials:
      apiSecret:
        secretName: datadog-secret
        keyName: api-key
      appSecret:
        secretName: datadog-secret
        keyName: app-key
  override:
    clusterAgent:
      image:
        name: gcr.io/datadoghq/cluster-agent:latest
    nodeAgent:
      image:
        name: gcr.io/datadoghq/agent:latest
      env:
        - name: DD_ENABLE_PAYLOADS_EVENTS
          value: "false"
        - name: DD_ENABLE_PAYLOADS_SERIES
          value: "false"
        - name: DD_ENABLE_PAYLOADS_SERVICE_CHECKS
          value: "false"
        - name: DD_ENABLE_PAYLOADS_SKETCHES
          value: "false"

  features:
    logCollection:
      enabled: true
      containerCollectAll: false
      containers:
        - name: new-fenestra
          service: new-fenestra

Your logCollection block does not specify which containers to collect logs from.

In the features section, set logCollection.enabled to true to enable log collection. By default, containerCollectAll is set to false, which means logs will only be collected from the specified containers.

Under features.logCollection.containers, add the containers you want to collect logs from. In this case, you can add the new-fenestra container and specify the service as new-fenestra.
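Note also that with `containerCollectAll: false`, the node Agent honors per-pod autodiscovery annotations: a container is tailed when its pod carries an `ad.datadoghq.com/<container>.logs` annotation. The new-fenestra Deployment in the question already includes one:

```yaml
metadata:
  annotations:
    # autodiscovery log config for the container named "new-fenestra"
    ad.datadoghq.com/new-fenestra.logs: '[{"source": "new-fenestra", "service": "new-fenestra"}]'
```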

Make sure to apply the modified DatadogAgent deployment file to your Kubernetes cluster for the changes to take effect.

After completing these steps it should work; if not, just let me know!