Installing kong-ingress-controller to manage ingress on Kubernetes

Question (votes: 2, answers: 1)

I am installing the Kong ingress controller on my AKS cluster, but I do not want the Postgres StatefulSet service running inside the cluster. Instead, I have a Postgres database in my Azure infrastructure, and I want to connect to it from my kong-ingress-controller deployment by creating the Postgres credentials as secrets in my AKS cluster and storing them in environment variables.

I created the secret:

⟩ kubectl create secret generic az-pg-db-user-pass --from-literal=username='az-pg-username' --from-literal=password='az-pg-password' --namespace kong 
secret/az-pg-db-user-pass created
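For reference, a quick way to verify the secret and its keys before wiring it into the manifests (standard kubectl; describe shows key names and sizes, not values, and the jsonpath/base64 pipe decodes one key):

⟩ kubectl describe secret az-pg-db-user-pass -n kong
⟩ kubectl get secret az-pg-db-user-pass -n kong -o jsonpath='{.data.username}' | base64 -d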

In my kongwithingress.yaml file I have the deployment manifest declarations, which I share from this gist link so as not to fill the question body with many lines of YAML.

That gist is based on the AKS all-in-one deployment, but with the Postgres StatefulSet and Service removed for the reasons above; my goal is to establish the connection to my own Azure-managed Postgres service.

I have referenced the az-pg-db-user-pass generic secret created above in the kong-ingress-controller deployment, in the kong deployment, and in the kong-migrations job throughout my gist script, in order to create environment variables such as:

KONG_PG_USERNAME
KONG_PG_PASSWORD

These environment variables have been created and referenced as secrets in the kong-ingress-controller deployment, the kong deployment, and the kong-migrations job, all of which need to access or connect to the Postgres database.
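For reference, a minimal sketch of how those variables are wired from the secret inside a container spec (the variable names and secret keys match the ones above; the rest of the container definition is omitted):

env:
- name: KONG_PG_USERNAME
  valueFrom:
    secretKeyRef:
      name: az-pg-db-user-pass   # the generic secret created above
      key: username
- name: KONG_PG_PASSWORD
  valueFrom:
    secretKeyRef:
      name: az-pg-db-user-pass
      key: password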

When I run the kubectl apply -f kongwithingres.yaml command I get the following output:

The kong-ingress-controller deployment, the kong deployment, and the kong-migrations job are created successfully:

⟩ kubectl apply -f kongwithingres.yaml 
namespace/kong unchanged
customresourcedefinition.apiextensions.k8s.io/kongplugins.configuration.konghq.com unchanged
customresourcedefinition.apiextensions.k8s.io/kongconsumers.configuration.konghq.com unchanged
customresourcedefinition.apiextensions.k8s.io/kongcredentials.configuration.konghq.com unchanged
customresourcedefinition.apiextensions.k8s.io/kongingresses.configuration.konghq.com unchanged
serviceaccount/kong-serviceaccount unchanged
clusterrole.rbac.authorization.k8s.io/kong-ingress-clusterrole unchanged
role.rbac.authorization.k8s.io/kong-ingress-role unchanged
rolebinding.rbac.authorization.k8s.io/kong-ingress-role-nisa-binding unchanged
clusterrolebinding.rbac.authorization.k8s.io/kong-ingress-clusterrole-nisa-binding unchanged
service/kong-ingress-controller created
deployment.extensions/kong-ingress-controller created
service/kong-proxy created
deployment.extensions/kong created
job.batch/kong-migrations created
[I] 

But their respective pods have CrashLoopBackOff status:

NAME                                          READY   STATUS                  RESTARTS   AGE
pod/kong-d8b88df99-j6hvl                      0/1     Init:CrashLoopBackOff   5          4m24s
pod/kong-ingress-controller-984fc9666-cd2b5   0/2     Init:CrashLoopBackOff   5          4m24s
pod/kong-migrations-t6n7p                     0/1     CrashLoopBackOff        5          4m24s

I checked the corresponding logs of each pod and found the following:

pod/kong-d8b88df99-j6hvl

⟩ kubectl logs pod/kong-d8b88df99-j6hvl -p -n kong 
Error from server (BadRequest): previous terminated container "kong-proxy" in pod "kong-d8b88df99-j6hvl" not found

In its describe information, this pod is picking up the environment variables and the image:

⟩ kubectl describe pod/kong-d8b88df99-j6hvl -n kong
Name:               kong-d8b88df99-j6hvl
Namespace:          kong

Status:             Pending
IP:                 10.244.1.18
Controlled By:      ReplicaSet/kong-d8b88df99
Init Containers:
  wait-for-migrations:
    Container ID:  docker://7007a89ada215daf853ec103d79dca60ccc5fb3a14c51ac6c5c56655da6da62f
    Image:         kong:1.0.0
    Image ID:      docker-pullable://kong@sha256:8fd6a312d7715a9cc85c49625a4c2f53951f6e4422926091e4d2ae67c480b6d5
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      kong migrations list
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 26 Feb 2019 16:25:01 +0100
      Finished:     Tue, 26 Feb 2019 16:25:01 +0100
    Ready:          False
    Restart Count:  6
    Environment:
      KONG_ADMIN_LISTEN:      off
      KONG_PROXY_LISTEN:      off
      KONG_PROXY_ACCESS_LOG:  /dev/stdout
      KONG_ADMIN_ACCESS_LOG:  /dev/stdout
      KONG_PROXY_ERROR_LOG:   /dev/stderr
      KONG_ADMIN_ERROR_LOG:   /dev/stderr
      KONG_PG_HOST:           zcrm365-postgresql1.postgres.database.azure.com
      KONG_PG_USERNAME:       <set to the key 'username' in secret 'az-pg-db-user-pass'>  Optional: false
      KONG_PG_PASSWORD:       <set to the key 'password' in secret 'az-pg-db-user-pass'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gnkjq (ro)
Containers:
  kong-proxy:
    Container ID:   
    Image:          kong:1.0.0
    Image ID:       
    Ports:          8000/TCP, 8443/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:
      KONG_PG_USERNAME:              <set to the key 'username' in secret 'az-pg-db-user-pass'>  Optional: false
      KONG_PG_PASSWORD:              <set to the key 'password' in secret 'az-pg-db-user-pass'>  Optional: false
      KONG_PG_HOST:                  zcrm365-postgresql1.postgres.database.azure.com
      KONG_PROXY_ACCESS_LOG:         /dev/stdout
      KONG_PROXY_ERROR_LOG:          /dev/stderr
      KONG_ADMIN_LISTEN:             off
      KUBERNETES_PORT_443_TCP_ADDR:  zcrm365-d73ab78d.hcp.westeurope.azmk8s.io
      KUBERNETES_PORT:               tcp://zcrm365-d73ab78d.hcp.westeurope.azmk8s.io:443
      KUBERNETES_PORT_443_TCP:       tcp://zcrm365-d73ab78d.hcp.westeurope.azmk8s.io:443
      KUBERNETES_SERVICE_HOST:       zcrm365-d73ab78d.hcp.westeurope.azmk8s.io
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gnkjq (ro)
Conditions:
  Type              Status
  Initialized       False 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  default-token-gnkjq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-gnkjq
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                     From                             Message
  ----     ------     ----                    ----                             -------
  Normal   Scheduled  8m44s                   default-scheduler                Successfully assigned kong/kong-d8b88df99-j6hvl to aks-default-75800594-1
  Normal   Pulled     7m9s (x5 over 8m40s)    kubelet, aks-default-75800594-1  Container image "kong:1.0.0" already present on machine
  Normal   Created    7m8s (x5 over 8m40s)    kubelet, aks-default-75800594-1  Created container
  Normal   Started    7m7s (x5 over 8m40s)    kubelet, aks-default-75800594-1  Started container
  Warning  BackOff    3m34s (x26 over 8m38s)  kubelet, aks-default-75800594-1  Back-off restarting failed container

pod/kong-ingress-controller-984fc9666-cd2b5

 kubectl logs pod/kong-ingress-controller-984fc9666-cd2b5 -p -n kong 
Error from server (BadRequest): a container name must be specified for pod kong-ingress-controller-984fc9666-cd2b5, choose one of: [admin-api ingress-controller] or one of the init containers: [wait-for-migrations]
[I]

And its respective describe output:

⟩ kubectl describe pod/kong-ingress-controller-984fc9666-cd2b5 -n kong
Name:               kong-ingress-controller-984fc9666-cd2b5
Namespace:          kong

Status:             Pending
IP:                 10.244.2.18
Controlled By:      ReplicaSet/kong-ingress-controller-984fc9666
Init Containers:
  wait-for-migrations:
    Container ID:  docker://8eb035f755322b3ac72792d922974811933ba9a71afb1f4549cfe7e0a6519619
    Image:         kong:1.0.0
    Image ID:      docker-pullable://kong@sha256:8fd6a312d7715a9cc85c49625a4c2f53951f6e4422926091e4d2ae67c480b6d5
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      kong migrations list
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 26 Feb 2019 16:29:56 +0100
      Finished:     Tue, 26 Feb 2019 16:29:56 +0100
    Ready:          False
    Restart Count:  7
    Environment:
      KONG_ADMIN_LISTEN:      off
      KONG_PROXY_LISTEN:      off
      KONG_PROXY_ACCESS_LOG:  /dev/stdout
      KONG_ADMIN_ACCESS_LOG:  /dev/stdout
      KONG_PROXY_ERROR_LOG:   /dev/stderr
      KONG_ADMIN_ERROR_LOG:   /dev/stderr
      KONG_PG_HOST:           zcrm365-postgresql1.postgres.database.azure.com
      KONG_PG_USERNAME:       <set to the key 'username' in secret 'az-pg-db-user-pass'>  Optional: false
      KONG_PG_PASSWORD:       <set to the key 'password' in secret 'az-pg-db-user-pass'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kong-serviceaccount-token-rc4sp (ro)
Containers:
  admin-api:
    Container ID:   
    Image:          kong:1.0.0
    Image ID:       
    Port:           8001/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:8001/status delay=30s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8001/status delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      KONG_PG_USERNAME:              <set to the key 'username' in secret 'az-pg-db-user-pass'>  Optional: false
      KONG_PG_PASSWORD:              <set to the key 'password' in secret 'az-pg-db-user-pass'>  Optional: false
      KONG_PG_HOST:                  zcrm365-postgresql1.postgres.database.azure.com
      KONG_ADMIN_ACCESS_LOG:         /dev/stdout
      KONG_ADMIN_ERROR_LOG:          /dev/stderr
      KONG_ADMIN_LISTEN:             0.0.0.0:8001, 0.0.0.0:8444 ssl
      KONG_PROXY_LISTEN:             off
      KUBERNETES_PORT_443_TCP_ADDR:  zcrm365-d73ab78d.hcp.westeurope.azmk8s.io
      KUBERNETES_PORT:               tcp://zcrm365-d73ab78d.hcp.westeurope.azmk8s.io:443
      KUBERNETES_PORT_443_TCP:       tcp://zcrm365-d73ab78d.hcp.westeurope.azmk8s.io:443
      KUBERNETES_SERVICE_HOST:       zcrm365-d73ab78d.hcp.westeurope.azmk8s.io
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kong-serviceaccount-token-rc4sp (ro)
  ingress-controller:
    Container ID:  
    Image:         kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.3.0
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Args:
      /kong-ingress-controller
      --kong-url=https://localhost:8444
      --admin-tls-skip-verify
      --default-backend-service=kong/kong-proxy
      --publish-service=kong/kong-proxy
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:10254/healthz delay=30s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:10254/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:                      kong-ingress-controller-984fc9666-cd2b5 (v1:metadata.name)
      POD_NAMESPACE:                 kong (v1:metadata.namespace)
      KUBERNETES_PORT_443_TCP_ADDR:  zcrm365-d73ab78d.hcp.westeurope.azmk8s.io
      KUBERNETES_PORT:               tcp://zcrm365-d73ab78d.hcp.westeurope.azmk8s.io:443
      KUBERNETES_PORT_443_TCP:       tcp://zcrm365-d73ab78d.hcp.westeurope.azmk8s.io:443
      KUBERNETES_SERVICE_HOST:       zcrm365-d73ab78d.hcp.westeurope.azmk8s.io
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kong-serviceaccount-token-rc4sp (ro)
Conditions:
  Type              Status
  Initialized       False 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kong-serviceaccount-token-rc4sp:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kong-serviceaccount-token-rc4sp
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                   From                             Message
  ----     ------     ----                  ----                             -------
  Normal   Scheduled  12m                   default-scheduler                Successfully assigned kong/kong-ingress-controller-984fc9666-cd2b5 to aks-default-75800594-2
  Normal   Pulled     10m (x5 over 12m)     kubelet, aks-default-75800594-2  Container image "kong:1.0.0" already present on machine
  Normal   Created    10m (x5 over 12m)     kubelet, aks-default-75800594-2  Created container
  Normal   Started    10m (x5 over 12m)     kubelet, aks-default-75800594-2  Started container
  Warning  BackOff    2m14s (x49 over 12m)  kubelet, aks-default-75800594-2  Back-off restarting failed container
[I] 
~/workspace/ZCRM365/Deployments/Kubernetes/kong · (Deployments±)
⟩ 

I don't know the reason for the CrashLoopBackOff status and the Waiting: PodInitializing state of the respective containers.

How can I debug this behavior? Is it possible that Kong cannot talk to the Postgres database?
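For reference, since the failing container is the wait-for-migrations init container, its own logs can be fetched with the -c flag, and the migrations job can be inspected directly (pod and job names taken from the listings above):

⟩ kubectl logs pod/kong-d8b88df99-j6hvl -n kong -c wait-for-migrations
⟩ kubectl logs job/kong-migrations -n kong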

My AKS cluster is on Azure, and so is my Postgres database; they can communicate with each other as services.

UPDATE

These are the logs of the containers of my pods, which stay in Waiting: PodInitializing:

⟩ kubectl logs pod/kong-d8b88df99-qsq4j -p -n kong -c kong-proxy
Error from server (BadRequest): previous terminated container "kong-proxy" in pod "kong-d8b88df99-qsq4j" not found

⟩ kubectl logs pod/kong-ingress-controller-984fc9666-w4vvn -p -n kong -c ingress-controller
Error from server (BadRequest): previous terminated container "ingress-controller" in pod "kong-ingress-controller-984fc9666-w4vvn" not found
[I]
Tags: postgresql, yaml, kubernetes-ingress, azure-kubernetes
1 Answer

3 votes

My kong-ingress-controller deployment pods were in CrashLoopBackOff (and sometimes in Waiting: PodInitializing) because I had not considered some things, such as the following:

  • The main reason, as pointed out by @Amityo, is that the kong-ingress-controller and kong deployments have an init container called wait-for-migrations, which waits for the Kong migrations to have been executed. Here I could identify that my kong migrations job had to run first.
  • But my kong-migrations job could not work, because I had not set the KONG_DATABASE environment variable parameter for the connection.
  • Another reason my deployment did not work is that Kong, in order to connect internally with Postgres, may expect the user environment variable defined in the container to be called KONG_PG_USER. I had named it KONG_PG_USERNAME, which was another reason the execution of my script failed. (I am not completely sure about this.) A corrected sketch of the environment block follows this list.
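For illustration, a minimal sketch of the corrected environment block, assuming Kong's documented KONG_DATABASE and KONG_PG_USER variable names (host and secret taken from the manifests above):

env:
- name: KONG_DATABASE              # tells Kong which datastore to use
  value: "postgres"
- name: KONG_PG_HOST
  value: zcrm365-postgresql1.postgres.database.azure.com
- name: KONG_PG_USER               # Kong reads KONG_PG_USER, not KONG_PG_USERNAME
  valueFrom:
    secretKeyRef:
      name: az-pg-db-user-pass
      key: username
- name: KONG_PG_PASSWORD
  valueFrom:
    secretKeyRef:
      name: az-pg-db-user-pass
      key: password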

After fixing those points, recreating the resources from my script worked:

⟩ kubectl create -f kongwithingres.yaml
namespace/kong created
secret/az-pg-db-user-pass created
customresourcedefinition.apiextensions.k8s.io/kongplugins.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongconsumers.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongcredentials.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongingresses.configuration.konghq.com created
serviceaccount/kong-serviceaccount created
clusterrole.rbac.authorization.k8s.io/kong-ingress-clusterrole created
role.rbac.authorization.k8s.io/kong-ingress-role created
rolebinding.rbac.authorization.k8s.io/kong-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/kong-ingress-clusterrole-nisa-binding created
service/kong-ingress-controller created
deployment.extensions/kong-ingress-controller created
service/kong-proxy created
deployment.extensions/kong created
job.batch/kong-migrations created

By the way, to get started with Kong I recommend installing konga, a front-end dashboard tool to manage Kong and to check the things we can build through the YAML files.

We install it with this YAML script, as a deployment in our Kubernetes cluster:

konga.yaml

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: konga
  namespace: kong
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: konga
    spec:
      containers:
      - env:
        - name: NODE_TLS_REJECT_UNAUTHORIZED
          value: "0"
        image: pantsel/konga:latest
        name: konga
        ports:
        - containerPort: 1337

And we can start the service locally on our machine with the kubectl port-forward command:

⟩ kubectl port-forward pod/konga-85b66cffff-mxq85 1337:1337 -n kong
Forwarding from 127.0.0.1:1337 -> 1337
Forwarding from [::1]:1337 -> 1337

  • We go to http://localhost:1337/, where it is necessary to register an admin user in order to use konga.
  • Create the kong connection (Connections in the dashboard) using our kong admin URL.