Network policy stops working if I add a ports section

Problem description (0 votes, 1 answer)

I have two namespaces, a source ns and a destination ns, and I want to allow all traffic on port 80 from the source ns to the destination ns.

This policy works fine:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-80
  namespace: destination-ns
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          access-to-destination-ns: allow
  - from:
    - podSelector: {}

Test run from the source namespace:

nc -vz service.destination-ns 80
service.destination-ns (10.1.2.221:80) open

However, if I try to lock it down to a specific port, it stops working:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-80
  namespace: destination-ns
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          access-to-destination-ns: allow
    ports:
    - protocol: TCP
      port: 80
  - from:
    - podSelector: {}

Test run from the source namespace:

nc -vz service.destination-ns 80
nc: service.destination-ns (10.1.2.221:80): Connection timed out

Running on AKS with Azure CNI and Calico, a single Linux node pool with a single node.

Service spec:

spec:
  clusterIP: 10.1.2.221
  clusterIPs:
  - 10.1.2.221
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  type: ClusterIP
  ...
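One detail worth checking here (my own aside, not part of the original question): a NetworkPolicy's ports field is matched against the backend pod's port, i.e. the Service's targetPort, because the Service's DNAT happens before the policy is enforced. This Service maps port 80 to the named targetPort http, so if the container actually listens on a different port number, a rule pinned to port: 80 would drop the translated traffic. NetworkPolicy also accepts named container ports, so a hedged sketch matching the pod's named port instead:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-80
  namespace: destination-ns
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          access-to-destination-ns: allow
    ports:
    - protocol: TCP
      # named container port; matched on the pod, after the
      # Service's port 80 -> targetPort: http translation
      port: http
  - from:
    - podSelector: {}
```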

Since this repro scenario seems to work as expected, here are my actual resources, where I cannot get it to work:

Source namespace:

aurimasnavardauskas@LPKM-AURNAV-01 ~ % k get namespace bitbucket-runners -oyaml
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    argocd.argoproj.io/sync-options: Delete=false
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Namespace","metadata":{"annotations":{"argocd.argoproj.io/sync-options":"Delete=false"},"labels":{"access-to-airbyte":"allow","argocd.argoproj.io/instance":"namespaces","istio-injection":"disabled","owner":"infrastructure"},"name":"bitbucket-runners"}}
  creationTimestamp: "2024-04-12T12:50:07Z"
  labels:
    access-to-airbyte: allow
    argocd.argoproj.io/instance: namespaces
    istio-injection: disabled
    kubernetes.io/metadata.name: bitbucket-runners
    owner: infrastructure
  name: bitbucket-runners
  resourceVersion: "12256074"
  uid: 7fae10ad-0ed8-4d27-bbe1-fc2884e18164
spec:
  finalizers:
  - kubernetes
status:
  phase: Active

Destination namespace:

aurimasnavardauskas@LPKM-AURNAV-01 ~ % k get namespace airbyte -oyaml          
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    argocd.argoproj.io/sync-options: Delete=false
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Namespace","metadata":{"annotations":{"argocd.argoproj.io/sync-options":"Delete=false"},"labels":{"argocd.argoproj.io/instance":"namespaces","istio-injection":"disabled","owner":"data"},"name":"airbyte"}}
  creationTimestamp: "2024-04-10T10:28:36Z"
  labels:
    argocd.argoproj.io/instance: namespaces
    istio-injection: disabled
    kubernetes.io/metadata.name: airbyte
    owner: data
  name: airbyte
  resourceVersion: "9947279"
  uid: 5bc8e393-de77-4e3d-9eb5-96dedf8fceb1
spec:
  finalizers:
  - kubernetes
status:
  phase: Active

The network policy with ports does not allow connections from the source ns to the service on port 80 in the destination ns (but it works as soon as the ports section is removed):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-traffic-to-airbyte
  namespace: airbyte
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          access-to-airbyte: allow
    ports:
    - protocol: TCP
      port: 80
  - from:
    - podSelector: {}

The service I am testing against:

airbyte-airbyte-api-server-svc ClusterIP 10.2.10.146 80/TCP 26d app.kubernetes.io/instance=airbyte,app.kubernetes.io/name=airbyte-api-server

kubernetes azure-aks kubernetes-networkpolicy
1 Answer

0 votes

Your initial network policy works because it allows all ingress traffic from any pod in a namespace labeled access-to-destination-ns: allow, as well as from any pod inside destination-ns itself, with no port restriction at all. When you add a ports section to restrict traffic to TCP port 80, you need to make sure the ingress rule is structured so that the port restriction is applied only to the intended sources. The problem in your non-working example is likely the placement and scope of the ports section within the ingress spec. To fix this, the ports spec should be included in each from block it is meant to apply to.
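For clarity, a minimal sketch (mine, not from the answer) of how the API combines these fields: within a single ingress rule, from and ports are ANDed together, while separate entries in the ingress list are ORed:

```yaml
ingress:
# Rule 1: (namespaceSelector OR podSelector) AND TCP port 80
- from:
  - namespaceSelector:
      matchLabels:
        access-to-destination-ns: allow
  - podSelector: {}
  ports:
  - protocol: TCP
    port: 80
# Rule 2: a separate list entry, ORed with rule 1; no ports
# field, so it allows all ports from the selected pods
- from:
  - podSelector: {}
```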

Example setup configuring a network policy that allows traffic on port 80 from source-ns to destination-ns.

Create the namespaces source-ns and destination-ns:

kubectl create namespace source-ns
kubectl create namespace destination-ns


Label source-ns so the network policy can identify it:

kubectl label namespaces source-ns access-to-destination-ns=allow

Deploy a sample application in destination-ns:

kubectl run http-server --image=nginx --namespace=destination-ns

Create a service to expose the HTTP server (my sample application):

kubectl expose pod http-server --port=80 --target-port=80 --namespace=destination-ns

Apply a network policy that allows traffic on port 80 from all pods in source-ns to destination-ns:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-80
  namespace: destination-ns
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          access-to-destination-ns: allow
    ports:
    - protocol: TCP
      port: 80

You can apply it with:

kubectl apply -f allow-port-80.yaml


Test the connection from a pod in source-ns to the service in destination-ns:

kubectl get pods -n source-ns
kubectl exec -it test-pod -n source-ns -- nc -vz http-server.destination-ns 80


The output

http-server.destination-ns (10.0.219.12:80) open

indicates that the network policy is configured correctly and working: the test-pod in source-ns can successfully connect to port 80 of the http-server service in destination-ns.

If your configuration is meant to allow:

  1. Traffic on port 80 from all pods in source-ns to destination-ns, using a namespaceSelector to filter traffic from namespaces labeled access-to-destination-ns: allow - you should specify the ports clause in the same ingress rule as the from array that contains the namespaceSelector.

  2. All intra-namespace traffic within destination-ns without any port restriction - you should have a separate from clause specifying a podSelector with an empty selector set, which effectively selects all pods in the namespace.

In that case, your network policy would be:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-80
  namespace: destination-ns
spec:
  podSelector: {}  # Applies to all pods in the namespace
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          access-to-destination-ns: allow
    ports:
    - protocol: TCP
      port: 80  # Restrict this rule to TCP port 80
  - from:
    - podSelector: {}  # Allows all traffic from pods within the same namespace

Here, the first ingress entry allows traffic on TCP port 80 from any namespace with the label access-to-destination-ns: allow, effectively allowing traffic on port 80 from source-ns to destination-ns. The second ingress entry allows all traffic from any pod within destination-ns; since it specifies no ports section, it allows traffic on all ports.

