Setting up Rancher Hello World on a 2-node bare-metal cluster


I'm trying to set up a K8s cluster on 2 bare-metal servers running CentOS 7, using Rancher. I created the cluster through the Rancher UI and then added 2 nodes:

- Server 1 with the etcd, controlplane and worker roles
- Server 2 with the controlplane and worker roles
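For reference, in a Rancher 2.x custom cluster the roles are picked per node via flags appended to the agent registration command that the UI generates. A sketch of what ran on each server (the Rancher URL, token, checksum and agent version below are placeholders, not values from this setup):

# Server 1: etcd + controlplane + worker
sudo docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  rancher/rancher-agent:<version> --server https://<rancher-url> \
  --token <token> --ca-checksum <checksum> \
  --etcd --controlplane --worker

# Server 2: controlplane + worker only
sudo docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  rancher/rancher-agent:<version> --server https://<rancher-url> \
  --token <token> --ca-checksum <checksum> \
  --controlplane --worker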

Everything came up fine. I then deployed the rancher/hello-world image following the Rancher tutorial and configured an ingress on port 80.

If the pod is running on server 1, I can reach it without problems through the xip.io hostname, since server 1's IP is the cluster's entry point. When the pod runs on server 2, nginx returns a 504 Gateway Time-out instead.
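A quick check that narrows this down to a per-node problem (a sketch; the two node IPs and the hostname come from the manifests further down):

# Hit the ingress controller on each node directly with the Ingress host header
curl -H "Host: hello.default.192.168.169.46.xip.io" http://192.168.169.46/   # server 1
curl -H "Host: hello.default.192.168.169.46.xip.io" http://192.168.186.211/  # server 2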

After opening all the required ports, I went ahead and disabled firewalld entirely.
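To rule out leftover firewall state, this is roughly what I would verify on both nodes (a sketch, nothing here is specific to this setup):

# Confirm firewalld is stopped and won't come back on reboot
systemctl is-active firewalld
systemctl is-enabled firewalld
# Stale REJECT rules can survive in iptables after firewalld is stopped
iptables -S | grep -i reject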

I noticed two Kubernetes services logging errors:

flannel:

E0429 14:20:13.625489 1 route_network.go:114] Error adding route to 10.42.0.0/24 via 192.168.169.46 dev index 2: network is unreachable
I0429 14:20:13.626679 1 iptables.go:115] Some iptables rules are missing; deleting and recreating rules
I0429 14:20:13.626689 1 iptables.go:137] Deleting iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
I0429 14:20:13.626934 1 iptables.go:115] Some iptables rules are missing; deleting and recreating rules
I0429 14:20:13.626943 1 iptables.go:137] Deleting iptables rule: -s 10.42.0.0/16 -j ACCEPT
I0429 14:20:13.627279 1 iptables.go:137] Deleting iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
I0429 14:20:13.627568 1 iptables.go:137] Deleting iptables rule: -d 10.42.0.0/16 -j ACCEPT
I0429 14:20:13.627849 1 iptables.go:137] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.4.0/24 -j RETURN
I0429 14:20:13.628111 1 iptables.go:125] Adding iptables rule: -s 10.42.0.0/16 -j ACCEPT
I0429 14:20:13.628551 1 iptables.go:137] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE
I0429 14:20:13.629139 1 iptables.go:125] Adding iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
I0429 14:20:13.629356 1 iptables.go:125] Adding iptables rule: -d 10.42.0.0/16 -j ACCEPT
I0429 14:20:13.630313 1 iptables.go:125] Adding iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
I0429 14:20:13.631531 1 iptables.go:125] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.4.0/24 -j RETURN
I0429 14:20:13.632717 1 iptables.go:125] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE
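The first log line is the telling one: route_network.go is, as far as I can tell, flannel's route-programming path (used by the host-gw style backends), and "network is unreachable" means the node cannot use 192.168.169.46 as a next hop because it is not on a directly connected subnet. Note that the two node IPs that appear later in the ingress status (192.168.169.46 and 192.168.186.211) sit on different /24s, which a route-based backend cannot bridge. A sketch of how to confirm this from server 2:

# Is server 1's IP on a subnet this node is directly attached to?
ip addr show
ip route get 192.168.169.46
# The route to the other node's pod CIDR that flannel failed to program
ip route | grep 10.42.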

The cattle agent threw:

Timeout connecting to proxy url="wss://ljanalyticsdev01.lojackhq.com.ar:16443/v3/connect"

But that got fixed once the node took on the controlplane role.
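When that agent timeout shows up, checking basic reachability of the Rancher server from each node is cheap (a sketch; /ping is, I believe, Rancher's health endpoint, which should answer "pong"):

# From both nodes: can we reach the Rancher server on 16443 at all?
curl -kv https://ljanalyticsdev01.lojackhq.com.ar:16443/ping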

Hello World Deployment YAML:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    field.cattle.io/creatorId: user-qlsc5
    field.cattle.io/publicEndpoints: '[{"addresses":["192.168.169.46"],"port":80,"protocol":"HTTP","serviceName":"default:ingress-d1e1a394f61c108633c4bd37aedde757","ingressName":"default:hello","hostname":"hello.default.192.168.169.46.xip.io","allNodes":true}]'
  creationTimestamp: "2019-04-29T03:55:16Z"
  generation: 6
  labels:
    cattle.io/creator: norman
    workload.user.cattle.io/workloadselector: deployment-default-hello
  name: hello
  namespace: default
  resourceVersion: "303493"
  selfLink: /apis/apps/v1beta2/namespaces/default/deployments/hello
  uid: 992bf62e-6a32-11e9-92ae-005056998e1d
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      workload.user.cattle.io/workloadselector: deployment-default-hello
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      annotations:
        cattle.io/timestamp: "2019-04-29T03:54:58Z"
      creationTimestamp: null
      labels:
        workload.user.cattle.io/workloadselector: deployment-default-hello
    spec:
      containers:
      - image: rancher/hello-world
        imagePullPolicy: Always
        name: hello
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          capabilities: {}
          privileged: false
          procMount: Default
          readOnlyRootFilesystem: false
          runAsNonRoot: false
        stdin: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        tty: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2019-04-29T03:55:16Z"
    lastUpdateTime: "2019-04-29T03:55:36Z"
    message: ReplicaSet "hello-6cc7bc6644" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2019-04-29T13:22:35Z"
    lastUpdateTime: "2019-04-29T13:22:35Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 6
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
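To reproduce the failing case deliberately, it helps to see where the pod landed and force it onto server 2 (a sketch; <server-1-node-name> is a placeholder for the actual node name):

# Which node is the hello pod on?
kubectl -n default get pods -o wide -l workload.user.cattle.io/workloadselector=deployment-default-hello
# Force a reschedule onto server 2; undo afterwards with 'kubectl uncordon'
kubectl cordon <server-1-node-name>
kubectl -n default delete pod -l workload.user.cattle.io/workloadselector=deployment-default-hello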

Load balancer and Ingress YAML:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    field.cattle.io/creatorId: user-qlsc5
    field.cattle.io/ingressState: '{"aGVsbG8vZGVmYXVsdC94aXAuaW8vLzgw":"deployment:default:hello"}'
    field.cattle.io/publicEndpoints: '[{"addresses":["192.168.169.46"],"port":80,"protocol":"HTTP","serviceName":"default:ingress-d1e1a394f61c108633c4bd37aedde757","ingressName":"default:hello","hostname":"hello.default.192.168.169.46.xip.io","allNodes":true}]'
  creationTimestamp: "2019-04-27T03:51:08Z"
  generation: 2
  labels:
    cattle.io/creator: norman
  name: hello
  namespace: default
  resourceVersion: "303476"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/hello
  uid: b082e994-689f-11e9-92ae-005056998e1d
spec:
  rules:
  - host: hello.default.192.168.169.46.xip.io
    http:
      paths:
      - backend:
          serviceName: ingress-d1e1a394f61c108633c4bd37aedde757
          servicePort: 80
status:
  loadBalancer:
    ingress:
    - ip: 192.168.169.46
    - ip: 192.168.186.211
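Note that the two loadBalancer IPs in the status above are on different subnets (192.168.169.0/24 vs 192.168.186.0/24). To separate an ingress problem from an overlay-network problem, the pod can be curled directly from each node (a sketch; <pod-ip> is a placeholder):

# Find the pod IP behind the ingress backend service
kubectl -n default get endpoints ingress-d1e1a394f61c108633c4bd37aedde757
# From each node: a timeout from the node NOT hosting the pod points at the
# overlay network rather than at the ingress controller
curl --max-time 5 http://<pod-ip>:80/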
Tags: kubernetes, rancher, bare-metal-server
1 Answer:

Is your ingress controller running on the other node as well? I would probably restart the docker service on both nodes and see if that refreshes any stale routes.
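Concretely, that would be something like the following (a sketch of the suggested restart, plus a way to watch the networking pods recover; the pod names assume a default RKE install with canal and the nginx ingress controller):

# On each node, one at a time
sudo systemctl restart docker
# Watch canal/flannel and the ingress controller come back up
kubectl get pods --all-namespaces -o wide | grep -E 'canal|flannel|ingress'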
