CircleCI message: Error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"

Question · Votes: 0 · Answers: 23

I am getting an error when deploying from CircleCI. Please find the configuration file below.

When running the kubectl CLI, we get an error between the kubectl and aws-cli EKS tools.

version: 2.1
orbs:
  aws-ecr: circleci/[email protected]
  docker: circleci/[email protected]
  rollbar: rollbar/[email protected]
  kubernetes: circleci/[email protected]
  deploy:
    version: 2.1
    orbs:
      aws-eks: circleci/[email protected]
      kubernetes: circleci/[email protected]
    executors:
      default:
        description: |
          The version of the circleci/buildpack-deps Docker container to use
          when running commands.
        parameters:
          buildpack-tag:
            type: string
            default: buster
        docker:
          - image: circleci/buildpack-deps:<<parameters.buildpack-tag>>
    description: |
      A collection of tools to deploy changes to AWS EKS in a declarative
      manner where all changes to templates are checked into version control
      before applying them to an EKS cluster.
    commands:
      setup:
        description: |
          Install the gettext-base package into the executor to be able to run
          envsubst for replacing values in template files.
          This command is a prerequisite for all other commands and should not
          have to be run manually.
        parameters:
          cluster-name:
            default: ''
            description: Name of the EKS Cluster.
            type: string
          aws-region:
            default: 'eu-central-1'
            description: Region where the EKS Cluster is located.
            type: string
          git-user-email:
            default: "[email protected]"
            description: Email of the git user to use for making commits
            type: string
          git-user-name:
            default: "CircleCI Deploy Orb"
            description:  Name of the git user to use for making commits
            type: string
        steps:
          - run:
              name: install gettext-base
              command: |
                if which envsubst > /dev/null; then
                  echo "envsubst is already installed"
                  exit 0
                fi
                sudo apt-get update
                sudo apt-get install -y gettext-base
          - run:
              name: Setup GitHub access
              command: |
                mkdir -p ~/.ssh
                echo 'github.com ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==' >> ~/.ssh/known_hosts
                git config --global user.email "<< parameters.git-user-email >>"
                git config --global user.name "<< parameters.git-user-name >>"
          - aws-eks/update-kubeconfig-with-authenticator:
              aws-region: << parameters.aws-region >>
              cluster-name: << parameters.cluster-name >>
              install-kubectl: true
              authenticator-release-tag: v0.5.1
      update-image:
        description: |
          Generates template files with the specified version tag for the image
          to be updated and subsequently applies that template after checking it
          back into version control.
        parameters:
          cluster-name:
            default: ''
            description: Name of the EKS Cluster.
            type: string
          aws-region:
            default: 'eu-central-1'
            description: Region where the EKS Cluster is located.
            type: string
          image-tag:
            default: ''
            description: |
              The tag of the image, defaults to the  value of `CIRCLE_SHA1`
              if not provided.
            type: string
          replicas:
            default: 3
            description: |
              The replica count for the deployment.
            type: integer
          environment:
            default: 'production'
            description: |
              The environment/stage where the template will be applied. Defaults
              to `production`.
            type: string
          template-file-path:
            default: ''
            description: |
              The path to the source template which contains the placeholders
              for the image-tag.
            type: string
          resource-name:
            default: ''
            description: |
              Resource name in the format TYPE/NAME e.g. deployment/nginx.
            type: string
          template-repository:
            default: ''
            description: |
              The fullpath to the repository where templates reside. Write
              access is required to commit generated templates.
            type: string
          template-folder:
            default: 'templates'
            description: |
              The name of the folder where the template-repository is cloned to.
            type: string
          placeholder-name:
            default: IMAGE_TAG
            description: |
              The name of the placeholder environment variable that is to be
              substituted with the image-tag parameter.
            type: string
          cluster-namespace:
            default: sayway
            description: |
              Namespace within the EKS Cluster.
            type: string
        steps:
          - setup:
              aws-region: << parameters.aws-region >>
              cluster-name: << parameters.cluster-name >>
              git-user-email: [email protected]
              git-user-name: deploy
          - run:
              name: pull template repository
              command: |
                [ "$(ls -A << parameters.template-folder >>)" ] && \
                  cd << parameters.template-folder >> && git pull --force && cd ..
                [ "$(ls -A << parameters.template-folder >>)" ] || \
                  git clone << parameters.template-repository >> << parameters.template-folder >>
          - run:
              name: generate and commit template files
              command: |
                cd << parameters.template-folder >>
                IMAGE_TAG="<< parameters.image-tag >>"
                ./bin/generate.sh --file << parameters.template-file-path >> \
                  --stage << parameters.environment >> \
                  --commit-message "Update << parameters.template-file-path >> for << parameters.environment >> with tag ${IMAGE_TAG:-$CIRCLE_SHA1}" \
                  << parameters.placeholder-name >>="${IMAGE_TAG:-$CIRCLE_SHA1}" \
                  REPLICAS=<< parameters.replicas >>
          - kubernetes/create-or-update-resource:
              get-rollout-status: true
              namespace: << parameters.cluster-namespace >>
              resource-file-path: << parameters.template-folder >>/<< parameters.environment >>/<< parameters.template-file-path >>
              resource-name: << parameters.resource-name >>
jobs:
  test:
    working_directory: ~/say-way/core
    parallelism: 1
    shell: /bin/bash --login
    environment:
      CIRCLE_ARTIFACTS: /tmp/circleci-artifacts
      CIRCLE_TEST_REPORTS: /tmp/circleci-test-results
      KONFIG_CITUS__HOST: localhost
      KONFIG_CITUS__USER: postgres
      KONFIG_CITUS__DATABASE: sayway_test
      KONFIG_CITUS__PASSWORD: ""
      KONFIG_SPEC_REPORTER: true
    docker:
    - image: 567567013174.dkr.ecr.eu-central-1.amazonaws.com/core-ci:test-latest
      aws_auth:
        aws_access_key_id: $AWS_ACCESS_KEY_ID_STAGING
        aws_secret_access_key: $AWS_SECRET_ACCESS_KEY_STAGING
    - image: circleci/redis
    - image: rabbitmq:3.7.7
    - image: circleci/mongo:4.2
    - image: circleci/postgres:10.5-alpine
    steps:
    - checkout
    - run: mkdir -p $CIRCLE_ARTIFACTS $CIRCLE_TEST_REPORTS
    # This is based on your 1.0 configuration file or project settings
    - restore_cache:
        keys:
        - v1-dep-{{ checksum "Gemfile.lock" }}-
        # any recent Gemfile.lock
        - v1-dep-
    - run:
        name: install correct bundler version
        command: |
          export BUNDLER_VERSION="$(grep -A1 'BUNDLED WITH' Gemfile.lock | tail -n1 | tr -d ' ')"
          echo "export BUNDLER_VERSION=$BUNDLER_VERSION" >> $BASH_ENV
          gem install bundler --version $BUNDLER_VERSION
    - run: 'bundle check --path=vendor/bundle || bundle install --path=vendor/bundle --jobs=4 --retry=3'
    - run:
        name: copy test.yml.sample to test.yml
        command: cp config/test.yml.sample config/test.yml
    - run:
        name: Precompile and clean assets
        command: bundle exec rake assets:precompile assets:clean
    # Save dependency cache
    - save_cache:
        key: v1-dep-{{ checksum "Gemfile.lock" }}-{{ epoch }}
        paths:
        - vendor/bundle
        - public/assets
    - run:
        name: Audit bundle for known security vulnerabilities
        command: bundle exec bundle-audit check --update
    - run:
        name: Setup Database
        command: bundle exec ruby ~/sayway/setup_test_db.rb
    - run:
        name: Migrate Database
        command: bundle exec rake db:citus:migrate
    - run:
        name: Run tests
        command: bundle exec rails test -f
    # By default, running "rails test" won't run system tests.
    - run:
        name: Run system tests
        command: bundle exec rails test:system
    # Save test results
    - store_test_results:
        path: /tmp/circleci-test-results
    # Save artifacts
    - store_artifacts:
        path: /tmp/circleci-artifacts
    - store_artifacts:
        path: /tmp/circleci-test-results
  build-and-push-image:
    working_directory: ~/say-way/
    parallelism: 1
    shell: /bin/bash --login
    executor: aws-ecr/default
    steps:
      - checkout
      - run:
          name: Pull latest core images for cache
          command: |
            $(aws ecr get-login --no-include-email --region $AWS_REGION)
            docker pull "${AWS_ECR_ACCOUNT_URL}/core:latest"
      - docker/build:
          image: core
          registry: "${AWS_ECR_ACCOUNT_URL}"
          tag: "latest,${CIRCLE_SHA1}"
          cache_from: "${AWS_ECR_ACCOUNT_URL}/core:latest"
      - aws-ecr/push-image:
          repo: core
          tag: "latest,${CIRCLE_SHA1}"
  deploy-production:
    working_directory: ~/say-way/
    parallelism: 1
    shell: /bin/bash --login
    executor: deploy/default
    steps:
      - kubernetes/install-kubectl:
          kubectl-version: v1.22.0
      - rollbar/notify_deploy_started:
          environment: report
      - deploy/update-image:
          resource-name: deployment/core-web
          template-file-path: core-web-pod.yml
          cluster-name: report
          environment: report
          template-repository: [email protected]:say-way/sw-k8s.git
          replicas: 3
      - deploy/update-image:
          resource-name: deployment/core-worker
          template-file-path: core-worker-pod.yml
          cluster-name: report
          environment: report
          template-repository: [email protected]:say-way/sw-k8s.git
          replicas: 4
      - deploy/update-image:
          resource-name: deployment/core-worker-batch
          template-file-path: core-worker-batch-pod.yml
          cluster-name: report
          environment: report
          template-repository: [email protected]:say-way/sw-k8s.git
          replicas: 1
      - rollbar/notify_deploy_finished:
          deploy_id: "${ROLLBAR_DEPLOY_ID}"
          status: succeeded
  deploy-demo:
    working_directory: ~/say-way/
    parallelism: 1
    shell: /bin/bash --login
    executor: deploy/default
    steps:
      - kubernetes/install-kubectl:
          kubectl-version: v1.22.0
      - rollbar/notify_deploy_started:
          environment: demo
      - deploy/update-image:
          resource-name: deployment/core-web
          template-file-path: core-web-pod.yml
          cluster-name: demo
          environment: demo
          template-repository: [email protected]:say-way/sw-k8s.git
          replicas: 2
      - deploy/update-image:
          resource-name: deployment/core-worker
          template-file-path: core-worker-pod.yml
          cluster-name: demo
          environment: demo
          template-repository: [email protected]:say-way/sw-k8s.git
          replicas: 1
      - deploy/update-image:
          resource-name: deployment/core-worker-batch
          template-file-path: core-worker-batch-pod.yml
          cluster-name: demo
          environment: demo
          template-repository: [email protected]:say-way/sw-k8s.git
          replicas: 1
      - rollbar/notify_deploy_finished:
          deploy_id: "${ROLLBAR_DEPLOY_ID}"
          status: succeeded
workflows:
  version: 2.1
  build-n-test:
    jobs:
      - test:
          filters:
            branches:
              ignore: master
  build-approve-deploy:
    jobs:
      - build-and-push-image:
          context: Core
          filters:
            branches:
              only: master
      - approve-report-deploy:
          type: approval
          requires:
            - build-and-push-image
      - approve-demo-deploy:
          type: approval
          requires:
            - build-and-push-image
      - deploy-production:
          context: Core
          requires:
            - approve-report-deploy
      - deploy-demo:
          context: Core
          requires:
            - approve-demo-deploy
amazon-web-services kubernetes aws-cli amazon-eks circleci
23 Answers

120 votes

There is an issue in aws-cli; it has since been fixed.

In my case, updating aws-cli and regenerating ~/.kube/config helped.

Update aws-cli (following the documentation):

    curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
    unzip awscliv2.zip
    sudo ./aws/install --update

Update the kube config:

    mv ~/.kube/config ~/.kube/config.bk
    aws eks update-kubeconfig --region ${AWS_REGION} --name ${EKS_CLUSTER_NAME}

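To verify the result, you can check which exec apiVersion the regenerated config now contains. This is just a sketch: `check_kubeconfig` is a throwaway helper name, and the default kubeconfig path is assumed.

```shell
# check_kubeconfig FILE: report whether FILE still pins the removed alpha exec API.
check_kubeconfig() {
  if grep -q 'client.authentication.k8s.io/v1alpha1' "$1" 2>/dev/null; then
    echo "still on v1alpha1 - regenerate or edit $1"
  else
    echo "kubeconfig OK"
  fi
}

# Check the active kubeconfig (respects KUBECONFIG if set).
check_kubeconfig "${KUBECONFIG:-$HOME/.kube/config}"
```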

44 votes

https://github.com/aws/aws-cli/issues/6920#issuecomment-1119926885

Update aws-cli (aws cli v1) to a version with the fix:

    pip3 install awscli --upgrade --user

For aws cli v2, see this.

Afterwards, don't forget to rewrite the kube config with:

    aws eks update-kubeconfig --name ${EKS_CLUSTER_NAME} --region ${REGION}

This command should update the kube apiVersion to v1beta1.


27 votes

Changing apiVersion to v1beta1 helped:

    apiVersion: client.authentication.k8s.io/v1beta1

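For context, the apiVersion in question lives in the exec stanza of the user entry that the AWS tooling writes into the kubeconfig. A typical stanza looks roughly like the following (illustrative cluster name, ARN, and args; the exact shape depends on your aws-cli version); newer kubectl releases reject the v1alpha1 value:

```yaml
# Illustrative user entry written by `aws eks update-kubeconfig`; placeholders only.
users:
- name: arn:aws:eks:eu-central-1:123456789012:cluster/my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1   # newer kubectl expects v1beta1
      command: aws
      args:
        - eks
        - get-token
        - --cluster-name
        - my-cluster
```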


5
投票

    curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.23.6/bin/linux/amd64/kubectl
  1. chmod +x ./kubectl
  2. sudo mv ./kubectl /usr/local/bin/kubectl
  3. sudo kubectl version
  4. 
        

5 votes

The simplest solution (it appears here already, but in more complicated words): open your kube config file and replace all alpha instances with beta. (An editor with find-and-replace is recommended: Atom, Sublime, etc.) Example with nano:

    nano ~/.kube/config

or with Atom:

    atom ~/.kube/config

Then search for the alpha instances, replace them with beta, and save the file.

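If you'd rather script the replace than edit by hand, the same substitution can be done with sed. A sketch, assuming GNU sed (`-i` with no suffix); `fix_kubeconfig` is a hypothetical helper that keeps a backup next to the original:

```shell
# fix_kubeconfig FILE: back up FILE, then rewrite the alpha exec apiVersion to beta.
fix_kubeconfig() {
  cp "$1" "$1.bk"   # keep a backup next to the original
  sed -i 's#client.authentication.k8s.io/v1alpha1#client.authentication.k8s.io/v1beta1#g' "$1"
}

# Typical invocation; uncomment to run against your real config:
#   fix_kubeconfig "$HOME/.kube/config"
```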

3 votes

https://github.com/aws/aws-cli/issues/6920


3 votes

The rest of the instructions are from the answer provided by bigLucas.

Update aws-cli (aws cli v2) to the latest version:

    winget install Amazon.AWSCLI

Afterwards, don't forget to rewrite the kube config with:

    aws eks update-kubeconfig --name ${EKS_CLUSTER_NAME} --region ${REGION}

This command should update the kube apiVersion to v1beta1.


3 votes

1. Back up the existing config file:

       mv ~/.kube/config ~/.kube/config.bk

2. Run the following command:

       aws eks update-kubeconfig --name ${EKS_CLUSTER_NAME} --region ${REGION}

3. Then open the ~/.kube/config file in any text editor, update v1alpha1 to v1beta1, and retry.

2
投票

asdf plugin-add kubectl https://github.com/asdf-community/asdf-kubectl.git asdf install kubectl 1.21.9

我建议有一个 
.tools-versions

文件:

kubectl 1.21.9



2
投票
打开
    ~/.kube/config
  1. 在有问题的集群中搜索 
  2. user
  3. 并将
    client.authentication.k8s.io/v1alpha1
    替换为
    client.authentication.k8s.io/v1beta1
    
        

1 vote

Update the AWS Command Line Interface. Steps:

    curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
    sudo installer -pkg ./AWSCLIV2.pkg -target /

You can also use other methods from the AWS documentation:

Installing or updating the latest version of the AWS CLI

1
投票
awscli

AWS 命令行界面
)版本。 对于 Mac,它是

brew upgrade awscli

(

Homebrew
)。


1
投票
helm ls -n $namespace

时出现错误

Error: Kubernetes cluster unreachable: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"

来自
here

:这是helm版本问题。 所以我使用命令

curl -L https://git.io/get_helm.sh | bash -s -- --version v3.8.2

降级了 helm 版本。掌舵工作


0 votes

I was able to fix this by running (on an Apple chip, via Homebrew):

    brew upgrade awscli

Then run the aws eks update-kubeconfig --name command as suggested by bigLucas.

0 votes

I simplified the workaround by updating awscli to awscli-v2. This also requires upgrading Python and pip: it needs at least Python 3.6 and pip3.

0 votes

You can run the following commands on a host where kubectl and aws-cli are present:

    apt install python3-pip -y && pip3 install awscli --upgrade --user

Then update the cluster configuration with awscli:

    aws eks update-kubeconfig --region <regionname> --name <ClusterName>

Output:

    Added new context arn:aws:eks:us-east-1:XXXXXXXXXXX:cluster/mycluster to /home/dev/.kube/config

Then check connectivity to the cluster:

    dev@ip-10-100-100-6:~$ kubectl get node
    NAME                             STATUS   ROLES    AGE    VERSION
    ip-X-XX-XX-XXX.ec2.internal   Ready    <none>   148m   v1.21.5-eks-9017834

0 votes

    export KUBERNETES_EXEC_INFO='{"apiVersion":"client.authentication.k8s.io/v1beta1"}'

If you use sudo when running kubectl commands, export it for the root user as well.

    apt install python3-pip -y
    pip3 install awscli --upgrade --user

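One way to make the variable visible when kubectl runs under sudo (a sketch; `--preserve-env` requires a reasonably recent sudo) is to carry it through sudo's environment reset instead of exporting it separately for root:

```shell
# Tell kubectl which exec API version to advertise to the credential plugin.
export KUBERNETES_EXEC_INFO='{"apiVersion":"client.authentication.k8s.io/v1beta1"}'

# sudo resets the environment by default; --preserve-env carries the variable across.
# Illustrative invocation; uncomment on a host with cluster access:
#   sudo --preserve-env=KUBERNETES_EXEC_INFO kubectl get nodes

echo "$KUBERNETES_EXEC_INFO"
```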
0
投票
尝试不同版本的 kubectl ,
如果 kubernetes 版本是 1.23 那么我们可以使用(接近)kubectl 版本 1.23,1.24,1.22

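The rule behind this advice is the kubectl version skew policy: the client is supported within one minor version of the API server. A throwaway shell helper (hypothetical, not part of any tool) makes the rule concrete:

```shell
# within_skew CLIENT_MINOR SERVER_MINOR: succeeds when the two minor
# versions differ by at most one, per the kubectl version skew policy.
within_skew() {
  d=$(( $1 - $2 ))
  [ "${d#-}" -le 1 ]   # ${d#-} strips the sign, giving the absolute difference
}

within_skew 24 23 && echo "kubectl 1.24 vs cluster 1.23: supported"
within_skew 21 23 || echo "kubectl 1.21 vs cluster 1.23: outside the skew window"
```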
0 votes

For me, upgrading aws-iam-authenticator from v0.5.5 to v0.5.9 solved the problem.

0 votes

Just use this:

0 votes

Update this in your kube/config: change v1alpha1 to v1beta1.

© www.soinside.com 2019 - 2024. All rights reserved.