I have deployed this project to GKE, and I'm trying to set up CI/CD from GitHub Actions. I added a workflow file containing:
name: Build and Deploy to GKE

on:
  push:
    branches:
      - main

env:
  PROJECT_ID: ${{ secrets.GKE_PROJECT }}
  GKE_CLUSTER: ${{ secrets.GKE_CLUSTER }}    # Add your cluster name here.
  GKE_ZONE: ${{ secrets.GKE_ZONE }}          # Add your cluster zone here.
  DEPLOYMENT_NAME: ems-app                   # Add your deployment name here.
  IMAGE: ciputra-ems-backend

jobs:
  setup-build-publish-deploy:
    name: Setup, Build, Publish, and Deploy
    runs-on: ubuntu-latest
    environment: production

    steps:
      - name: Checkout
        uses: actions/checkout@v2

      # Setup gcloud CLI
      - uses: google-github-actions/setup-gcloud@94337306dda8180d967a56932ceb4ddcf01edae7
        with:
          service_account_key: ${{ secrets.GKE_SA_KEY }}
          project_id: ${{ secrets.GKE_PROJECT }}

      # Configure Docker to use the gcloud command-line tool as a credential
      # helper for authentication
      - run: |-
          gcloud --quiet auth configure-docker

      # Get the GKE credentials so we can deploy to the cluster
      - uses: google-github-actions/get-gke-credentials@fb08709ba27618c31c09e014e1d8364b02e5042e
        with:
          cluster_name: ${{ env.GKE_CLUSTER }}
          location: ${{ env.GKE_ZONE }}
          credentials: ${{ secrets.GKE_SA_KEY }}

      # Build the Docker image
      - name: Build
        run: |-
          docker build \
            --tag "gcr.io/$PROJECT_ID/$IMAGE:$GITHUB_SHA" \
            --build-arg GITHUB_SHA="$GITHUB_SHA" \
            --build-arg GITHUB_REF="$GITHUB_REF" \
            .

      # Push the Docker image to Google Container Registry
      - name: Publish
        run: |-
          docker push "gcr.io/$PROJECT_ID/$IMAGE:$GITHUB_SHA"

      # Set up kustomize
      - name: Set up Kustomize
        run: |-
          curl -sfLo kustomize https://github.com/kubernetes-sigs/kustomize/releases/download/v3.1.0/kustomize_3.1.0_linux_amd64
          chmod u+x ./kustomize

      # Deploy the Docker image to the GKE cluster
      - name: Deploy
        run: |-
          ./kustomize edit set image LOCATION-docker.pkg.dev/PROJECT_ID/REPOSITORY/IMAGE:TAG=$GAR_LOCATION-docker.pkg.dev/$PROJECT_ID/$REPOSITORY/$IMAGE:$GITHUB_SHA
          ./kustomize build . | kubectl apply -k ./
          kubectl rollout status deployment/$DEPLOYMENT_NAME
          kubectl get services -o wide
But when the workflow reaches the Deploy step, it fails with this error:

The Service "ems-app-service" is invalid: metadata.resourceVersion: Invalid value: "": must be specified for an update

From searching, I found that this message is actually misleading, since the resourceVersion is supposed to change with every update, so I simply removed it from my manifests.

Here is my kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - service.yaml
  - deployment.yaml
My deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  generation: 1
  labels:
    app: ems-app
  name: ems-app
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: ems-app
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: ems-app
    spec:
      containers:
      - image: gcr.io/ciputra-nusantara/ems@sha256:70c34c5122039cb7fa877fa440fc4f98b4f037e06c2e0b4be549c4c992bcc86c
        imagePullPolicy: IfNotPresent
        name: ems-sha256-1
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
And my service.yaml:
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/neg: '{"ingress":true}'
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  labels:
    app: ems-app
  name: ems-app-service
  namespace: default
spec:
  clusterIP: 10.88.10.114
  clusterIPs:
  - 10.88.10.114
  externalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - nodePort: 30261
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: ems-app
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 34.143.255.159
Since this question's title is more about Kubernetes than about GCP, I'll answer, because I ran into the same issue while using AWS EKS.

How to fix metadata.resourceVersion: Invalid value: 0x0: must be specified for an update

This is an error that can appear when using kubectl apply. kubectl apply performs a three-way merge between the local file, the live Kubernetes object manifest, and the kubectl.kubernetes.io/last-applied-configuration annotation of that live object.
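To see what kubectl recorded there, you can print the annotation from the live object. A quick check, using the Service name and namespace from the question (adjust them for your cluster):

```shell
# Print the last-applied-configuration annotation of the live Service.
# If a resourceVersion field shows up in this JSON, that is the stale
# value triggering the error. (Names follow the question; adjust as needed.)
kubectl -n default get service ems-app-service \
  -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}'
```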
So, for some reason, a resourceVersion value made it into your last-applied-configuration, probably because someone exported the live manifest to a file, modified it, and applied it back.

When you then apply a new local file that does not contain that value (and should not contain it), while the value is still present in last-applied-configuration, kubectl decides the field should be removed from the live manifest and explicitly sends resourceVersion: null in the subsequent patch operation, which should delete it. But that doesn't work: the resulting request violates the rule (as far as I understand it) and is rejected as invalid.
As feichashao mentioned, the workaround is to delete the last-applied-configuration annotation and re-apply the local file.
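Concretely, the workaround can be sketched as two commands, again assuming the Service from the question (the trailing `-` in kubectl annotate deletes the annotation):

```shell
# Delete the stale last-applied-configuration annotation from the live object
# (the trailing "-" tells kubectl to remove the annotation).
kubectl -n default annotate service ems-app-service \
  kubectl.kubernetes.io/last-applied-configuration-

# Re-apply the local file; kubectl warns once about the missing annotation
# and patches it back in automatically.
kubectl apply -f service.yaml
```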
Once you've fixed it, your kubectl apply output will look like this:

Warning: resource <your_resource> is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.

and your live manifest will be updated.
In case anyone still runs into this: I probably can't help if you want to keep using GKE, but you can try @ChandraKiranPasumarti's answer. Personally, my seniors only asked me to containerize our application, so I used Google Cloud Run instead, for easier deployment and CI/CD. You can use this workflow file for CI/CD with Cloud Run:

https://github.com/google-github-actions/setup-gcloud/blob/main/example-workflows/cloud-run/cloud-run.yml

Just make sure you have added the secret from the service account JSON to your repository, then select that credentials JSON for authentication in the yml file.
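For comparison, a minimal Cloud Run deployment boils down to a single command. This is only a sketch: the service name "ems-app" and the region are hypothetical, and the image path is modeled on the question's project and image names:

```shell
# Deploy a container image to Cloud Run (fully managed).
# Service name, region, and image tag here are illustrative placeholders.
gcloud run deploy ems-app \
  --image "gcr.io/$PROJECT_ID/ciputra-ems-backend:$GITHUB_SHA" \
  --region asia-southeast1 \
  --allow-unauthenticated
```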
From this GitHub issue I found a better solution to my problem:

Solved it by fetching the existing resourceVersion and setting it on the object being updated before applying.
// dr is a dynamic client resource interface scoped to the object's kind and namespace.
getObj, err := dr.Get(obj.GetName(), metav1.GetOptions{})
if errors.IsNotFound(err) {
    // This doesn't ever happen, even if the object is already deleted or not found
    log.Printf("%v not found", obj.GetName())
    return nil, nil
}
if err != nil {
    return nil, err
}

// Copy the live object's resourceVersion onto the object we want to apply,
// so the API server accepts it as an update.
obj.SetResourceVersion(getObj.GetResourceVersion())

response, err := dr.Update(obj, metav1.UpdateOptions{})
if err != nil {
    return nil, err
}