I have read many similar questions here and on various blogs, and I have tried plenty of configuration changes, but nothing seems to work. I am using ECK to manage an Elasticsearch and Kibana stack on IBM Cloud IKS (classic).
I want to use App ID as the OAuth2 provider, authenticating through an ingress running nginx. That part partially works: I get the SSO login and authenticate successfully, but instead of being redirected into the Kibana application I land on the Kibana login page. I use helm to manage the Elastic, Kibana and Ingress resources; below are the templating commands and the resulting YAML manifests with some dummy values.
helm template --name-template=es-kibana-ingress es-k-stack -s templates/kibana.yaml --set ingress.enabled=true --set ingress.host="CLUSTER.REGION.containers.appdomain.cloud" --set ingress.secretName="CLUSTER_SECRET" --set app_id.enabled=true --set app_id.instanceName=APPID_INSTANCE_NAME > kibana_template.yaml
apiVersion: kibana.k8s.elastic.co/v1beta1
kind: Kibana
metadata:
  name: es-kibana-ingress-es-k-stack
spec:
  config:
    server.rewriteBasePath: true
    server.basePath: /kibana-es-kibana-ingress
    server.publicBaseUrl: https://CLUSTER.REGION.containers.appdomain.cloud/kibana-es-kibana-ingress
  version: 7.16.3
  count: 1
  elasticsearchRef:
    name: es-kibana-ingress-es-k-stack
  podTemplate:
    spec:
      containers:
      - name: kibana
        readinessProbe:
          httpGet:
            scheme: HTTPS
            path: /kibana-es-kibana-ingress
            port: 5601
helm template --name-template=es-kibana-ingress es-k-stack -s templates/ingress.yaml --set ingress.enabled=true --set ingress.host="CLUSTER.REGION.containers.appdomain.cloud" --set ingress.secretName="CLUSTER_SECRET" --set app_id.enabled=true --set app_id.instanceName=APPID_INSTANCE_NAME > kibana_ingress_template.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: es-kibana-ingress
  namespace: es-kibana-ingress
  annotations:
    kubernetes.io/ingress.class: "public-iks-k8s-nginx"
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "false"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/auth-signin: https://$host/oauth2-APPID_INSTANCE_NAME/start?rd=$escaped_request_uri
    nginx.ingress.kubernetes.io/auth-url: https://$host/oauth2-APPID_INSTANCE_NAME/auth
    nginx.ingress.kubernetes.io/configuration-snippet: |
      auth_request_set $name_upstream_1 $upstream_cookie__oauth2_APPID_INSTANCE_NAME_1;
      auth_request_set $access_token $upstream_http_x_auth_request_access_token;
      auth_request_set $id_token $upstream_http_authorization;
      access_by_lua_block {
        if ngx.var.name_upstream_1 ~= "" then
          ngx.header["Set-Cookie"] = "_oauth2_APPID_INSTANCE_NAME_1=" .. ngx.var.name_upstream_1 .. ngx.var.auth_cookie:match("(; .*)")
        end
        if ngx.var.id_token ~= "" and ngx.var.access_token ~= "" then
          ngx.req.set_header("Authorization", "Bearer " .. ngx.var.access_token .. " " .. ngx.var.id_token:match("%s*Bearer%s*(.*)"))
        end
      }
    nginx.ingress.kubernetes.io/proxy-buffer-size: 16k
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  tls:
  - hosts:
    - CLUSTER.REGION.containers.appdomain.cloud
    secretName: CLUSTER_SECRET
  rules:
  - host: CLUSTER.REGION.containers.appdomain.cloud
    http:
      paths:
      - backend:
          service:
            name: es-kibana-ingress-xdr-datalake-kb-http
            port:
              number: 5601
        path: /kibana-es-kibana-ingress
        pathType: ImplementationSpecific
helm template --name-template=es-kibana-ingress ~/Git/xdr_datalake/helm/xdr-es-k-stack/ -s templates/elasticsearch.yaml --set ingress.enabled=true --set ingress.host="CLUSTER.REGION.containers.appdomain.cloud" --set ingress.secretName="CLUSTER_SECRET" --set app_id.enabled=true --set app_id.instanceName=APPID_INSTANCE_NAME > elastic_template.yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: es-kibana-ingress-es-k-stack
spec:
  version: 7.16.3
  nodeSets:
  - name: master
    count: 1
    config:
      node.store.allow_mmap: true
      node.roles: ["master"]
      xpack.ml.enabled: true
      reindex.remote.whitelist: [CLUSTER.REGION.containers.appdomain.cloud:443]
      indices.query.bool.max_clause_count: 3000
      xpack:
        license.self_generated.type: basic
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 20Gi
        storageClassName: ibmc-file-retain-gold-custom-terraform
    podTemplate:
      spec:
        affinity:
          podAntiAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    elasticsearch.k8s.elastic.co/cluster-name: es-kibana-ingress-es-k-stack
                topologyKey: kubernetes.io/hostname
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    elasticsearch.k8s.elastic.co/cluster-name: es-kibana-ingress-es-k-stack
                topologyKey: kubernetes.io/zone
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
        volumes:
        - name: elasticsearch-data
          emptyDir: {}
        containers:
        - name: elasticsearch
          resources:
            limits:
              cpu: 4
              memory: 6Gi
            requests:
              cpu: 2
              memory: 3Gi
          env:
          - name: NAMESPACE
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.name
          - name: NETWORK_HOST
            value: _site_
          - name: MAX_LOCAL_STORAGE_NODES
            value: "1"
          - name: DISCOVERY_SERVICE
            value: elasticsearch-discovery
          - name: HTTP_CORS_ALLOW_ORIGIN
            value: '*'
          - name: HTTP_CORS_ENABLE
            value: "true"
  - name: data
    count: 1
    config:
      node.roles: ["data", "ingest", "ml", "transform"]
      reindex.remote.whitelist: [CLUSTER.REGION.containers.appdomain.cloud:443]
      indices.query.bool.max_clause_count: 3000
      xpack:
        license.self_generated.type: basic
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 20Gi
        storageClassName: ibmc-file-retain-gold-custom-terraform
    podTemplate:
      spec:
        affinity:
          podAntiAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    elasticsearch.k8s.elastic.co/cluster-name: es-kibana-ingress-es-k-stack
                topologyKey: kubernetes.io/hostname
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    elasticsearch.k8s.elastic.co/cluster-name: es-kibana-ingress-es-k-stack
                topologyKey: kubernetes.io/zone
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
        volumes:
        - name: elasticsearch-data
          emptyDir: {}
        containers:
        - name: elasticsearch
          resources:
            limits:
              cpu: 4
              memory: 6Gi
            requests:
              cpu: 2
              memory: 3Gi
          env:
          - name: NAMESPACE
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.name
          - name: NETWORK_HOST
            value: _site_
          - name: MAX_LOCAL_STORAGE_NODES
            value: "1"
          - name: DISCOVERY_SERVICE
            value: elasticsearch-discovery
          - name: HTTP_CORS_ALLOW_ORIGIN
            value: '*'
          - name: HTTP_CORS_ENABLE
            value: "true"
Any pointers would be greatly appreciated. I am sure I am missing something small but I cannot find it online - I suspect a missing token or Authorization header rewrite, but I cannot figure it out.
So this came down to a misunderstanding on my part. The approach above worked on previous self-managed ELK stacks; the difference is that on ECK, security is enabled by default. So even once the nginx reverse proxy is set up to serve the SAML integration correctly (as above), you still get the Kibana login page.
To work around this, I set up a file realm for authentication and supplied a username/password for a Kibana admin user:
helm template --name-template=es-kibana-ingress xdr-es-k-stack -s templates/crd_kibana.yaml --set ingress.enabled=true --set ingress.host="CLUSTER.REGION.containers.appdomain.cloud" --set ingress.secretName="CLUSTER_SECRET" --set app_id.enabled=true --set app_id.instanceName=APPID_INSTANCE_NAME --set kibana.kibanaUser="kibanaUSER" --set kibana.kibanaPass="kibanaPASS"
apiVersion: kibana.k8s.elastic.co/v1beta1
kind: Kibana
metadata:
  name: es-kibana-ingress-xdr-datalake
  namespace: default
spec:
  config:
    server.rewriteBasePath: true
    server.basePath: /kibana-es-kibana-ingress
    server.publicBaseUrl: https://CLUSTER.REGION.containers.appdomain.cloud/kibana-es-kibana-ingress
    server.host: "0.0.0.0"
    server.name: kibana
    xpack.security.authc.providers:
      anonymous.anonymous1:
        order: 0
        credentials:
          username: kibanaUSER
          password: kibanaPASS
  version: 7.16.3
  http:
    tls:
      selfSignedCertificate:
        disabled: true
  count: 1
  elasticsearchRef:
    name: es-kibana-ingress-xdr-datalake
  podTemplate:
    spec:
      containers:
      - name: kibana
        readinessProbe:
          timeoutSeconds: 30
          httpGet:
            scheme: HTTP
            path: /kibana-es-kibana-ingress/app/dev_tools
            port: 5601
        resources:
          limits:
            cpu: 3
            memory: 1Gi
          requests:
            cpu: 3
            memory: 1Gi
helm template --name-template=es-kibana-ingress xdr-es-k-stack -s templates/crd_elasticsearch.yaml --set ingress.enabled=true --set ingress.host="CLUSTER.REGION.containers.appdomain.cloud" --set ingress.secretName="CLUSTER_SECRET" --set app_id.enabled=true --set app_id.instanceName=APPID_INSTANCE_NAME --set kibana.kibanaUser="kibanaUSER" --set kibana.kibanaPass="kibanaPASS"
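For reference, the file realm user itself can be declared on the Elasticsearch side via ECK's `spec.auth.fileRealm` field, which points at a secret holding `users` (bcrypt hashes) and `users_roles` entries. A minimal sketch, assuming a hypothetical secret name and a placeholder hash (not values from my actual manifests):

```yaml
# Hypothetical example: declaring a Kibana admin user in an ECK file realm.
# The secret name and bcrypt hash below are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: kibana-admin-filerealm
stringData:
  # one "username:bcrypt-hash" entry per line
  users: |
    kibanaUSER:$2a$10$REPLACE_WITH_BCRYPT_HASH
  # "role:user1,user2" mappings
  users_roles: |
    superuser:kibanaUSER
---
# Excerpt of the Elasticsearch spec referencing that secret
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: es-kibana-ingress-xdr-datalake
spec:
  version: 7.16.3
  auth:
    fileRealm:
    - secretName: kibana-admin-filerealm
```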
You may notice I dropped the self-signed certificates - this was due to issues connecting Kafka to ES on the cluster. We decided to use Istio to provide internal network connectivity - but if you do not have that problem you can keep them. I also had to update the ingress slightly to use this new HTTP backend (it was HTTPS before):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: es-kibana-ingress-kibana
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "public-iks-k8s-nginx"
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "false"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
    nginx.ingress.kubernetes.io/auth-signin: https://$host/oauth2-APPID_INSTANCE_NAME/start?rd=$escaped_request_uri
    nginx.ingress.kubernetes.io/auth-url: https://$host/oauth2-APPID_INSTANCE_NAME/auth
    nginx.ingress.kubernetes.io/configuration-snippet: |
      auth_request_set $name_upstream_1 $upstream_cookie__oauth2_APPID_INSTANCE_NAME_1;
      auth_request_set $access_token $upstream_http_x_auth_request_access_token;
      auth_request_set $id_token $upstream_http_authorization;
      access_by_lua_block {
        if ngx.var.name_upstream_1 ~= "" then
          ngx.header["Set-Cookie"] = "_oauth2_APPID_INSTANCE_NAME_1=" .. ngx.var.name_upstream_1 .. ngx.var.auth_cookie:match("(; .*)")
        end
        if ngx.var.id_token ~= "" and ngx.var.access_token ~= "" then
          ngx.req.set_header("Authorization", "Bearer " .. ngx.var.access_token .. " " .. ngx.var.id_token:match("%s*Bearer%s*(.*)"))
        end
      }
    nginx.ingress.kubernetes.io/proxy-buffer-size: 16k
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  tls:
  - hosts:
    - CLUSTER.REGION.containers.appdomain.cloud
    secretName: CLUSTER_SECRET
  rules:
  - host: CLUSTER.REGION.containers.appdomain.cloud
    http:
      paths:
      - backend:
          service:
            name: es-kibana-ingress-xdr-datalake-kb-http
            port:
              number: 5601
        path: /kibana-es-kibana-ingress
        pathType: ImplementationSpecific
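One more note on the certificate removal: the Kibana CRD above disables its self-signed certificate under `http.tls`, and the Elasticsearch CRD accepts the same toggle if you also need the ES HTTP layer on plain HTTP (as we did for the Kafka connection). A sketch of the relevant excerpt, assuming the same cluster name:

```yaml
# Excerpt only - disables the ECK-generated self-signed certificate
# on the Elasticsearch HTTP layer, mirroring the Kibana setting above.
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: es-kibana-ingress-xdr-datalake
spec:
  version: 7.16.3
  http:
    tls:
      selfSignedCertificate:
        disabled: true
```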
Hope this helps someone else.
"Hope this helps someone else."
It did indeed. Thank you for your pain.