Ceph cluster installed on K8S with Rook: admin password is incorrect

Problem description

I installed a Ceph cluster in K8S using Rook. The services run fine, and PV/PVC works as expected.

I used to be able to log in to the dashboard, but after a while the password stopped working.

The password shown by the following command still doesn't work:

kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
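
(For reference, the Rook docs read the generated dashboard password from the rook-ceph-dashboard-password secret rather than from the kubernetes-dashboard service account token; assuming the default rook-ceph namespace:)

kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode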

The pod logs show no obvious errors:

k logs -n rook-ceph rook-ceph-mgr-a-547f75956-c5f9t

debug 2022-02-05T00:09:14.144+0000 ffff58661400  0 log_channel(cluster) log [DBG] : pgmap v367973: 96 pgs: 45 active+undersized+degraded, 51 active+undersized; 649 MiB data, 2.8 GiB used, 197 GiB / 200 GiB avail; 767 B/s rd, 1 op/s; 239/717 objects degraded (33.333%)
debug 2022-02-05T00:09:16.144+0000 ffff58661400  0 log_channel(cluster) log [DBG] : pgmap v367974: 96 pgs: 45 active+undersized+degraded, 51 active+undersized; 649 MiB data, 2.8 GiB used, 197 GiB / 200 GiB avail; 1.2 KiB/s rd, 2 op/s; 239/717 objects degraded (33.333%)
debug 2022-02-05T00:09:16.784+0000 ffff53657400  0 [progress INFO root] Processing OSDMap change 83..83
debug 2022-02-05T00:09:17.684+0000 ffff44bba400  0 [volumes INFO mgr_util] scanning for idle connections..
debug 2022-02-05T00:09:17.684+0000 ffff44bba400  0 [volumes INFO mgr_util] cleaning up connections: []
debug 2022-02-05T00:09:17.860+0000 ffff3da6c400  0 [volumes INFO mgr_util] scanning for idle connections..
debug 2022-02-05T00:09:17.860+0000 ffff3da6c400  0 [volumes INFO mgr_util] cleaning up connections: []
debug 2022-02-05T00:09:17.988+0000 ffff40b72400  0 [volumes INFO mgr_util] scanning for idle connections..
debug 2022-02-05T00:09:17.988+0000 ffff40b72400  0 [volumes INFO mgr_util] cleaning up connections: []
debug 2022-02-05T00:09:18.148+0000 ffff58661400  0 log_channel(cluster) log [DBG] : pgmap v367975: 96 pgs: 45 active+undersized+degraded, 51 active+undersized; 649 MiB data, 2.8 GiB used, 197 GiB / 200 GiB avail; 767 B/s rd, 1 op/s; 239/717 objects degraded (33.333%)
debug 2022-02-05T00:09:20.148+0000 ffff58661400  0 log_channel(cluster) log [DBG] : pgmap v367976: 96 pgs: 45 active+undersized+degraded, 51 active+undersized; 649 MiB data, 2.8 GiB used, 197 GiB / 200 GiB avail; 1.2 KiB/s rd, 2 op/s; 239/717 objects degraded (33.333%)
debug 2022-02-05T00:09:21.788+0000 ffff53657400  0 [progress INFO root] Processing OSDMap change 83..83
debug 2022-02-05T00:09:22.144+0000 ffff58661400  0 log_channel(cluster) log [DBG] : pgmap v367977: 96 pgs: 45 active+undersized+degraded, 51 active+undersized; 649 MiB data, 2.8 GiB used, 197 GiB / 200 GiB avail; 853 B/s rd, 1 op/s; 239/717 objects degraded (33.333%)
debug 2022-02-05T00:09:23.188+0000 ffff5765f400  0 [balancer INFO root] Optimize plan auto_2022-02-05_00:09:23
debug 2022-02-05T00:09:23.188+0000 ffff5765f400  0 [balancer INFO root] Mode upmap, max misplaced 0.050000
debug 2022-02-05T00:09:23.188+0000 ffff5765f400  0 [balancer INFO root] Some objects (0.333333) are degraded; try again later
ubuntu@:~$ 

There are no events in the namespace:

ubuntu@df1:~$ k get events -n rook-ceph
No resources found in rook-ceph namespace.

It looks like the password can be reset with the cephadm command below, but how do I log in to the pod as root?

ceph dashboard ac-user-set-password USERNAME PASSWORD
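
(In recent Ceph releases the password apparently can no longer be passed inline; it has to come from a file via -i, so the call would look more like the following, where /tmp/dashboard-password is just a placeholder file containing the new password:)

ceph dashboard ac-user-set-password admin -i /tmp/dashboard-password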

This cephadm command cannot be executed as a non-root user:

ubuntu@:~$ k exec -it rook-ceph-tools-7884798859-7vcnz -n rook-ceph -- bash
[rook@rook-ceph-tools-7884798859-7vcnz /]$ cephadm
ERROR: cephadm should be run as root
[rook@rook-ceph-tools-7884798859-7vcnz /]$ 
2 Answers

I just ran "ceph dashboard ac-user-set-password admin -i 'file with password'" and my password was changed. I don't think cephadm works when exec'ing into the pod.
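
A rough end-to-end version of that, assuming the default rook-ceph-tools toolbox deployment (the password below is a placeholder):

$ kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
# inside the toolbox pod; plain ceph commands talk to the mgr, no root or cephadm needed
$ echo 'NewDashboardPassword' > /tmp/dashboard-password
$ ceph dashboard ac-user-set-password admin -i /tmp/dashboard-password
$ rm /tmp/dashboard-password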



In my case, adding a new user solved the problem.

$ kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash

$ ceph dashboard ac-user-show
[] # In my case, there were no users to begin with.

# If you don't have permission to create files, process substitution works too,
# e.g. ...-create admin -i <(echo 'admin_password')
$ ceph dashboard ac-user-create <username> -i <(echo '<password>')

$ ceph dashboard ac-role-show
["administrator", "read-only", "block-manager", "rgw-manager", "cluster-manager", "pool-manager", "cephfs-manager", "ganesha-manager"]

$ ceph dashboard ac-user-add-roles <username> administrator
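
To check the new credentials without an ingress, a simple port-forward works (assuming the default rook-ceph-mgr-dashboard service with SSL on port 8443; adjust if the dashboard is configured differently):

$ kubectl -n rook-ceph port-forward svc/rook-ceph-mgr-dashboard 8443:8443
# then open https://localhost:8443 and log in with the user created above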