Ceph radosgw initialization fails with "Numerical result out of range" error

Problem description

I'm running into a problem while trying to initialize radosgw in my Ceph cluster. Initialization fails with error code 34, "Numerical result out of range". I suspect this is related to a misconfiguration of the pools or placement-group settings, but I'd appreciate any guidance from the experts here.

Here is the full log output:

deferred set uid:gid to 64045:64045 (ceph:ceph)
ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable), process radosgw, pid 927813
framework: civetweb
framework conf key: port, val: 8080
framework conf key: num_threads, val: 100
radosgw_Main not setting numa affinity
rgw_d3n: rgw_d3n_l1_local_datacache_enabled=0
D3N datacache enabled: 0
rgw main: rgw_init_ioctx ERROR: librados::Rados::pool_create returned (34) Numerical result out of range (this can be due to a pool or placement group misconfiguration, e.g. pg_num < pgp_num

rgw main: failed reading realm info: ret -34 (34) Numerical result out of range
rgw main: ERROR: failed to start notify service ((34) Numerical result out of range
rgw main: ERROR: failed to init services (ret=(34) Numerical result out of range)
Couldn't init storage provider (RADOS)
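For context, error code 34 is ERANGE, and librados returns negated errno values (hence "ret -34"). Below is a short sketch of my understanding of the check that fails; it is a simplified model for illustration, not the actual Ceph source, and the `pool_create_allowed` helper is my own invention:

```python
import errno
import os

# Error code 34 in the log is ERANGE ("Numerical result out of range").
# librados returns negated errno values, hence "ret -34" in the rgw log.
assert errno.ERANGE == 34
print(os.strerror(errno.ERANGE))

# Simplified model (an assumption for illustration, not the exact Ceph
# source) of the per-OSD placement-group cap that makes pool_create
# fail with -ERANGE: the projected number of PG replicas per OSD after
# creating the pool must stay at or below mon_max_pg_per_osd.
def pool_create_allowed(new_pg_num, replica_size, existing_pg_replicas,
                        num_osds, mon_max_pg_per_osd):
    if num_osds == 0:
        # With no OSDs up/in, any pg_num is "out of range" -- a common
        # trigger for this exact error on a fresh or broken cluster.
        return False
    projected = (existing_pg_replicas + new_pg_num * replica_size) / num_osds
    return projected <= mon_max_pg_per_osd

# With the values from my cluster (pg_num=16, size=3,
# mon_max_pg_per_osd=100000), even a single OSD passes the check:
print(pool_create_allowed(16, 3, 0, 1, 100000))  # True
```

If this model is roughly right, my pg_num and mon_max_pg_per_osd values should not trip the cap on their own, which makes me wonder whether the OSD count is the real problem.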


root@node01:~# ceph daemon /var/run/ceph/ceph-mon.node01.asok config show|grep -Ei 'mon_max_pg_per_osd|osd_pool_default_pg_num|osd_pool_default_pgp_num'
    "mon_max_pg_per_osd": "100000",
    "osd_pool_default_pg_num": "16",
    "osd_pool_default_pgp_num": "16",


root@node01:~# cat /etc/ceph/ceph.conf 
[global]
fsid = de49c7d8-5c5b-483c-9bdd-8f214dc4343e
mon host = node01
public network = 11.0.0.0/24
cluster network = 11.0.0.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
osd pool default size = 3                        
osd pool default min size = 2                  
osd pool default pg num = 16               
osd pool default pgp num = 16
osd crush chooseleaf type = 1
mon_max_pg_per_osd = 100000

[client.rgw.node01]
host = node01
keyring = /var/lib/ceph/radosgw/ceph-rgw.node01/keyring
log file = /var/log/ceph/ceph-rgw-node01.log
rgw frontends = civetweb port=8080 num_threads=100
rgw_zone = node01-zone

I have checked the pg_num and pgp_num values of my pools and made sure that pg_num is greater than or equal to pgp_num. I have also waited for the cluster to rebalance after making any changes.

Despite these efforts, I still hit the same problem. I have checked the cluster's status and health with ceph status and ceph health detail, but I didn't find anything obviously wrong.
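One more check I'm considering: whether any OSDs are actually up and in, since radosgw's automatic pool creation can fail with ERANGE when no OSDs are available. A sketch of how I'd test the JSON from `ceph osd stat -f json` (the field names are assumed from the usual osd stat output and may differ between releases; `osds_available` is my own hypothetical helper):

```python
import json

# Hypothetical helper: parse the JSON from `ceph osd stat -f json` and
# flag the zero-OSD condition that can surface as ERANGE when radosgw
# tries to create its pools. The "num_up_osds"/"num_in_osds" field
# names are assumptions about the osd stat output format.
def osds_available(osd_stat_json):
    stat = json.loads(osd_stat_json)
    return stat.get("num_up_osds", 0) > 0 and stat.get("num_in_osds", 0) > 0

# Canned example (illustrative, not captured from a live cluster):
sample = '{"num_osds": 3, "num_up_osds": 0, "num_in_osds": 0}'
print(osds_available(sample))  # False: pool creation would be refused
```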

I would appreciate any insight or suggestions on how to resolve this. Thanks in advance for your help!

Best regards

initialization ceph radosgw