Ceph Filestore to Bluestore migration stuck (Luminous)


I am migrating the Filestore backend of my small Ceph cluster (CentOS 7) to Bluestore (on Luminous), following the instructions at http://docs.ceph.com/docs/master/rados/operations/bluestore-migration/.
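
For reference, the "mark out and replace" procedure I am following from that page looks roughly like this (osd id 2 and /dev/sdX are placeholders; the actual device path differs per host):

    ID=2
    DEVICE=/dev/sdX
    ceph osd out $ID
    while ! ceph osd safe-to-destroy osd.$ID ; do sleep 60 ; done
    systemctl kill ceph-osd@$ID
    umount /var/lib/ceph/osd/ceph-$ID
    ceph-volume lvm zap $DEVICE
    ceph osd destroy $ID --yes-i-really-mean-it
    ceph-volume lvm create --bluestore --data $DEVICE --osd-id $ID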

For the past hour the PG remapping has been stuck at 33% and is not budging. The log is below:

master@ceph/ ceph osd out 2
osd.2 is already out. 
master@ceph/ ceph -w
  cluster:
    id:     e6104db4-284f-4a13-8128-570e3427a9f9
    health: HEALTH_WARN
            147718/443154 objects misplaced (33.333%)

  services:
    mon: 3 daemons, quorum node2,node3,master
    mgr: master(active), standbys: node3, node2
    mds: cephfs-1/1/1 up  {0=master=up:active}
    osd: 5 osds: 5 up, 4 in; 264 remapped pgs

  data:
    pools:   3 pools, 264 pgs
    objects: 144k objects, 8245 MB
    usage:   51972 MB used, 836 GB / 886 GB avail
    pgs:     147718/443154 objects misplaced (33.333%)
             264 active+clean+remapped

  io:
    client:   1695 B/s wr, 0 op/s rd, 0 op/s wr


2018-06-07 05:04:18.210576 mon.node2 [WRN] Health check update: 147717/443151 objects misplaced (33.333%) (OBJECT_MISPLACED)
2018-06-07 05:04:44.258809 mon.node2 [WRN] Health check update: 147718/443154 objects misplaced (33.333%) (OBJECT_MISPLACED)
2018-06-07 05:05:18.438887 mon.node2 [WRN] Health check update: 147717/443151 objects misplaced (33.333%) (OBJECT_MISPLACED)
2018-06-07 05:05:44.571445 mon.node2 [WRN] Health check update: 147718/443154 objects misplaced (33.333%) (OBJECT_MISPLACED)
2018-06-07 05:06:18.754717 mon.node2 [WRN] Health check update: 147717/443151 objects misplaced (33.333%) (OBJECT_MISPLACED)
2018-06-07 05:06:44.887698 mon.node2 [WRN] Health check update: 147718/443154 objects misplaced (33.333%) (OBJECT_MISPLACED)
1 Answer

Try the following (a command sketch follows the list):

  1. Bring the osd back in (ceph osd in 2). This will return your cluster to a healthy state.
  2. Make sure there is no scrubbing in progress.
  3. Set the noscrub and nodeep-scrub flags to avoid extra load on the servers.
  4. Mark osd 2 out again.
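
A minimal command sequence for these steps could look like this (osd id 2 is taken from your output; the pg dump grep is just one way to spot in-flight scrubs):

    ceph osd in 2                           # 1. bring osd.2 back in
    ceph pg dump pgs_brief | grep -i scrub  # 2. verify no PGs are scrubbing
    ceph osd set noscrub                    # 3. block new scrubs ...
    ceph osd set nodeep-scrub               #    ... and deep scrubs
    ceph osd out 2                          # 4. mark osd.2 out again
    ceph -w                                 #    watch the rebalance progress

Once the migration completes, the flags can be cleared again with ceph osd unset noscrub and ceph osd unset nodeep-scrub.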

If it hangs again, check the OSD and Ceph logs for more details.
