Data only ends up on one shard (replica set) when sharding


I have three replica sets and am trying to set up sharding.

Even after I insert 500,000 documents into a collection, all of them end up on a single shard. This is my first attempt at creating a sharded cluster.

I have also posted the output of getShardDistribution() for the collections abcd and myBooks below.

The abcd collection is in the shardingFinalDemo database, and myBooks is in the books database.

For each collection, the entire data set sits in a single chunk.

Here is the output of sh.status():

  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("5e95cb79e8628e83e972957b")
  }
  shards:
        {  "_id" : "first shard",  "host" : "first shard/localhost:27011,localhost:27012,localhost:27013",  "state" : 1 }
        {  "_id" : "second shard",  "host" : "second shard/localhost:27021,localhost:27022,localhost:27023",  "state" : 1 }
        {  "_id" : "sharding 3 ",  "host" : "sharding 3 /localhost:27031,localhost:27032,localhost:27033",  "state" : 1 }
  active mongoses:
        "4.2.2" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                No recent migrations
  databases:
        {  "_id" : "books",  "primary" : "second shard",  "partitioned" : true,  "version" : {  "uuid" : UUID("97774c76-23f3-4455-af44-6b1f15f849c9"),  "lastMod" : 1 } }
                books.myBooks
                        shard key: { "_id" : 1 }
                        unique: true
                        balancing: true
                        chunks:
                                second shard    1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : second shard Timestamp(1, 0) 
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                first shard 1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : first shard Timestamp(1, 0) 
        {  "_id" : "demoDb",  "primary" : "sharding 3 ",  "partitioned" : true,  "version" : {  "uuid" : UUID("41903e6a-fcff-4ad6-8618-70b0fd5b3c07"),  "lastMod" : 1 } }
                demoDb.demoShard
                        shard key: { "_id" : 1 }
                        unique: true
                        balancing: true
                        chunks:
                                sharding 3  1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : sharding 3  Timestamp(1, 0) 
        {  "_id" : "demoShard",  "primary" : "first shard",  "partitioned" : true,  "version" : {  "uuid" : UUID("49913dce-d505-42cd-9009-a67f6f002b82"),  "lastMod" : 1 } }
        {  "_id" : "shardingFinalDemo",  "primary" : "first shard",  "partitioned" : true,  "version" : {  "uuid" : UUID("61324d20-5e3b-437d-ae52-542d479fd244"),  "lastMod" : 1 } }
                shardingFinalDemo.abcd
                        shard key: { "name" : 1 }
                        unique: true
                        balancing: true
                        chunks:
                                first shard 1
                        { "name" : { "$minKey" : 1 } } -->> { "name" : { "$maxKey" : 1 } } on : first shard Timestamp(1, 0) 
db.abcd.getShardDistribution()

Shard first shard at first shard/localhost:27011,localhost:27012,localhost:27013
 data : 249KiB docs : 5000 chunks : 1
 estimated data per chunk : 249KiB
 estimated docs per chunk : 5000

Totals
 data : 249KiB docs : 5000 chunks : 1
 Shard first shard contains 100% data, 100% docs in cluster, avg obj size on shard : 51B
db.myBooks.getShardDistribution()

Shard second shard at second shard/localhost:27021,localhost:27022,localhost:27023
 data : 25.16MiB docs : 455009 chunks : 1
 estimated data per chunk : 25.16MiB
 estimated docs per chunk : 455009

Totals
 data : 25.16MiB docs : 455009 chunks : 1
 Shard second shard contains 100% data, 100% docs in cluster, avg obj size on shard : 58B

mongodb mongoose sharding replicaset
1 Answer

It looks like you have chosen _id as the shard key. The default ObjectId values in _id increase monotonically, so consecutive inserts all fall into the same (top) chunk and therefore land on the same shard, which appears to be exactly what you are seeing. See here for an explanation.

Try one of the following:

  • Use a hashed shard key on the _id field
  • Use a different, better-distributed field as the shard key
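
A minimal mongosh sketch of the first option, using the books.myBooks namespace from the question. Note two assumptions: the collection would have to be re-created (the shard key of an already-sharded collection cannot be changed before MongoDB 5.0), and a hashed shard key cannot also carry the unique: true constraint shown in the original sh.status() output, so that option must be dropped:

```
// Run against the mongos router. Hashed sharding distributes
// monotonically increasing _id values (ObjectIds) evenly across
// shards instead of funneling every insert into one chunk.
sh.enableSharding("books")
sh.shardCollection("books.myBooks", { _id: "hashed" })

// Alternative: shard on another well-distributed field, e.g. a
// hypothetical "author" field, compounded with _id for uniqueness:
// sh.shardCollection("books.myBooks", { author: 1, _id: 1 })
```

After resharding this way, inserting the 500,000 documents again should produce multiple chunks that the balancer can migrate across all three shards.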