PostgreSQL: one index is fast, the other one is slow

Question · Votes: 3 · Answers: 2

We have a database table with 450 million rows and the following structure:

   uid      id_1     id_2   d1  d2  d3  d4  d5  d6  d7  d8  d9  d10 d11 d12 d13 d14 d15 d16 d17
81038392    5655067 5468882 373 117 185 152 199 173 168 138 185 159 154 34  38  50  34  41  57
81038393    5655067 5468883 374 116 184 118 170 143 144 113 164 137 138 37  39  53  37  42  60
81038394    5655067 5468884 371 118 187 118 170 143 144 105 157 131 136 32  35  47  32  39  53
81038395    5655067 5468885 370 116 184 118 170 143 144 105 157 131 136 31  35  46  31  38  53
81038396    5655067 5468886 370 117 185 118 170 143 144 105 157 131 136 29  34  44  29  37  50
81038397    5655067 5470853 368 117 185 110 163 137 140 105 157 131 136 34  36  48  34  39  55
81038398    5655067 5470854 372 119 188 118 170 143 144 113 164 137 138 34  36  49  34  40  55
81038399    5655067 5470855 360 115 182 103 151 131 136 98  145 125 131 30  34  45  30  38  51
81038400    5655067 5470856 357 112 177 103 151 131 136 98  145 125 131 30  34  45  30  37  51
81038401    5655067 5470857 356 111 176 103 151 131 136 98  145 125 131 28  33  43  28  36  50
81038402    5655067 5470858 358 113 179 103 151 131 136 98  145 125 131 31  35  46  31  38  52
81038403    5655067 5472811 344 109 173 152 199 173 168 138 185 159 154 31  36  46  31  39  52
81038404    5655068 5468882 373 117 185 152 199 173 168 138 185 159 154 34  38  50  34  41  57
81038405    5655068 5468883 374 116 184 118 170 143 144 113 164 137 138 37  39  53  37  42  60
81038406    5655068 5468884 371 118 187 118 170 143 144 105 157 131 136 32  35  47  32  39  53
81038407    5655068 5468885 370 116 184 118 170 143 144 105 157 131 136 31  35  46  31  38  53
81038408    5655068 5468886 370 117 185 118 170 143 144 105 157 131 136 29  34  44  29  37  50
81038409    5655068 5470853 368 117 185 110 163 137 140 105 157 131 136 34  36  48  34  39  55
81038410    5655068 5470854 372 119 188 118 170 143 144 113 164 137 138 34  36  49  34  40  55
81038411    5655068 5470855 360 115 182 103 151 131 136 98  145 125 131 30  34  45  30  38  51
81038412    5655068 5470856 357 112 177 103 151 131 136 98  145 125 131 30  34  45  30  37  51
81038413    5655068 5470857 356 111 176 103 151 131 136 98  145 125 131 28  33  43  28  36  50
81038414    5655068 5470858 358 113 179 103 151 131 136 98  145 125 131 31  35  46  31  38  52

We frequently need to run queries like these:

Query 1:

EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM mytable WHERE id_1 = 5655067;

                                                               QUERY PLAN                                                                
-----------------------------------------------------------------------------------------------------------------------------------------
 Index Scan using id_1_idx on mytable (cost=0.57..99187.68 rows=25742 width=80) (actual time=47.081..2600.899 rows=21487 loops=1)
   Index Cond: (id_1 = 5655067)
   Buffers: shared hit=9 read=4816
   I/O Timings: read=2563.181
 Planning time: 0.151 ms
 Execution time: 2602.320 ms
(6 rows)

Query 2:

EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM mytable WHERE id_2 = 5670433;

                                                            QUERY PLAN                                                             
-----------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on mytable (cost=442.02..89887.42 rows=23412 width=80) (actual time=113.200..42127.512 rows=21487 loops=1)
   Recheck Cond: (id_2 = 5670433)
   Heap Blocks: exact=16988
   Buffers: shared hit=30 read=17020
   I/O Timings: read=41971.798
   ->  Bitmap Index Scan on id_2_idx  (cost=0.00..436.16 rows=23412 width=0) (actual time=104.928..104.929 rows=21487 loops=1)
         Index Cond: (id_2 = 5670433)
         Buffers: shared hit=2 read=60
         I/O Timings: read=99.235
 Planning time: 0.163 ms
 Execution time: 42132.556 ms
(11 rows)

There are roughly 23,000 to 25,000 distinct values of id_1 and id_2, and both queries always return around 24,000 rows. The data is read-only and does not change over time.

The problem:

  • Query 1 takes about 3 seconds, which is a bit much but still tolerable.
  • Query 2 takes 30 to 40 seconds, which is far too much for us, since the service is an interactive web service.

We have indexed both id_1 and id_2. We also added a composite index on (id_1, id_2), as suggested by the Azure PostgreSQL-as-a-service platform the data lives on. It did not help.

My hypothesis is that Query 1 is fast because all of its rows are located sequentially in the table, while with Query 2 the matching rows are always scattered across the whole table.

Restructuring the data to speed up Query 2 is not a good option, because that would degrade the performance of Query 1. I understand that the way this data is structured is not ideal, but I have no control over it. Any suggestions on how to speed Query 2 up to a reasonable level?

Edit 2:

The index creation statements:

CREATE INDEX id_1_idx ON mytable (id_1);
CREATE INDEX id_2_idx ON mytable (id_2);

Vacuuming the table did not change the plan. After vacuuming, the output of EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM mytable WHERE id_1 = 5655067 was very similar. Here is the output of the verbose vacuum:

VACUUM (VERBOSE, ANALYZE) mytable;

INFO:  vacuuming "public.mytable"
INFO:  index "mytable_pkey" now contains 461691169 row versions in 1265896 pages
DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 2695.21 s.
INFO:  index "id_1_idx" now contains 461691169 row versions in 1265912 pages
DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 1493.20 s.
INFO:  index "id_2_idx" now contains 461691169 row versions in 1265912 pages
DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 1296.06 s.
INFO:  index "mytable_id_1_id_2_idx" now contains 461691169 row versions in 1265912 pages
DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 2364.16 s.
INFO:  "mytable": found 0 removable, 389040319 nonremovable row versions in 5187205 out of 6155883 pages
DETAIL:  0 dead row versions cannot be removed yet, oldest xmin: 12767
There were 0 unused item pointers.
Skipped 0 pages due to buffer pins, 0 frozen pages.
0 pages are entirely empty.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 13560.60 s.
INFO:  analyzing "public.mytable"
INFO:  "mytable": scanned 30000 of 6155883 pages, containing 2250000 live rows and 0 dead rows; 30000 rows in sample, 461691225 estimated total rows
VACUUM
Tags: postgresql, indexing
2 Answers
1 vote

TL;DR

Storage I/O is your main bottleneck, plus there is not enough RAM for the indexes, as you can calculate yourself:

For the bitmap heap scan, the average block read latency works out to roughly 2.5 ms (17,020 blocks read in 41,971.798 ms), which is far too slow.

The only way to avoid disk reads is plenty of RAM. Faster storage would make the whole system more scalable, since this is probably not the only query type and not the only table in the database.

Long version:

Reading the EXPLAIN output reveals that the cost evaluation done by the planner is way off, and that the performance drop comes from disk reads.

Since, as you wrote, the data never changes over time (so you know the value ranges in advance), you could also range-partition the table on these two columns, so that only a certain partition would have to be scanned (smaller index, smaller table heap to read). However, if the application accessing this data eventually touches more or less the whole data range anyway, this will not help much either.

As a consequence, you should instead look at replacing the storage subsystem so that it can handle your queries within the performance requirements your application has.

I have a suspicion that your PostgreSQL server is still running on HDDs rather than SSDs. A small test with only 120M rows shows the following characteristics for both indexes:
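A declarative range-partitioning sketch of that idea (PostgreSQL 10 or later; the partition names, boundary values, and shortened column list here are invented for illustration, not taken from the question):

```sql
-- Sketch only: choose boundaries that match the actual id_2 value ranges.
CREATE TABLE mytable_part (
    uid  bigint,
    id_1 integer,
    id_2 integer
    -- ... plus the d1 .. d17 columns ...
) PARTITION BY RANGE (id_2);

CREATE TABLE mytable_part_a PARTITION OF mytable_part
    FOR VALUES FROM (5400000) TO (5500000);
CREATE TABLE mytable_part_b PARTITION OF mytable_part
    FOR VALUES FROM (5500000) TO (5600000);

-- Per-partition indexes stay small, so lookups touch less of the heap.
CREATE INDEX ON mytable_part_a (id_2);
CREATE INDEX ON mytable_part_b (id_2);
```

With a WHERE id_2 = ... predicate the planner can then prune all but the one matching partition.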

create table nums (
    uid  integer primary key,
    id_1 integer,
    id_2 integer,
    d1  integer, d2  integer, d3  integer, d4  integer,
    d5  integer, d6  integer, d7  integer, d8  integer,
    d9  integer, d10 integer, d11 integer, d12 integer,
    d13 integer, d14 integer, d15 integer, d16 integer,
    d17 integer
);

INSERT INTO nums
SELECT generate_series(80000001, 200000000)  AS uid,
       (random() * 23000)::integer + 5600000 AS id_1,
       (random() * 25000)::integer + 5600000 AS id_2,
       (random() * 1000)::integer AS d1,
       (random() * 1000)::integer AS d2,
       (random() * 1000)::integer AS d3,
       (random() * 1000)::integer AS d4,
       (random() * 1000)::integer AS d5,
       (random() * 1000)::integer AS d6,
       (random() * 1000)::integer AS d7,
       (random() * 1000)::integer AS d8,
       (random() * 1000)::integer AS d9,
       (random() * 1000)::integer AS d10,
       (random() * 1000)::integer AS d11,
       (random() * 100)::integer  AS d12,
       (random() * 100)::integer  AS d13,
       (random() * 100)::integer  AS d14,
       (random() * 100)::integer  AS d15,
       (random() * 100)::integer  AS d16,
       (random() * 100)::integer  AS d17;

create index id_1_idx on nums (id_1);
create index id_2_idx on nums (id_2);
cluster nums using id_1_idx;

...resulting in the following (both cold reads):

explain (analyze, buffers) select * from nums where id_1 = 5606001;
                                                       QUERY PLAN                                                        
-------------------------------------------------------------------------------------------------------------------------
 Index Scan using id_1_idx on nums  (cost=0.57..5816.92 rows=5198 width=80) (actual time=1.680..6.394 rows=5185 loops=1)
   Index Cond: (id_1 = 5606001)
   Buffers: shared read=88
   I/O Timings: read=4.397
 Planning Time: 4.002 ms
 Execution Time: 7.475 ms
(6 rows)

Time: 15.924 ms

...and for id_2:

explain (analyze, buffers) select * from nums where id_2 = 5606001; 
                                                        QUERY PLAN                                                         
---------------------------------------------------------------------------------------------------------------------------
 Index Scan using id_2_idx on nums  (cost=0.57..5346.53 rows=4777 width=80) (actual time=0.376..985.689 rows=4748 loops=1)
   Index Cond: (id_2 = 5606001)
   Buffers: shared hit=1 read=4755
   I/O Timings: read=972.555
 Planning Time: 0.203 ms
 Execution Time: 986.590 ms
(6 rows)

Time: 987.296 ms

So although my table is "only" 12 GiB plus 3x 2.5 GiB (PK + 2 indexes), it is still fast enough.

If the server is already running on SSDs, please make sure to (physically) separate the data storage for the WAL/log, the table data (tablespace), and the indexes (tablespace), to benefit from parallelism as much as possible and to reduce I/O interference caused by other services/applications on the same system.

Also consider a server system with much more memory for the table and index data (for this table roughly 48 GiB, plus about 10 GiB per index, assuming all integer columns), and then pre-warm to push the data from disk into memory. At least the indexes should then be able to stay in memory completely.
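One way to do the pre-warming, assuming the stock pg_prewarm extension is available on the server:

```sql
CREATE EXTENSION IF NOT EXISTS pg_prewarm;

-- Load an index into shared buffers so lookups avoid cold disk reads.
SELECT pg_prewarm('id_2_idx');

-- Only worthwhile if the table actually fits into RAM.
SELECT pg_prewarm('mytable');
```

The function returns the number of blocks it loaded; run it again after a restart, since shared buffers start out empty.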

EDIT: The reason my server did not use a bitmap (index + heap) scan is that I am running on SSDs and have tuned the random page cost down from the default 4 to 1.1. For an HDD system that would make no sense, of course.
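That setting can be tried per session first and persisted once it proves out; the value 1.1 is the common advice for SSD-backed storage, not a universal constant:

```sql
-- Session-level, for experimentation:
SET random_page_cost = 1.1;

-- Persist it cluster-wide (requires superuser):
ALTER SYSTEM SET random_page_cost = 1.1;
SELECT pg_reload_conf();
```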

EDIT #2: Re-testing the situation revealed an interesting behavior:

In my test, I assumed the first column uid to be the primary key column and a serial (sequential integer), by which the entries are initially sorted on disk. While generating the data, the values of the two interesting index columns, id_1 and id_2, are both generated randomly, which usually ends up being the worst case for big tables.

However, not so in this case. After creating the test data and the indexes, and after analyzing the table but before reordering the data using the index on column id_1, I now get these results:

explain (analyze, buffers) select * from nums where id_1 = 5606001;
                                                       QUERY PLAN                                                       
------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on nums  (cost=63.32..7761.68 rows=5194 width=80) (actual time=1.978..41.007 rows=5210 loops=1)
   Recheck Cond: (id_1 = 5606001)
   Heap Blocks: exact=5198
   Buffers: shared read=5217
   I/O Timings: read=28.732
   ->  Bitmap Index Scan on id_1_idx  (cost=0.00..62.02 rows=5194 width=0) (actual time=1.176..1.176 rows=5210 loops=1)
         Index Cond: (id_1 = 5606001)
         Buffers: shared read=19
         I/O Timings: read=0.124
 Planning Time: 7.214 ms
 Execution Time: 41.419 ms
(11 rows)

...and:

explain (analyze, buffers) select * from nums where id_2 = 5606001;
                                                       QUERY PLAN                                                       
------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on nums  (cost=58.52..7133.04 rows=4768 width=80) (actual time=7.305..43.830 rows=4813 loops=1)
   Recheck Cond: (id_2 = 5606001)
   Heap Blocks: exact=4805
   Buffers: shared hit=12 read=4810
   I/O Timings: read=28.181
   ->  Bitmap Index Scan on id_2_idx  (cost=0.00..57.33 rows=4768 width=0) (actual time=5.102..5.102 rows=4813 loops=1)
         Index Cond: (id_2 = 5606001)
         Buffers: shared read=17
         I/O Timings: read=2.414
 Planning Time: 0.227 ms
 Execution Time: 44.197 ms
(11 rows)

All plans + optimizations are available here:

I also followed my own best practice here and separated the indexes onto another tablespace on a different physical SSD.

As we can see, to fetch the ~5000 result rows it has to read more or less the same number of blocks here, using a bitmap heap scan in both cases.

The correlation of the two columns in this case:

 attname | correlation | n_distinct 
---------+-------------+------------
 id_1    |  -0.0047043 |      23003
 id_2    |  0.00157998 |      25004
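These numbers can presumably be read straight from the pg_stats view (a sketch against the test table; n_distinct there is the estimated distinct-value count):

```sql
SELECT attname, correlation, n_distinct
FROM pg_stats
WHERE tablename = 'nums'
  AND attname IN ('id_1', 'id_2');
```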

Now, re-testing the queries after a CLUSTER ... USING id_1_idx and re-analyzing yields the following correlations:

 attname | correlation  | n_distinct 
---------+--------------+------------
 id_1    |            1 |      22801
 id_2    | -0.000898521 |      24997

...revealing the following performance:

explain (analyze, buffers) select * from nums where id_1 = 5606001;
                                                       QUERY PLAN                                                       
------------------------------------------------------------------------------------------------------------------------
 Index Scan using id_1_idx on nums  (cost=0.57..179.02 rows=5083 width=80) (actual time=2.604..5.256 rows=5210 loops=1)
   Index Cond: (id_1 = 5606001)
   Buffers: shared read=90
   I/O Timings: read=4.107
 Planning Time: 4.039 ms
 Execution Time: 5.563 ms
(6 rows)

...which is much better, as expected, but:

explain (analyze, buffers) select * from nums where id_2 = 5606001;
                                                       QUERY PLAN                                                       
------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on nums  (cost=58.57..7140.12 rows=4775 width=80) (actual time=5.866..99.707 rows=4813 loops=1)
   Recheck Cond: (id_2 = 5606001)
   Heap Blocks: exact=4806
   Buffers: shared read=4823
   I/O Timings: read=31.389
   ->  Bitmap Index Scan on id_2_idx  (cost=0.00..57.38 rows=4775 width=0) (actual time=2.992..2.992 rows=4813 loops=1)
         Index Cond: (id_2 = 5606001)
         Buffers: shared read=17
         I/O Timings: read=0.338
 Planning Time: 0.210 ms
 Execution Time: 100.155 ms
(11 rows)

...more than twice as slow, although almost exactly the same number of blocks had to be read as in the first, random run.

Why is it so slow?

The physical reordering of the table data using index id_1_idx also affected the physical order of column id_2. Now, the purpose of the bitmap heap scan is to fetch the list of blocks to read in physical (on-disk) order, as produced by the bitmap index scan. In the first (random) case, there was a decent chance that multiple rows matching the condition were located in consecutive blocks on disk, resulting in fewer random disk accesses.

Interestingly (but this may just be because I am running on SSDs), disabling the bitmap scan reveals acceptable numbers:
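Bitmap scans can be switched off for the current session like this (a planner hint for testing, not something to leave on in production):

```sql
SET enable_bitmapscan = off;
```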

explain (analyze, buffers) select * from nums where id_2 = 5606001;
                                                        QUERY PLAN                                                        
--------------------------------------------------------------------------------------------------------------------------
 Index Scan using id_2_idx on nums  (cost=0.57..7257.12 rows=4775 width=80) (actual time=0.151..35.453 rows=4813 loops=1)
   Index Cond: (id_2 = 5606001)
   Buffers: shared read=4823
   I/O Timings: read=30.051
 Planning Time: 1.927 ms
 Execution Time: 35.810 ms
(6 rows)

All of these numbers are almost complete cold-start executions (as you can see from the absent or very low Buffers: shared hit numbers).

Interestingly, the I/O timings of the bitmap scan and the index scan for id_2 are quite similar, but the bitmap scan seems to introduce a massive overhead here.


0 votes

The difference is that id_1 is highly correlated, i.e. the order of the column corresponds to the physical order of the rows, while id_2 is not correlated.

Test it with:

SELECT attname, correlation
FROM pg_stats
WHERE tablename = 'mytable'
  AND attname IN ('id_1', 'id_2');

If the correlation is high, the rows for a single value of the column are located in a few adjacent blocks of the table. If the correlation is low, the rows are spread all over the table and many more blocks have to be read.

To achieve high correlation, you can rewrite the table with the CLUSTER statement to reorder the rows. In the absence of deletes and updates, a table will be physically ordered in insertion order.
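A minimal sketch, using the questioner's table and index names and favoring Query 2 at the expense of Query 1:

```sql
-- Rewrites the whole table in id_2 order; takes an ACCESS EXCLUSIVE lock
-- and needs roughly the table's size in free disk space while it runs.
CLUSTER mytable USING id_2_idx;
ANALYZE mytable;  -- refresh statistics, including the correlation values
```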

You can speed up one query or the other, but not both.
