For more than two weeks I have been observing a roughly 2-hour CPU peak on my RDS instance (PostgreSQL 10.6 on db.t3.small) every day during working hours, together with increased read and write latency, which causes slow responses or timeouts in my application.
I did investigate (see below), and at this point I am fairly confident these user-impacting peaks are not caused by my own usage; I tend to think they come from some rogue RDS management task or from a PostgreSQL issue.
Has anyone run into and solved a similar problem with PostgreSQL? Can anyone help me investigate the RDS management-task angle, or point me to other avenues to dig into this?
What I observed:
What I investigated:
Here are the basic logs around the start of one peak (before statement logging was activated):
2019-12-09 15:04:05 UTC::@:[4221]:LOG: checkpoint starting: time
2019-12-09 15:04:05 UTC::@:[4221]:LOG: checkpoint complete: wrote 2 buffers (0.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.202 s, sync=0.001 s, total=0.213 s; sync files=2, longest=0.001 s, average=0.000 s; distance=16369 kB, estimate=16395 kB
2019-12-09 15:09:05 UTC::@:[4221]:LOG: checkpoint starting: time
2019-12-09 15:09:05 UTC::@:[4221]:LOG: checkpoint complete: wrote 1 buffers (0.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.101 s, sync=0.001 s, total=0.112 s; sync files=1, longest=0.001 s, average=0.001 s; distance=16384 kB, estimate=16394 kB
2019-12-09 15:14:05 UTC::@:[4221]:LOG: checkpoint starting: time
2019-12-09 15:14:05 UTC::@:[4221]:LOG: checkpoint complete: wrote 1 buffers (0.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.101 s, sync=0.002 s, total=0.113 s; sync files=1, longest=0.002 s, average=0.002 s; distance=16384 kB, estimate=16393 kB
2019-12-09 15:19:06 UTC::@:[4221]:LOG: checkpoint starting: time
2019-12-09 15:19:06 UTC::@:[4221]:LOG: checkpoint complete: wrote 1 buffers (0.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.101 s, sync=0.001 s, total=0.113 s; sync files=1, longest=0.001 s, average=0.001 s; distance=16384 kB, estimate=16392 kB

[CPU PEAK STARTS here that day, at 16:20 UTC+1]

2019-12-09 15:24:06 UTC::@:[4221]:LOG: checkpoint starting: time
2019-12-09 15:24:06 UTC::@:[4221]:LOG: checkpoint complete: wrote 1 buffers (0.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.101 s, sync=0.002 s, total=0.114 s; sync files=1, longest=0.002 s, average=0.002 s; distance=16384 kB, estimate=16391 kB
2019-12-09 15:29:06 UTC::@:[4221]:LOG: checkpoint starting: time
2019-12-09 15:29:06 UTC::@:[4221]:LOG: checkpoint complete: wrote 1 buffers (0.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.101 s, sync=0.002 s, total=0.113 s; sync files=1, longest=0.001 s, average=0.001 s; distance=16384 kB, estimate=16390 kB
2019-12-09 15:34:06 UTC::@:[4221]:LOG: checkpoint starting: time
2019-12-09 15:34:06 UTC::@:[4221]:LOG: checkpoint complete: wrote 1 buffers (0.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.103 s, sync=0.002 s, total=0.118 s; sync files=1, longest=0.002 s, average=0.002 s; distance=16384 kB, estimate=16390 kB
2019-12-09 15:39:06 UTC::@:[4221]:LOG: checkpoint starting: time
2019-12-09 15:39:06 UTC::@:[4221]:LOG: checkpoint complete: wrote 1 buffers (0.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.104 s, sync=0.003 s, total=0.127 s; sync files=1, longest=0.002 s, average=0.002 s; distance=16384 kB, estimate=16389 kB
2019-12-09 15:44:06 UTC::@:[4221]:LOG: checkpoint starting: time
2019-12-09 15:44:06 UTC::@:[4221]:LOG: checkpoint complete: wrote 2 buffers (0.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.219 s, sync=0.010 s, total=0.303 s; sync files=2, longest=0.010 s, average=0.005 s; distance=16392 kB, estimate=16392 kB
2019-12-09 15:49:07 UTC::@:[4221]:LOG: checkpoint starting: time
2019-12-09 15:49:09 UTC::@:[4221]:LOG: checkpoint complete: wrote 1 buffers (0.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.318 s, sync=0.516 s, total=2.426 s; sync files=1, longest=0.516 s, average=0.516 s; distance=16375 kB, estimate=16390 kB
2019-12-09 15:54:07 UTC::@:[4221]:LOG: checkpoint starting: time
2019-12-09 15:54:09 UTC::@:[4221]:LOG: checkpoint complete: wrote 1 buffers (0.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.367 s, sync=1.230 s, total=2.043 s; sync files=1, longest=1.230 s, average=1.230 s; distance=16384 kB, estimate=16389 kB
2019-12-09 15:59:07 UTC::@:[4221]:LOG: checkpoint starting: time
2019-12-09 15:59:08 UTC::@:[4221]:LOG: checkpoint complete: wrote 1 buffers (0.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.139 s, sync=0.195 s, total=1.124 s; sync files=1, longest=0.195 s, average=0.195 s; distance=16383 kB, estimate=16389 kB
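Since statement logging was not yet active when these logs were captured, my next step could be to sample pg_stat_activity during the next peak to see what is actually running. A minimal sketch of what I have in mind (the endpoint, database and credentials below are placeholders, not my real values):

```python
# Minimal sketch: poll pg_stat_activity every 30 s so the next peak
# records which backends/queries were active at the time.
# Assumptions: psycopg2 is installed; connection parameters are placeholders.
import time
import psycopg2

conn = psycopg2.connect(
    host="my-instance.xxxxxxxx.eu-west-1.rds.amazonaws.com",  # placeholder endpoint
    dbname="postgres",
    user="myuser",
    password="mypassword",
)
conn.autocommit = True

while True:
    with conn.cursor() as cur:
        cur.execute("""
            SELECT now(), pid, state, wait_event_type, wait_event,
                   backend_type, query
            FROM pg_stat_activity
            WHERE state <> 'idle'
        """)
        for row in cur.fetchall():
            print(row)
    time.sleep(30)
```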
[Screenshots: CPU around one peak · CPU over a week · Read latency around a peak · Write latency around a peak · Performance Insights around the Dec 10 peak · Performance Insights around the Dec 9 peak]
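Regarding the RDS management-task angle, one thing I could also check is whether any RDS events (backups, maintenance, etc.) line up with the daily peaks. A rough sketch using boto3 (the instance identifier and region are placeholders):

```python
# Rough sketch: list RDS events for the instance over the last 14 days
# (the maximum lookback of the API) to see whether backups or maintenance
# coincide with the daily CPU peaks.
# Assumptions: boto3 is configured with credentials; "my-instance" and the
# region are placeholders, not my real values.
import boto3

rds = boto3.client("rds", region_name="eu-west-1")  # placeholder region
events = rds.describe_events(
    SourceIdentifier="my-instance",   # placeholder DB instance identifier
    SourceType="db-instance",
    Duration=14 * 24 * 60,            # in minutes, i.e. 14 days back
)
for e in events["Events"]:
    print(e["Date"], e["Message"])
```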
It could be that the burst credits of your disk are being exhausted by PostgreSQL background processes. If I remember correctly, all disks on RDS are gp2 volumes, which means you get a certain baseline plus burst credits that let you exceed that baseline for a short period. You should be able to see this effect in the IO queue on the monitoring page: if this is what is happening, you will see the number of queued operations spike. The simplest solution is to just increase the disk size.
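If it helps, one way to check this could be to pull the BurstBalance and DiskQueueDepth CloudWatch metrics for the instance and see whether the balance drops to zero right when the peaks start. A rough sketch with boto3 (the instance identifier and region are placeholders):

```python
# Rough sketch: fetch gp2 BurstBalance and DiskQueueDepth for the last 24 h
# in 5-minute buckets and print them, so they can be lined up with the peaks.
# Assumptions: boto3 is configured with credentials; "my-instance" and the
# region are placeholders.
from datetime import datetime, timedelta
import boto3

cw = boto3.client("cloudwatch", region_name="eu-west-1")  # placeholder region
end = datetime.utcnow()
start = end - timedelta(hours=24)

for metric in ("BurstBalance", "DiskQueueDepth"):
    resp = cw.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName=metric,
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-instance"}],
        StartTime=start,
        EndTime=end,
        Period=300,
        Statistics=["Average"],
    )
    for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
        print(metric, point["Timestamp"], round(point["Average"], 2))
```

Growing a gp2 volume also raises its baseline IOPS, which is why simply increasing the disk size tends to make the credit exhaustion go away.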