Is 2G the size limit of coredump files on Linux?

Question (0 votes, 2 answers)

My operating system is Arch Linux. When a coredump was produced, I tried to debug it with gdb:

$ coredumpctl gdb 1621
......
       Storage: /var/lib/systemd/coredump/core.runTests.1014.b43166f4bba84bcba55e65ae9460beff.1621.1491901119000000000000.lz4
       Message: Process 1621 (runTests) of user 1014 dumped core.

                Stack trace of thread 1621:
                #0  0x00007ff1c0fcfa10 n/a (n/a)

GNU gdb (GDB) 7.12.1
......
Reading symbols from /home/xiaonan/Project/privDB/build/bin/runTests...done.
BFD: Warning: /var/tmp/coredump-28KzRc is truncated: expected core file size >= 2179375104, found: 2147483648.

I checked the /var/tmp/coredump-28KzRc file:

$ ls -alth /var/tmp/coredump-28KzRc
-rw------- 1 xiaonan xiaonan 2.0G Apr 11 17:00 /var/tmp/coredump-28KzRc

Is 2G the size limit for coredump files on Linux? I ask because my /var/tmp should have plenty of free disk space:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
dev              32G     0   32G   0% /dev
run              32G  3.1M   32G   1% /run
/dev/sda2       229G   86G  132G  40% /
tmpfs            32G  708M   31G   3% /dev/shm
tmpfs            32G     0   32G   0% /sys/fs/cgroup
tmpfs            32G  957M   31G   3% /tmp
/dev/sda1       511M   33M  479M   7% /boot
/dev/sda3       651G  478G  141G  78% /home

P.S. The `ulimit -a` output:

$ ulimit -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 257039
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 257039
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Update: the /etc/systemd/coredump.conf file:

$ cat coredump.conf
#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See coredump.conf(5) for details.

[Coredump]
#Storage=external
#Compress=yes
#ProcessSizeMax=2G
#ExternalSizeMax=2G
#JournalSizeMax=767M
#MaxUse=
#KeepFree=
Tags: linux gdb archlinux coredump
2 Answers

6 votes

@n.m. is correct.

(1) Modify the /etc/systemd/coredump.conf file:

[Coredump]
ProcessSizeMax=8G
ExternalSizeMax=8G
JournalSizeMax=8G

(2) Reload systemd's configuration:

# systemctl daemon-reload

Note that this only takes effect for newly generated core dump files.
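As a sketch of an alternative (assuming a systemd version with drop-in support), the same override can live in a drop-in file under /etc/systemd/coredump.conf.d/ so that package upgrades of the main coredump.conf do not clobber it; the file name here is hypothetical:

```ini
# /etc/systemd/coredump.conf.d/size.conf  (hypothetical name)
[Coredump]
ProcessSizeMax=8G
ExternalSizeMax=8G
```

Run `systemctl daemon-reload` afterwards, as above.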


2 votes

Is 2G the size limit of coredump files on Linux?

No. I routinely deal with core dumps larger than 4GiB.

ulimit -a

core file size          (blocks, -c) unlimited

This tells you the current limit in this shell. It tells you nothing about the environment runTests ran in. The process may have set its own limit via setrlimit(2), or its parent may have set one for it.


You could modify runTests to print its current limits using getrlimit(2) and see what is actually in effect while the process is running.

P.S. Just because the core is truncated does not mean it is completely useless (though it usually is). At the very least, you should try GDB's `where` command.
