How can I reduce the virtual memory required by a gccgo-compiled executable?

Question · votes: -5 · answers: 2

When I compile this simple Hello World example with gccgo, the resulting executable uses more than 800 MiB of VmData. I would like to know why that is, and whether there is anything I can do to reduce it. The sleep is only there to give me time to observe the memory usage.

Source:

package main

import (
  "fmt"
  "time"
)

func main() {
  fmt.Println("hello world")
  time.Sleep(1000000000 * 5) // 5 seconds, in nanoseconds
}

The script I use to compile it:

#!/bin/bash

TOOLCHAIN_PREFIX=i686-linux-gnu
OPTIMIZATION_FLAG="-O3"

CGO_ENABLED=1 \
CC=${TOOLCHAIN_PREFIX}-gcc-8 \
CXX=${TOOLCHAIN_PREFIX}-g++-8 \
AR=${TOOLCHAIN_PREFIX}-ar \
GCCGO=${TOOLCHAIN_PREFIX}-gccgo-8 \
CGO_CFLAGS="-g ${OPTIMIZATION_FLAG}" \
CGO_CPPFLAGS="" \
CGO_CXXFLAGS="-g ${OPTIMIZATION_FLAG}" \
CGO_FFLAGS="-g ${OPTIMIZATION_FLAG}" \
CGO_LDFLAGS="-g ${OPTIMIZATION_FLAG}" \
GOOS=linux \
GOARCH=386 \
go build -x \
   -compiler=gccgo \
   -gccgoflags=all="-static -g ${OPTIMIZATION_FLAG}" \
   $1

gccgo version:

$ i686-linux-gnu-gccgo-8 --version
i686-linux-gnu-gccgo-8 (Ubuntu 8.2.0-1ubuntu2~18.04) 8.2.0
Copyright (C) 2018 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Output from /proc/&lt;PID&gt;/status:

VmPeak:  811692 kB
VmSize:  811692 kB
VmLck:        0 kB
VmPin:        0 kB
VmHWM:     5796 kB
VmRSS:     5796 kB
VmData:  807196 kB
VmStk:      132 kB
VmExe:     2936 kB
VmLib:        0 kB
VmPTE:       52 kB
VmPMD:        0 kB
VmSwap:       0 kB

I ask because my device has only 512 MiB of RAM. I know this is virtual memory, but I would like to reduce or, if possible, eliminate the overcommit. It does not seem reasonable to me that a simple executable should require that much allocation.

linux go virtual-memory gccgo memory-overcommitment
2 Answers

0 votes

The likely cause is the libraries being linked into your code. My guess is that you could get a smaller logical address space if you linked explicitly against static libraries, so that only the minimum gets added to your executable. In any case, there is minimal harm in having a large logical address space.


0 votes

I was able to find where gccgo requests so much memory. It is in the mallocinit function in the file libgo/go/runtime/malloc.go:

// If we fail to allocate, try again with a smaller arena.
// This is necessary on Android L where we share a process
// with ART, which reserves virtual memory aggressively.
// In the worst case, fall back to a 0-sized initial arena,
// in the hope that subsequent reservations will succeed.
arenaSizes := [...]uintptr{
  512 << 20,
  256 << 20,
  128 << 20,
  0,
}

for _, arenaSize := range &arenaSizes {
  // SysReserve treats the address we ask for, end, as a hint,
  // not as an absolute requirement. If we ask for the end
  // of the data segment but the operating system requires
  // a little more space before we can start allocating, it will
  // give out a slightly higher pointer. Except QEMU, which
  // is buggy, as usual: it won't adjust the pointer upward.
  // So adjust it upward a little bit ourselves: 1/4 MB to get
  // away from the running binary image and then round up
  // to a MB boundary.
  p = round(getEnd()+(1<<18), 1<<20)
  pSize = bitmapSize + spansSize + arenaSize + _PageSize
  if p <= procBrk && procBrk < p+pSize {
    // Move the start above the brk,
    // leaving some room for future brk
    // expansion.
    p = round(procBrk+(1<<20), 1<<20)
  }
  p = uintptr(sysReserve(unsafe.Pointer(p), pSize, &reserved))
  if p != 0 {
    break
  }
}
if p == 0 {
  throw("runtime: cannot reserve arena virtual address space")
}

Interestingly, it falls back to smaller arena sizes if the larger reservations fail. So limiting the virtual memory available to a Go executable actually limits how large a reservation will succeed.

I was able to limit the virtual memory to a smaller number using ulimit -v 327680:

VmPeak:   300772 kB
VmSize:   300772 kB
VmLck:         0 kB
VmPin:         0 kB
VmHWM:      5712 kB
VmRSS:      5712 kB
VmData:   296276 kB
VmStk:       132 kB
VmExe:      2936 kB
VmLib:         0 kB
VmPTE:        56 kB
VmPMD:         0 kB
VmSwap:        0 kB

These are still large numbers, but they are the best a gccgo executable can achieve. So the answer to the question is: yes, you can reduce the VmData of a gccgo-compiled executable, but you really shouldn't worry about it. (On a 64-bit machine, gccgo tries to allocate 512 GB.)
