Error: _slurm_rpc_node_registration node=xxxxx: Invalid argument

Problem description

I am trying to set up Slurm. I have only one login node (called ctm-login-01) and one compute node (called ctm-deep-01). My compute node has multiple CPUs and 3 GPUs.

My compute node is permanently stuck in the drain state, and I cannot for the life of me figure out where to even start...


Login node

sinfo

ctm-login-01:~$ sinfo
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
debug*       up   infinite      1  drain ctm-deep-01

The reason?

sinfo -R

ctm-login-01:~$ sinfo -R
REASON               USER      TIMESTAMP           NODELIST
gres/gpu count repor slurm     2020-12-11T15:56:55 ctm-deep-01
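
Note that the REASON column of sinfo -R is truncated here ("gres/gpu count repor"). To read the full reason string, the controller can be queried directly; a minimal sketch, using the node name from above:

ctm-login-01:~$ scontrol show node ctm-deep-01 | grep Reason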

Indeed, I keep getting these error messages in /var/log/slurm-llnl/slurmctld.log:

/var/log/slurm-llnl/slurmctld.log

[2020-12-11T16:17:39.857] gres/gpu: state for ctm-deep-01
[2020-12-11T16:17:39.857]   gres_cnt found:0 configured:3 avail:3 alloc:0
[2020-12-11T16:17:39.857]   gres_bit_alloc:NULL
[2020-12-11T16:17:39.857]   gres_used:(null)
[2020-12-11T16:17:39.857] error: _slurm_rpc_node_registration node=ctm-deep-01: Invalid argument

(Note that I have set the slurm.conf debug level to verbose, and set DebugFlags=Gres to get more detail about the GPUs.)
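
For reference, the debug-related lines in slurm.conf look roughly like this (a sketch; SlurmdDebug is included on the assumption that both daemons should log verbosely):

SlurmctldDebug=verbose
SlurmdDebug=verbose
DebugFlags=Gres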

These are my configuration files on all nodes, along with some of their contents...

/etc/slurm-llnl/* files

ctm-login-01:/etc/slurm-llnl$ ls
cgroup.conf  cgroup_allowed_devices_file.conf  gres.conf  plugstack.conf  plugstack.conf.d  slurm.conf
ctm-login-01:/etc/slurm-llnl$ tail slurm.conf 
#SuspendTime=
#
#
# COMPUTE NODES
GresTypes=gpu
NodeName=ctm-deep-01 Gres=gpu:3 CPUs=24 Sockets=1 CoresPerSocket=12 ThreadsPerCore=2 State=UNKNOWN
PartitionName=debug Nodes=ctm-deep-01 Default=YES MaxTime=INFINITE State=UP

# default
SallocDefaultCommand="srun --gres=gpu:1 $SHELL"
ctm-deep-01:/etc/slurm-llnl$ cat gres.conf 
NodeName=ctm-login-01 Name=gpu File=/dev/nvidia0 CPUs=0-23
NodeName=ctm-login-01 Name=gpu File=/dev/nvidia1 CPUs=0-23
NodeName=ctm-login-01 Name=gpu File=/dev/nvidia2 CPUs=0-23
ctm-login-01:/etc/slurm-llnl$ cat cgroup.conf 
CgroupAutomount=yes 
CgroupReleaseAgentDir="/etc/slurm-llnl/cgroup" 

ConstrainCores=yes 
ConstrainDevices=yes
ConstrainRAMSpace=yes
#TaskAffinity=yes
ctm-login-01:/etc/slurm-llnl$ cat cgroup_allowed_devices_file.conf 
/dev/null
/dev/urandom
/dev/zero
/dev/sda*
/dev/cpu/*/*
/dev/pts/*
/dev/nvidia*
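
(As a side check on the NodeName line in slurm.conf, the node can report its own hardware in slurm.conf format; a sketch, to be run on the compute node:)

# prints a NodeName=... line with the CPUs/Boards/Sockets/Cores/Threads slurmd detects
ctm-deep-01:~$ sudo slurmd -C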

Compute node

The logs on my compute node show the following.

/var/log/slurm-llnl/slurmd.log

ctm-deep-01:~$ sudo tail /var/log/slurm-llnl/slurmd.log 
[2020-12-11T15:54:35.787] Munge credential signature plugin unloaded
[2020-12-11T15:54:35.788] Slurmd shutdown completing
[2020-12-11T15:55:53.433] Message aggregation disabled
[2020-12-11T15:55:53.436] topology NONE plugin loaded
[2020-12-11T15:55:53.436] route default plugin loaded
[2020-12-11T15:55:53.440] task affinity plugin loaded with CPU mask 0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000ffffff
[2020-12-11T15:55:53.440] Munge credential signature plugin loaded
[2020-12-11T15:55:53.441] slurmd version 19.05.5 started
[2020-12-11T15:55:53.442] slurmd started on Fri, 11 Dec 2020 15:55:53 +0000
[2020-12-11T15:55:53.443] CPUs=24 Boards=1 Sockets=1 Cores=12 Threads=2 Memory=128754 TmpDisk=936355 Uptime=26 CPUSpecList=(null) FeaturesAvail=(null) FeaturesActive=(null)

That CPU affinity mask looks odd...

Note that I have already run sudo nvidia-smi --persistence-mode=1. Also note that the gres.conf file above appears to be correct:

nvidia-smi topo -m

ctm-deep-01:/etc/slurm-llnl$ sudo nvidia-smi topo -m
        GPU0  GPU1  GPU2  CPU Affinity  NUMA Affinity
GPU0     X    SYS   SYS   0-23          N/A
GPU1    SYS    X    PHB   0-23          N/A
GPU2    SYS   PHB    X    0-23          N/A
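
One more check that may help (a sketch; slurmd's -G flag prints the detected GRES configuration and exits):

ctm-deep-01:~$ sudo slurmd -G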

Are there any other logs or configs I should be digging into for clues? Thanks!

2 Answers

Answer (2 votes)

It was all down to a typo!

ctm-deep-01:/etc/slurm-llnl$ cat gres.conf 
NodeName=ctm-login-01 Name=gpu File=/dev/nvidia0 CPUs=0-23
NodeName=ctm-login-01 Name=gpu File=/dev/nvidia1 CPUs=0-23
NodeName=ctm-login-01 Name=gpu File=/dev/nvidia2 CPUs=0-23

Obviously, that should be NodeName=ctm-deep-01, which is my compute node! Oh dear...
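
For completeness, the fixed file plus the steps to bring the node back online; a sketch, assuming the same device paths and systemd-managed daemons:

ctm-deep-01:/etc/slurm-llnl$ cat gres.conf
NodeName=ctm-deep-01 Name=gpu File=/dev/nvidia0 CPUs=0-23
NodeName=ctm-deep-01 Name=gpu File=/dev/nvidia1 CPUs=0-23
NodeName=ctm-deep-01 Name=gpu File=/dev/nvidia2 CPUs=0-23

# restart slurmd on the compute node, then clear the drain state
ctm-deep-01:~$ sudo systemctl restart slurmd
ctm-login-01:~$ sudo scontrol update nodename=ctm-deep-01 state=resume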


Answer (0 votes)

I saw the same thing in /var/log/slurmctld after adding new GPU compute nodes.

[2023-08-10T09:50:19.072] error: _slurm_rpc_node_registration node=c0236: Invalid argument

It turned out that I had added the GPUs to /etc/slurm/gres.conf after the new nodes were already fully up:

NodeName=c[0236-0237] Name=gpu Type=ampere File=/dev/nvidia0 CPUs=[0-63]
NodeName=c[0236-0237] Name=gpu Type=ampere File=/dev/nvidia1 CPUs=[64-127]

A plain

sudo systemctl restart slurmctld && sudo scontrol reconfigure

was not enough.

After I restarted slurmd on the nodes, the slurmctld log showed the following, after which the invalid argument errors stopped being logged:

[2023-08-10T09:51:58.765] gres/gpu: count changed for node c0236 from 0 to 2

According to SchedMD's gres.conf documentation (https://slurm.schedmd.com/gres.conf.html), when gres.conf is in use, a reconfigure should make slurmd re-read the file. Perhaps it is because we are still on 18.08 and that was only added later.

For our new nodes, we needed to restart slurmd after adding the Gres entries; the full sequence is sketched below.
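
A sketch of that sequence, assuming systemd-managed daemons and the node names from above:

# on the controller: pick up the new gres.conf
sudo systemctl restart slurmctld
# on each new node: here slurmd only re-read gres.conf on a full restart
sudo systemctl restart slurmd
# finally, clear any lingering drain state
sudo scontrol update nodename=c0236 state=resume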
