How to fix pickle.UnpicklingError caused by calls to subprocess.Popen in a script parallelized with mpi4py

Problem description

Repeated serial calls to subprocess.Popen() in a script parallelized with mpi4py eventually lead to what appears to be data corruption during communication, showing up as pickle UnpicklingError exceptions (I have seen a variety of unpickling errors: EOF, invalid unicode character, invalid load key, unpickling stack underflow). It only seems to happen when the data being communicated is large, the number of serial calls to the subprocess is large, or the number of MPI processes is large.

I can reproduce the error with python >= 2.7, mpi4py >= 3.0.1 and openmpi >= 3.0.0. Ultimately I want to communicate Python objects, so I am using the lowercase mpi4py methods. Here is minimal code that reproduces the error:

#!/usr/bin/env python
from mpi4py import MPI
from copy import deepcopy
import subprocess

nr_calcs           = 4
tasks_per_calc     = 44
data_size          = 55000

# --------------------------------------------------------------------
def run_test(nr_calcs, tasks_per_calc, data_size):

    # Init MPI
    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    comm_size = comm.Get_size()

    # Run Moc Calcs
    icalc = 0
    while True:
        if icalc > nr_calcs - 1: break
        index = icalc
        icalc += 1

        # Init Moc Tasks
        task_list = []
        moc_task = data_size*"x"
        if rank==0:
            task_list = [deepcopy(moc_task) for i in range(tasks_per_calc)]
        task_list = comm.bcast(task_list)

        # Moc Run Tasks
        itmp = rank
        while True:
            if itmp > len(task_list)-1: break
            itmp += comm_size
            proc = subprocess.Popen(["echo", "TEST CALL TO SUBPROCESS"],
                    stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=False)
            out,err = proc.communicate()

        print("Rank {:3d} Finished Calc {:3d}".format(rank, index))

# --------------------------------------------------------------------
if __name__ == '__main__':
    run_test(nr_calcs, tasks_per_calc, data_size)

Running this on one 44-core node with 44 MPI processes completes the first three "calcs" successfully, but on the last loop iteration some processes raise:

Traceback (most recent call last):
  File "./run_test.py", line 54, in <module>
    run_test(nr_calcs, tasks_per_calc, data_size)
  File "./run_test.py", line 39, in run_test
    task_list = comm.bcast(task_list)
  File "mpi4py/MPI/Comm.pyx", line 1257, in mpi4py.MPI.Comm.bcast
  File "mpi4py/MPI/msgpickle.pxi", line 639, in mpi4py.MPI.PyMPI_bcast
  File "mpi4py/MPI/msgpickle.pxi", line 111, in mpi4py.MPI.Pickle.load
  File "mpi4py/MPI/msgpickle.pxi", line 101, in mpi4py.MPI.Pickle.cloads
_pickle.UnpicklingError

Sometimes the UnpicklingError carries a descriptor such as invalid load key "x", or it comes as an EOF error, an invalid unicode character, or an unpickling stack underflow.

Edit: With openmpi < 3.0.0, or with mvapich2, the problem seems to go away, but it would still be good to understand what is going on.
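When comparing runs against different MPI implementations, it can help to confirm which library mpi4py is actually linked against. A small check along these lines (just a sketch, not part of the reproducer above) prints that information on rank 0:

#!/usr/bin/env python
# Sketch: report the mpi4py version and the MPI library it is linked against,
# to confirm whether a run is using Open MPI, MVAPICH2, etc.
import mpi4py
from mpi4py import MPI

if MPI.COMM_WORLD.Get_rank() == 0:
    print("mpi4py version :", mpi4py.__version__)
    print("MPI standard   :", MPI.Get_version())
    print("MPI library    :", MPI.Get_library_version().strip())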


python python-3.x subprocess openmpi mpi4py
1 Answer

I had the same problem. In my case, I got the code working by installing mpi4py in a Python virtual environment and setting mpi4py.rc.recv_mprobe = False as recommended by Intel: https://software.intel.com/en-us/articles/python-mpi4py-on-intel-true-scale-and-omni-path-clusters
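For reference, this rc option has to be set before MPI is imported for the first time. A minimal sketch of how the top of the script can look (assuming mpi4py >= 2.0, where mpi4py.rc is available):

import mpi4py
# Disable matched probes (MPI_Mprobe/MPI_Mrecv) in mpi4py's pickle-based
# communication; must be set before the first "from mpi4py import MPI".
mpi4py.rc.recv_mprobe = False
from mpi4py import MPI

Newer mpi4py releases (3.1 and later) should also pick this up from the environment variable MPI4PY_RC_RECV_MPROBE=false, which avoids modifying the script.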
