What is wrong with the following code? It was written in 2007 (against MPICH-2) and ran without any problems. Today, under MPICH-4, the exact same code no longer works. Has anything changed in MPICH or in the MPI specification that could cause this? Many thanks.
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define DIM 4

int main (int argc, char **argv) {
    int i, oldRank, oldSize, a = 10;
    double buf;
    double b[DIM] = {0.125, 0.450, 1.375, 2.225};
    int ranks1[DIM], ranks2[DIM];
    MPI_Group commGroup, newGroup1, newGroup2;
    int newRank1, newRank2;
    MPI_Comm newComm1, newComm2;

    MPI_Init (&argc, &argv);
    MPI_Comm_rank (MPI_COMM_WORLD, &oldRank);
    MPI_Comm_size (MPI_COMM_WORLD, &oldSize);

    /* The program expects exactly 2*DIM = 8 processes. */
    if (oldSize != 2*DIM) {
        if (oldRank == 0)
            printf ("Invalid process number. Aborting...\n");
        MPI_Finalize ();
        return (-1);
    }

    /* Even world ranks go into ranks1, odd world ranks into ranks2. */
    for (i = 0; i < DIM; i++) {
        ranks1[i] = 2*i;
        ranks2[i] = 2*i + 1;
    }

    /* Build one group per half and create a communicator for each.
       MPI_Comm_create returns MPI_COMM_NULL on processes that are not
       members of the group passed in. */
    MPI_Comm_group (MPI_COMM_WORLD, &commGroup);
    MPI_Group_incl (commGroup, DIM, ranks1, &newGroup1);
    MPI_Group_incl (commGroup, DIM, ranks2, &newGroup2);
    MPI_Comm_create (MPI_COMM_WORLD, newGroup1, &newComm1);
    MPI_Comm_create (MPI_COMM_WORLD, newGroup2, &newComm2);

    /* Intended to run only on processes that received a valid communicator. */
    if (newComm1) MPI_Comm_rank (newComm1, &newRank1);
    if (newComm2) MPI_Comm_rank (newComm2, &newRank2);

    if (newComm1) {
        /* Broadcast a from rank 0 of the even-rank communicator. */
        MPI_Bcast (&a, 1, MPI_INT, 0, newComm1);
        printf ("Process %d of group %d received a value of %d from process 0\n",
                newRank1, newGroup1, a);
    }
    if (newComm2) {
        /* Scatter b from rank 2 of the odd-rank communicator. */
        MPI_Scatter (&b, 1, MPI_DOUBLE, &buf, 1, MPI_DOUBLE, 2, newComm2);
        printf ("Process %d of group %d received a value of %f from process 2\n",
                newRank2, newGroup2, buf);
    }

    MPI_Group_free (&newGroup1);
    if (newComm1) MPI_Comm_free (&newComm1);
    MPI_Group_free (&newGroup2);
    if (newComm2) MPI_Comm_free (&newComm2);

    MPI_Finalize ();
    return (0);
}
The error output is as follows:
Abort(66671365) on node 0 (rank 0 in comm 0): Fatal error in internal_Comm_rank: Invalid communicator, error stack:
internal_Comm_rank(74): MPI_Comm_rank(MPI_COMM_NULL, rank=0x7ffc0f620198) failed
internal_Comm_rank(41): Null communicator
Abort(200889093) on node 1 (rank 1 in comm 0): Fatal error in internal_Comm_rank: Invalid communicator, error stack:
internal_Comm_rank(74): MPI_Comm_rank(MPI_COMM_NULL, rank=0x7fffdad13684) failed
internal_Comm_rank(41): Null communicator
Abort(66671365) on node 2 (rank 2 in comm 0): Fatal error in internal_Comm_rank: Invalid communicator, error stack:
internal_Comm_rank(74): MPI_Comm_rank(MPI_COMM_NULL, rank=0x7ffc55210568) failed
internal_Comm_rank(41): Null communicator
Abort(200889093) on node 3 (rank 3 in comm 0): Fatal error in internal_Comm_rank: Invalid communicator, error stack:
internal_Comm_rank(74): MPI_Comm_rank(MPI_COMM_NULL, rank=0x7ffcc93ede14) failed
internal_Comm_rank(41): Null communicator
Abort(939086597) on node 4 (rank 4 in comm 0): Fatal error in internal_Comm_rank: Invalid communicator, error stack:
internal_Comm_rank(74): MPI_Comm_rank(MPI_COMM_NULL, rank=0x7ffe50227c08) failed
internal_Comm_rank(41): Null communicator
Abort(804868869) on node 5 (rank 5 in comm 0): Fatal error in internal_Comm_rank: Invalid communicator, error stack:
internal_Comm_rank(74): MPI_Comm_rank(MPI_COMM_NULL, rank=0x7ffcf4d4ba24) failed
internal_Comm_rank(41): Null communicator
Abort(402215685) on node 6 (rank 6 in comm 0): Fatal error in internal_Comm_rank: Invalid communicator, error stack:
internal_Comm_rank(74): MPI_Comm_rank(MPI_COMM_NULL, rank=0x7ffc2bcb1c08) failed
internal_Comm_rank(41): Null communicator
Abort(603542277) on node 7 (rank 7 in comm 0): Fatal error in internal_Comm_rank: Invalid communicator, error stack:
internal_Comm_rank(74): MPI_Comm_rank(MPI_COMM_NULL, rank=0x7ffc611c4bc4) failed
internal_Comm_rank(41): Null communicator
A NULL communicator seems to be involved somehow, but I thought I had handled that case, since those calls are only executed when newComm1 and newComm2 are non-NULL.
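To make my assumption explicit: the guards are written as if (newComm1) and if (newComm2), which implicitly treat MPI_COMM_NULL as something that evaluates to false in a boolean context. Below is a minimal standalone sketch (my own test, not part of the original program; the messages are just illustrative) that checks whether that assumption holds for a given MPI installation:

#include <mpi.h>
#include <stdio.h>

/* Minimal sketch: does "if (comm)" treat MPI_COMM_NULL as false here? */
int main (int argc, char **argv) {
    MPI_Comm c = MPI_COMM_NULL;
    MPI_Init (&argc, &argv);
    if (c)
        printf ("if (comm) treats MPI_COMM_NULL as TRUE on this MPI\n");
    else
        printf ("if (comm) treats MPI_COMM_NULL as FALSE on this MPI\n");
    /* The explicit comparison against the named constant: */
    if (c != MPI_COMM_NULL)
        printf ("this line should never print\n");
    MPI_Finalize ();
    return 0;
}

If the first branch fires, then every process would end up calling MPI_Comm_rank on MPI_COMM_NULL, which would match the error output above.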