Parallel output with VTK's parallel XML writers


I am currently using VTK's vtkXMLStructuredGridWriter in C++ to write simulation data on structured-grid subdomains to .vts files. So far this has worked very well for me. Now I also want to write the parallel file, which contains the information about all the pieces, but this part is not documented.

My goal is to collect all consecutive files of a single process in a dedicated folder named after the process's rank, e.g. "0/mydata_.vts". The parallel file should then sit in the parent folder and be named "mydata_.pvts". In principle I could write such a parallel file with any XML utility, but for consistency I would prefer to use the VTK library. I found this and this example of how it is supposed to work, but without much success. The first example has also been mentioned on Stack Overflow.

It does write a parallel file that contains the following information:

  • the number of pieces
  • the data sets within a piece
  • the location of the pieces within the folders

But it also does the following:

  • it creates additional "*.vts" files in the same folder as the "test_.pvts" file
  • it sets the extent of the whole domain, and of every piece, to the extent of the root subdomain (the process with rank 0)

I am also confused about how this parallel writer is supposed to work. If it is only called on the root process, as in the examples, how can it know the extent of the total domain? Or, if I am supposed to call it on every process in parallel, how does it avoid concurrent access to the file?

Finally: does anyone know how this parallel XML writer is supposed to be used? Or perhaps knows a good example to learn from?

I use the following code to write the files:

#include <mpi.h>

#include <string>

#include <vtkNew.h>
#include <vtkStructuredGrid.h>
#include <vtkXMLStructuredGridWriter.h>
#include <vtkXMLPStructuredGridWriter.h>

class MyPwriter : public vtkXMLPStructuredGridWriter {
public:
    static MyPwriter* New();

    void WritePPieceAttributes(int index) {
        std::string filename = std::to_string(index);
        filename += "/test.vts";
        this->WriteStringAttribute("Source", filename.c_str());
    }
};
vtkStandardNewMacro(MyPwriter);

int main(int argc, char* argv[]) {
    MPI_Init(&argc, &argv);
    int myrank, nproc;
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);

    vtkNew<vtkStructuredGrid> grid;
    // ... populating the grid with values

    // Serial writer: every rank writes its own piece into "<rank>/test_1.vts"
    vtkNew<vtkXMLStructuredGridWriter> writer;
    writer->SetInputData(grid);
    std::string fname = std::to_string(myrank);
    fname += std::string("/test_1.") + writer->GetDefaultFileExtension();
    writer->SetFileName(fname.c_str());
    writer->Write();
    // Until here, everything works as expected
    // Now I want to write the parallel file
    if (myrank == 0) {
        vtkNew<MyPwriter> pwriter;
        std::string pfname = "test_1.";
        pfname += pwriter->GetDefaultFileExtension();
        pwriter->SetFileName(pfname.c_str());
        pwriter->SetNumberOfPieces(nproc);
        pwriter->SetStartPiece(0);
        pwriter->SetEndPiece(nproc - 1);
        pwriter->SetInputData(grid);
        pwriter->Update();
    }
    MPI_Finalize();
    return 0;
}
Tags: c++ vtk
1 Answer

I found the following post, Writing VTK files from multiple MPI outputs, which covers using VTK's parallel XML writer for structured grids. However, that post does not deal with halo and/or ghost cells.

To make sure those are correctly excluded when the parallel file is loaded, I adapted that approach by overriding the attributes that are written for each piece. The advantage is that the ghost/halo cells of every piece are still available if needed.

An example with two parallel processes looks like this: Example

Note: I have also enabled the option to store the individual pieces in a subdirectory.

The code:

#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

#include <mpi.h>

// VTK Library
#include <vtkCellData.h>
#include <vtkFloatArray.h>
#include <vtkMPIController.h>
#include <vtkNew.h>
#include <vtkPoints.h>
#include <vtkProgrammableFilter.h>
#include <vtkStructuredGrid.h>
#include <vtkXMLPStructuredGridWriter.h>

struct Args {
    vtkProgrammableFilter* pf;
    int local_extent[6];
};

// vtkProgrammableFilter callback: shallow-copy the input grid and restrict
// the output to this rank's local extent
void execute(void* arg) {
    Args* args = reinterpret_cast<Args*>(arg);
    auto output_tmp = args->pf->GetOutput();
    auto input_tmp = args->pf->GetInput();
    vtkStructuredGrid* output = dynamic_cast<vtkStructuredGrid*>(output_tmp);
    vtkStructuredGrid* input = dynamic_cast<vtkStructuredGrid*>(input_tmp);
    output->ShallowCopy(input);
    output->SetExtent(args->local_extent);
}

class CustomvtkXMLPStructuredGridWriter : public vtkXMLPStructuredGridWriter {
public:
    static CustomvtkXMLPStructuredGridWriter* New();

    // Gather the piece extents from all ranks (collective call, so it must be
    // invoked on every rank) so that they can be written into the .pvts file.
    void SetPPieceExtent(const int extent_l[6]) {
        int nranks{0};
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);
        extent_array.resize(nranks * 6);
        MPI_Allgather(extent_l, 6, MPI_INT, extent_array.data(), 6, MPI_INT, MPI_COMM_WORLD);
    }

    // Called by the base writer for every <Piece> entry of the .pvts file:
    // write the piece's source file name and its gathered extent.
    void WritePPieceAttributes(int index) {
        int offset = index * 6;
        std::ostringstream os;
        os << std::to_string(extent_array[offset]);
        for (int n{1}; n < 6; n++)
            os << " " << std::to_string(extent_array[offset + n]);

        std::string piecename(this->CreatePieceFileName(index));
        this->WriteStringAttribute("Source", piecename.c_str());
        this->WriteStringAttribute("Extent", os.str().c_str());
    }

private:
    std::vector<int> extent_array;  // list of the extents of each piece
};

vtkStandardNewMacro(CustomvtkXMLPStructuredGridWriter);

int main(int argc, char* argv[]) {
    MPI_Init(&argc, &argv);
    int myrank;
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

    // Inner scope: all vtkNew-managed VTK objects go out of scope at the
    // closing brace, before main returns and before any MPI_Finalize call.
    {
    // Create and Initialize vtkMPIController
    vtkNew<vtkMPIController> contr;
    contr->Initialize(&argc, &argv, 1);
    int nranks = contr->GetNumberOfProcesses();
    int rank = contr->GetLocalProcessId();

    int ghost_levels = 2;
    // local dimensions of the process's grid
    int lx{10}, ly{10}, lz{10};
    int global_extent[6] = {0, nranks * lx + 2 * ghost_levels, 0, ly + 2 * ghost_levels, 0, lz + 2 * ghost_levels};
    int local_extent[6] = { myrank * lx, (myrank + 1) * lx + 2 * ghost_levels,
                            0, ly + 2 * ghost_levels,
                            0, lz + 2 * ghost_levels };
    double offset[3] = {static_cast<double>(rank * lx + ghost_levels), 0., 0.};
    lx += 2 * ghost_levels;
    ly += 2 * ghost_levels;
    lz += 2 * ghost_levels;
    int dims[3] = {lx + 1, ly + 1, lz + 1};

    // Create grid points, allocate memory and Insert them
    vtkNew<vtkPoints> points;
    points->Allocate(dims[0] * dims[1] * dims[2]);
    for (int k = 0; k < dims[2]; ++k)
        for (int j = 0; j < dims[1]; ++j)
            for (int i = 0; i < dims[0]; ++i) {
                int index = i + j * dims[0] + k * dims[0] * dims[1];
                double x = offset[0] + i;
                double y = offset[1] + j;
                double z = offset[2] + k;
                points->InsertPoint(index, x, y, z);
            }

    // Create a density field. Note that the number of cells along each axis is
    // one less than the number of grid points, so we use dims[i] - 1
    vtkNew<vtkFloatArray> density;
    density->SetNumberOfComponents(1);
    density->SetNumberOfTuples((dims[0] - 1) * (dims[1] - 1) * (dims[2] - 1));
    density->SetName("density");
    int index;
    for (int k = 0; k < lz; ++k)
        for (int j = 0; j < ly; ++j)
            for (int i = 0; i < lx; ++i) {
                index = i + j * (dims[0] - 1) + k * (dims[0] - 1) * (dims[1] - 1);
                density->SetValue(index, rank);
            }

    // Create a vtkProgrammableFilter
    vtkNew<vtkProgrammableFilter> pf;

    // Initialize an instance of Args
    Args args;
    args.pf = pf;
    for (int i = 0; i < 6; ++i) args.local_extent[i] = local_extent[i];

    pf->SetExecuteMethod(execute, &args);

    // Create a structured grid and assign point data and cell data to it
    vtkNew<vtkStructuredGrid> structuredGrid;
    structuredGrid->SetExtent(global_extent);
    pf->SetInputData(structuredGrid);
    structuredGrid->SetPoints(points);
    structuredGrid->GetCellData()->AddArray(density);

    // remove the halo cell layers from the local extents, so they are not displayed in the root file
    int ppiece_extent[6];
    ppiece_extent[0] = rank == 0 ? 0 : local_extent[0] + ghost_levels;
    ppiece_extent[1] = rank == (nranks - 1) ? local_extent[1] : local_extent[1] - ghost_levels;
    for (int n = 2; n < 6; n++) ppiece_extent[n] = local_extent[n];

    // Create the parallel writer and call some functions
    vtkNew<CustomvtkXMLPStructuredGridWriter> parallel_writer;
    parallel_writer->SetInputConnection(pf->GetOutputPort());
    parallel_writer->SetController(contr);
    parallel_writer->SetFileName("test.pvts");
    parallel_writer->SetNumberOfPieces(nranks);
    parallel_writer->SetStartPiece(rank);
    parallel_writer->SetEndPiece(rank);
    parallel_writer->SetDataModeToBinary();
    parallel_writer->SetUseSubdirectory(true);
    parallel_writer->SetPPieceExtent(ppiece_extent);
    parallel_writer->SetGhostLevel(ghost_levels);
    parallel_writer->Update();
    parallel_writer->Write();

    contr->Finalize();
    }
    // WARNING: it seems that MPI_Finalize is not necessary since we are using
    // Finalize() function from vtkMPIController class. Uncomment the following
    // line and see what happens.
    //   MPI_Finalize ();
    return 0;
}
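
Not part of the original answer, just a quick way to verify the result: the sketch below (serial, single process) reads the generated test.pvts back with vtkXMLPStructuredGridReader and prints the reassembled whole extent. It assumes the two-rank run above has already produced test.pvts and its piece files in the current working directory; the file name check_pvts.cpp is hypothetical.

// check_pvts.cpp -- hypothetical verification helper, not from the original post.
#include <iostream>

#include <vtkNew.h>
#include <vtkStructuredGrid.h>
#include <vtkXMLPStructuredGridReader.h>

int main() {
    vtkNew<vtkXMLPStructuredGridReader> reader;
    reader->SetFileName("test.pvts");
    reader->Update();

    // For the two-rank example above this should print the global extent,
    // i.e. 0 24 0 14 0 14.
    int extent[6];
    reader->GetOutput()->GetExtent(extent);
    for (int n = 0; n < 6; ++n)
        std::cout << extent[n] << (n < 5 ? ' ' : '\n');
    return 0;
}

For reference: in VTK 9 the parallel writer comes from the VTK::IOParallelXML module and vtkMPIController from VTK::ParallelMPI. The writer example is launched with something like mpirun -np 2, while this check runs as an ordinary serial program.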