Can I render a point-list over an area larger than the points themselves?

Problem description · votes: 0 · answers: 1

I'm rendering what is effectively a fancy point cloud. Each point should occupy several pixels on screen (depending on depth) and carries a bunch of data needed for shading. With primitive.topology = 'point-list' I can get each point to draw as a single pixel, but I'd like my points to render larger than that. I could convert the points into a triangle list on the CPU, but that would mean a lot of duplication of the shading data, which only needs to be processed once per point. Is it possible for a vertex shader to ingest a point-list and emit multiple fragment shader invocations? One example approach would be for the vertex shader to convert each point into a triangle (multiplying the vertex count by 3, minus any culled points). Looking at the documentation for rasterize (point 4), it doesn't seem so, but a feature like this seems so basic and useful that I can't really believe it's completely impossible. Is there a standard workaround?

webgpu
1 Answer

2 votes

You can do this easily via instancing.

First, let's make an example that draws some points:

const { mat4 } = wgpuMatrix;

async function main() {
  const adapter = await navigator.gpu?.requestAdapter();
  const device = await adapter?.requestDevice();

  const canvas = document.querySelector('canvas');
  const context = canvas.getContext('webgpu');

  const presentationFormat = navigator.gpu.getPreferredCanvasFormat();
  context.configure({
    device,
    format: presentationFormat,
  });

  const shaderModule = device.createShaderModule({code: `
  struct Uniforms {
    mat: mat4x4f,
  };
  @group(0) @binding(0) var<uniform> uniforms: Uniforms;

  struct MyVSInput {
      @location(0) position: vec4f,
  };

  struct MyVSOutput {
    @builtin(position) position: vec4f,
  };

  @vertex
  fn myVSMain(v: MyVSInput) -> MyVSOutput {
    var vsOut: MyVSOutput;
    vsOut.position = uniforms.mat * v.position;
    return vsOut;
  }

  @fragment
  fn myFSMain(v: MyVSOutput) -> @location(0) vec4f {
    return vec4f(1, 1, 0, 1);
  }
  `});
  const r = (min, max) => Math.random() * (max - min) + min;

  const numPoints = 50;
  const positions = [];
  for (let i = 0; i < numPoints; ++i) {
    positions.push(r(-1, 1), r(-1, 1));
  }
  const positionData = new Float32Array(positions);
  const positionSize = 8; // 2 f32s per point

  const positionBuffer = device.createBuffer({
    usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
    size: positionData.byteLength,
  });
  device.queue.writeBuffer(positionBuffer, 0, positionData);

  const pipeline = device.createRenderPipeline({
    label: 'points',
    layout: 'auto',
    vertex: {
      module: shaderModule,
      entryPoint: 'myVSMain',
      buffers: [
        // position
        {
          arrayStride: positionSize,
          attributes: [
            {shaderLocation: 0, offset: 0, format: 'float32x2' },
          ],
        },
      ],
    },
    fragment: {
      module: shaderModule,
      entryPoint: 'myFSMain',
      targets: [
        {format: presentationFormat},
      ],
    },
    primitive: {
      topology: 'point-list',
    },
  });

  const uniformBufferSize = (16) * 4;      // 1 mat4x4f
  const uniformBuffer = device.createBuffer({
    size: uniformBufferSize,
    usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
  });
  const uniformValues = new Float32Array(uniformBufferSize / 4);
  const mat = uniformValues.subarray(0, 16);

  const bindGroup = device.createBindGroup({
    layout: pipeline.getBindGroupLayout(0),
    entries: [
      { binding: 0, resource: { buffer: uniformBuffer } },
    ],
  });

  const renderPassDescriptor = {
    colorAttachments: [
      {
        // view: undefined, // Assigned later
        // resolveTarget: undefined, // Assigned Later
        clearValue: [0, 0, 0, 1],
        loadOp: 'clear',
        storeOp: 'store',
      },
    ],
  };

  const colorTexture = context.getCurrentTexture();
  renderPassDescriptor.colorAttachments[0].view = colorTexture.createView();

  // update uniforms
  mat4.identity(mat);
  device.queue.writeBuffer(uniformBuffer, 0, uniformValues);

  const commandEncoder = device.createCommandEncoder();
  const passEncoder = commandEncoder.beginRenderPass(renderPassDescriptor);
  passEncoder.setPipeline(pipeline);
  passEncoder.setVertexBuffer(0, positionBuffer);
  passEncoder.setBindGroup(0, bindGroup);
  passEncoder.draw(numPoints);
  passEncoder.end();

  device.queue.submit([commandEncoder.finish()]);
}

main();
html, body {
  background-color: #333;
}
<script src="https://wgpu-matrix.org/dist/2.x/wgpu-matrix.js"></script>
<canvas></canvas>

Now, to make the points draw larger, we can put the positions of a quad in the vertex shader and draw via instancing. We'll pass in each point's size in "pixels".

First, update the shader:

  • We make an array of 6 points for a unit quad.
  • We add resolution to the uniforms so we can expand the quad in screen space.
  • We add size to the MyVSInput struct to get the size data from an attribute.
  • We give the vertex shader a new input, vertexIndex: u32, which gets its value from
    @builtin(vertex_index). We use it to look up a quad vertex.
  • We take a quad position, center it (the - 0.5 part), then convert it to pixels in
    clip space (the * size * 2.0 / uniforms.resolution part).
  • Finally, we add that offset to the position output of our vertex shader.

  struct Uniforms {
    mat: mat4x4f,
    resolution: vec2f,
  };
  @group(0) @binding(0) var<uniform> uniforms: Uniforms;

  struct MyVSInput {
      @location(0) position: vec4f,
      @location(1) size: f32,
  };

  struct MyVSOutput {
    @builtin(position) position: vec4f,
  };

  @vertex
  fn myVSMain(v: MyVSInput, @builtin(vertex_index) vertexIndex: u32) -> MyVSOutput {
    let quadPos = array(
      vec2f(0, 0),
      vec2f(1, 0),
      vec2f(0, 1),
      vec2f(0, 1),
      vec2f(1, 0),
      vec2f(1, 1),
    );
    var vsOut: MyVSOutput;

    let pos = (quadPos[vertexIndex] - 0.5) * v.size * 2.0 / uniforms.resolution;

    vsOut.position = uniforms.mat * v.position + vec4f(pos, 0, 0);
    return vsOut;
  }

Back in JavaScript, we need to create a buffer holding the sizes. It works effectively the same way as the positions, so there's no need to go into much detail.
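
For reference, these are the relevant lines from the full listing further down: one f32 of size per point, written into its own vertex buffer.

  const sizes = [];
  for (let i = 0; i < numPoints; ++i) {
    sizes.push(r(5, 20));            // random size in pixels
  }
  const sizeData = new Float32Array(sizes);
  const sizeSize = 4;                // 1 f32 per point

  const sizeBuffer = device.createBuffer({
    usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
    size: sizeData.byteLength,
  });
  device.queue.writeBuffer(sizeBuffer, 0, sizeData);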

We need to update the pipeline. We only want the position and size to advance once per instance, so we set

stepMode: 'instance'

        // position
        {
          arrayStride: positionSize,
          stepMode: 'instance',
          attributes: [
            {shaderLocation: 0, offset: 0, format: 'float32x2' },
          ],
        },
        // size
        {
          arrayStride: sizeSize,
          stepMode: 'instance',
          attributes: [
            {shaderLocation: 1, offset: 0, format: 'float32'},
          ],
        },

We also switch the primitive topology from point-list to triangle-list:

    primitive: {
      topology: 'triangle-list',
    },

We increase the uniform buffer size to make room for resolution, and set it at render time:

      // update uniforms
      mat4.identity(mat);
      resolution.set([colorTexture.width, colorTexture.height]);
      device.queue.writeBuffer(uniformBuffer, 0, uniformValues);
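
The allocation itself (from the full listing below) grows by two floats for resolution plus two floats of padding, since a uniform struct containing a mat4x4f and a vec2f rounds up to 80 bytes:

  const uniformBufferSize = (16 + 2 + 2) * 4;  // 1 mat4x4f + 2 f32 + 2 padding
  const uniformBuffer = device.createBuffer({
    size: uniformBufferSize,
    usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
  });
  const uniformValues = new Float32Array(uniformBufferSize / 4);
  const mat = uniformValues.subarray(0, 16);
  const resolution = uniformValues.subarray(16, 18);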

At draw time we also need to set the size vertex buffer:

      passEncoder.setVertexBuffer(0, positionBuffer);
      passEncoder.setVertexBuffer(1, sizeBuffer);

We need to move numPoints from the first argument of draw (the number of vertices) to the second argument (the number of instances), and pass 6 for the first argument (6 vertices per quad):

      passEncoder.draw(6, numPoints);

const { mat4 } = wgpuMatrix;

async function main() {
  const adapter = await navigator.gpu?.requestAdapter();
  const device = await adapter?.requestDevice();

  const canvas = document.querySelector('canvas');
  const context = canvas.getContext('webgpu');

  const presentationFormat = navigator.gpu.getPreferredCanvasFormat();
  context.configure({
    device,
    format: presentationFormat,
  });

  const shaderModule = device.createShaderModule({code: `
  struct Uniforms {
    mat: mat4x4f,
    resolution: vec2f,
  };
  @group(0) @binding(0) var<uniform> uniforms: Uniforms;

  struct MyVSInput {
      @location(0) position: vec4f,
      @location(1) size: f32,
  };

  struct MyVSOutput {
    @builtin(position) position: vec4f,
  };

  @vertex
  fn myVSMain(v: MyVSInput, @builtin(vertex_index) vertexIndex: u32) -> MyVSOutput {
    let quadPos = array(
      vec2f(0, 0),
      vec2f(1, 0),
      vec2f(0, 1),
      vec2f(0, 1),
      vec2f(1, 0),
      vec2f(1, 1),
    );
    var vsOut: MyVSOutput;

    let pos = (quadPos[vertexIndex] - 0.5) * v.size * 2.0 / uniforms.resolution;

    vsOut.position = uniforms.mat * v.position + vec4f(pos, 0, 0);
    return vsOut;
  }

  @fragment
  fn myFSMain(v: MyVSOutput) -> @location(0) vec4f {
    return vec4f(1, 1, 0, 1);
  }
  `});
  const r = (min, max) => Math.random() * (max - min) + min;

  const numPoints = 50;
  const positions = [];
  const sizes = [];
  for (let i = 0; i < numPoints; ++i) {
    positions.push(r(-1, 1), r(-1, 1));
    sizes.push(r(5, 20));
  }
  const positionData = new Float32Array(positions);
  const positionSize = 8; // 2 f32s per point
  const sizeData = new Float32Array(sizes);
  const sizeSize = 4; // 1 f32 per point

  const positionBuffer = device.createBuffer({
    usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
    size: positionData.byteLength,
  });
  device.queue.writeBuffer(positionBuffer, 0, positionData);
  const sizeBuffer = device.createBuffer({
    usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
    size: sizeData.byteLength,
  });
  device.queue.writeBuffer(sizeBuffer, 0, sizeData);

  const pipeline = device.createRenderPipeline({
    label: 'points',
    layout: 'auto',
    vertex: {
      module: shaderModule,
      entryPoint: 'myVSMain',
      buffers: [
        // position
        {
          arrayStride: positionSize,
          stepMode: 'instance',
          attributes: [
            {shaderLocation: 0, offset: 0, format: 'float32x2' },
          ],
        },
        // size
        {
          arrayStride: sizeSize,
          stepMode: 'instance',
          attributes: [
            {shaderLocation: 1, offset: 0, format: 'float32'},
          ],
        },
      ],
    },
    fragment: {
      module: shaderModule,
      entryPoint: 'myFSMain',
      targets: [
        {format: presentationFormat},
      ],
    },
    primitive: {
      topology: 'triangle-list',
    },
  });

  const uniformBufferSize = (16 + 2 + 2) * 4;      // 1 mat4x4f + 2 f32 + 2 padding
  const uniformBuffer = device.createBuffer({
    size: uniformBufferSize,
    usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
  });
  const uniformValues = new Float32Array(uniformBufferSize / 4);
  const mat = uniformValues.subarray(0, 16);
  const resolution = uniformValues.subarray(16, 18);

  const bindGroup = device.createBindGroup({
    layout: pipeline.getBindGroupLayout(0),
    entries: [
      { binding: 0, resource: { buffer: uniformBuffer } },
    ],
  });

  const renderPassDescriptor = {
    colorAttachments: [
      {
        // view: undefined, // Assigned later
        // resolveTarget: undefined, // Assigned Later
        clearValue: [0, 0, 0, 1],
        loadOp: 'clear',
        storeOp: 'store',
      },
    ],
  };

  const colorTexture = context.getCurrentTexture();
  renderPassDescriptor.colorAttachments[0].view = colorTexture.createView();

  // update uniforms
  mat4.identity(mat);
  resolution.set([colorTexture.width, colorTexture.height]);
  device.queue.writeBuffer(uniformBuffer, 0, uniformValues);

  const commandEncoder = device.createCommandEncoder();
  const passEncoder = commandEncoder.beginRenderPass(renderPassDescriptor);
  passEncoder.setPipeline(pipeline);
  passEncoder.setVertexBuffer(0, positionBuffer);
  passEncoder.setVertexBuffer(1, sizeBuffer);
  passEncoder.setBindGroup(0, bindGroup);
  passEncoder.draw(6, numPoints);
  passEncoder.end();

  device.queue.submit([commandEncoder.finish()]);
}

main();
html, body {
  background-color: #333;
}
<script src="https://wgpu-matrix.org/dist/2.x/wgpu-matrix.js"></script>
<canvas></canvas>

I used a unit quad with values from 0 to 1 because you'll likely want to pass those values to the fragment shader as inter-stage variables so you can shade the quads, for example by applying a texture (see also the example with rotation).
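
As a rough sketch of that idea, the shader below passes the unit-quad coordinate through as an inter-stage variable and samples a texture with it. The pointSampler and pointTexture bindings at @binding(1) and @binding(2) are assumptions for illustration; they are not part of the answer's code and would need matching bind group entries.

struct Uniforms {
  mat: mat4x4f,
  resolution: vec2f,
};
@group(0) @binding(0) var<uniform> uniforms: Uniforms;
// Hypothetical bindings for this sketch only:
@group(0) @binding(1) var pointSampler: sampler;
@group(0) @binding(2) var pointTexture: texture_2d<f32>;

struct MyVSInput {
    @location(0) position: vec4f,
    @location(1) size: f32,
};

struct MyVSOutput {
  @builtin(position) position: vec4f,
  @location(0) uv: vec2f,              // inter-stage variable, 0..1 across the quad
};

@vertex
fn myVSMain(v: MyVSInput, @builtin(vertex_index) vertexIndex: u32) -> MyVSOutput {
  let quadPos = array(
    vec2f(0, 0),
    vec2f(1, 0),
    vec2f(0, 1),
    vec2f(0, 1),
    vec2f(1, 0),
    vec2f(1, 1),
  );
  var vsOut: MyVSOutput;
  let unitPos = quadPos[vertexIndex];
  let pos = (unitPos - 0.5) * v.size * 2.0 / uniforms.resolution;
  vsOut.position = uniforms.mat * v.position + vec4f(pos, 0, 0);
  vsOut.uv = unitPos;                  // pass the 0..1 quad coordinate along
  return vsOut;
}

@fragment
fn myFSMain(v: MyVSOutput) -> @location(0) vec4f {
  // Shade the quad with the texture instead of a flat color.
  return textureSample(pointTexture, pointSampler, v.uv);
}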


If you're curious why this isn't built in, it's generally because points have always been problematic across APIs and drivers. Take OpenGL as just one example: OpenGL supports sized points, but whether any size larger (or smaller) than 1 pixel is actually supported is up to the driver. The core OpenGL spec even requires the size to be 1 (the compatibility spec does not). Some drivers have a limit of 1, some 64, some 256, and some have no limit. Further, some GPUs won't draw a point at all if its center is off screen, while others will draw whatever portion isn't clipped. All of this means that rendering with points of size > 1 is not portable.

I think the WebGPU committee decided that, rather than passing on all of those portability problems, which would be bad for the web, they would simply limit points to 1 pixel (which is portable) and require you to do something else if you want larger points (something that also ends up being portable).
