I've been looking through different resources, and it seems impossible to read from a storage texture and then write back to it. That's why I tried using 2 separate textures: one as texture_storage_2d<rgba16float, write> and the other as texture_2d<f32>. However, when I use them in the same (or even in two separate) bind groups, I get this error, which I don't quite understand:
[Texture "Framebuffer Texture"] usage (TextureUsage::(TextureBinding|TextureUsage::8000000)) includes writable usage and another usage in the same synchronization scope.
- While validating compute pass usage.
[Invalid CommandBuffer] is invalid.
- While calling [Queue].Submit([[Invalid CommandBuffer]])
According to this example, it should be possible, even though a comment there reports a possibly similar error..? In another example, an array<f32> is used as a storage buffer and then as an input to read pixel data from. In webgpu-samples, an array<atomic<u32>> seems to be used for the same purpose, although for some reason it looks considerably more involved.
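As far as I can tell, the storage-buffer approach in those samples boils down to something like this WGSL sketch (the struct and binding names are my own, not taken from the linked examples):

```wgsl
// Hypothetical sketch: a storage buffer used as a framebuffer.
// Unlike a write-only storage texture, a read_write storage buffer
// can be read and written within the same compute pass.
struct PixelBuffer {
    pixels: array<vec4f>,
};

@group(0) @binding(0)
var<storage, read_write> framebuffer: PixelBuffer;

@group(0) @binding(1)
var<uniform> resolution: vec2u;

@compute @workgroup_size(8, 8)
fn mainCompute(@builtin(global_invocation_id) id: vec3u) {
    // Map 2D pixel coordinates onto the flat buffer.
    let index = id.y * resolution.x + id.x;
    let previous = framebuffer.pixels[index];   // read...
    framebuffer.pixels[index] = previous * 0.5; // ...and write back
}
```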
Also, according to a webgpufundamentals blog post, there is a way to have a read_write storage texture, but only with the 3 r32* formats, and I couldn't find any examples of it.
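Based on my reading of that post, it would look something like this (an untested sketch; r32float is one of the formats that supports the read_write access mode):

```wgsl
// Hypothetical sketch: read and write the same storage texture.
@group(0) @binding(0)
var tex: texture_storage_2d<r32float, read_write>;

@compute @workgroup_size(8, 8)
fn mainCompute(@builtin(global_invocation_id) id: vec3u) {
    // For storage textures, textureLoad takes no mip-level argument.
    let value = textureLoad(tex, id.xy);
    textureStore(tex, id.xy, value * 2.0);
}
```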
I'm quite confused at this point, so hopefully someone can bring some clarity to this whole read-write texture business. Any help would be appreciated, thanks! Anyway, here's what I'm currently doing:
const framebuffer = this.device.createTexture({
  size: [Config.width, Config.height],
  label: 'Framebuffer Texture',
  format: 'rgba16float',
  usage:
    GPUTextureUsage.RENDER_ATTACHMENT |
    GPUTextureUsage.STORAGE_BINDING |
    GPUTextureUsage.TEXTURE_BINDING
});

const computeBindGroupLayout = this.device.createBindGroupLayout({
  label: 'Compute Bind Group Layout',
  entries: [{
    binding: 0,
    visibility: GPUShaderStage.COMPUTE,
    storageTexture: { format: framebuffer.format }
  }, {
    binding: 1,
    texture: {},
    visibility: GPUShaderStage.COMPUTE
  }]
});

this.computeBindGroup = this.device.createBindGroup({
  layout: computeBindGroupLayout,
  label: 'Compute Bind Group',
  entries: [{
    binding: 0,
    resource: framebuffer.createView()
  }, {
    binding: 1,
    resource: framebuffer.createView()
  }]
});
// Workgroup size:
override size: u32;
@group(0) @binding(0)
var framebuffer: texture_storage_2d<rgba16float, write>;
@group(0) @binding(1)
var texture: texture_2d<f32>;
@compute @workgroup_size(size, size)
fn mainCompute(@builtin(global_invocation_id) globalInvocation: vec3u)
{
  let color = textureLoad(texture, globalInvocation.xy, 0);
  textureStore(framebuffer, globalInvocation.xy, color);
}
And here's how I want to sample the output texture in the fragment shader (this worked fine with the framebuffer when texture wasn't passed and textureLoad wasn't used):
@group(0) @binding(0) var texture: texture_2d<f32>;
@group(0) @binding(1) var textureSampler: sampler;
@fragment
fn mainFragment(@location(0) coords: vec2f) -> @location(0) vec4f
{
  return textureSample(texture, textureSampler, coords);
}
The GPUCommandEncoder.copyTextureToTexture() method achieves exactly that. So at setup it looks like this:
this.framebuffer = this.device.createTexture({
  size: [Config.width, Config.height],
  label: 'Framebuffer Texture',
  format: 'rgba16float',
  usage:
    GPUTextureUsage.RENDER_ATTACHMENT |
    GPUTextureUsage.STORAGE_BINDING |
    GPUTextureUsage.TEXTURE_BINDING |
    GPUTextureUsage.COPY_SRC
});

this.imageTexture = this.device.createTexture({
  size: [Config.width, Config.height],
  label: 'GPU Computed Image',
  format: 'rgba16float',
  usage:
    GPUTextureUsage.RENDER_ATTACHMENT |
    GPUTextureUsage.TEXTURE_BINDING |
    GPUTextureUsage.COPY_DST
});

const computeBindGroupLayout = this.device.createBindGroupLayout({
  label: 'Compute Bind Group Layout',
  entries: [{
    binding: 0,
    visibility: GPUShaderStage.COMPUTE,
    storageTexture: { format: this.framebuffer.format }
  }, {
    binding: 1,
    buffer: { type: 'uniform' },
    visibility: GPUShaderStage.COMPUTE
  }, {
    binding: 2,
    texture: {},
    visibility: GPUShaderStage.COMPUTE
  }, {
    binding: 3,
    sampler: {},
    visibility: GPUShaderStage.COMPUTE
  }]
});

this.computeBindGroup = this.device.createBindGroup({
  layout: computeBindGroupLayout,
  label: 'Compute Bind Group',
  entries: [{
    binding: 0,
    resource: this.framebuffer.createView()
  }, {
    binding: 1,
    resource: { buffer: this.tracerUniformBuffer }
  }, {
    binding: 2,
    resource: this.imageTexture.createView()
  }, {
    binding: 3,
    resource: this.sampler
  }]
});
And in the render loop:
const commandEncoder = this.device.createCommandEncoder();
const computePass = commandEncoder.beginComputePass();
// Update uniforms, set pipeline, set bind group and dispatch workgroups...
computePass.end();
const { width, height } = Config;
commandEncoder.copyTextureToTexture(
  { texture: this.framebuffer },
  { texture: this.imageTexture },
  { width, height }
);
Then in the compute shader:
// Workgroup size:
override size: u32;
@group(0) @binding(0)
var framebuffer: texture_storage_2d<rgba16float, write>;
@group(0) @binding(1)
var<uniform> tracerUniform: TracerUniform;
@group(0) @binding(2)
var texture: texture_2d<f32>;
@group(0) @binding(3)
var textureSampler: sampler;
@compute @workgroup_size(size, size)
fn mainCompute(@builtin(global_invocation_id) globalInvocation: vec3u)
{
  var color = textureLoad(texture, globalInvocation.xy, 0).rgb;
  // color = doSomeMagic(color);
  textureStore(framebuffer, globalInvocation.xy, vec4f(color, 1.0));
}
So I didn't even have to change my fragment shader: it still reads from the same framebuffer, updated every frame. But I don't know... it doesn't feel quite right. I mean, commandEncoder.copyTextureToTexture is very convenient, but wouldn't it be slower than replicating the same operation manually in a compute shader, instead of queuing this extra copy on the commandEncoder every frame..?
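One alternative I can think of is skipping the copy entirely and "ping-ponging" between two textures, swapping which one is written and which one is read each frame via two prebuilt bind groups. A rough sketch of the render loop (makeBindGroup, textureA/textureB, and computePipeline are placeholders, not from my code above):

```
// Hypothetical sketch: two bind groups with the read/write roles of
// the two textures swapped; alternate between them each frame so no
// copyTextureToTexture is needed.
const bindGroups = [
  makeBindGroup(textureA /* write */, textureB /* read */),
  makeBindGroup(textureB /* write */, textureA /* read */),
];

let frame = 0;
function render() {
  const encoder = device.createCommandEncoder();
  const pass = encoder.beginComputePass();
  pass.setPipeline(computePipeline);
  pass.setBindGroup(0, bindGroups[frame % 2]); // alternate roles
  pass.dispatchWorkgroups(Config.width / 8, Config.height / 8);
  pass.end();
  device.queue.submit([encoder.finish()]);
  frame++;
}
```

The fragment shader would then have to sample whichever texture was written last, so its bind group would need to alternate as well.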