1. What is buffer mapping
There is no formal definition. Simply put, a piece of video memory (a GPUBuffer) that has been mapped can be accessed by the CPU.
In the three native graphics APIs (D3D12, Vulkan, and Metal), the CPU can access a buffer after it has been mapped; note, however, that the GPU can still access the same video memory at that time. This leads to a problem, read/write conflicts, which the program itself has to account for.
WebGPU forbids this behavior and uses "ownership" to represent the mapped state, very much in the spirit of Rust: at any moment, only one of the CPU or the GPU may access the video memory, which avoids races and conflicts.
When JavaScript requests to map video memory, ownership is not transferred to the CPU immediately, because the GPU may still have pending operations on that memory. This is why the mapping method of GPUBuffer is asynchronous:
```js
const someBuffer = device.createBuffer({ /* ... */ })
await someBuffer.mapAsync(GPUMapMode.READ, 0, 4) // Map only 4 bytes, starting at offset 0

// Then you can use the getMappedRange method to obtain the corresponding ArrayBuffer and operate on it
```
Unmapping, on the other hand, is a synchronous operation; you can unmap once the CPU is done with the data:
```js
someBuffer.unmap()
```
Note that the mapAsync method internally pushes an operation onto the device's default queue; it acts on the queue timeline, one of the three timelines in WebGPU. Also, memory usage does not grow until mapAsync succeeds (based on my measurements).
After a command buffer is submitted to the queue (where a render pass encoded in that command buffer uses this GPUBuffer), the data in memory is, presumably, uploaded to the GPU (my guess).
In my limited testing, I did not observe a significant drop in memory after calling the destroy method; I hope someone else can test this.
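To make the map/write/unmap/submit cycle described above concrete, here is a minimal sketch. The names `device` (a GPUDevice) and `dstBuffer` (the buffer actually bound by a render or compute pass) are assumptions; since MAP_WRITE may only be combined with COPY_SRC, a staging buffer carries the data:

```js
// Staging buffer that the CPU can map for writing (MAP_WRITE is only valid with COPY_SRC)
const stagingBuffer = device.createBuffer({
  usage: GPUBufferUsage.MAP_WRITE | GPUBufferUsage.COPY_SRC,
  size: 256,
})

// Asynchronously take ownership on the CPU side, then write through the mapped range
await stagingBuffer.mapAsync(GPUMapMode.WRITE)
new Float32Array(stagingBuffer.getMappedRange()).set([1, 2, 3, 4])
stagingBuffer.unmap() // hand ownership back to the GPU

// Only after this command buffer is submitted does the data actually reach dstBuffer
const encoder = device.createCommandEncoder()
encoder.copyBufferToBuffer(stagingBuffer, 0, dstBuffer, 0, 256)
device.queue.submit([encoder.finish()])

// Once no longer needed, the staging buffer can be destroyed (see the note above)
stagingBuffer.destroy()
```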
Mapping on Creation
You can pass mappedAtCreation: true when creating a buffer, and it will be mapped immediately; you don't even need to declare GPUBufferUsage.MAP_WRITE in its usage:
```js
const buffer = device.createBuffer({
  usage: GPUBufferUsage.UNIFORM,
  size: 256,
  mappedAtCreation: true,
})
// The mapped ArrayBuffer is available immediately
const mappedArrayBuffer = buffer.getMappedRange()

/* Perform some write operations here */

// Unmap and hand management back to the GPU
buffer.unmap()
```
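For instance, the placeholder write step above could fill the mapped range like this (a minimal sketch; the zero-filled payload is just an assumed example of uniform data):

```js
// Fill the mapped 256 bytes with assumed initial uniform data (64 floats)
new Float32Array(mappedArrayBuffer).fill(0)
```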
2. Flow direction of buffer data
2.1 CPU to GPU
In rAF (requestAnimationFrame), JavaScript frequently transfers large amounts of data into the ArrayBuffer mapped from a GPUBuffer, then unmaps the buffer and submits command buffers to the queue, and the data finally reaches the GPU.
The most common examples are the VertexBuffer and UniformBuffer that have to be uploaded every frame, as mentioned above, and the StorageBuffer required by compute passes.
Writing a buffer with the queue object's writeBuffer method is also quite efficient, but compared with writing through a mapped GPUBuffer, writeBuffer performs one extra copy, which presumably costs some performance. The officially recommended examples use writeBuffer a lot, but mostly for updating UniformBuffers.
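For comparison, a minimal per-frame update with writeBuffer might look like the sketch below; the names `device` (a GPUDevice) and `uniformBuffer` (assumed to be created with GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST) are assumptions:

```js
function frame(time) {
  // writeBuffer performs an extra internal copy, but needs no mapping or unmapping
  device.queue.writeBuffer(uniformBuffer, 0, new Float32Array([time * 0.001]))

  /* encode and submit the render pass for this frame here */

  requestAnimationFrame(frame)
}
requestAnimationFrame(frame)
```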
2.2 GPU to CPU
Transfers in this reverse direction are comparatively rare, but they do exist. For example, taking screenshots (saving a color attachment into an ArrayBuffer) and collecting the results of compute passes both require reading data back from the GPU.
For example, the official sample for reading pixel data from a rendered texture:
```js
const texture = getTheRenderedTexture()

const readbackBuffer = device.createBuffer({
  usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ,
  size: 4 * textureWidth * textureHeight,
})

// Use a command encoder to copy the texture into the GPUBuffer
// (bytesPerRow must be a multiple of 256; assume textureWidth * 4 already is)
const encoder = device.createCommandEncoder()
encoder.copyTextureToBuffer(
  { texture },
  { buffer: readbackBuffer, bytesPerRow: textureWidth * 4 },
  [textureWidth, textureHeight],
)
device.queue.submit([encoder.finish()])

// Map the buffer so that CPU-side code can access the data
await readbackBuffer.mapAsync(GPUMapMode.READ)
// Save the screenshot
saveScreenshot(readbackBuffer.getMappedRange())
// Unmap
readbackBuffer.unmap()
```
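The saveScreenshot helper is not defined in that snippet; a browser-side sketch might look like the following, assuming the texture format is rgba8unorm and that textureWidth * 4 already satisfies the row-alignment requirement, so no per-row padding has to be stripped:

```js
function saveScreenshot(arrayBuffer) {
  // Copy the readback bytes and wrap them as RGBA pixels (assumes rgba8unorm, unpadded rows)
  const pixels = new Uint8ClampedArray(arrayBuffer.slice(0))
  const imageData = new ImageData(pixels, textureWidth, textureHeight)

  // Draw onto a 2D canvas and export as a PNG data URL
  const canvas = document.createElement('canvas')
  canvas.width = textureWidth
  canvas.height = textureHeight
  canvas.getContext('2d').putImageData(imageData, 0, 0)

  // Trigger a download (hypothetical file name)
  const a = document.createElement('a')
  a.href = canvas.toDataURL('image/png')
  a.download = 'screenshot.png'
  a.click()
}
```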