Contents

* 1 FBO and RBO in WebGL
  + 1.1 FramebufferObject
  + 1.2 The actual carriers of color attachments and depth/stencil attachments
  + 1.3 FBO / RBO / WebGLTexture related methods
* 2 Equivalent concepts in WebGPU
  + 2.1 GPURenderPassEncoder takes over the role of the FBO
  + 2.2 Multi-target rendering (MRT)
  + 2.3 Depth and stencil attachments
  + 2.4 Notes on non-canvas texture objects as attachments
* 3 Reading data
  + 3.1 Reading pixel values from an FBO
  + 3.2 Reading GPUTexture data in WebGPU
* 4 Summary
The OpenGL lineage has left graphics development a great deal of accumulated machinery, including many kinds of "buffers", such as the vertex buffer object (VBO) and the framebuffer object (FBO).
After the switch to WebGPU, which is built on the three modern graphics APIs, these classic buffer objects "disappeared" from the API. In fact, their functions were redistributed, more sensibly, across the new APIs.
This article discusses FBO and RBO, which usually appear in off-screen rendering logic, and explains why these two APIs no longer exist in WebGPU and what replaces them.
1 FBO and RBO in WebGL
WebGL is, at heart, a drawing API, so when the gl.drawArrays call is issued, it must be determined where the data resources get drawn.
WebGL allows drawArrays to target one of two places: the canvas, or a FramebufferObject. As many materials have covered, the canvas has a default framebuffer; if no self-created framebuffer object is explicitly specified (or null is specified), drawing goes to the canvas's framebuffer by default.
In other words, once gl.bindFramebuffer() is used to specify a framebuffer object you created yourself, drawing will no longer land on the canvas.
This article discusses the HTMLCanvasElement, not OffscreenCanvas.
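As a minimal sketch of this switching (assuming a shader program and vertex data are already set up, and that the hypothetical fbo has its attachments configured as in Section 1.1):

```js
// Hypothetical sketch: toggling the render target between an FBO and the canvas
const fbo = gl.createFramebuffer()
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo)
// ... attach a color carrier to fbo here (see Section 1.1) ...
gl.drawArrays(gl.TRIANGLES, 0, 3) // drawn into fbo's attachments
gl.bindFramebuffer(gl.FRAMEBUFFER, null)
gl.drawArrays(gl.TRIANGLES, 0, 3) // drawn to the canvas's default framebuffer
```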
1.1 FramebufferObject
An FBO is easy to create. Most of the time it is the foreman who only takes roll call; the ones doing the sweating are its underlings, namely the two kinds of attachments under its jurisdiction:
- Color attachments (1 in WebGL 1, up to 16 in WebGL 2)
- Depth/stencil attachment (can hold depth only, stencil only, or both)
As for MRT (MultiRenderTarget), the technique of outputting to multiple color attachments at once: WebGL 1.0 obtains it through the gl.getExtension('WEBGL_draw_buffers') extension, while WebGL 2.0 supports it natively, which is why the color attachment counts differ. A sketch of both paths is below.
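For reference, a minimal sketch of enabling MRT in both versions (in WebGL 1 the extension object carries its own constants and methods):

```js
// WebGL 1: MRT via extension (getExtension returns null when unsupported)
const ext = gl.getExtension('WEBGL_draw_buffers')
ext.drawBuffersWEBGL([ext.COLOR_ATTACHMENT0_WEBGL, ext.COLOR_ATTACHMENT1_WEBGL])

// WebGL 2: native support
gl.drawBuffers([gl.COLOR_ATTACHMENT0, gl.COLOR_ATTACHMENT1])
```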
Both kinds of attachments are set through the following APIs:
```js
// Set a texture as color attachment 0
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, color0Texture, 0)
// Set an RBO as color attachment 0
gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.RENDERBUFFER, color0Rbo)

// Set a texture as the depth-only attachment
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.TEXTURE_2D, depthTexture, 0)
// Set an RBO as the depth/stencil attachment (WebGL 2 or the WEBGL_depth_texture extension is required)
gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_STENCIL_ATTACHMENT, gl.RENDERBUFFER, depthStencilRbo)
```
In fact, when MRT is needed, gl.COLOR_ATTACHMENT0, gl.COLOR_ATTACHMENT1, ... are just numbers. You can index a color attachment slot via a computed property, or simply use the explicit number instead:
```js
console.log(gl.COLOR_ATTACHMENT0) // 36064
console.log(gl.COLOR_ATTACHMENT1) // 36065

let i = 1
console.log(gl[`COLOR_ATTACHMENT${i}`]) // 36065
```
1.2 The actual carriers of color attachments and depth/stencil attachments
A color attachment or a depth/stencil attachment must clearly specify its data carrier. If WebGL is to draw its results into a non-canvas FBO, it needs to be told exactly where the drawing goes.
As the example code in Section 1.1 shows, each attachment can choose one of the following two as its actual data container:
- Render buffer object (WebGLRenderbuffer)
- Texture object (WebGLTexture)
Some earlier blog posts pointed out that renderbuffer objects can perform slightly better than texture objects, but that should be judged case by case.
In practice, these performance differences matter little on most modern GPUs and graphics drivers.
In short, if the off-screen result does not need to be used as a texture map in the next draw, an RBO can be used, because only texture objects can be passed to shaders.
There is not much material on the differences between RBOs and textures as the two kinds of attachment carriers, and since this article mainly compares WebGL with WebGPU, the topic is not expanded here. A sketch of the RBO path follows.
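The sketch, assuming an FBO is currently bound and width/height are defined (gl.RGBA4 is one of WebGL 1's color-renderable renderbuffer formats):

```js
// Sketch: an RBO as the color carrier when the result never feeds a shader
const rbo = gl.createRenderbuffer()
gl.bindRenderbuffer(gl.RENDERBUFFER, rbo)
gl.renderbufferStorage(gl.RENDERBUFFER, gl.RGBA4, width, height)
gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.RENDERBUFFER, rbo)
```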
1.3 FBO / RBO / WebGLTexture related methods
- gl.framebufferTexture2D(gl.FRAMEBUFFER, ...): associates a WebGLTexture with an attachment of the FBO
- gl.framebufferRenderbuffer(gl.FRAMEBUFFER, ..., gl.RENDERBUFFER, ...): associates an RBO with an attachment of the FBO
- gl.bindFramebuffer(gl.FRAMEBUFFER, ...): sets a framebuffer object as the current render target
- gl.bindRenderbuffer(gl.RENDERBUFFER, ...): binds an RBO as the current one
- gl.renderbufferStorage(gl.RENDERBUFFER, format, width, height): sets the data format and dimensions of the currently bound RBO
And the three creation methods:
- gl.createFramebuffer()
- gl.createRenderbuffer()
- gl.createTexture()
Also review the texture binding and data upload functions and their parameter settings (a short sketch follows the list):
- gl.texParameteri(): sets parameters on the currently bound texture object
- gl.bindTexture(): binds a texture object as the currently active texture
- gl.texImage2D(): uploads data to the currently bound texture object; the last parameter is the data
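Putting these together, a minimal sketch that allocates an empty texture suitable for an FBO color attachment (width and height are assumed to be defined):

```js
const tex = gl.createTexture()
gl.bindTexture(gl.TEXTURE_2D, tex)
// null data: allocate storage only; rendering will fill it in
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null)
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR)
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE)
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE)
```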
2 Equivalent concepts in WebGPU
WebGPU has no APIs analogous to WebGLFramebuffer and WebGLRenderbuffer; that is, you cannot find a WebGPUFramebuffer or WebGPURenderbuffer.
However, the equivalent of gl.drawArrays still exists: the draw action issued by the render pass encoder (call it renderPassEncoder), i.e. renderPassEncoder.draw().
2.1 GPURenderPassEncoder takes over the role of the FBO
So where is WebGPU's drawing target? Since WebGPU is not strongly bound to the canvas element, you must explicitly specify where to draw.
From studying the concepts of programmable pipelines and command encoding, we know that WebGPU conveys "what I am going to do" to the GPU through command buffers, which are created by the command encoder (i.e. GPUCommandEncoder). A command buffer is composed of several passes, and the drawing-related kind of pass is the render pass.
A render pass is set up by a render pass encoder; each render pass specifies where its rendering results should go (a description very similar to WebGL's "where to draw"). In code, this is the colorAttachments attribute of the GPURenderPassDescriptor parameter object passed when creating the renderPassEncoder:
```js
const renderPassEncoder = commandEncoder.beginRenderPass({
  // An array: multiple color attachments can be set
  colorAttachments: [
    {
      view: textureView,
      loadValue: { r: 0.0, g: 0.0, b: 0.0, a: 1.0 },
      storeOp: 'store',
    }
  ]
})
```
Notice that colorAttachments[0].view is a textureView, i.e. a GPUTextureView. In other words, this render pass is to be drawn onto a texture object.
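For completeness, a sketch of finishing such a pass; pipeline is assumed to have been created elsewhere, and endPass() is the method name in the WebGPU draft of this article's era (later renamed end()):

```js
renderPassEncoder.setPipeline(pipeline)
renderPassEncoder.draw(3, 1, 0, 0) // 3 vertices, 1 instance
renderPassEncoder.endPass()
device.queue.submit([commandEncoder.finish()])
```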
If you want to draw onto the canvas, that is, take the texture from the canvas itself, use the following:
```js
const context = canvas.getContext('webgpu')
context.configure({
  device: gpuDevice,
  format: presentationFormat, // can be obtained by calling context.getPreferredFormat(gpuAdapter)
  size: presentationSize, // a two-element array: the canvas's client size × the device pixel ratio
})

const textureView = context.getCurrentTexture().createView()
```
The code fragment above completes the association between the render pass and the on-screen canvas: the canvas is treated as a GPUTexture, and its GPUTextureView is associated with the render pass.
A more rigorous statement is that the render pass takes over part of the FBO's duties (a render pass also issues other actions, such as setting the pipeline). Since there is no GPURenderPass API, GPURenderPassEncoder has to stand in for it.
2.2 Multi-target rendering (MRT)
For multi-target rendering, that is, when the fragment shader needs to output multiple results (in code, it returns a structure), multiple color attachments are needed to carry the rendered output.
For this, configure the targets attribute of the render pipeline's fragment stage.
The example code below spans texture creation, pipeline creation and command encoding; two texture objects serve as the containers of the color attachments:
```js
// 1. Create render target textures 1 and 2 and their corresponding texture view objects
const renderTargetTexture1 = device.createTexture({
  size: [/* omitted */],
  usage: GPUTextureUsage.RENDER_ATTACHMENT | GPUTextureUsage.TEXTURE_BINDING,
  format: 'rgba32float',
})
const renderTargetTexture2 = device.createTexture({
  size: [/* omitted */],
  usage: GPUTextureUsage.RENDER_ATTACHMENT | GPUTextureUsage.TEXTURE_BINDING,
  format: 'bgra8unorm',
})
const renderTargetTextureView1 = renderTargetTexture1.createView()
const renderTargetTextureView2 = renderTargetTexture2.createView()

// 2. Create the pipeline; the fragment stage lists the output format of each target
const pipeline = device.createRenderPipeline({
  fragment: {
    targets: [
      {
        format: 'rgba32float'
      },
      {
        format: 'bgra8unorm'
      }
    ]
    // ... other attributes omitted
  },
  // ... other stages omitted
})

const renderPassEncoder = commandEncoder.beginRenderPass({
  colorAttachments: [
    {
      view: renderTargetTextureView1,
      // ... other parameters
    },
    {
      view: renderTargetTextureView2,
      // ... other parameters
    }
  ]
})
```
This way, the two color attachments use the two texture view objects as their render targets, and the formats of the two targets are explicitly specified in the fragment stage of the pipeline object.
Accordingly, the output structure can be specified in the fragment shader code:
```wgsl
struct FragmentStageOutput {
  @location(0) something: vec4<f32>;
  @location(1) another: vec4<f32>;
}

@stage(fragment)
fn main(/* input omitted */) -> FragmentStageOutput {
  var output: FragmentStageOutput;
  // Two arbitrary values, just for demonstration
  output.something = vec4<f32>(0.156);
  output.another = vec4<f32>(0.67);

  return output;
}
```
Thus the vec4<f32> named something at location 0 is written into a texel of renderTargetTexture1, while the vec4<f32> named another at location 1 is written into a texel of renderTargetTexture2.
Although the format specified by the pipeline's fragment-stage target differs slightly from the shader, i.e. renderTargetTexture2 is declared as 'bgra8unorm' while the location 1 field of the struct in the shader code is a vec4<f32>, WebGPU maps f32 output within the [0.0, 1.0] range onto the 8-bit integer range [0, 255] for you.
In fact, when there is no multi-output (i.e. no multi-target rendering), the return type of most fragment shaders in WebGPU is a single vec4<f32>, and the most common canvas texture format is bgra8unorm; after all, the [0.0, 1.0] mapping is just a scale by 255 and a round to [0, 255].
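As a quick sanity check of that mapping, here is the arithmetic in plain JavaScript (Math.round approximates the spec's rounding):

```js
const f = 0.156 // a fragment output channel in [0.0, 1.0]
const stored = Math.round(f * 255) // 40: the 8-bit value stored in a bgra8unorm texel
```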
2.3 Depth and stencil attachments
GPURenderPassDescriptor also supports passing depthStencilAttachment as the depth/stencil attachment. A code example follows:
```js
const renderPassDescriptor = {
  // Color attachment settings omitted
  depthStencilAttachment: {
    view: depthTexture.createView(),
    depthLoadValue: 1.0,
    depthStoreOp: 'store',
    stencilLoadValue: 0,
    stencilStoreOp: 'store',
  }
}
```
Similar to a single color attachment, view must be the view object of a texture object. In particular, a texture serving as a depth or stencil attachment must use a depth/stencil-capable texture format.
If a depth/stencil texture format belongs to an optional device feature, the corresponding feature must be included when requesting the device object. For example, the 'depth24unorm-stencil8' texture format can only be used with the 'depth24unorm-stencil8' feature enabled, as sketched below.
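A sketch of requesting that feature; requiredFeatures is the option name in the current spec (earlier drafts called it nonGuaranteedFeatures):

```js
const adapter = await navigator.gpu.requestAdapter()
const device = await adapter.requestDevice({
  // Request the optional depth/stencil format feature by name
  requiredFeatures: ['depth24unorm-stencil8'],
})
```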
For depth/stencil computation, also pay attention to the depthStencil stage configuration of the render pipeline, for example:
```js
const renderPipeline = device.createRenderPipeline({
  // ...
  depthStencil: {
    depthWriteEnabled: true,
    depthCompare: 'less',
    format: 'depth24plus',
  }
})
```
2.4 Notes on non-canvas texture objects as attachments
Besides the texture formats and the device feature mentioned under the depth/stencil attachment, note that if a non-canvas texture is used as an attachment, its usage must include RENDER_ATTACHMENT.
```js
const depthTexture = device.createTexture({
  size: presentationSize,
  format: 'depth24plus',
  usage: GPUTextureUsage.RENDER_ATTACHMENT,
})

const renderColorTexture = device.createTexture({
  size: presentationSize,
  format: presentationFormat,
  usage: GPUTextureUsage.RENDER_ATTACHMENT | GPUTextureUsage.COPY_SRC,
})
```
3 Reading data
3.1 Reading pixel values from an FBO
Reading pixel values from an FBO really means reading the color attachment's color data into a TypedArray. To read the results of the current FBO (or the canvas's framebuffer), just call the gl.readPixels method.
```js
//#region create the FBO and set it as the current render target container
const fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
//#endregion

//#region create the off-screen carrier: a texture object, bound as the current texture
const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
// -- if it never needs to be sampled by a shader again, an RBO could be used instead
//#endregion

//#region bind the texture object to color attachment 0
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture, 0);
//#endregion

// ... gl.drawArrays for rendering

//#region read into a TypedArray
const pixels = new Uint8Array(imageWidth * imageHeight * 4);
gl.readPixels(0, 0, imageWidth, imageHeight, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
//#endregion
```
The gl.readPixels() method reads the pixel values of the currently bound FBO and its currently bound color attachment into the TypedArray, regardless of whether the carrier is a WebGLRenderbuffer or a WebGLTexture.
The only caveat: if you are writing an engine, the pixel-reading operation must come after the draw call (generally gl.drawArrays or gl.drawElements) has been issued in the code; otherwise the values may not be read.
3.2 Reading GPUTexture data in WebGPU
Accessing the pixels of a texture in WebGPU is comparatively simple: use the command encoder's copyTextureToBuffer method to read the texture object's data into a GPUBuffer, then map the buffer and read its mapped range to obtain an ArrayBuffer:
```js
//#region create the texture object associated with the color attachment
const colorAttachment0Texture = device.createTexture({ /* ... */ })
//#endregion

//#region create a buffer object to hold the texture data
const readPixelsResultBuffer = device.createBuffer({
  usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ,
  size: 4 * textureWidth * textureHeight,
})
//#endregion

//#region image copy operation: copy the GPUTexture into the GPUBuffer
const encoder = device.createCommandEncoder()
encoder.copyTextureToBuffer(
  { texture: colorAttachment0Texture },
  // bytesPerRow must be a multiple of 256 in WebGPU
  { buffer: readPixelsResultBuffer, bytesPerRow: 4 * textureWidth },
  [textureWidth, textureHeight],
)
device.queue.submit([encoder.finish()])
//#endregion

//#region read the pixels
await readPixelsResultBuffer.mapAsync(GPUMapMode.READ)
const pixels = new Uint8Array(readPixelsResultBuffer.getMappedRange())
//#endregion
```
Note that if you want to copy into a GPUBuffer and hand it to the CPU (i.e. JavaScript) to read, that GPUBuffer's usage must include COPY_DST and MAP_READ; moreover, the texture object's usage must include COPY_SRC (and, as the texture associated with a color attachment, it must also have RENDER_ATTACHMENT). Also note that WebGPU requires the copy's bytesPerRow to be a multiple of 256, so rows that are not 256-byte aligned need padding.
4 Summary
From WebGL (i.e. the OpenGL ES lineage) to WebGPU, off-screen rendering and multi-target rendering have been upgraded in both interface and usage.
First, the RBO concept is gone; textures serve as the drawing target.
Second, the FBO's authority is handed over to the render pass, with GPURenderPassEncoder carrying the two kinds of attachments the FBO used to hold.
Because the RBO concept is gone, RTT (RenderToTexture) and RTR (RenderToRenderbuffer) no longer exist as separate paths, but off-screen rendering itself remains: in WebGPU you can use multiple render passes to produce multiple rendering results, and a texture used as a drawing carrier can move freely, via resource bind groups, among the render pipelines of different render passes.
Section 3 also discussed reading pixels (color values) from GPU textures, mostly for GPU picking; as for the legacy role of the FBO, off-screen rendering with a render pass is now its most common incarnation.