Due to the limited intensity range of a monitor, bright light sources and brightly lit regions are difficult to convey to the viewer. One way to distinguish bright light sources on a display is to make them glow, with the light bleeding around the source. This effectively gives the viewer the illusion that these light sources or bright regions are intensely bright.
This light-bleeding or glow effect is achieved with a post-processing effect called bloom. Bloom gives all brightly lit regions of a scene a glow-like effect. Below is an example of a scene with and without glow (image courtesy of Epic Games):
Bloom gives noticeable visual cues about the brightness of objects. When done in a subtle fashion (which some games drastically fail to do), bloom significantly boosts the lighting of your scene and allows for a large range of dramatic effects.
Bloom works best in combination with HDR rendering. A common misconception is that HDR is the same as bloom, as many people use the terms interchangeably. They are, however, completely different techniques used for different purposes. It is possible to implement bloom with default 8-bit precision framebuffers, just as it is possible to use HDR without the bloom effect. It is simply that HDR makes bloom more effective to implement (as we'll see later).
To implement bloom, we render a lit scene as usual and extract both the scene's HDR color buffer and an image of the scene with only its bright regions visible. The extracted brightness image is then blurred, and the result is added on top of the original HDR scene image.
Let's illustrate this process step by step. We render a scene filled with four bright light sources, visualized as colored cubes. The colored light cubes have brightness values between 1.5 and 15.0. If we were to render this to an HDR color buffer, the scene looks as follows:
We take this HDR color buffer texture and extract all the fragments exceeding a certain brightness. This gives us an image that shows only the bright regions, as their fragment intensities exceeded a certain threshold (bloom is only wanted where the light intensity exceeds a certain value):
We then take this thresholded brightness texture and blur the result. The strength of the bloom effect is largely determined by the range and strength of the blur filter used.
The resulting blurred texture is what we use to get the glow or light-bleeding effect. This blurred texture is added on top of the original HDR scene texture. Because the bright regions are extended in both width and height due to the blur filter, the bright regions of the scene appear to glow or bleed light.
Bloom by itself isn't a complicated technique, but it is difficult to get exactly right. Most of its visual quality is determined by the quality and type of blur filter used for blurring the extracted brightness regions. Simply tweaking the blur filter can drastically change the quality of the bloom effect.
Following these steps gives us the bloom post-processing effect. The following image briefly summarizes the required steps for implementing bloom:
The first step requires us to extract all the bright colors of the scene based on some threshold. Let's first delve into that.
1. Extracting bright colors
The first step requires us to extract two images from a rendered scene (one with the normal colors, one with only the bright colors). We could render the scene twice, rendering to a different framebuffer with different shaders each time, but we can also use a neat little trick called Multiple Render Targets (MRT) that allows us to specify more than one fragment shader output; this gives us the option to extract both images (one normal, one bright-colored, output to two fragment outputs respectively) in a single render pass. By specifying a layout location specifier before a fragment shader's output, we can control which color buffer a fragment shader writes to:
layout (location = 0) out vec4 FragColor;
layout (location = 1) out vec4 BrightColor;
This only works if we actually have multiple color buffers to write to. As a requirement for using multiple fragment shader outputs, we need multiple color buffers attached to the currently bound framebuffer object. You may remember from the framebuffers chapter that we can specify a color attachment number when linking a texture as a framebuffer's color buffer. Up until now we've always used GL_COLOR_ATTACHMENT0, but by also using GL_COLOR_ATTACHMENT1 we can attach two color buffers to a framebuffer object:
// set up floating point framebuffer to render scene to
unsigned int hdrFBO;
glGenFramebuffers(1, &hdrFBO);
glBindFramebuffer(GL_FRAMEBUFFER, hdrFBO);
unsigned int colorBuffers[2];
glGenTextures(2, colorBuffers);
for (unsigned int i = 0; i < 2; i++)
{
    glBindTexture(GL_TEXTURE_2D, colorBuffers[i]);
    glTexImage2D(
        GL_TEXTURE_2D, 0, GL_RGBA16F, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGBA, GL_FLOAT, NULL
    );
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    // attach each texture to its own color attachment of the framebuffer
    glFramebufferTexture2D(
        GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, colorBuffers[i], 0
    );
}
We do have to explicitly tell OpenGL we're rendering to multiple color buffers via glDrawBuffers. By default, OpenGL only renders to a framebuffer's first color attachment, ignoring all others. We can do this by passing an array of the color attachment enums we'd like to render to in subsequent operations:
unsigned int attachments[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, attachments);
When rendering into this framebuffer, whenever a fragment shader uses the layout location specifiers, the respective color buffer is used to render the fragment to. This is great, as it saves us an extra render pass for extracting bright regions, since we can now extract them directly from the fragment that's to be rendered:
#version 330 core
layout (location = 0) out vec4 FragColor;
layout (location = 1) out vec4 BrightColor;

[...]

void main()
{
    [...] // first do normal lighting calculations and output results
    FragColor = vec4(lighting, 1.0);
    // check whether the fragment output is higher than a threshold; if so, output it as the brightness color
    // vec3(0.2126, 0.7152, 0.0722) are grayscale (luminance) weights, not a threshold;
    // the dot product multiplies each component pair and sums the results
    float brightness = dot(FragColor.rgb, vec3(0.2126, 0.7152, 0.0722));
    if(brightness > 1.0)
        BrightColor = vec4(FragColor.rgb, 1.0);
    else
        BrightColor = vec4(0.0, 0.0, 0.0, 1.0);
}
Here we first calculate lighting as normal and pass it to the first fragment shader output variable FragColor. Then we use what is currently stored in FragColor to determine if its brightness exceeds a certain threshold. We calculate the brightness of a fragment by properly transforming it to grayscale first (by taking the dot product of both vectors, we effectively multiply each individual component of both vectors and add the results together). If the brightness exceeds a certain threshold, we output the color to the second color buffer. We do the same for the light cubes.
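To make the grayscale conversion concrete, here is a minimal CPU-side sketch of the same math the shader performs. The helper names (luminance, isBright) are hypothetical; only the weights and the 1.0 threshold come from the shader above.

```cpp
#include <cmath>

// Hypothetical helper mirroring the shader's grayscale conversion:
// the dot product with the Rec. 709 luminance weights
// (0.2126, 0.7152, 0.0722) multiplies each channel by its weight
// and sums the results.
float luminance(float r, float g, float b) {
    return 0.2126f * r + 0.7152f * g + 0.0722f * b;
}

// Mirrors the shader's threshold test: only fragments whose
// luminance exceeds 1.0 end up in the brightness buffer.
bool isBright(float r, float g, float b) {
    return luminance(r, g, b) > 1.0f;
}
```

Note that the weights sum to 1.0, so pure white (1, 1, 1) sits exactly at the default LDR ceiling; only HDR values can exceed it.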
This also shows why bloom works incredibly well with HDR rendering. Because we render in high dynamic range, color values can exceed 1.0, which allows us to specify a brightness threshold outside the default range, giving us much more control over what is considered bright. Without HDR we'd have to set the threshold lower than 1.0, which is still possible, but regions are then much sooner considered bright. This sometimes leads to the glow effect becoming too dominant (think of white glowing snow, for example).
With these two color buffers, we have an image of the scene as normal and an image of the extracted bright regions, all generated in a single render pass.
With an image of the extracted bright regions, we now need to blur the image. We could do this with a simple box filter as we did in the post-processing section of the framebuffers chapter, but we'd rather use a more advanced (and better-looking) blur filter called Gaussian blur.
2. Gaussian blur
In the post-processing chapter's blur, we simply took the average of all surrounding pixels of an image. While that does give us an easy blur, it doesn't give the best results. A Gaussian blur is based on the Gaussian curve, commonly described as a bell-shaped curve, giving high values close to its center that gradually wear off over distance. The Gaussian curve can be mathematically represented in different forms, but generally has the following shape:
As the Gaussian curve has a larger area close to its center, using its values as weights to blur an image gives more natural results, since samples close by have a higher precedence. If we for instance sample a 32x32 box around a fragment, we use progressively smaller weights the larger the distance to the fragment; this gives a better and more realistic blur, known as a Gaussian blur.
To implement a Gaussian blur filter, we'd need a two-dimensional box of weights that we can obtain from a two-dimensional Gaussian curve equation. The problem with this approach, however, is that it quickly becomes extremely heavy on performance. Take a blur kernel of 32 by 32 for example: this would require us to sample a texture a total of 1024 times for each fragment!
Luckily for us, the Gaussian equation has a very neat property that allows us to separate the two-dimensional equation into two smaller one-dimensional equations: one that describes the horizontal weights and the other that describes the vertical weights. We then first do a horizontal blur with the horizontal weights on the scene texture, and then a vertical blur on the resulting texture. Due to this property the results are exactly the same, but this time it saves us an incredible amount of performance, as we now only have to do 32 + 32 = 64 samples compared to 1024! This is known as a two-pass Gaussian blur.
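The separability can be sketched on the CPU: a 2D Gaussian kernel is the outer product of two 1D Gaussian kernels, kernel2D[i][j] == w[i] * w[j], which is why two 1D passes produce the same result. The kernel size and sigma below are illustrative, not the values the shader uses.

```cpp
#include <cmath>
#include <vector>

// Build a normalized 1D Gaussian kernel of the given size and sigma.
// Multiplying these weights pairwise yields the full 2D kernel,
// which is the mathematical basis of the two-pass blur.
std::vector<float> gaussian1D(int size, float sigma) {
    std::vector<float> w(size);
    float sum = 0.0f;
    int half = size / 2;
    for (int i = 0; i < size; ++i) {
        float x = static_cast<float>(i - half);
        w[i] = std::exp(-(x * x) / (2.0f * sigma * sigma));
        sum += w[i];
    }
    for (float &v : w) v /= sum; // normalize so the weights sum to 1
    return w;
}

// Texture samples per fragment: size*size for one 2D pass
// versus size + size for two separable 1D passes.
int samples2D(int size) { return size * size; }
int samplesSeparable(int size) { return size + size; }
```

Because the 1D weights sum to 1, the implied 2D kernel (all pairwise products) also sums to 1, so the blur does not brighten or darken the image.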
The Gaussian blur fragment shader:
#version 330 core
out vec4 FragColor;

in vec2 TexCoords;

uniform sampler2D image;
uniform bool horizontal;
// Gaussian weights
uniform float weight[5] = float[] (0.227027, 0.1945946, 0.1216216, 0.054054, 0.016216);

void main()
{
    vec2 tex_offset = 1.0 / textureSize(image, 0); // gets size of single texel
    vec3 result = texture(image, TexCoords).rgb * weight[0]; // current fragment's contribution
    if(horizontal)
    {
        // horizontal sampling pass
        for(int i = 1; i < 5; ++i)
        {
            result += texture(image, TexCoords + vec2(tex_offset.x * i, 0.0)).rgb * weight[i];
            result += texture(image, TexCoords - vec2(tex_offset.x * i, 0.0)).rgb * weight[i];
        }
    }
    else
    {
        // vertical sampling pass
        for(int i = 1; i < 5; ++i)
        {
            result += texture(image, TexCoords + vec2(0.0, tex_offset.y * i)).rgb * weight[i];
            result += texture(image, TexCoords - vec2(0.0, tex_offset.y * i)).rgb * weight[i];
        }
    }
    FragColor = vec4(result, 1.0);
}
Here we take a relatively small sample of Gaussian weights that we each use to assign a specific weight to the horizontal or vertical samples around the current fragment. You can see that we split the blur filter into a horizontal and a vertical section based on whatever value we set the horizontal uniform to. We base the offset distance on the exact size of a texel, obtained by dividing 1.0 by the texture's size (a vec2 from textureSize).
For blurring an image we create two basic framebuffers, each with only a color buffer texture:
unsigned int pingpongFBO[2];
unsigned int pingpongBuffer[2];
glGenFramebuffers(2, pingpongFBO);
glGenTextures(2, pingpongBuffer);
for (unsigned int i = 0; i < 2; i++)
{
    glBindFramebuffer(GL_FRAMEBUFFER, pingpongFBO[i]);
    glBindTexture(GL_TEXTURE_2D, pingpongBuffer[i]);
    glTexImage2D(
        GL_TEXTURE_2D, 0, GL_RGBA16F, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGBA, GL_FLOAT, NULL
    );
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glFramebufferTexture2D(
        GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, pingpongBuffer[i], 0
    );
}
Then, after we've obtained an HDR texture and an extracted brightness texture, we first fill one of the ping-pong framebuffers with the brightness texture and then blur the image 10 times (5 times horizontally and 5 times vertically):
bool horizontal = true, first_iteration = true;
int amount = 10;
shaderBlur.use();
// blur horizontally and vertically, 10 passes in total; more passes give a stronger blur
for (unsigned int i = 0; i < amount; i++)
{
    glBindFramebuffer(GL_FRAMEBUFFER, pingpongFBO[horizontal]);
    shaderBlur.setInt("horizontal", horizontal);
    glBindTexture(
        GL_TEXTURE_2D, first_iteration ? colorBuffers[1] : pingpongBuffer[!horizontal]
    );
    RenderQuad();
    horizontal = !horizontal;
    if (first_iteration)
        first_iteration = false;
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
Each iteration we bind one of the two framebuffers based on whether we want to blur horizontally or vertically, and bind the other framebuffer's color buffer as the texture to blur. On the first iteration we specifically bind the texture we'd like to blur, as otherwise both color buffers would end up empty. By repeating this process 10 times, the brightness image ends up with a complete Gaussian blur that was repeated 5 times. This construct allows us to blur any image as often as we like; the more Gaussian blur iterations, the stronger the blur.
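The ping-pong indexing can be sketched in isolation: each iteration renders into pingpongFBO[horizontal] while sampling the color buffer of the other framebuffer, pingpongBuffer[!horizontal], so the read and write targets never collide. The helper names below are hypothetical stand-ins for the array indexing in the loop above.

```cpp
// Hypothetical helpers mirroring pingpongFBO[horizontal] (write)
// and pingpongBuffer[!horizontal] (read): a bool index of true
// selects element 1, false selects element 0.
int writeIndex(bool horizontal) { return horizontal ? 1 : 0; }
int readIndex(bool horizontal)  { return horizontal ? 0 : 1; }
```

Toggling horizontal every iteration swaps the two roles, which is exactly what lets each pass blur the output of the previous one.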
By blurring the extracted brightness texture 5 times, we get a properly blurred image of all the bright regions of the scene.
The final step to complete the bloom effect is to combine this blurred brightness texture with the source scene's HDR texture.
3. Blending both textures
With the scene's HDR texture and the scene's blurred brightness texture, we only need to combine the two to achieve the well-known bloom or glow effect. In the final fragment shader (largely similar to the one we used in the HDR chapter), we additively blend both textures:
#version 330 core
out vec4 FragColor;

in vec2 TexCoords;

uniform sampler2D scene;
uniform sampler2D bloomBlur;
uniform float exposure;

void main()
{
    const float gamma = 2.2;
    vec3 hdrColor = texture(scene, TexCoords).rgb;
    vec3 bloomColor = texture(bloomBlur, TexCoords).rgb;
    hdrColor += bloomColor; // additive blending
    // tone mapping
    vec3 result = vec3(1.0) - exp(-hdrColor * exposure);
    // also gamma correct while we're at it
    result = pow(result, vec3(1.0 / gamma));
    FragColor = vec4(result, 1.0);
}
Interesting to note here is that we add the bloom effect before we apply tone mapping. This way, the added brightness of bloom is also softly transformed to the LDR range, with better relative lighting as a result.
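A scalar sketch makes the ordering argument concrete: exposure tone mapping maps any non-negative HDR value into [0, 1), so bloom added before the mapping is softly compressed into displayable range, while bloom added afterwards would push the result past 1.0 and clip. The toneMap helper is a scalar stand-in for the vec3 math in the shader; the sample values in the test are illustrative.

```cpp
#include <cmath>

// Scalar version of the shader's exposure tone mapping:
// result = 1 - exp(-hdr * exposure), which is always < 1
// for finite input and approaches 1 as hdr grows.
float toneMap(float hdr, float exposure) {
    return 1.0f - std::exp(-hdr * exposure);
}
```

Since the curve flattens for large inputs, even a strong bloom contribution only nudges an already-bright fragment slightly, which is the "better relative lighting" the text describes.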
With both textures added together, all the bright areas of our scene now get a proper glow effect:
The colored cubes now appear much brighter and give a better illusion of being light-emitting objects. This is a relatively simple scene, so the bloom effect isn't too impressive here, but in well-lit scenes it can make a significant difference when properly configured.
We can improve the blur by taking more samples along a larger radius, or by repeating the blur filter an extra number of times. As the quality of the blur directly correlates to the quality of the bloom effect, improving the blur step can make a significant improvement. Some of these improvements combine blur filters with varying sized blur kernels, or use multiple Gaussian curves to selectively combine weights. The additional resources from Kalogirou and Epic Games discuss how to significantly improve the bloom effect by improving the Gaussian blur.