Wind field visualization: painting particles

Posted by Sfoot on Sat, 05 Mar 2022 06:51:11 +0100

Introduction

Having understood the wind field data, let's now look at how the particles are painted.

Draw map particles

Checking the source library, there is a separate Canvas used to draw the map; it obtains the world map coastline coordinates. The main format is as follows:

{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "properties": {
        "scalerank": 1,
        "featureclass": "Coastline"
      },
      "geometry": {
        "type": "LineString",
        "coordinates": [
          [
              -163.7128956777287,
              -78.59566741324154
          ],
          // data omitted
        ]
      }
    },
    // data omitted
  ]
}

The points corresponding to these coordinates can be connected to form an overall outline. The main logic is as follows:

  // omitted
  for (let i = 0; i < len; i++) {
    const coordinates = data[i].geometry.coordinates || [];
    const coordinatesNum = coordinates.length;
    for (let j = 0; j < coordinatesNum; j++) {
      context[j ? "lineTo" : "moveTo"](
        ((coordinates[j][0] + 180) * node.width) / 360,
        ((-coordinates[j][1] + 90) * node.height) / 180
      );
    }
  }
  // omitted

The coordinates are mapped in proportion to the actual width and height of the Canvas, matching the dimensions of the generated wind field image.

A standalone example of this mapping logic can be seen here.
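As a minimal standalone sketch of the projection used in the loop above (`project` is a hypothetical name, not from the source library):

```javascript
// Map a [lon, lat] pair onto a canvas of the given size, using the same
// equirectangular formulas as the drawing loop above:
// longitude -180..180 -> 0..width, latitude 90..-90 -> 0..height.
function project(lon, lat, width, height) {
  return [
    ((lon + 180) * width) / 360,
    ((-lat + 90) * height) / 180,
  ];
}
```

For a 360 x 180 canvas, the top-left of the world maps to [0, 0] and the equator/prime-meridian intersection maps to the canvas center.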

Paint wind particles

Looking at the source library, there is a separate Canvas for painting wind particles. The source code involves quite a lot of state, so I plan to first work through the logic of drawing static particles on its own.

For the static wind particle effect, see the Examples.

Let's first clarify the main ideas of implementation:

  • The wind speed is mapped to the R and G components of the pixel color encoding, producing an image W.
  • Color data for display is created and stored in texture T1.
  • Based on the number of particles, a buffer storing each particle's index is created; data holding each particle's state is also created and stored in texture T2.
  • The image W is loaded and its data stored in texture T3.
  • In the vertex shader, the corresponding data is fetched from texture T2 according to the particle index; after conversion this yields a position P, which is passed to the fragment shader.
  • The fragment shader samples the image texture T3 at position P and performs linear mixing to obtain a value N, then looks up the corresponding color in the color texture T1 according to N.

Let's take a look at the specific implementation.

Color data

Main logic of generating color data:

function getColorRamp(colors) {
  const canvas = document.createElement("canvas");
  const ctx = canvas.getContext("2d");

  canvas.width = 256;
  canvas.height = 1;
  // createLinearGradient usage: https://developer.mozilla.org/en-US/docs/Web/API/CanvasRenderingContext2D/createLinearGradient
  const gradient = ctx.createLinearGradient(0, 0, 256, 0);
  for (const stop in colors) {
    gradient.addColorStop(+stop, colors[stop]);
  }

  ctx.fillStyle = gradient;
  ctx.fillRect(0, 0, 256, 1);

  return new Uint8Array(ctx.getImageData(0, 0, 256, 1).data);
}

Here the data is obtained by drawing a gradient onto a Canvas. Since each color component is stored as 8 bits, there are 256 possible values per component.
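For illustration only, here is a Canvas-free sketch of what one entry of such a gradient ramp contains; the `sampleRamp` helper and the stop format are made up, not part of the source library:

```javascript
// Linearly interpolate a color from sorted gradient stops, mimicking what
// the Canvas gradient bakes into each of the 256 pixels.
// stops: array of [position in 0..1, [r, g, b]], sorted by position.
function sampleRamp(stops, t) {
  let lo = stops[0];
  let hi = stops[stops.length - 1];
  for (const s of stops) {
    if (s[0] <= t) lo = s;           // last stop at or before t
    if (s[0] >= t) { hi = s; break; } // first stop at or after t
  }
  const span = hi[0] - lo[0];
  const a = span === 0 ? 0 : (t - lo[0]) / span;
  // component-wise lerp, rounded back to byte values
  return lo[1].map((c, i) => Math.round(c * (1 - a) + hi[1][i] * a));
}
```
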

The Canvas data needs a texture of sufficient size to hold it: 16 * 16 = 256. The width and height here will be used again later in the fragment shader; the two places must stay consistent to achieve the expected result.

this.colorRampTexture = util.createTexture(
  this.gl,
  this.gl.LINEAR,
  getColorRamp(colors),
  16,
  16
);
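As a sanity check on that consistency, a small sketch (with a hypothetical helper name) of how a normalized value in [0, 1) lands on a texel of the 16 x 16 ramp texture:

```javascript
// The 256 gradient pixels (a 256x1 strip) are uploaded as a 16x16 texture,
// so a normalized value t maps to a linear index and then to a texel.
// `rampTexel` is a made-up helper, not part of the source library.
function rampTexel(t) {
  const i = Math.floor(t * 256); // linear index into the 256-entry ramp
  return { col: i % 16, row: Math.floor(i / 16) };
}
```
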

Vertex data and state data

Main logic:

set numParticles(numParticles) {
  const gl = this.gl;

  const particleRes = (this.particleStateResolution = Math.ceil(
    Math.sqrt(numParticles)
  ));
  // Total number of particles
  this._numParticles = particleRes * particleRes;
  // Color information of all particles
  const particleState = new Uint8Array(this._numParticles * 4);
  for (let i = 0; i < particleState.length; i++) {
    // Generate a random color, which will correspond to the position in the picture
    particleState[i] = Math.floor(Math.random() * 256);
  }
  // Creates a texture that stores all particle color information
  this.particleStateTexture = util.createTexture(
    gl,
    gl.NEAREST,
    particleState,
    particleRes,
    particleRes
  );
  // Particle index
  const particleIndices = new Float32Array(this._numParticles);
  for (let i = 0; i < this._numParticles; i++) particleIndices[i] = i;
  this.particleIndexBuffer = util.createBuffer(gl, particleIndices);
}

The color information of the particles is stored in a texture with equal width and height. Each particle's RGBA color has 4 components of 8 bits each. Note that the random component values generated here lie in the range [0, 256).

As the following logic shows, the vertex data particleIndexBuffer is only used to help compute the final position; the actual position comes from the texture. See the vertex shader implementation below for details.
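The sizing logic above can be sketched on its own (`particleLayout` is a hypothetical helper, not from the source):

```javascript
// The particle state lives in a square texture, so the requested particle
// count is rounded up to the next perfect square.
function particleLayout(numParticles) {
  const res = Math.ceil(Math.sqrt(numParticles)); // texture width = height
  return {
    res,
    total: res * res,          // actual number of particles drawn
    stateBytes: res * res * 4, // RGBA: 4 bytes of state per particle
  };
}
```

So asking for 10 particles actually allocates a 4 x 4 texture holding 16, which is why the setter recomputes `this._numParticles`.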

Vertex Shader

Vertex shader and corresponding bound variables:

const drawVert = `
  precision mediump float;

  attribute float a_index;

  uniform sampler2D u_particles;
  uniform float u_particles_res;

  varying vec2 v_particle_pos;

  void main() {
      vec4 color = texture2D(u_particles, vec2(
              fract(a_index / u_particles_res),
              floor(a_index / u_particles_res) / u_particles_res));
      // decode the current particle position from the RGBA value of the pixel
      v_particle_pos = vec2(
              color.r / 255.0 + color.b,
              color.g / 255.0 + color.a);

      gl_PointSize = 1.0;
      gl_Position = vec4(2.0 * v_particle_pos.x - 1.0, 1.0 - 2.0 * v_particle_pos.y, 0, 1);
  }
`;

// Code omission
util.bindAttribute(gl, this.particleIndexBuffer, program.a_index, 1);
// Code omission
util.bindTexture(gl, this.particleStateTexture, 1);
// Code omission
gl.uniform1i(program.u_particles, 1);
// Code omission
gl.uniform1f(program.u_particles_res, this.particleStateResolution);

From these scattered pieces of logic, we can find the actual value corresponding to each variable in the shader:

  • a_index: particle index data from particleIndices.
  • u_particles: the texture holding all particle color information.
  • u_particles_res: the value of particleStateResolution, consistent with the width and height of the texture particleStateTexture. It is also the square root of the total number of particles and of the length of the particle index data.

According to these corresponding values, let's look at the main processing logic:

vec4 color=texture2D(u_particles,vec2(
              fract(a_index/u_particles_res),
              floor(a_index/u_particles_res)/u_particles_res));

First introduce two function information:

  • floor(x): returns the largest integer less than or equal to x.
  • fract(x): returns x - floor(x), i.e. the fractional part of x.

Assuming the total number of particles is 4, particleIndices = [0, 1, 2, 3] and u_particles_res = 2, the resulting two-dimensional coordinates are vec2(0,0), vec2(0.5,0), vec2(0,0.5) and vec2(0.5,0.5). This calculation ensures the resulting coordinates lie between 0 and 1, so the color information can be sampled from the texture particleStateTexture.
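The shader's coordinate computation can be ported to plain JS to double-check this example (`fract` and `indexToUV` here are local stand-ins, not part of the source library):

```javascript
// Same computation as the vertex shader: particle index -> 2D texture
// coordinate in [0, 1) for sampling the particle state texture.
const fract = (x) => x - Math.floor(x);

function indexToUV(index, res) {
  return [fract(index / res), Math.floor(index / res) / res];
}
```

With res = 2, indices 0 to 3 produce exactly the four coordinates listed above.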

It should be noted that the value range returned by texture2D is [0, 1]. See here for the underlying principle.

v_particle_pos=vec2(
        color.r / 255.0 + color.b,
        color.g / 255.0 + color.a);

The source code comments this as "decode the current particle position from the RGBA value of the pixel". Combined with the data described earlier, the theoretical range of each component produced by this calculation is [0, 256 / 255]. The variable v_particle_pos is then used in the fragment shader.
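Assuming texture2D returns each byte divided by 255, the decoding can be sketched in JS on raw byte values (`decodePos` is a hypothetical helper):

```javascript
// With bytes (r, g, b, a), texture2D yields each component divided by 255,
// so the shader's  color.r / 255.0 + color.b  becomes the expression below:
// b and a act as the coarse (high) bytes, r and g refine them.
function decodePos(r, g, b, a) {
  return [r / 255 / 255 + b / 255, g / 255 / 255 + a / 255];
}
```

The maximum per-axis value is 255/255/255 + 255/255 = 256/255, matching the theoretical range stated above.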

gl_Position = vec4(2.0 * v_particle_pos.x - 1.0, 1.0 - 2.0 * v_particle_pos.y, 0, 1);

The gl_Position variable is the vertex coordinate converted into clip space, whose range is [-1.0, +1.0]; anything to be displayed must fall within this range. The calculation here achieves exactly that.
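A minimal sketch of that [0, 1] to clip-space conversion (`toClip` is a made-up name):

```javascript
// [0, 1] position -> [-1, 1] clip space. The y axis is flipped so that
// y = 0 (the top of the image) ends up at the top of the screen (+1).
function toClip(x, y) {
  return [2.0 * x - 1.0, 1.0 - 2.0 * y];
}
```
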

Fragment Shader

Fragment shader and the corresponding bound variables:

const drawFrag = `
  precision mediump float;

  uniform sampler2D u_wind;
  uniform vec2 u_wind_min;
  uniform vec2 u_wind_max;
  uniform sampler2D u_color_ramp;

  varying vec2 v_particle_pos;

  void main() {
      vec2 velocity = mix(u_wind_min, u_wind_max, texture2D(u_wind, v_particle_pos).rg);
      float speed_t = length(velocity) / length(u_wind_max);

      vec2 ramp_pos = vec2(
          fract(16.0 * speed_t),
          floor(16.0 * speed_t) / 16.0);

      gl_FragColor = texture2D(u_color_ramp, ramp_pos);
  }
`;

// Code omission
util.bindTexture(gl, this.windTexture, 0);
// Code omission
gl.uniform1i(program.u_wind, 0); // Wind texture data
// Code omission
util.bindTexture(gl, this.colorRampTexture, 2);
// Code omission
gl.uniform1i(program.u_color_ramp, 2); // Color data
// Code omission
gl.uniform2f(program.u_wind_min, this.windData.uMin, this.windData.vMin);
gl.uniform2f(program.u_wind_max, this.windData.uMax, this.windData.vMax);

From these scattered pieces of logic, we can find the actual value corresponding to each variable in the shader:

  • u_wind: the windTexture generated from the wind field image.
  • u_wind_min: the minimum values of the wind field data components.
  • u_wind_max: the maximum values of the wind field data components.
  • u_color_ramp: the color texture colorRampTexture created earlier.
  • v_particle_pos: the position generated in the vertex shader.

According to these corresponding values, let's look at the main processing logic:

vec2 velocity = mix(u_wind_min, u_wind_max, texture2D(u_wind, v_particle_pos).rg);
float speed_t = length(velocity) / length(u_wind_max);

First introduce the built-in functions:

  • mix(x, y, a): returns the linear blend of x and y, computed as x*(1-a) + y*a.

The mix ensures that velocity lies between u_wind_min and u_wind_max, so the result of speed_t must be less than or equal to 1. From speed_t, the position ramp_pos is derived by certain rules, and the color output to the screen is then fetched from the color texture colorRampTexture.
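The whole fragment-shader computation can be ported to plain JS as a sketch; the wind extrema below are made up, and `mix`, `fract` and `len` are local stand-ins for the GLSL built-ins:

```javascript
const mix = (x, y, a) => x * (1 - a) + y * a; // same formula as GLSL mix()
const fract = (x) => x - Math.floor(x);
const len = ([x, y]) => Math.hypot(x, y);

// rg stands in for texture2D(u_wind, v_particle_pos).rg, components in [0, 1].
function rampPos(rg, windMin, windMax) {
  const velocity = [
    mix(windMin[0], windMax[0], rg[0]),
    mix(windMin[1], windMax[1], rg[1]),
  ];
  const speedT = len(velocity) / len(windMax);
  // same 16.0 as the shader: column/row into the 16x16 color ramp texture
  return [fract(16.0 * speedT), Math.floor(16.0 * speedT) / 16.0];
}
```

For example, with extrema of -20 and 20 on both axes, rg = [0.5, 0.5] decodes to zero velocity and lands at the start of the ramp.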

Draw

After the above logic is ready, the drawing can be executed in the normal order.

Although we are only drawing static particles, while extracting this logic I found that if a different number of particles is drawn with only a single call to wind.draw(), the drawing may not complete.

For the static wind particle effect, see the Examples.

Summary

After the above code analysis, let's look back at the main idea from the beginning and express it another way:

  • According to the number of particles to display, the color-encoded information of each particle is randomly initialized and stored in texture T2; the color texture T1 used for the final display is created; the image W generated from the wind speed is loaded and stored in texture T3.
  • The ultimate goal is to fetch a color from the color texture T1 and display it. The process works by using texture T2 to find a corresponding wind-speed mapping point in texture T3, and then using that point to find the display color in T1.

This feels better than the main idea stated at the beginning, but I still have some questions.

Why not directly associate texture T3 with color texture T1?

At present this is only one part of the whole wind field visualization logic. Looking back at the complete effect, it is dynamic. To track the movement of each particle, introducing dedicated variables that record its state feels logically clearer to me. Texture T2 is mainly used to record the number and state of the particles; I will continue to dig into the related logic later.

What is the basis for the 2D vector calculation used for texture sampling in the vertex shader?

That is, why is the following logic used:

vec2(
  fract(a_index/u_particles_res),
  floor(a_index/u_particles_res)/u_particles_res
)

The earlier explanation noted that this calculation ensures the resulting coordinates lie between 0 and 1, but there is surely more than one way to generate this range, and I don't know why this particular one was chosen. The final position ramp_pos computed later in the fragment shader uses a similar method.

The fragment shader already receives a position. Why compute velocity to obtain a position again?

That is why there is the following logic:

vec2 velocity = mix(u_wind_min, u_wind_max, texture2D(u_wind, v_particle_pos).rg);
float speed_t = length(velocity) / length(u_wind_max);

The position v_particle_pos from the vertex shader is based on the randomly generated color texture T2. As mentioned earlier, the theoretical range of its components is [0, 256 / 255], so it is not guaranteed that a corresponding point exists in the wind field image; the mix function is used to establish that correlation.

Why is the multiplier 16.0 when computing ramp_pos in the fragment shader?

This is the following logic:

vec2 ramp_pos = vec2(
    fract(16.0 * speed_t),
    floor(16.0 * speed_t) / 16.0
  );

The 16.0 here matches the width and height of the color texture T1 used for the final display. Presumably keeping the two consistent is what produces a uniform effect.
