Simple Implementation of Shadowmap

Posted by shadiadiph on Tue, 23 Jul 2019 15:43:45 +0200

I hadn't implemented shadows myself before, though I understood them conceptually. This time I worked through it with a demo.

Generally speaking there isn't much to optimize here, but for things like windows, which can be replaced with flat quads, the shadow can apparently be baked down into a texture; the earlier ARM chess-room demo did something like that.

 

Back to the shadowmap. The main idea is to recover world-space positions from a depth map: a camera placed at the light source renders a depth map of the scene, from which the world position of every pixel visible from the light can be reconstructed.

If the distance between the two world positions (the pixel's own world position and the one stored in the light's map at the corresponding location) is within a small error threshold, both the camera and the light can see that point, so it is not in shadow; otherwise it lies in the shadow region.

Of course, there are plenty of more efficient ways to do this. The remaining problem, mapping a pixel seen by camera A into camera B's view, can be solved with a projection transform.

Camera.main.WorldToViewportPoint converts a world position into viewport coordinates in the 0-1 range, which can be used directly as texture-sampling UVs.
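To make the idea concrete before the actual GPU implementation below, here is a rough CPU-side sketch of the per-point test. It is only an illustration under assumptions: lightCamera is the camera sitting at the light, sampleLightWorldPos stands in for reading the light's world-position texture at a 0-1 UV, and the epsilon value is arbitrary.

using UnityEngine;

public static class ShadowTestSketch
{
    const float Eps = 0.01f; // assumed distance tolerance, same idea as the shader's EPS below

    // Conceptual CPU-side version of the shadow test. sampleLightWorldPos is a stand-in
    // for reading the light camera's world-position texture at a given 0-1 UV.
    public static bool IsInShadow(Camera lightCamera, Vector3 worldPos,
                                  System.Func<Vector2, Vector4> sampleLightWorldPos)
    {
        // Same conversion the shader does: world position -> light camera viewport (0-1 range).
        Vector3 viewport = lightCamera.WorldToViewportPoint(worldPos);
        if (viewport.x < 0f || viewport.x > 1f || viewport.y < 0f || viewport.y > 1f || viewport.z < 0f)
            return false; // outside the light camera's view, treat as lit

        Vector4 lightWorldPos = sampleLightWorldPos(new Vector2(viewport.x, viewport.y));
        if (lightWorldPos.w <= 0f)
            return false; // nothing was rendered there

        // If the light sees a different surface at this UV, the point is occluded, i.e. shadowed.
        return Vector3.Distance(worldPos, (Vector3)lightWorldPos) > Eps;
    }
}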

 

It's fairly simple and direct, and I borrowed some of the steps from material found online. Let's go through the concrete steps.

 

1. Attach a camera to a child node of the light source and set it to orthographic projection; this light camera must be able to see the models that should cast shadows. Then attach a script that renders the camera's depth to the light camera. Here I'm being lazy and just use OnRenderImage directly.

using UnityEngine;

// Attached to the light-source camera: requests a depth texture and blits it
// through the depth-to-world material (the ShadowMap/DepthRender shader below).
public class LightShadowMapFilter : MonoBehaviour
{
    public Material mat; // material using the ShadowMap/DepthRender shader

    void Awake()
    {
        // Ask Unity to render a depth texture for this camera.
        GetComponent<Camera>().depthTextureMode |= DepthTextureMode.Depth;
    }

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        // Run the depth texture through the material and write the result to the camera's target.
        Graphics.Blit(source, destination, mat);
    }
}
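For completeness, the same setup can also be done from code. The sketch below is only an illustration under assumptions (the component and field names are mine): it creates the child camera under the light, makes it orthographic as described above, points it at the render target from step 3, and attaches LightShadowMapFilter.

using UnityEngine;

public class LightCameraSetupSketch : MonoBehaviour
{
    public Light sourceLight;          // the light that should cast shadows
    public Material depthRenderMat;    // material using the ShadowMap/DepthRender shader from step 2
    public RenderTexture lightTexture; // render target for the light camera (see step 3)

    void Start()
    {
        // Child camera under the light, looking along the light's direction.
        var go = new GameObject("LightDepthCamera");
        go.transform.SetParent(sourceLight.transform, false);

        var cam = go.AddComponent<Camera>();
        cam.orthographic = true;          // as described in step 1
        cam.targetTexture = lightTexture; // see step 3

        // Attach the depth filter and give it the depth-to-world material.
        var filter = go.AddComponent<LightShadowMapFilter>();
        filter.mat = depthRenderMat;
    }
}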

 

2. This script needs a Material parameter. The material's shader would normally just output the depth, but to make things convenient here it directly outputs the world position instead, as follows:

Shader "ShadowMap/DepthRender"//Rendering depth
{
    Properties
    {
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        LOD 100
        ZTest Always
        Cull Off
        ZWrite Off

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            struct v2f
            {
                float4 vertex : SV_POSITION;
                float2 uv : TEXCOORD0;
            };

            sampler2D _CameraDepthTexture; // depth texture rendered by the light-source camera

            #define NONE_ITEM_EPS 0.99 // pixels where nothing was rendered end up at the far plane, so mask them out with this depth threshold

            v2f vert(appdata_img v)
            {
                v2f o = (v2f)0;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = v.texcoord.xy;

                return o;
            }

            float4 GetWorldPositionFromDepthValue(float2 uv, float linearDepth) // reconstruct the world-space position from linear depth
            {
                float camPosZ = _ProjectionParams.y + (_ProjectionParams.z - _ProjectionParams.y) * linearDepth;

                float height = 2 * camPosZ / unity_CameraProjection._m11;
                float width = _ScreenParams.x / _ScreenParams.y * height;

                float camPosX = width * uv.x - width / 2;
                float camPosY = height * uv.y - height / 2;
                float4 camPos = float4(camPosX, camPosY, camPosZ, 1.0);
                return mul(unity_CameraToWorld, camPos);
            }

            fixed4 frag (v2f i) : SV_Target
            {
                float rawDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
                float linearDepth = Linear01Depth(rawDepth);
                if (linearDepth > NONE_ITEM_EPS) return 0; // nothing was rendered here, return 0
                return fixed4(GetWorldPositionFromDepthValue(i.uv, linearDepth).xyz, 1); // otherwise return the world position, with alpha = 1 marking a valid pixel
            }
            ENDCG
        }
    }
}
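As a sanity check on the reconstruction (this is my own sketch, not part of the original steps), the math of GetWorldPositionFromDepthValue can be mirrored on the CPU and compared against Camera.ViewportToWorldPoint. The mirror below assumes a perspective light camera, since the formula scales width and height with depth via the projection matrix's [1,1] term.

using UnityEngine;

public static class DepthReconstructionCheck
{
    // Mirrors the shader: uv is a 0-1 viewport coordinate, linearDepth is 0-1 between near and far.
    public static Vector3 WorldFromDepth(Camera cam, Vector2 uv, float linearDepth)
    {
        float camPosZ = cam.nearClipPlane + (cam.farClipPlane - cam.nearClipPlane) * linearDepth;

        // unity_CameraProjection._m11 in the shader corresponds to projectionMatrix[1, 1] here,
        // and _ScreenParams.x / _ScreenParams.y corresponds to the camera's aspect ratio.
        float height = 2f * camPosZ / cam.projectionMatrix[1, 1];
        float width = cam.aspect * height;

        Vector3 camPos = new Vector3(width * uv.x - width * 0.5f,
                                     height * uv.y - height * 0.5f,
                                     camPosZ);

        // unity_CameraToWorld uses +z forward, which matches the camera's Transform here.
        return cam.transform.TransformPoint(camPos);
    }

    public static void Check(Camera cam, Vector2 uv, float linearDepth)
    {
        float camPosZ = cam.nearClipPlane + (cam.farClipPlane - cam.nearClipPlane) * linearDepth;
        Vector3 reconstructed = WorldFromDepth(cam, uv, linearDepth);
        Vector3 reference = cam.ViewportToWorldPoint(new Vector3(uv.x, uv.y, camPosZ));
        Debug.Log($"reconstructed {reconstructed}  vs  ViewportToWorldPoint {reference}");
    }
}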

 

3. Assign a RenderTexture to the light camera as its target texture. Create it directly as an asset in the Project window and set its resolution to 1024. With that, the light-source side is done.
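If you would rather create the target from code than as a Project asset, a sketch like the one below should work (my assumption, not part of the original steps). One caveat worth flagging: because the texture stores world positions rather than colors, a floating-point color format such as ARGBFloat is presumably needed so values outside the 0-1 range are not clamped.

using UnityEngine;

public class LightRenderTextureSketch : MonoBehaviour
{
    public Camera depthCamera; // the light camera from step 1

    void Start()
    {
        // 1024x1024 target with a 24-bit depth buffer; float color format so world positions survive.
        var rt = new RenderTexture(1024, 1024, 24, RenderTextureFormat.ARGBFloat);
        rt.Create();
        depthCamera.targetTexture = rt;
    }
}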

 

4. Next comes the logic on the main camera side. Based on the official forum thread below, an equivalent of Camera.WorldToViewportPoint is implemented here:

https://forum.unity.com/threads/camera-worldtoviewportpoint-math.644383/

Because folding everything into a single matrix is not very intuitive, the final step of converting clip-space coordinates to NDC and then to viewport coordinates is done in the shader.

The first two steps from the forum post, providing the light camera's depth (world-position) texture and its view-projection matrix, are handled by the following script:

using UnityEngine;

// Pushes the light camera's world-position texture and its view-projection matrix
// to all shaders once per frame.
public class ShadowMapArgumentUpdate : MonoBehaviour
{
    public RenderTexture lightWorldPosTexture; // render target of the light/depth camera
    public Camera depthCamera;                 // the camera mounted under the light source

    void Update()
    {
        var viewMatrix = depthCamera.worldToCameraMatrix;
        // GL.GetGPUProjectionMatrix converts the projection to the platform's GPU convention (OpenGL vs DX).
        var viewProjMatrix = GL.GetGPUProjectionMatrix(depthCamera.projectionMatrix, false) * viewMatrix;

        Shader.SetGlobalTexture("_LightWorldPosTex", lightWorldPosTexture);
        Shader.SetGlobalMatrix("_LightProjMatrix", viewProjMatrix);
    }
}
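To convince yourself that this matrix path really matches Camera.WorldToViewportPoint, a quick check like the sketch below (names are mine) can be dropped onto any GameObject. Note that for a CPU-side comparison the raw projectionMatrix is the one to use; GL.GetGPUProjectionMatrix is only needed for the matrix handed to the GPU.

using UnityEngine;

public class WorldToViewportCheck : MonoBehaviour
{
    public Camera cam;          // e.g. the light/depth camera
    public Transform testPoint; // any point in front of that camera

    void Update()
    {
        // Clip space = projection * view * world position (the "VP" part of the matrix chain).
        Matrix4x4 vp = cam.projectionMatrix * cam.worldToCameraMatrix;
        Vector4 clip = vp * new Vector4(testPoint.position.x, testPoint.position.y, testPoint.position.z, 1f);

        // Perspective divide to NDC, then remap -1..1 to the 0..1 viewport range.
        Vector2 manual = new Vector2(clip.x / clip.w * 0.5f + 0.5f,
                                     clip.y / clip.w * 0.5f + 0.5f);

        Vector3 expected = cam.WorldToViewportPoint(testPoint.position);
        Debug.Log($"manual xy: {manual}  WorldToViewportPoint: {expected}");
    }
}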

Note that the projection matrix must be run through GL.GetGPUProjectionMatrix to avoid inconsistencies between the OpenGL and DirectX conventions. The script can be attached to the main camera or to any other GameObject.

The RenderTexture and depth camera it references are the ones created earlier.

 

5. Finally, the shader on the objects that receive the shadowmap. For each pixel it compares the distance between the two world positions.

Shader "ShadowMap/ShadowMapProcess"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        LOD 100

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #pragma multi_compile_fog

            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float4 vertex : SV_POSITION;
                float2 uv : TEXCOORD0;
                float3 worldPos : TEXCOORD1;
            };

            sampler2D _LightWorldPosTex;
            sampler2D _MainTex;
            matrix _LightProjMatrix;
            float4 _MainTex_ST;

            //https://forum.unity.com/threads/camera-worldtoviewportpoint-math.644383/
            float3 Proj2ViewportPosition(float4 pos)
            {
                float3 ndcPosition = float3(pos.x / pos.w, pos.y / pos.w, pos.z / pos.w);
                float3 viewportPosition = float3(ndcPosition.x*0.5 + 0.5, ndcPosition.y*0.5 + 0.5, -ndcPosition.z);

                return viewportPosition;
            }
            
            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = TRANSFORM_TEX(v.uv, _MainTex);
                o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;//World coordinates

                return o;
            }

#define EPS 0.01 // allowed distance error between the two world positions

            fixed4 frag (v2f i) : SV_Target
            {
                float4 col = tex2D(_MainTex, i.uv);

                // Second half of the Camera.main.WorldToViewportPoint conversion:
                // project this fragment's world position into the light camera's viewport.
                float3 lightViewportPosition = Proj2ViewportPosition(mul(_LightProjMatrix, float4(i.worldPos, 1)));

                // Viewport coordinates are already in the 0-1 range, so they can be used directly as sampling UVs.
                float4 lightWorldPos = tex2D(_LightWorldPosTex, lightViewportPosition.xy);

                // If the light map holds a valid surface here whose position differs from this fragment's
                // by more than EPS, the fragment is occluded from the light, so darken it. Small artifacts
                // are unavoidable with such a simple test; this is for learning purposes only.
                if (lightWorldPos.a > 0 && distance(i.worldPos, lightWorldPos.xyz) > EPS) return col * 0.2;

                return col; // not in shadow, return the original color
            }
            ENDCG
        }
    }
}

 

6. The results are as follows.
