The results are as follows:
1. Rounding the corners of the preview control
Set a ViewOutlineProvider for the control:
public RoundTextureView(Context context, AttributeSet attrs) {
    super(context, attrs);
    setOutlineProvider(new ViewOutlineProvider() {
        @Override
        public void getOutline(View view, Outline outline) {
            Rect rect = new Rect(0, 0, view.getMeasuredWidth(), view.getMeasuredHeight());
            // radius is a member field of this view, set via setRadius below
            outline.setRoundRect(rect, radius);
        }
    });
    setClipToOutline(true);
}
Modify the corner radius and refresh the outline as needed:
public void setRadius(int radius) {
    this.radius = radius;
}

public void turnRound() {
    // Re-evaluate the outline so the new radius takes effect
    invalidateOutline();
}
The displayed corner radius of the control is updated according to the value you set. When the control is square and the corner radius is half the side length, a circle is displayed.
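For example, here is a minimal usage sketch (the view instance name and the post-layout timing are illustrative, not from the demo) that turns a square RoundTextureView into a circle:

// Wait for layout so that getWidth() returns the final size, then round the view
roundTextureView.post(new Runnable() {
    @Override
    public void run() {
        // Half the side length turns the square view into a circle
        roundTextureView.setRadius(roundTextureView.getWidth() / 2);
        roundTextureView.turnRound();
    }
});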
2. Implement a square preview
1. Device supports 1:1 preview size
First, we describe a simple but more limited implementation: adjust both the camera preview size and the preview control to a 1:1 aspect ratio.
Android devices generally support multiple preview sizes. Take the Samsung Tab S3 as an example.
When using the Camera API, the following preview sizes are supported:
2019-08-02 13:16:08.669 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 1920x1080
2019-08-02 13:16:08.669 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 1280x720
2019-08-02 13:16:08.669 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 1440x1080
2019-08-02 13:16:08.669 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 1088x1088
2019-08-02 13:16:08.670 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 1056x864
2019-08-02 13:16:08.670 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 960x720
2019-08-02 13:16:08.670 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 720x480
2019-08-02 13:16:08.670 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 640x480
2019-08-02 13:16:08.670 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 352x288
2019-08-02 13:16:08.670 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 320x240
2019-08-02 13:16:08.670 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 176x144
Among these, the only 1:1 preview size is 1088x1088.
When using the Camera2 API, the supported preview sizes (which actually include the picture sizes) are as follows:
2019-08-02 13:19:24.980 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 4128x3096
2019-08-02 13:19:24.980 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 4128x2322
2019-08-02 13:19:24.980 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 3264x2448
2019-08-02 13:19:24.980 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 3264x1836
2019-08-02 13:19:24.980 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 3024x3024
2019-08-02 13:19:24.980 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2976x2976
2019-08-02 13:19:24.980 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2880x2160
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2592x1944
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2560x1920
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2560x1440
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2560x1080
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2160x2160
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2048x1536
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2048x1152
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 1936x1936
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 1920x1080
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 1440x1080
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 1280x960
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 1280x720
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 960x720
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 720x480
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 640x480
2019-08-02 13:19:24.982 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 320x240
2019-08-02 13:19:24.982 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 176x144
The 1:1 preview sizes are 3024x3024, 2976x2976, 2160x2160, and 1936x1936.
As long as we select a 1:1 preview size and set the preview control to a square, a square preview is achieved; a circular preview is then achieved by setting the corner radius of the preview control to half its side length. A selection sketch follows.
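As an illustration, the following minimal sketch (Camera API; the largest-square selection policy is our own assumption, not taken from the demo) picks a 1:1 preview size from the supported list:

// Pick the largest square preview size supported by the camera, if any
Camera.Size squareSize = null;
for (Camera.Size size : camera.getParameters().getSupportedPreviewSizes()) {
    if (size.width == size.height
            && (squareSize == null || size.width > squareSize.width)) {
        squareSize = size;
    }
}
if (squareSize != null) {
    Camera.Parameters parameters = camera.getParameters();
    parameters.setPreviewSize(squareSize.width, squareSize.height);
    camera.setParameters(parameters);
}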
2. Device does not support 1:1 preview size
Drawbacks of selecting a 1:1 preview size
Resolution limitations
As mentioned above, we can choose a 1:1 preview size, but this approach is limited: the range of candidate sizes is very small, and the scheme is impossible if the camera does not support a 1:1 preview size at all.
Resource consumption
Take the Samsung Tab S3 as an example: with the Camera2 API, the square preview sizes it supports are all large, which consumes more system resources for image processing and related operations.
Handling cases where a 1:1 preview size is not supported
Add a ViewGroup with a 1:1 aspect ratio
Put the TextureView into that ViewGroup
Set the TextureView's margins so that only the central square area is displayed
Schematic diagram
Sample Code
// Keep the preview control proportional to the preview size to avoid stretching
{
    FrameLayout.LayoutParams textureViewLayoutParams = (FrameLayout.LayoutParams) textureView.getLayoutParams();
    int newHeight = 0;
    int newWidth = textureViewLayoutParams.width;
    // Landscape
    if (displayOrientation % 180 == 0) {
        newHeight = textureViewLayoutParams.width * previewSize.height / previewSize.width;
    }
    // Portrait
    else {
        newHeight = textureViewLayoutParams.width * previewSize.width / previewSize.height;
    }
    // When the preview is not square, add a ViewGroup to restrict the visible area of the view
    if (newHeight != textureViewLayoutParams.height) {
        insertFrameLayout = new RoundFrameLayout(CoverByParentCameraActivity.this);
        int sideLength = Math.min(newWidth, newHeight);
        FrameLayout.LayoutParams layoutParams = new FrameLayout.LayoutParams(sideLength, sideLength);
        insertFrameLayout.setLayoutParams(layoutParams);
        FrameLayout parentView = (FrameLayout) textureView.getParent();
        parentView.removeView(textureView);
        parentView.addView(insertFrameLayout);
        insertFrameLayout.addView(textureView);
        FrameLayout.LayoutParams newTextureViewLayoutParams = new FrameLayout.LayoutParams(newWidth, newHeight);
        // Landscape: center horizontally inside the square parent
        if (displayOrientation % 180 == 0) {
            newTextureViewLayoutParams.leftMargin = (newHeight - newWidth) / 2;
        }
        // Portrait: center vertically inside the square parent
        else {
            newTextureViewLayoutParams.topMargin = -(newHeight - newWidth) / 2;
        }
        textureView.setLayoutParams(newTextureViewLayoutParams);
    }
}
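Note that the inserted parent here is a RoundFrameLayout, presumably a FrameLayout carrying the same ViewOutlineProvider rounding logic shown in section 1: the square parent clips the oversized TextureView, and can itself be rounded into a circle.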
3. Use GLSurfaceView for more customized previews
Square and circular previews are implemented with the above method, but it only works for the native camera. How do we achieve a circular preview when our data source is not the native camera? Next, we describe a scheme that displays NV21 data with a GLSurfaceView, performing all the drawing of the preview data ourselves.
1. GLSurfaceView usage process
OpenGL Rendering YUV Data Flow
The key part is the implementation of the Renderer, whose interface is described as follows:
/**
 * A generic renderer interface.
 *
 * The renderer is responsible for making OpenGL calls to render a frame.
 * GLSurfaceView clients typically create their own classes that implement
 * this interface, and then call {@link GLSurfaceView#setRenderer} to
 * register the renderer with the GLSurfaceView.
 *
 * Threading: the renderer is called on a separate thread, so that rendering
 * performance is decoupled from the UI thread. Clients typically need to
 * communicate with the renderer from the UI thread, using any of the
 * standard Java cross-thread techniques or the
 * {@link GLSurfaceView#queueEvent(Runnable)} convenience method.
 *
 * EGL Context Lost: there are situations where the EGL rendering context
 * will be lost, typically when the device wakes up after going to sleep.
 * When the EGL context is lost, all OpenGL resources (such as textures)
 * associated with that context are automatically deleted. To keep rendering
 * correctly, a renderer must recreate any lost resources that it still
 * needs; {@link #onSurfaceCreated(GL10, EGLConfig)} is a convenient place
 * to do this.
 */
public interface Renderer {
    /**
     * Called when the surface is created or recreated, i.e. when the
     * rendering thread starts and whenever the EGL context is lost.
     * A convenient place to create resources (such as textures) that need
     * to be recreated when the EGL context is lost. Lost resources are
     * deleted automatically; there is no need to call the corresponding
     * "glDelete" methods such as glDeleteTextures manually.
     *
     * @param gl     the GL interface; use instanceof to test for GL11 or higher
     * @param config the EGLConfig of the created surface
     */
    void onSurfaceCreated(GL10 gl, EGLConfig config);

    /**
     * Called after the surface is created and whenever the OpenGL ES
     * surface size changes. Typically you set your viewport here; for a
     * fixed camera you could also set the projection matrix:
     *
     *     void onSurfaceChanged(GL10 gl, int width, int height) {
     *         gl.glViewport(0, 0, width, height);
     *         // for a fixed camera, set the projection too
     *         float ratio = (float) width / height;
     *         gl.glMatrixMode(GL10.GL_PROJECTION);
     *         gl.glLoadIdentity();
     *         gl.glFrustumf(-ratio, ratio, -1, 1, 1, 10);
     *     }
     */
    void onSurfaceChanged(GL10 gl, int width, int height);

    /**
     * Called to draw the current frame; a typical implementation:
     *
     *     void onDrawFrame(GL10 gl) {
     *         gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
     *         //... other gl calls to render the scene ...
     *     }
     */
    void onDrawFrame(GL10 gl);
}
void onSurfaceCreated(GL10 gl, EGLConfig config)
Called when the Surface is created or recreated.
void onSurfaceChanged(GL10 gl, int width, int height)
Called when the Surface size changes.
void onDrawFrame(GL10 gl)
Drawing operations are implemented here. When renderMode is set to RENDERMODE_CONTINUOUSLY, this function executes continuously. When renderMode is set to RENDERMODE_WHEN_DIRTY, it executes only after creation completes and whenever requestRender is called. In general, we choose RENDERMODE_WHEN_DIRTY to avoid unnecessary redrawing.
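As a minimal setup sketch (glSurfaceView and NV21Renderer are illustrative names, not from the demo), wiring a custom Renderer into a GLSurfaceView with this render mode looks like:

// The shaders used below require OpenGL ES 2.0
glSurfaceView.setEGLContextClientVersion(2);
// setRenderer must be called before setRenderMode
glSurfaceView.setRenderer(new NV21Renderer());
// Draw only when requestRender() is called
glSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);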
Typically, we implement the Renderer ourselves and set it on the GLSurfaceView, which can be considered the core step of the process. The following is a flowchart of the initialization and drawing operations, that is, of onSurfaceCreated(GL10 gl, EGLConfig config) and onDrawFrame(GL10 gl):
Renderer rendering YUV data
2. Specific implementation
Introduction to coordinate systems
Android View coordinate system
OpenGL World Coordinate System
As shown in the figure, unlike the Android View coordinate system, OpenGL uses a Cartesian coordinate system.
The Android View coordinate system has its origin in the upper left corner, with x increasing to the right and y increasing downward.
The OpenGL coordinate system, on the other hand, has its origin at the center, with x increasing to the right and y increasing upward.
Shader Writing
/**
 * Vertex Shader
 */
private static String VERTEX_SHADER =
        "    attribute vec4 attr_position;\n" +
        "    attribute vec2 attr_tc;\n" +
        "    varying vec2 tc;\n" +
        "    void main() {\n" +
        "        gl_Position = attr_position;\n" +
        "        tc = attr_tc;\n" +
        "    }";

/**
 * Fragment Shader
 */
private static String FRAG_SHADER =
        "    varying vec2 tc;\n" +
        "    uniform sampler2D ySampler;\n" +
        "    uniform sampler2D uSampler;\n" +
        "    uniform sampler2D vSampler;\n" +
        // YUV-to-RGB conversion matrix, filled column-major; the zero entries
        // follow from the conversion formula derived below
        "    const mat3 convertMat = mat3(1.0, 1.0, 1.0, 0.0, -0.3441, 1.772, 1.402, -0.7141, 0.0);\n" +
        "    void main()\n" +
        "    {\n" +
        "        vec3 yuv;\n" +
        "        yuv.x = texture2D(ySampler, tc).r;\n" +
        "        yuv.y = texture2D(uSampler, tc).r - 0.5;\n" +
        "        yuv.z = texture2D(vSampler, tc).r - 0.5;\n" +
        "        gl_FragColor = vec4(convertMat * yuv, 1.0);\n" +
        "    }";
Interpretation of built-in variables
gl_Position
gl_Position in the VERTEX_SHADER code represents the vertex's position in the drawing space. Since we are drawing in two dimensions, we directly pass in the lower left (-1, -1), lower right (1, -1), upper left (-1, 1), and upper right (1, 1) corners of the OpenGL two-dimensional coordinate system, that is, {-1, -1, 1, -1, -1, 1, 1, 1}.
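For illustration, the vertex array implied here (the init() code later references it as GLUtil.SQUARE_VERTICES; the exact definition below is our reconstruction) would be:

// Vertex coordinates in OpenGL 2D space, two floats per vertex,
// ordered for GL_TRIANGLE_STRIP drawing (see onDrawFrame below)
static final float[] SQUARE_VERTICES = {
        -1.0f, -1.0f,   // lower left
         1.0f, -1.0f,   // lower right
        -1.0f,  1.0f,   // upper left
         1.0f,  1.0f    // upper right
};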
gl_FragColor
gl_FragColor in the FRAG_SHADER code represents the color of a single fragment.
Explanation of other variables
ySampler,uSampler,vSampler
They represent the Y, U, and V texture samplers, respectively.
convertMat
According to the following formula:
R = Y + 1.402 (V - 128)
G = Y - 0.34414 (U - 128) - 0.71414 (V - 128)
B = Y + 1.772 (U - 128)
we can derive the YUV-to-RGB conversion matrix:
1.0,  0,      1.403,
1.0, -0.344, -0.714,
1.0,  1.77,   0

(GLSL's mat3 constructor is filled column by column, so in the shader these constants are passed in the order 1.0, 1.0, 1.0, 0, -0.344, 1.77, 1.403, -0.714, 0.)
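Note that texture2D returns color components normalized to [0, 1], so the (U - 128) and (V - 128) offsets in the formula above become the subtraction of 0.5 in the fragment shader.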
Explanation of some types and functions
vec3,vec4
They represent three-dimensional and four-dimensional vectors, respectively.
vec4 texture2D(sampler2D sampler, vec2 coord)
Samples the texture bound to the sampler at the specified coordinates and returns the color value; for example:
texture2D(ySampler, tc).r gets the Y data.
texture2D(uSampler, tc).r gets the U data.
texture2D(vSampler, tc).r gets the V data.
Initialization in the Java code
Create the ByteBuffer texture data for Y, U, and V according to the image width and height;
Select the corresponding texture coordinates according to the mirroring flag and the rotation angle.
public void init(boolean isMirror, int rotateDegree, int frameWidth, int frameHeight) {
    if (this.frameWidth == frameWidth
            && this.frameHeight == frameHeight
            && this.rotateDegree == rotateDegree
            && this.isMirror == isMirror) {
        return;
    }
    dataInput = false;
    this.frameWidth = frameWidth;
    this.frameHeight = frameHeight;
    this.rotateDegree = rotateDegree;
    this.isMirror = isMirror;
    yArray = new byte[this.frameWidth * this.frameHeight];
    uArray = new byte[this.frameWidth * this.frameHeight / 4];
    vArray = new byte[this.frameWidth * this.frameHeight / 4];

    int yFrameSize = this.frameHeight * this.frameWidth;
    int uvFrameSize = yFrameSize >> 2;
    yBuf = ByteBuffer.allocateDirect(yFrameSize);
    yBuf.order(ByteOrder.nativeOrder()).position(0);

    uBuf = ByteBuffer.allocateDirect(uvFrameSize);
    uBuf.order(ByteOrder.nativeOrder()).position(0);

    vBuf = ByteBuffer.allocateDirect(uvFrameSize);
    vBuf.order(ByteOrder.nativeOrder()).position(0);

    // Vertex coordinates
    squareVertices = ByteBuffer
            .allocateDirect(GLUtil.SQUARE_VERTICES.length * FLOAT_SIZE_BYTES)
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer();
    squareVertices.put(GLUtil.SQUARE_VERTICES).position(0);

    // Texture coordinates, chosen according to mirroring and rotation
    if (isMirror) {
        switch (rotateDegree) {
            case 0:
                coordVertice = GLUtil.MIRROR_COORD_VERTICES;
                break;
            case 90:
                coordVertice = GLUtil.ROTATE_90_MIRROR_COORD_VERTICES;
                break;
            case 180:
                coordVertice = GLUtil.ROTATE_180_MIRROR_COORD_VERTICES;
                break;
            case 270:
                coordVertice = GLUtil.ROTATE_270_MIRROR_COORD_VERTICES;
                break;
            default:
                break;
        }
    } else {
        switch (rotateDegree) {
            case 0:
                coordVertice = GLUtil.COORD_VERTICES;
                break;
            case 90:
                coordVertice = GLUtil.ROTATE_90_COORD_VERTICES;
                break;
            case 180:
                coordVertice = GLUtil.ROTATE_180_COORD_VERTICES;
                break;
            case 270:
                coordVertice = GLUtil.ROTATE_270_COORD_VERTICES;
                break;
            default:
                break;
        }
    }
    coordVertices = ByteBuffer
            .allocateDirect(coordVertice.length * FLOAT_SIZE_BYTES)
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer();
    coordVertices.put(coordVertice).position(0);
}
Renderer initialization when Surface creation is complete
private void initRenderer() {
    rendererReady = false;
    createGLProgram();

    // Enable texturing
    GLES20.glEnable(GLES20.GL_TEXTURE_2D);

    // Create the Y, U, and V textures; U and V are half width and half height
    createTexture(frameWidth, frameHeight, GLES20.GL_LUMINANCE, yTexture);
    createTexture(frameWidth / 2, frameHeight / 2, GLES20.GL_LUMINANCE, uTexture);
    createTexture(frameWidth / 2, frameHeight / 2, GLES20.GL_LUMINANCE, vTexture);

    rendererReady = true;
}
Here createGLProgram creates the OpenGL program and binds the variables in the shader code:
private void createGLProgram() {
    int programHandleMain = GLUtil.createShaderProgram();
    if (programHandleMain != -1) {
        // Use the shader program
        GLES20.glUseProgram(programHandleMain);
        // Get the vertex shader variables
        int glPosition = GLES20.glGetAttribLocation(programHandleMain, "attr_position");
        int textureCoord = GLES20.glGetAttribLocation(programHandleMain, "attr_tc");
        // Get the fragment shader variables
        int ySampler = GLES20.glGetUniformLocation(programHandleMain, "ySampler");
        int uSampler = GLES20.glGetUniformLocation(programHandleMain, "uSampler");
        int vSampler = GLES20.glGetUniformLocation(programHandleMain, "vSampler");

        /**
         * Assign values to the variables:
         * GLES20.GL_TEXTURE0 is bound to ySampler
         * GLES20.GL_TEXTURE1 is bound to uSampler
         * GLES20.GL_TEXTURE2 is bound to vSampler
         *
         * That is, the second parameter of glUniform1i is the texture unit index
         */
        GLES20.glUniform1i(ySampler, 0);
        GLES20.glUniform1i(uSampler, 1);
        GLES20.glUniform1i(vSampler, 2);

        GLES20.glEnableVertexAttribArray(glPosition);
        GLES20.glEnableVertexAttribArray(textureCoord);

        // Set the vertex shader data
        squareVertices.position(0);
        GLES20.glVertexAttribPointer(glPosition, GLUtil.COUNT_PER_SQUARE_VERTICE, GLES20.GL_FLOAT, false, 8, squareVertices);
        coordVertices.position(0);
        GLES20.glVertexAttribPointer(textureCoord, GLUtil.COUNT_PER_COORD_VERTICES, GLES20.GL_FLOAT, false, 8, coordVertices);
    }
}
Here createTexture creates a texture with the given width, height, and format:
private void createTexture(int width, int height, int format, int[] textureId) {
    // Create the texture
    GLES20.glGenTextures(1, textureId, 0);
    // Bind the texture
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId[0]);
    /**
     * {@link GLES20#GL_TEXTURE_WRAP_S} is the texture wrapping mode in the horizontal direction
     * {@link GLES20#GL_TEXTURE_WRAP_T} is the texture wrapping mode in the vertical direction
     *
     * {@link GLES20#GL_REPEAT}: repeat
     * {@link GLES20#GL_MIRRORED_REPEAT}: mirrored repeat
     * {@link GLES20#GL_CLAMP_TO_EDGE}: clamp to the edge, ignoring anything beyond the border
     *
     * For example, when we use {@link GLES20#GL_REPEAT}:
     *
     *     squareVertices      coordVertices
     *     -1.0f, -1.0f,       1.0f, 1.0f,
     *      1.0f, -1.0f,       1.0f, 0.0f,    -> same as the TextureView preview
     *     -1.0f,  1.0f,       0.0f, 1.0f,
     *      1.0f,  1.0f        0.0f, 0.0f
     *
     *     squareVertices      coordVertices
     *     -1.0f, -1.0f,       2.0f, 2.0f,
     *      1.0f, -1.0f,       2.0f, 0.0f,    -> compared to the TextureView preview, split into four identical previews (lower left, lower right, upper left, upper right)
     *     -1.0f,  1.0f,       0.0f, 2.0f,
     *      1.0f,  1.0f        0.0f, 0.0f
     */
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_REPEAT);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_REPEAT);
    /**
     * {@link GLES20#GL_TEXTURE_MIN_FILTER} applies when the displayed texture is smaller than the loaded one
     * {@link GLES20#GL_TEXTURE_MAG_FILTER} applies when the displayed texture is larger than the loaded one
     *
     * {@link GLES20#GL_NEAREST}: use the color of the nearest texel as the pixel color to draw
     * {@link GLES20#GL_LINEAR}: take the nearest few texel colors and compute a weighted average as the pixel color to draw
     */
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, format, width, height, 0, format, GLES20.GL_UNSIGNED_BYTE, null);
}
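Note that glTexImage2D is called with a null buffer here only to allocate the texture storage; the actual frame data is uploaded for every frame via glTexSubImage2D in onDrawFrame, shown later.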
Invoking drawing in the Java code
Crop the frame data and pass it in when the data source produces a frame:
@Override
public void onPreview(final byte[] nv21, Camera camera) {
    // Crop the specified image area
    ImageUtil.cropNV21(nv21, this.squareNV21, previewSize.width, previewSize.height, cropRect);
    // Refresh the GLSurfaceView
    roundCameraGLSurfaceView.refreshFrameNV21(this.squareNV21);
}
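As a minimal sketch (our own assumption, not from the demo) of how the central square crop area and the squareNV21 buffer used above could be prepared:

// Compute a centered square crop area for the preview frame.
// Assumes a Rect-based cropNV21 overload that unpacks left/top/right/bottom.
int sideLength = Math.min(previewSize.width, previewSize.height);
int left = ((previewSize.width - sideLength) / 2) & ~1;  // keep even for NV21's 2x2 UV sampling
int top = ((previewSize.height - sideLength) / 2) & ~1;
cropRect = new Rect(left, top, left + sideLength, top + sideLength);
squareNV21 = new byte[sideLength * sideLength * 3 / 2];  // NV21 uses 1.5 bytes per pixel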
NV21 data cropping code:
/**
 * Crop NV21 data
 *
 * @param originNV21 original NV21 data
 * @param cropNV21   cropped NV21 result; its memory must be pre-allocated
 * @param width      width of the original data
 * @param height     height of the original data
 * @param left       left boundary of the cropped area in the original data
 * @param top        top boundary of the cropped area in the original data
 * @param right      right boundary of the cropped area in the original data
 * @param bottom     bottom boundary of the cropped area in the original data
 */
public static void cropNV21(byte[] originNV21, byte[] cropNV21, int width, int height,
                            int left, int top, int right, int bottom) {
    int halfWidth = width / 2;
    int cropImageWidth = right - left;
    int cropImageHeight = bottom - top;

    // Start of the Y data (top-left of the cropped area) in the original array
    int originalYLineStart = top * width;
    int targetYIndex = 0;

    // Start of the UV data (top-left of the cropped area) in the original array
    int originalUVLineStart = width * height + top * halfWidth;

    // Start index of the UV data in the target array
    int targetUVIndex = cropImageWidth * cropImageHeight;

    for (int i = top; i < bottom; i++) {
        System.arraycopy(originNV21, originalYLineStart + left, cropNV21, targetYIndex, cropImageWidth);
        originalYLineStart += width;
        targetYIndex += cropImageWidth;
        // NV21 is 4:2:0: one row of interleaved VU data for every two rows of Y data
        if ((i & 1) == 0) {
            System.arraycopy(originNV21, originalUVLineStart + left, cropNV21, targetUVIndex, cropImageWidth);
            originalUVLineStart += width;
            targetUVIndex += cropImageWidth;
        }
    }
}
Pass the data to the GLSurfaceView and refresh the frame
/**
 * Pass in NV21 data and refresh the frame
 *
 * @param data NV21 data
 */
public void refreshFrameNV21(byte[] data) {
    if (rendererReady) {
        yBuf.clear();
        uBuf.clear();
        vBuf.clear();
        putNV21(data, frameWidth, frameHeight);
        dataInput = true;
        requestRender();
    }
}
Here putNV21 separates the Y, U, and V components from the NV21 data:
/**
 * Extract the Y, U, and V components of NV21 data
 *
 * @param src    NV21 frame data
 * @param width  width
 * @param height height
 */
private void putNV21(byte[] src, int width, int height) {
    int ySize = width * height;
    int frameSize = ySize * 3 / 2;

    // Extract the Y component
    System.arraycopy(src, 0, yArray, 0, ySize);

    int k = 0;
    // Extract the U and V components (NV21 stores them interleaved as VUVU...)
    int index = ySize;
    while (index < frameSize) {
        vArray[k] = src[index++];
        uArray[k++] = src[index++];
    }
    yBuf.put(yArray).position(0);
    uBuf.put(uArray).position(0);
    vBuf.put(vArray).position(0);
}
After requestRender is called, onDrawFrame is invoked as a callback; there, the data of the three textures is bound and the frame is drawn.
@Override
public void onDrawFrame(GL10 gl) {
    // Activate, bind, and upload data for each texture
    if (dataInput) {
        // Y
        GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, yTexture[0]);
        GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D,
                0, 0, 0,
                frameWidth, frameHeight,
                GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE,
                yBuf);
        // U
        GLES20.glActiveTexture(GLES20.GL_TEXTURE1);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, uTexture[0]);
        GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D,
                0, 0, 0,
                frameWidth >> 1, frameHeight >> 1,
                GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE,
                uBuf);
        // V
        GLES20.glActiveTexture(GLES20.GL_TEXTURE2);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, vTexture[0]);
        GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D,
                0, 0, 0,
                frameWidth >> 1, frameHeight >> 1,
                GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE,
                vBuf);
        // Draw once all texture data is bound
        GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
    }
}
This completes the drawing.
4. Add a border
Sometimes we need more than a circular preview; we may also want to add a border around the camera preview.
Border Effect
In the same way as before, we dynamically modify the border values and trigger a redraw.
The code of the custom border View is as follows:
@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    if (paint == null) {
        paint = new Paint();
        paint.setStyle(Paint.Style.STROKE);
        paint.setAntiAlias(true);
        SweepGradient sweepGradient = new SweepGradient(((float) getWidth() / 2), ((float) getHeight() / 2),
                new int[]{Color.GREEN, Color.CYAN, Color.BLUE, Color.CYAN, Color.GREEN}, null);
        paint.setShader(sweepGradient);
    }
    drawBorder(canvas, 6);
}

private void drawBorder(Canvas canvas, int rectThickness) {
    if (canvas == null) {
        return;
    }
    paint.setStrokeWidth(rectThickness);
    Path drawPath = new Path();
    drawPath.addRoundRect(new RectF(0, 0, getWidth(), getHeight()), radius, radius, Path.Direction.CW);
    canvas.drawPath(drawPath, paint);
}

public void turnRound() {
    invalidate();
}

public void setRadius(int radius) {
    this.radius = radius;
}
5. Full Demo Code:
https://github.com/wangshengy...
Use the Camera API and the Camera2 API and select the closest square preview size
Use the Camera API and dynamically add a parent control to achieve a square preview
Use the Camera API to obtain preview data and display it with OpenGL. Finally, we recommend a useful free offline face recognition SDK for Android that combines well with the technique in this article: https://ai.arcsoft.com.cn/thi...