Preface
I originally planned to cover MediaCodec, but while writing the demo I found that it has to be used together with other APIs, so it has been postponed. Let's start with MediaExtractor and MediaMuxer.
Both were briefly introduced in the first concepts chapter; let's quickly review them.
MediaExtractor
A multimedia extractor. It lets you operate on the audio and video streams of a media file separately, for example to extract a single track for further processing.
MediaMuxer
A multimedia muxer. It is the counterpart of MediaExtractor: instead of pulling tracks out of a file, it writes encoded audio and video tracks into a container file.
Main Content
Today's case is segmented (breakpoint) recording: record several clips, merge them into one file, and play the result. First you need to open the camera yourself and preview its image; that is routine work, so it is not covered here. We use the simplest approach, MediaRecorder, to record the clips. First, initialize the MediaRecorder:
mediaRecorder = MediaRecorder()
mediaRecorder!!.setCamera(camera)
mediaRecorder!!.setOrientationHint(90)
mediaRecorder!!.setAudioSource(MediaRecorder.AudioSource.MIC)
mediaRecorder!!.setVideoSource(MediaRecorder.VideoSource.CAMERA)
// If you set the formats yourself, pay close attention to the calling order of the APIs below
mediaRecorder!!.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4)
mediaRecorder!!.setAudioEncoder(MediaRecorder.AudioEncoder.AAC)
mediaRecorder!!.setVideoEncoder(MediaRecorder.VideoEncoder.H264)
// This cannot be combined with setOutputFormat
// mediaRecorder!!.setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_720P))
mediaRecorder!!.setOutputFile("${saveDir.absoluteFile}/record_${System.currentTimeMillis()}.mp4")
mediaRecorder!!.setPreviewDisplay(surface_view.holder.surface)
// Start recording
mediaRecorder!!.prepare()
mediaRecorder!!.start()
There's a pit here:
mediaRecorder!!.setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_720P)) cannot be combined with setOutputFormat.
This is most likely because setProfile() already calls setOutputFormat internally; if you also call setOutputFormat yourself, it ends up being called twice and an exception is thrown. Use one or the other.
In addition, pay close attention to the calling order of setXXXSource, setOutputFormat, and setXXXEncoder. If you run into an IllegalStateException, check the source-code comments immediately; in the vast majority of cases the calling order is wrong.
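The snippet above only starts the recording; to end a clip you also need to stop and release the recorder, which the post doesn't show. A minimal sketch, assuming the `mediaRecorder` property configured above (the `stopRecording` name is mine, not from the demo):

```kotlin
// Hypothetical helper: stop the current clip and free the recorder.
fun stopRecording() {
    try {
        // stop() may throw a RuntimeException if called too soon after start(),
        // in which case the output file contains no valid data
        mediaRecorder?.stop()
    } catch (e: RuntimeException) {
        // The clip is unusable; delete the partial output file here if needed
    } finally {
        mediaRecorder?.reset()   // return the recorder to its idle state
        mediaRecorder?.release() // free the native resources
        mediaRecorder = null
    }
}
```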
The page itself is very simple. I added a recording-time check: once enough time has been recorded, merging can start. First, let's record several clips with MediaRecorder, as follows:
I have already recorded three clips; now let's merge them into one video.
First, let's clarify the idea behind merging:
Merging essentially means reading the samples of each input file and writing them, in order, into the output file.
Create MediaMuxer:
// Create the MediaMuxer
val mediaMuxer = MediaMuxer(outPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4)
mediaMuxer.setOrientationHint(90)
The 90-degree hint is needed because the video captured by the camera is itself rotated 90 degrees; setting the hint to 90 restores the correct orientation of the output video.
A video file contains both audio and video content. We need to extract them separately and write them into the audio and video tracks of the output file, so first we need to know their formats:
// Find the audio and video formats of the input files
var findAudioFormat = false
var findVideoFormat = false
var audioFormat: MediaFormat? = null
var videoFormat: MediaFormat? = null
for (file in videoList) {
    val mediaExtractor = MediaExtractor()
    mediaExtractor.setDataSource(file.absolutePath)
    if (!findAudioFormat) {
        // Look for an audio format in the file
        audioFormat = findFormat(mediaExtractor, "audio/")
        findAudioFormat = audioFormat != null
    }
    if (!findVideoFormat) {
        // Look for a video format in the file
        videoFormat = findFormat(mediaExtractor, "video/")
        Log.e("lzp", videoFormat.toString())
        findVideoFormat = videoFormat != null
    }
    mediaExtractor.release()
    if (findAudioFormat && findVideoFormat) {
        break
    }
}

private fun findFormat(mediaExtractor: MediaExtractor, prefix: String): MediaFormat? {
    for (i in 0 until mediaExtractor.trackCount) {
        val format = mediaExtractor.getTrackFormat(i)
        val mime = format.getString(MediaFormat.KEY_MIME)
        if (mime?.startsWith(prefix) == true) {
            return format
        }
    }
    return null
}
The code above gives us the audio and video formats in the files; next we create tracks of these two formats in the output file:
var mediaMuxerAudioTrackIndex = 0
// Add an audio track of the found format to the output file
if (findAudioFormat) {
    mediaMuxerAudioTrackIndex = mediaMuxer.addTrack(audioFormat!!)
}
var mediaMuxerVideoTrackIndex = 0
// Add a video track of the found format to the output file
if (findVideoFormat) {
    mediaMuxerVideoTrackIndex = mediaMuxer.addTrack(videoFormat!!)
}
// Start muxing
mediaMuxer.start()
With mediaMuxer.addTrack(), we have created the audio and video tracks in the output file; now we write the contents of each input file into them in order.
Iterate over each file in the file list and find its audio and video tracks:
// Audio track of the file
val audioMediaExtractor = MediaExtractor()
audioMediaExtractor.setDataSource(file.absolutePath)
val audioTrackIndex = findTrackIndex(audioMediaExtractor, "audio/")
if (audioTrackIndex >= 0) {
    audioMediaExtractor.selectTrack(audioTrackIndex)
    hasAudio = true
}
// Video track of the file
val videoMediaExtractor = MediaExtractor()
videoMediaExtractor.setDataSource(file.absolutePath)
val videoTrackIndex = findTrackIndex(videoMediaExtractor, "video/")
if (videoTrackIndex >= 0) {
    videoMediaExtractor.selectTrack(videoTrackIndex)
    hasVideo = true
}
// If the file has neither audio nor video, skip it
if (!hasAudio && !hasVideo) {
    audioMediaExtractor.release()
    videoMediaExtractor.release()
    continue
}
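The snippet above calls a findTrackIndex helper that the post doesn't show. A minimal sketch, analogous to findFormat (the name and the convention of returning -1 for "not found" are assumptions based on the surrounding code):

```kotlin
// Hypothetical helper: return the index of the first track whose MIME type
// starts with `prefix` ("audio/" or "video/"), or -1 if no such track exists.
private fun findTrackIndex(mediaExtractor: MediaExtractor, prefix: String): Int {
    for (i in 0 until mediaExtractor.trackCount) {
        val format = mediaExtractor.getTrackFormat(i)
        val mime = format.getString(MediaFormat.KEY_MIME)
        if (mime?.startsWith(prefix) == true) {
            return i
        }
    }
    return -1
}
```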
This code is similar to the format-detection code above. The only difference is that it calls selectTrack(trackIndex) to select the specified track of the file; after that, reading data only returns samples from that track.
Next comes the usual read-and-write loop. Let's take audio as an example:
// Write the audio track
if (hasAudio) {
    var hasDone = false
    var lastPts = 0L
    while (!hasDone) {
        mReadBuffer.rewind()
        // Read one sample from the audio track
        val frameSize = audioMediaExtractor.readSampleData(mReadBuffer, 0)
        // All data has been read
        if (frameSize < 0) {
            hasDone = true
        } else {
            // MediaCodec.BufferInfo holds the metadata of the sample to be written
            val bufferInfo = MediaCodec.BufferInfo()
            bufferInfo.offset = 0
            bufferInfo.size = frameSize
            // Timestamps must be aligned, otherwise audio and video can get out of sync
            bufferInfo.presentationTimeUs = audioPts + audioMediaExtractor.sampleTime
            // Mark key frames
            if ((audioMediaExtractor.sampleFlags and MediaCodec.BUFFER_FLAG_KEY_FRAME) != 0) {
                bufferInfo.flags = MediaCodec.BUFFER_FLAG_KEY_FRAME
            }
            mReadBuffer.rewind()
            // Write the sample into the output file
            mediaMuxer.writeSampleData(mediaMuxerAudioTrackIndex, mReadBuffer, bufferInfo)
            // Advance to the next sample for the next read
            audioMediaExtractor.advance()
            // Remember the last timestamp for alignment
            if (audioMediaExtractor.sampleTime > 0) {
                lastPts = audioMediaExtractor.sampleTime
            }
        }
    }
    // Offset the next file's timestamps by the duration written so far
    audioPts += lastPts
    // Release resources when done
    audioMediaExtractor.release()
}
Writing the video track follows the same process, so I won't repeat it here.
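For completeness, the video loop mirrors the audio loop above almost line for line. A sketch under the same assumptions (a `videoPts` accumulator analogous to `audioPts` is assumed, not shown in the original):

```kotlin
// Sketch of the video-writing loop, mirroring the audio loop above.
if (hasVideo) {
    var hasDone = false
    var lastPts = 0L
    while (!hasDone) {
        mReadBuffer.rewind()
        // Read one sample from the video track
        val frameSize = videoMediaExtractor.readSampleData(mReadBuffer, 0)
        if (frameSize < 0) {
            hasDone = true
        } else {
            val bufferInfo = MediaCodec.BufferInfo()
            bufferInfo.offset = 0
            bufferInfo.size = frameSize
            // Offset timestamps by the total duration of previously written files
            bufferInfo.presentationTimeUs = videoPts + videoMediaExtractor.sampleTime
            if ((videoMediaExtractor.sampleFlags and MediaCodec.BUFFER_FLAG_KEY_FRAME) != 0) {
                bufferInfo.flags = MediaCodec.BUFFER_FLAG_KEY_FRAME
            }
            mReadBuffer.rewind()
            mediaMuxer.writeSampleData(mediaMuxerVideoTrackIndex, mReadBuffer, bufferInfo)
            videoMediaExtractor.advance()
            if (videoMediaExtractor.sampleTime > 0) {
                lastPts = videoMediaExtractor.sampleTime
            }
        }
    }
    videoPts += lastPts
    videoMediaExtractor.release()
}
```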
Finally, don't forget to release resources:
mediaMuxer.stop()
mediaMuxer.release()
Summary
This case is very simple, but through it we have learned the basic usage of MediaExtractor and MediaMuxer and the basic flow of merging video files, so our goal is achieved. For anything else, please refer to the demo.