Building a live streaming system: the capture functions a complete live streaming system needs

Posted by misfits on Wed, 19 Jan 2022 01:22:40 +0100

To build a complete live streaming system, we first need to capture the broadcaster's video and audio, then push them to a streaming media server. This article focuses on the capture step. The current code supports switching between the front and rear cameras and a tap-to-focus cursor. The live streaming system also ships with an independent beauty-filter SDK, so you can see a different you. Articles on the other features of live streaming will follow.
First, the steps for capturing audio and video in the live streaming system:

1. Create an AVCaptureSession object.
2. Obtain the AVCaptureDevice objects for the video device (camera) and the audio device (microphone). These objects do not input data themselves; they are only used to configure the hardware.
3. From each hardware device (AVCaptureDevice), create the corresponding input object (AVCaptureDeviceInput), which manages data input.
4. Create a video output object (AVCaptureVideoDataOutput) and set its sample buffer delegate (setSampleBufferDelegate:) to receive the captured video data.
5. Create an audio output object (AVCaptureAudioDataOutput) and set its sample buffer delegate (setSampleBufferDelegate:) to receive the captured audio data.
6. Add the input objects (AVCaptureDeviceInput) and output objects (AVCaptureOutput) to the session (AVCaptureSession); the session automatically connects the audio input to the audio output and the video input to the video output.
7. Create a preview layer (AVCaptureVideoPreviewLayer), attach it to the session, and add it to the view's layer hierarchy.
8. Start the AVCaptureSession; data only flows between inputs and outputs once the session is running.
// Capture audio and video
- (void)setupCaputureVideo
{
    // 1. Create the capture session. It must be strongly referenced, or it will be released
    AVCaptureSession *captureSession = [[AVCaptureSession alloc] init];
    _captureSession = captureSession;
    // 2. Obtain the camera device (the front camera here)
    AVCaptureDevice *videoDevice = [self getVideoDevice:AVCaptureDevicePositionFront];
    // 3. Obtain the microphone device
    AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    // 4. Create corresponding video device input object
    AVCaptureDeviceInput *videoDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:nil];
    _currentVideoDeviceInput = videoDeviceInput;
    // 5. Create the corresponding audio device input object
    AVCaptureDeviceInput *audioDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:nil];
    // 6. Add to session
    // Note: "it is best to judge whether input can be added, and the session cannot add empty input
    // 6.1 add video
    if ([captureSession canAddInput:videoDeviceInput]) {
        [captureSession addInput:videoDeviceInput];
    }
    // 6.2 adding audio
    if ([captureSession canAddInput:audioDeviceInput]) {
        [captureSession addInput:audioDeviceInput];
    }
    // 7. Create the video data output object
    AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    // 7.1 Set the sample buffer delegate to capture video sample data
    // Note: the queue must be a serial queue and must not be nil
    dispatch_queue_t videoQueue = dispatch_queue_create("Video Capture Queue", DISPATCH_QUEUE_SERIAL);
    [videoOutput setSampleBufferDelegate:self queue:videoQueue];
    if ([captureSession canAddOutput:videoOutput]) {
        [captureSession addOutput:videoOutput];
    }
    // 8. Create the audio data output object
    AVCaptureAudioDataOutput *audioOutput = [[AVCaptureAudioDataOutput alloc] init];
    // 8.1 Set the sample buffer delegate to capture audio sample data
    // Note: the queue must be a serial queue and must not be nil
    dispatch_queue_t audioQueue = dispatch_queue_create("Audio Capture Queue", DISPATCH_QUEUE_SERIAL);
    [audioOutput setSampleBufferDelegate:self queue:audioQueue];
    if ([captureSession canAddOutput:audioOutput]) {
        [captureSession addOutput:audioOutput];
    }
    // 9. Obtain the video output connection, used later to distinguish video data from audio data
    _videoConnection = [videoOutput connectionWithMediaType:AVMediaTypeVideo];
    // 10. Add video preview layer
    AVCaptureVideoPreviewLayer *previedLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
    previedLayer.frame = [UIScreen mainScreen].bounds;
    [self.view.layer insertSublayer:previedLayer atIndex:0];
    _previedLayer = previedLayer;
    // 11. Start session
    [captureSession startRunning];
}
// Get the camera device for the specified position
- (AVCaptureDevice *)getVideoDevice:(AVCaptureDevicePosition)position
{
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *device in devices) {
        if (device.position == position) {
            return device;
        }
    }
    return nil;
}
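
Note that `devicesWithMediaType:` has been deprecated since iOS 10. As an alternative sketch (assuming iOS 10 or later and the built-in wide-angle camera; the method name `getVideoDeviceModern:` is mine, not from the original), the same lookup can be done with AVCaptureDeviceDiscoverySession:

```objc
// Alternative camera lookup using the discovery-session API (iOS 10+).
// AVCaptureDeviceTypeBuiltInWideAngleCamera is the standard built-in camera type.
- (AVCaptureDevice *)getVideoDeviceModern:(AVCaptureDevicePosition)position
{
    AVCaptureDeviceDiscoverySession *discovery =
        [AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:@[AVCaptureDeviceTypeBuiltInWideAngleCamera]
                                                               mediaType:AVMediaTypeVideo
                                                                position:position];
    // Returns nil if no camera matches the requested position.
    return discovery.devices.firstObject;
}
```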
#pragma mark - AVCaptureVideoDataOutputSampleBufferDelegate
// Receives the captured data, which may be audio or video
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    if (_videoConnection == connection) {
        NSLog(@"Video data collected");
    } else {
        NSLog(@"Audio data collected");
    }
}

Second: additional capture feature 1 in the live streaming system, switching cameras.
Steps to switch cameras:
1. Obtain the current video device input object
2. Determine whether the current camera is front or rear
3. Determine the position to switch to
4. Obtain the camera device for that position
5. Create the corresponding camera input object
6. Remove the previous video input object from the session
7. Add the new video input object to the session

// Switch camera
- (IBAction)toggleCapture:(id)sender {
    // Get the current device position
    AVCaptureDevicePosition curPosition = _currentVideoDeviceInput.device.position;
    // Get the position to switch to
    AVCaptureDevicePosition togglePosition = curPosition == AVCaptureDevicePositionFront ? AVCaptureDevicePositionBack : AVCaptureDevicePositionFront;
    // Get changed camera device
    AVCaptureDevice *toggleDevice = [self getVideoDevice:togglePosition];
    // Get changed camera input device
    AVCaptureDeviceInput *toggleDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:toggleDevice error:nil];
    // Swap the inputs inside a configuration block so the change is applied atomically
    [_captureSession beginConfiguration];
    // Remove the previous camera input device
    [_captureSession removeInput:_currentVideoDeviceInput];
    // Add the new camera input device
    if ([_captureSession canAddInput:toggleDeviceInput]) {
        [_captureSession addInput:toggleDeviceInput];
        // Record the current camera input device
        _currentVideoDeviceInput = toggleDeviceInput;
    }
    [_captureSession commitConfiguration];
}

Additional capture feature 2: the focus cursor. Steps:
1. Tap the preview screen
2. Obtain the tapped point and convert it to a point in the camera's coordinate space; this conversion must go through the AVCaptureVideoPreviewLayer
3. Position the focus cursor image at the tapped point and animate it
4. Set the focus mode and exposure mode of the camera device (note: lockForConfiguration must be called first, otherwise an error is raised)

// Tap the screen to bring up the focus view
- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event
{
    // Get the tap location
    UITouch *touch = [touches anyObject];
    CGPoint point = [touch locationInView:self.view];
    // Convert the tapped position to a point in the camera's coordinate space
    CGPoint cameraPoint = [_previedLayer captureDevicePointOfInterestForPoint:point];
    // Set focus cursor position
    [self setFocusCursorWithPoint:point];
    // Set focus
    [self focusWithMode:AVCaptureFocusModeAutoFocus exposureMode:AVCaptureExposureModeAutoExpose atPoint:cameraPoint];
}
/**
 *  Set focus cursor position
 *
 *  @param point Cursor position
 */
- (void)setFocusCursorWithPoint:(CGPoint)point {
    self.focusCursorImageView.center = point;
    self.focusCursorImageView.transform = CGAffineTransformMakeScale(1.5, 1.5);
    self.focusCursorImageView.alpha = 1.0;
    [UIView animateWithDuration:1.0 animations:^{
        self.focusCursorImageView.transform = CGAffineTransformIdentity;
    } completion:^(BOOL finished) {
        self.focusCursorImageView.alpha = 0;
    }];
}
/**
 *  Set focus
 */
- (void)focusWithMode:(AVCaptureFocusMode)focusMode exposureMode:(AVCaptureExposureMode)exposureMode atPoint:(CGPoint)point {
    AVCaptureDevice *captureDevice = _currentVideoDeviceInput.device;
    // Lock the configuration (required before changing device settings)
    if (![captureDevice lockForConfiguration:nil]) {
        return;
    }
    // Set focus using the passed-in mode
    if ([captureDevice isFocusModeSupported:focusMode]) {
        [captureDevice setFocusMode:focusMode];
    }
    if ([captureDevice isFocusPointOfInterestSupported]) {
        [captureDevice setFocusPointOfInterest:point];
    }
    // Set exposure using the passed-in mode
    if ([captureDevice isExposureModeSupported:exposureMode]) {
        [captureDevice setExposureMode:exposureMode];
    }
    if ([captureDevice isExposurePointOfInterestSupported]) {
        [captureDevice setExposurePointOfInterest:point];
    }
    // Unlock the configuration
    [captureDevice unlockForConfiguration];
}

Finally: the AVFoundation basics used in the live streaming system.
AVFoundation: the framework required for audio and video capture.
AVCaptureDevice: a hardware device such as the microphone or camera. Through this object you can configure properties of the physical device (camera focus, white balance, etc.).
AVCaptureDeviceInput: a hardware input object. You create an AVCaptureDeviceInput from an AVCaptureDevice to manage the data coming from that device.
AVCaptureOutput: a hardware output object that receives the output data. The subclasses AVCaptureAudioDataOutput (audio data output) and AVCaptureVideoDataOutput (video data output) are usually used.
AVCaptureConnection: after an input and an output are added to an AVCaptureSession, the session establishes a connection between them; the connection object can be obtained from the AVCaptureOutput.
AVCaptureVideoPreviewLayer: the camera preview layer, which shows the capture in real time. Creating it requires the corresponding AVCaptureSession, because the session carries the video data that the layer displays.
AVCaptureSession: coordinates the transfer of data between inputs and outputs.
System role: it operates the hardware devices.
How it works: the live streaming system opens a capture session with the phone's system, which effectively connects the app to the hardware. We only need to add the input and output objects to the session; the session automatically connects them so that the hardware can transmit audio and video data.
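
One practical detail the steps above assume: the user must have granted camera and microphone access before the session can deliver data. As a hedged sketch (the helper name `requestCaptureAuthorization:` is mine, not from the original code), permission can be requested with AVFoundation's standard authorization API before calling the setup method:

```objc
#import <AVFoundation/AVFoundation.h>

// Hypothetical helper (not part of the original code): requests camera and
// microphone access, then reports the combined result on the main queue.
// The Info.plist must contain NSCameraUsageDescription and
// NSMicrophoneUsageDescription entries, or the app will crash on request.
- (void)requestCaptureAuthorization:(void (^)(BOOL granted))completion
{
    [AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL videoGranted) {
        [AVCaptureDevice requestAccessForMediaType:AVMediaTypeAudio completionHandler:^(BOOL audioGranted) {
            dispatch_async(dispatch_get_main_queue(), ^{
                completion(videoGranted && audioGranted);
            });
        }];
    }];
}
```

With this in place, capture setup would only run inside the completion block when `granted` is YES.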

Statement: this article was forwarded by cloudleopard technology from the weixin_34270865 blog; in case of infringement, please contact the author for deletion.

Topics: Front-end