Amazon AWS Kinesis Video Streams with WebRTC demo example



Author: Hackett

WeChat official account: overtime apes

The following step-by-step instructions describe how to download, build, and run the Kinesis Video Streams with WebRTC development kit and its corresponding examples.

1. Download the Kinesis Video Streams with WebRTC development kit in C

Run the following command:

 git clone --recursive https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-c.git

2. Compile and build Kinesis Video Streams with WebRTC

Complete the following steps:

  1. Install cmake and its dependencies:

    • On macOS, run: brew install cmake pkg-config srtp
    • On Ubuntu, run: sudo apt-get install pkg-config cmake libcap2 libcap-dev
  2. Get the access key and secret key of the AWS account you want to use for this demo.
  3. Run the following command to create a build directory in the webrtc development kit you downloaded and execute cmake from it:

    mkdir -p amazon-kinesis-video-streams-webrtc-sdk-c/build
    cd amazon-kinesis-video-streams-webrtc-sdk-c/build
    cmake ..     # cmake may fail here if it cannot download its dependencies; if so, configure a network proxy (VPN) and retry
  4. From the build directory you just created in the step above, run make to build the WebRTC development kit and the provided samples.

    Note:

    kvsWebrtcClientMasterGstSample will not be built if GStreamer is not installed on the system. To ensure that it is built (on macOS), run: brew install gstreamer gst-plugins-base gst-plugins-good

3. Example of running the WebRTC development kit in C

After completing step 2, the following demo applications will be generated in the build directory:

  • kvsWebrtcClientMaster - this application sends sample H264/Opus frames over a signaling channel (paths: /samples/h264SampleFrames and /samples/opusSampleFrames). It also accepts incoming audio, if enabled in the browser; when that option is checked in the browser, it prints the metadata of received audio packets in the terminal. A minimal C sketch of the frame-sending call is shown after this list.
  • kvsWebrtcClientViewer - this application accepts sample H264/Opus frames and prints them out.
  • kvsWebrtcClientMasterGstSample - this application sends sample H264/Opus frames from a GStreamer pipeline.
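
For reference, here is a minimal sketch (not the full sample code) of how a master application pushes one encoded H264 frame to a video transceiver with writeFrame. The helper name sendOneVideoFrame, and the assumption that pVideoTransceiver was obtained from addTransceiver() during connection setup, are illustrative and not part of the SDK samples:

    #include <com/amazonaws/kinesis/video/webrtcclient/Include.h>

    // Illustrative helper: push one encoded H264 frame to the connected viewer(s).
    // pVideoTransceiver is assumed to come from addTransceiver() during setup;
    // frameBuf/frameLen hold one frame read from /samples/h264SampleFrames.
    STATUS sendOneVideoFrame(PRtcRtpTransceiver pVideoTransceiver, PBYTE frameBuf, UINT32 frameLen, UINT64 presentationTs)
    {
        Frame frame;
        MEMSET(&frame, 0x00, SIZEOF(Frame));
        frame.frameData = frameBuf;            // encoded H264 data
        frame.size = frameLen;                 // frame size in bytes
        frame.presentationTs = presentationTs; // timestamp in 100 ns units, monotonically increasing
        // writeFrame() packetizes the frame into RTP and sends it over the peer connection.
        return writeFrame(pVideoTransceiver, &frame);
    }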

To run these demo applications, complete the following steps:

  1. Set up your environment with your AWS account credentials, obtained from your AWS account (a C sketch of how a sample can read these variables follows this list):

    export AWS_ACCESS_KEY_ID=<Your AWS account access key>
    export AWS_SECRET_ACCESS_KEY=<Your AWS account secret key>
    export AWS_KVS_CACERT_PATH=<Full path to your cert.pem file; it is typically in the certs directory, e.g. Kinesis-video-webrtc-native-build/certs/cert.pem>
  2. Run either application, passing the name you want to give the signaling channel as an argument. The application creates a signaling channel with that name. For example, to create a signaling channel named myChannel and start sending sample H264/Opus frames through it, run the following command:

    ./kvsWebrtcClientMaster myChannel

    When the command line application prints Connection established, you can continue to the next step.

  3. Now that your signaling channel has been created and the connected master is streaming media to it, you can view the stream. For example, you can view it in a web application: open the Kinesis Video Streams with WebRTC test page and set the following values, using the same AWS credentials and the same signaling channel you specified for the master above:

    • Access key ID
    • Secret access key
    • Signaling channel name
    • Client ID (optional)

    Select Start viewer to start real-time video streaming of sample H264/Opus frames.
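
As a reference for step 1 above, here is a minimal C sketch of how a sample application could pick up the exported credentials. The helper name loadCredentialsFromEnv is illustrative; createStaticCredentialProvider() is the helper the SDK samples use, but treat the exact call as an assumption:

    #include <stdlib.h>
    #include <com/amazonaws/kinesis/video/webrtcclient/Include.h>

    // Illustrative helper: build a credential provider from the environment
    // variables exported in step 1. Returns a non-success STATUS if they are missing.
    STATUS loadCredentialsFromEnv(PAwsCredentialProvider* ppCredentialProvider)
    {
        PCHAR accessKey = getenv("AWS_ACCESS_KEY_ID");
        PCHAR secretKey = getenv("AWS_SECRET_ACCESS_KEY");
        PCHAR sessionToken = getenv("AWS_SESSION_TOKEN"); // optional, may be NULL

        if (accessKey == NULL || secretKey == NULL) {
            return STATUS_INVALID_ARG; // credentials were not exported
        }

        // The provider is later passed to the signaling client for SigV4 request signing.
        return createStaticCredentialProvider(accessKey, 0, secretKey, 0,
                                              sessionToken, 0, MAX_UINT64, ppCredentialProvider);
    }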

You can also view this stream on a web page or in Android and iOS apps with WebRTC integrated.

Summary:

A signaling channel can only have one master device

A signaling channel can have up to 10 connected viewers

Data channel interaction after the connection is established:

    // Called for every message received on a data channel.
    VOID onDataChannelMessage(UINT64 customData, PRtcDataChannel pDataChannel, BOOL isBinary, PBYTE pMessage, UINT32 pMessageLen)
    {
        UNUSED_PARAM(customData);

        if (isBinary) {
            DLOGI("DataChannel Binary Message");
        } else {
            DLOGI("DataChannel String Message: %.*s\n", pMessageLen, pMessage);
        }
        printf("DataChannel String Message: %.*s \n MessageLen = %u\n", pMessageLen, pMessage, pMessageLen);

        // Echo the received payload back to the peer on the same channel
        dataChannelSend(pDataChannel, isBinary, pMessage, pMessageLen);
    }

    // Called when the remote peer opens a new data channel.
    VOID onDataChannel(UINT64 customData, PRtcDataChannel pRtcDataChannel)
    {
        DLOGI("New DataChannel has been opened %s \n", pRtcDataChannel->name);
        printf("pRtcDataChannel->name : %s\n", pRtcDataChannel->name);

        // Register the message callback for this channel
        dataChannelOnMessage(pRtcDataChannel, customData, onDataChannelMessage);
    }
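
These two callbacks still need to be hooked into the peer connection. A minimal sketch, assuming pPeerConnection was created with createPeerConnection() during signaling setup (the helper name registerDataChannelCallbacks is illustrative):

    // Illustrative helper: register the callbacks above on an existing peer connection.
    // customData can carry a pointer to your own application context.
    STATUS registerDataChannelCallbacks(PRtcPeerConnection pPeerConnection, UINT64 customData)
    {
        // onDataChannel fires when the remote peer opens a data channel; it then
        // registers onDataChannelMessage for every message on that channel.
        return peerConnectionOnDataChannel(pPeerConnection, customData, onDataChannel);
    }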

If you think this article is good, please give it a like, a favorite, and a share. The article is also posted on my personal WeChat official account.

I'm Hackett. I'll see you next time.
