Starting a Stream with NF Encoder
Introduction
Live streaming has become an essential part of building a brand and engaging with audiences in real-time. Whether you're hosting webinars, live performances, or creating interactive content, implementing a robust live streaming solution is crucial. This guide will walk you through the process of broadcasting a livestream using Native Frame's powerful video encoding API.
Native Frame offers a flexible and easy-to-use platform that allows you to integrate live streaming capabilities into your applications quickly and efficiently. This guide is designed for developers who want to implement live streaming features using either React or pure JavaScript.
In this tutorial, we'll cover the key concepts of setting up an encoder, managing media streams, and initiating a broadcast. We'll explore two main approaches:
- Using React with Native Frame's component library
- Implementing Native Frame's API in a vanilla JavaScript environment
By the end of this guide, you'll have a solid understanding of how to create a basic live streaming setup using Native Frame, enabling you to build engaging, real-time video experiences for your users.
Prerequisites
Before we dive into the implementation, make sure you have the following:
- Basic knowledge of JavaScript and web development
- Familiarity with React + TypeScript (for the React implementation)
- A code editor of your choice
- Node.js and npm installed on your system (minimum Node.js version: 18.17.0)
- A Native Frame account and access to your domain (provided by Native Frame)
- Basic understanding of video streaming concepts
- Access to the Native Frame SDK
Follow this guide to get access to the Native Frame Encoder and Player SDKs.
Step-by-Step Instructions
React Implementation
Step 1: Set up your React project
First, make sure you have a React project set up. If you don't, you can create one using Create React App:
npx create-react-app my-livestream-app --template typescript
cd my-livestream-app
Step 2: Install Native Frame dependencies
Install the necessary Native Frame packages:
npm install @video/video-client-web
Next, let's install the uuid package and its TypeScript types:
npm install uuid
npm install --save-dev @types/uuid
Step 3: Create necessary utilities: Logger and TokenRefresher
// utils.ts
import { LoggerGlobal, LoggerCore } from "@video/log-client";
import { TokenRequest, types } from "@video/video-client-web";
// backendEndpoint and authUrl are specific to your Native Frame domain;
// the URLs below are placeholders, replace them with your own values.
export const backendEndpoint = "https://yourdomain.com";
export const authUrl = "https://yourdomain.com/auth";
/**
* LoggerGlobal (only need one per application)
* */
const loggerGlobal = new LoggerGlobal();
loggerGlobal.setOptions({
host: backendEndpoint,
interval: 5000,
level: "debug",
});
/**
* LoggerCore (only need one per application)
* */
export const logger = new LoggerCore("VDC-web:BasicDemo");
logger.setLoggerMeta("client", "VDC");
logger.setLoggerMeta("chain", "VideoClient");
logger.setLoggerAggregate("message", "sample message");
/**
 * getMessageFromError (normalizes unknown errors for logging below)
 * */
const getMessageFromError = (error: unknown): string =>
error instanceof Error ? error.message : String(error);
/**
 * fetchToken
 * */
export const fetchToken = async (
authUrl: string,
reqBody: TokenRequest
): Promise<string> => {
const response = await window.fetch(authUrl, {
method: "post",
headers: {
"Content-Type": "application/json",
},
body: JSON.stringify(reqBody),
credentials: "include",
});
if (response.status > 299) {
throw new Error("Unable to get token");
}
const body = await response.json();
return body.token;
};
export type TokenRefresherOptions = {
backendEndpoint: string | null;
authUrl: string;
streamKey: string;
scope: string;
displayName?: string;
userId?: string;
clientReferrer?: string;
streamName?: string;
};
/**
* tokenRefresher
* */
export const tokenRefresher =
(options: TokenRefresherOptions): types.TokenGetter =>
// The returned getter is asynchronous because fetchToken performs a POST request to the authentication API
async (): Promise<string> => {
const url = `${options.authUrl}`;
const mirrors = [];
if (
["broadcaster", "private-broadcaster"].includes(
options.scope
)
) {
mirrors.push({
id: options.streamKey,
streamName: options.streamName != null ? options.streamName : "demo",
kind: "rtmp",
clientEncoder: "demo",
streamKey: options.streamKey,
clientReferrer:
options.clientReferrer !== undefined ? options.clientReferrer : null,
});
}
let token: string;
try {
const fetchOptions = {
scopes: [options.scope],
userId: options.userId ?? options.streamKey,
data: {
displayName: options.displayName ?? options.streamKey,
mirrors,
},
};
token = await fetchToken(url, fetchOptions);
} catch (error) {
logger.error("Unable to get access token", {
errorMessage: getMessageFromError(error),
url,
});
throw error;
}
return token;
};
Step 4: Create your VideoClient instance via a custom React Hook
VideoClient is a video-client class that connects to the server and enables either creating a call or joining an existing call. Only one VideoClient instance is required per user, and it can live at the top of your application.
// useVideoClient.tsx
import { types, VideoClient } from "@video/video-client-web";
import { useEffect, useState } from "react";
import { tokenRefresher, authUrl, backendEndpoint, logger } from "./utils";
import { v4 as uuidv4 } from "uuid";
export function useVideoClient(): types.VideoClientAPI | null {
const [videoClient, setVideoClient] = useState<types.VideoClientAPI | null>(
null
);
useEffect(() => {
if (videoClient == null) {
const token = tokenRefresher({
backendEndpoint,
authUrl,
scope: "broadcaster",
streamKey: uuidv4(),
});
/**
 * Pass the token refresher and the backendEndpoint in
 * the options for our new VideoClient instance
 **/
const videoClientOptions: types.VideoClientOptions = {
backendEndpoints: [backendEndpoint],
token,
userId: "demo-user-id",
logger,
};
const vc = new VideoClient(videoClientOptions);
setVideoClient(vc);
}
return () => {
// Handle cleanup of VideoClient instance
if (videoClient != null) {
videoClient.dispose();
setVideoClient(null);
}
};
/*
* Remember to only include things in your dependency array
* that are related to the state of your `VideoClient` instance,
* otherwise disposal may occur at undesired times.
*/
}, [videoClient]);
return videoClient;
}
Step 5: Create your CallState via a custom React Hook
CallState is used to start, stop, and join active broadcasts. It is required by the <JoinBroadcastButton/> component, but can also be helpful if you choose to create your own component.
// useCallState.tsx
import { CallState } from "@video/video-client-web";
import { useEffect, useState } from "react";
export function useCallState(): CallState | null {
const [callState, setCallState] = useState<CallState | null>(null);
/*
* Create CallState.
*/
useEffect(() => {
if (callState == null) {
setCallState(new CallState());
}
return () => {
// Handle cleanup of CallState and Broadcast
if (callState) {
callState.stopBroadcast();
callState.call?.close("Closed by call state on unmount/re-render");
callState.dispose();
setCallState(null);
}
};
}, [callState]);
return callState;
}
Step 6: Create the MediaStreamController and EncoderUiState via a custom React Hook
The MediaStreamController is an Encoder SDK class that controls access to audio and video streams from your browser. An instance of it is passed into the EncoderUiState constructor.
// useEncoderUi.tsx
import { EncoderUiState, mediaController } from "@video/video-client-web";
import { useEffect, useState } from "react";
export function useEncoderUi(): EncoderUiState | null {
const [encoderUi, setEncoderUi] = useState<EncoderUiState | null>(null);
/*
* Create MediaStreamController + EncoderUiState for Broadcaster.
*/
useEffect(() => {
if (encoderUi == null) {
(async () => {
await mediaController.init();
const mediaStreamController = await mediaController.requestController();
setEncoderUi(new EncoderUiState(mediaStreamController));
})();
}
return () => {
// Handle cleanup of mediaStreamController and EncoderUiState
if (encoderUi != null) {
encoderUi.mediaStreamController?.close(
"Closed by unmounting/re-render"
);
encoderUi.dispose("Component unmounting/re-render");
setEncoderUi(null);
}
};
}, [encoderUi]);
return encoderUi;
}
Step 7: Create an Encoder Component
From the @video/video-client-web package, you'll need to import the providers and UI components used by the Encoder. These UI components are modular and can easily be added or removed.
// Encoder.tsx
import {
CameraButton,
ControlBar,
EncoderVideo,
JoinBroadcastButton,
MediaContainer,
MicrophoneButton,
EncoderAudioDeviceSelect,
EncoderResolutionSelect,
EncoderVideoDeviceSelect,
SettingsSidebar,
TestMicButton,
FullscreenButton,
SettingsButton,
VideoClientContext,
EncoderUiContext,
CallContext,
} from "@video/video-client-web";
import React from "react";
import { useVideoClient, useEncoderUi, useCallState } from "../../hooks";
function Encoder(): React.ReactElement {
/**
* Access `VideoClient`, `EncoderUiState`, and
* `CallState` from your custom hooks.
* */
const videoClient = useVideoClient();
const encoderUi = useEncoderUi();
const callState = useCallState();
/** NOTE: Do not interact with EncoderUiContext
* or VideoClientContext in this component, as it
* would be OUTSIDE of the EncoderUiProvider and
* VideoClientProvider. This component is only for rendering.
* */
return (
<VideoClientContext.Provider value={videoClient}>
<CallContext.Provider value={callState}>
<EncoderUiContext.Provider value={encoderUi}>
{encoderUi != null && callState != null && (
<>
{/* MediaContainer should wrap
ALL components for styling. */}
<MediaContainer>
<EncoderVideo />
{/* ControlBar wraps controls (for styling).
Include required variant prop */}
<ControlBar variant="encoder">
<JoinBroadcastButton
broadcastOptions={{
streamName: "demo",
}}
/>
<CameraButton />
<MicrophoneButton />
<FullscreenButton />
<SettingsButton />
</ControlBar>
{/* SettingsSidebar wraps items to be
displayed in sidebar (for styling). */}
<SettingsSidebar>
<div>
<EncoderVideoDeviceSelect />
<EncoderAudioDeviceSelect />
<EncoderResolutionSelect />
</div>
<TestMicButton />
</SettingsSidebar>
</MediaContainer>
</>
)}
</EncoderUiContext.Provider>
</CallContext.Provider>
</VideoClientContext.Provider>
);
}
export default Encoder;
Pure JavaScript Implementation
A live demo of the pure JavaScript implementation can be found here.
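The pure JavaScript flow mirrors the React hooks above: one VideoClient, one MediaStreamController, and one CallState, plus cleanup on teardown. The sketch below is an illustration rather than SDK reference code: the SDK pieces are passed in as an argument so the wiring is explicit and framework-agnostic, and the setUpEncoder name and its options shape are hypothetical.

```javascript
// Hedged sketch: the same setup the React hooks perform, in plain JavaScript.
// The @video/video-client-web pieces (VideoClient, CallState, mediaController)
// are injected via the `sdk` argument; names mirror the React steps above.
async function setUpEncoder(sdk, options) {
  const { VideoClient, CallState, mediaController } = sdk;

  // 1. One VideoClient per user, configured with your backend and token getter.
  const videoClient = new VideoClient({
    backendEndpoints: [options.backendEndpoint],
    token: options.tokenRefresher,
  });

  // 2. Request a MediaStreamController for camera/microphone access.
  await mediaController.init();
  const mediaStreamController = await mediaController.requestController();

  // 3. CallState tracks the active broadcast.
  const callState = new CallState();

  return {
    videoClient,
    mediaStreamController,
    callState,
    // Call dispose() on teardown, mirroring the React cleanup functions.
    dispose() {
      callState.dispose();
      mediaStreamController.close("encoder torn down");
      videoClient.dispose();
    },
  };
}
```

In a real page you would pass the actual SDK exports and then render the preview and controls yourself, since the React UI components are not available here.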
Configuration Options
When setting up your livestream with Native Frame, there are several configuration options you can adjust to customize the behavior and appearance of your encoder.
VideoClient Options
const videoClientOptions = {
backendEndpoints: ["https://yourdomain.com"],
token: "your-auth-token",
autoPlay: true,
stats: {
app: "your-app-name",
userId: "user-123",
streamId: "stream-456",
},
loggerConfig: {
clientName: "your-app-name",
writeLevel: "debug",
},
};
MediaStreamController Options
const mediaStreamControllerOptions = {
defaultConstraints: {
audio: { echoCancellation: true, noiseSuppression: true },
video: { width: 1280, height: 720, frameRate: 30 },
},
fallbackConstraints: {
audio: true,
video: true,
},
};
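The two constraint sets suggest a try-then-fall-back flow: request the preferred constraints first, and retry with the permissive set if the browser rejects them. A minimal sketch of that pattern follows; acquireStream and the injected getMedia parameter are illustrative (in the browser, getMedia would be navigator.mediaDevices.getUserMedia bound to mediaDevices).

```javascript
// Default-then-fallback acquisition, as implied by the two constraint sets.
// `getMedia` stands in for navigator.mediaDevices.getUserMedia so the logic
// can be exercised outside a browser.
async function acquireStream(getMedia, { defaultConstraints, fallbackConstraints }) {
  try {
    // Try the preferred constraints (e.g. 720p30 with audio processing) first.
    return await getMedia(defaultConstraints);
  } catch (err) {
    // On OverconstrainedError, NotReadableError, etc., retry with the most
    // permissive constraints instead of failing the broadcast outright.
    return await getMedia(fallbackConstraints);
  }
}
```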
Broadcast Options
const broadcastOptions = {
streamName: "my-awesome-stream",
videoProducerOptions: {
encodings: [
{ maxBitrate: 100000, scaleResolutionDownBy: 4 },
{ maxBitrate: 300000, scaleResolutionDownBy: 2 },
{ maxBitrate: 900000, scaleResolutionDownBy: 1 },
],
},
audioProducerOptions: {
codecOptions: {
opusStereo: true,
opusDtx: true,
},
},
};
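The encodings array defines a simulcast ladder: each layer scales the capture resolution down by scaleResolutionDownBy and caps its bitrate at maxBitrate. Assuming the 1280×720 capture configured earlier, the effective layers work out as follows (simulcastLadder is a hypothetical helper for illustration):

```javascript
// Derive the effective resolution of each simulcast layer from the capture
// size and the scaleResolutionDownBy factors configured above.
function simulcastLadder(width, height, encodings) {
  return encodings.map(({ maxBitrate, scaleResolutionDownBy }) => ({
    width: Math.round(width / scaleResolutionDownBy),
    height: Math.round(height / scaleResolutionDownBy),
    maxBitrate,
  }));
}

const ladder = simulcastLadder(1280, 720, [
  { maxBitrate: 100000, scaleResolutionDownBy: 4 },
  { maxBitrate: 300000, scaleResolutionDownBy: 2 },
  { maxBitrate: 900000, scaleResolutionDownBy: 1 },
]);
// → 320×180 @ 100 kbps, 640×360 @ 300 kbps, 1280×720 @ 900 kbps
```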
Troubleshooting
Here are some common issues and their solutions:
- Unable to access camera or microphone: Ensure browser permissions are granted and no other application is using these devices.
- Poor video quality: Check upload bandwidth and adjust video bitrate, resolution, or frame rate accordingly.
- Audio echo or feedback: Enable echo cancellation and advise the broadcaster to use headphones.
- Broadcast fails to start: Verify the authentication token and backend endpoints.
- Viewers can't connect to the stream: Check that the streamName matches between broadcaster and viewers.
- High CPU usage: Lower encoding quality or switch to hardware encoding if available.
- "WebRTC is not supported" error: Ensure users are on modern, up-to-date browsers that support WebRTC.
Implement comprehensive error handling and logging:
videoClient.on("error", (error) => {
console.error("VideoClient error:", error);
// Send error to your logging service
});
call.on("webrtcStats", (stats) => {
console.log("Stream health:", stats);
// Analyze stats and take action if needed
});
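Inside the webrtcStats handler, a simple loss-ratio check is one way to "analyze stats and take action". The field names below (packetsSent, packetsLost) and the thresholds are assumptions for illustration; adapt them to the stats payload your SDK version actually emits.

```javascript
// Sketch of a health check to run on each stats sample. Field names are
// assumed; thresholds (2% / 10% packet loss) are illustrative rules of thumb.
function streamHealth(stats) {
  const sent = stats.packetsSent || 0;
  const lost = stats.packetsLost || 0;
  const lossRatio = sent > 0 ? lost / sent : 0;
  if (lossRatio > 0.1) return "critical"; // e.g. drop a simulcast layer
  if (lossRatio > 0.02) return "degraded"; // e.g. warn the broadcaster
  return "healthy";
}
```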
Best Practices
- Optimize Initial Setup: Initialize VideoClient and MediaStreamController early.
- Handle Network Conditions Gracefully: Implement adaptive bitrate streaming.
- Implement Robust Error Handling: Use try-catch blocks for async operations and handle all relevant events.
- Optimize Resource Usage: Release resources when no longer needed and use hardware acceleration when available.
- Enhance User Experience: Provide clear feedback on stream status and implement a "test mode".
- Secure Your Streams: Use short-lived, scoped tokens for authentication.
- Monitor and Analyze: Implement comprehensive logging and use Native Frame's analytics features.
- Optimize for Different Devices: Test on various devices and browsers, providing fallback options when necessary.
- Keep Your Integration Updated: Regularly check for updates to the Native Frame SDK.
- Design for Scalability: Implement proper load balancing and use CDNs for large audiences.
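As a concrete example of handling network conditions gracefully, a broadcaster could pick the highest simulcast layer whose maxBitrate fits its measured uplink. The helper below is an illustrative sketch (the 0.8 headroom factor is a common rule of thumb, not a Native Frame requirement):

```javascript
// Pick the highest-bitrate encoding that fits within the measured uplink,
// leaving some headroom for audio and network jitter; fall back to the
// lowest layer if even that exceeds the budget.
function pickEncoding(encodings, uplinkBps, headroom = 0.8) {
  const budget = uplinkBps * headroom;
  const byBitrate = [...encodings].sort((a, b) => a.maxBitrate - b.maxBitrate);
  const affordable = byBitrate.filter((e) => e.maxBitrate <= budget);
  return affordable.length > 0 ? affordable[affordable.length - 1] : byBitrate[0];
}
```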
Related Resources
- Native Frame API Reference
- Getting Started with Native Frame
- Native Frame GitHub Repository
- WebRTC Fundamentals
- Video Encoding Best Practices
- WebRTC Troubleshooter
- React Documentation
Conclusion
Throughout this guide, we've explored the process of implementing Native Frame for livestreaming, covering both React and pure JavaScript approaches. Native Frame offers a versatile solution for integrating livestreaming capabilities into your application, with powerful features like low-latency WebRTC streaming and adaptive bitrate support.
As you move forward with your Native Frame implementation, consider the following steps:
- Start with a basic implementation and gradually add more advanced features.
- Conduct extensive testing across different devices, browsers, and network conditions.
- Utilize Native Frame's analytics and logging features to monitor performance.
- Stay updated with Native Frame's documentation and changelog.
- Engage with the Native Frame community for support and knowledge sharing.
- Continuously optimize based on real-world usage data and user feedback.
- Explore advanced features like multi-party streaming or custom video effects.