This is part 1 of our blog on how we are building NeetoRecord, a Loom alternative. Here are part 2 and part 3.
At neeto, the product, engineering, and UI teams often communicate using short videos and screen recordings. We relied on popular solutions like Loom and Bubbles, but their free plans allowed only a small number of recordings, and soon we were hitting upgrade screens. The paid plans were quite expensive for us, given our team size and the number of recordings we made daily.
So we decided to build a solution of our own, and found what we needed in the browser's MediaStream Recording API.
The MediaStream Recording API, sometimes called the MediaRecorder API, is closely related to the Media Capture and Streams API and the WebRTC API. It enables capturing the data generated by a MediaStream or HTMLMediaElement. In most browsers, the captured video data is in WebM format, which we can play back later using an HTMLVideoElement or any video player that supports WebM.
We will build a basic recorder that captures the screen and microphone audio, and then plays the recording back. We will first look at individual code fragments for recording the screen, recording audio, playing the result back in the browser, and downloading the video file. At the end, we will combine them into a fully working web-based screen recorder.
let mediaRecorder;
let recordedChunks = [];

// Prompt the user to pick a screen, window, or tab to capture.
const stream = await navigator.mediaDevices.getDisplayMedia({
  video: true,
});

mediaRecorder = new MediaRecorder(stream);

// Collect the recorded data as it becomes available.
mediaRecorder.ondataavailable = event => {
  recordedChunks.push(event.data);
};
getDisplayMedia() is provided by the WebRTC (Web Real-Time Communication) API. It captures the contents of the user's screen or of specific application windows. The getDisplayMedia() method prompts the user to select and grant permission to capture the contents of a display, or a portion thereof (such as a window), as a MediaStream.
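getDisplayMedia() also accepts constraints beyond video: true. As a sketch, we could hint a preferred frame rate, or ask for tab/system audio where the browser supports it. The requestScreenStream helper name and the specific constraint values below are our own choices for illustration, not part of the API:

```javascript
// Ask for a screen capture with a frame-rate hint and, where supported,
// tab/system audio. Helper name and values are ours, for illustration.
async function requestScreenStream(mediaDevices = navigator.mediaDevices) {
  return mediaDevices.getDisplayMedia({
    video: { frameRate: { ideal: 30 } },
    audio: true, // tab/system audio; support varies by browser
  });
}
```

Browsers that cannot capture system audio simply return a stream without audio tracks, so it is worth checking the tracks on the returned stream.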
There is a similar method called getUserMedia(), typically used for applications like video conferencing and live streaming. When you call getUserMedia(), the browser prompts the user for permission to access their camera and microphone.
When recorded data is available, the ondataavailable callback is triggered. We could process the data in this callback; in our case, we collect it by appending it to an array named recordedChunks.
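Note that nothing is recorded until we call start() on the recorder, and passing a timeslice to start() makes ondataavailable fire periodically instead of only once at the end. A minimal sketch of wiring this together (the createScreenRecorder helper name and the 1000 ms default are our own, not part of the API):

```javascript
// Small wrapper around MediaRecorder (helper name is ours).
// With a timeslice, ondataavailable fires periodically; without one,
// it fires only when the recording stops.
function createScreenRecorder(stream, MediaRecorderImpl = globalThis.MediaRecorder) {
  const chunks = [];
  const recorder = new MediaRecorderImpl(stream);

  recorder.ondataavailable = event => {
    // Skip the empty chunks some browsers emit.
    if (event.data && event.data.size > 0) chunks.push(event.data);
  };

  return {
    start: (timesliceMs = 1000) => recorder.start(timesliceMs),
    stop: () => recorder.stop(),
    chunks,
  };
}
```

Collecting data in timeslices also limits how much is lost if the tab crashes mid-recording, since earlier chunks are already in memory.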
// Prompt for microphone access only; no video.
let audioStream = await window.navigator.mediaDevices.getUserMedia({
  audio: { echoCancellation: true, noiseSuppression: true },
});
To capture audio, we use getUserMedia(). Since we want only the audio, we pass just the audio constraint. The audio key accepts a set of parameters that let us control the quality and properties of the captured audio stream. In our example, we have enabled echoCancellation and noiseSuppression, two features that enhance the quality of our screen recordings. The complete list of audio options is available here.
The audio stream could be composed of multiple audio tracks: the microphone, system sounds, etc. We will add these tracks to the video stream we previously set up using getDisplayMedia().
audioStream.getAudioTracks().forEach(audioTrack => stream.addTrack(audioTrack));
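With video and audio tracks on one stream, we can hand it to MediaRecorder. Browsers differ in which container and codec combinations they can record, so it can help to probe MediaRecorder.isTypeSupported() before constructing the recorder. A minimal sketch (the pickSupportedMimeType helper and the candidate list are our own, for illustration):

```javascript
// Return the first MIME type the browser's MediaRecorder can produce,
// or "" if none match (helper name and candidates are ours).
function pickSupportedMimeType(
  candidates,
  isSupported = type => MediaRecorder.isTypeSupported(type)
) {
  return candidates.find(isSupported) ?? "";
}

// Usage in the browser (sketch):
// const mimeType = pickSupportedMimeType([
//   "video/webm;codecs=vp9,opus",
//   "video/webm;codecs=vp8,opus",
//   "video/webm",
// ]);
// const mediaRecorder = new MediaRecorder(stream, mimeType ? { mimeType } : {});
```

Passing an unsupported mimeType to the MediaRecorder constructor throws, so falling back to the browser default when nothing matches is the safer path.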
We now have an array named recordedChunks, which contains sequential chunks of the recorded data. Video players need video data as a Blob. A blob is a file-like object of immutable, raw binary/text data. We must convert our recordedChunks array into a Blob before it can be played back or written to a file. To construct a Blob from other non-blob objects and data, we can use the Blob() constructor.
const blob = new Blob(recordedChunks, {
  type: "video/webm",
});
Suppose we have an HTML video tag in our page.
<video id="recordedVideo" controls></video>
When the video recording is stopped, we can create an Object URL for our recording blob and attach it to the video player.
mediaRecorder.onstop = () => {
  // All chunks have arrived by the time onstop fires.
  const blob = new Blob(recordedChunks, { type: "video/webm" });
  const recordedVideo = document.getElementById("recordedVideo");
  recordedVideo.src = URL.createObjectURL(blob);
};
We can now play the recording on the HTML video player.
Similarly, we can create a download link using an Object URL for the recording blob.
let a = document.createElement("a");
let url = URL.createObjectURL(blob);
a.href = url;
a.download = "recording.webm";
a.click();
URL.revokeObjectURL(url);
We can now download and play the recording locally on any video player supporting WebM playback.
We have glued together the code fragments discussed above and created a demo for a basic web-based screen recorder.
You may view the source code here.
Now that we have a basic screen recorder in place, there are several more things to consider. We will cover these topics in the next set of blogs. Stay tuned.
If this blog was helpful, check out our full blog archive.