Creating blurred or virtual backgrounds in real-time video in React apps

Farhan CK

November 5, 2024

Modern tools like Zoom and Google Meet allow us to blur or completely replace our background in real-time video, creating a polished and distraction-free environment regardless of where we are.

This is possible because of advancements in machine learning. In this blog, we'll explore how to achieve real-time background blurring and replacement using TensorFlow's body segmentation capabilities.

TensorFlow body segmentation

TensorFlow body segmentation is a computer vision technique that involves dividing an image into distinct regions corresponding to different parts of a human body. It typically employs deep learning models, such as convolutional neural networks (CNNs), to analyze an image and predict pixel-level labels. These labels indicate whether each pixel belongs to a specific body part, like the head, torso, arms, or legs.

The segmentation process often starts with a pre-trained model, which has been trained on large datasets. The model processes the input image through multiple layers of convolutions and pooling, gradually refining the segmentation map. The final output is a precise mask that outlines each body part, allowing for applications in areas like augmented reality, fitness tracking, and virtual try-ons.

To learn more about TensorFlow and body segmentation, check out the official TensorFlow.js documentation.

Setting up the React app

We'll create a simple React app that streams video from the webcam.

import React, { useRef, useEffect } from "react";

const App = () => {
  const videoRef = useRef(null);

  useEffect(() => {
    const getVideo = async () => {
      try {
        const stream = await navigator.mediaDevices.getUserMedia({
          video: true,
        });
        if (videoRef.current) {
          videoRef.current.srcObject = stream;
        }
      } catch (err) {
        console.error("Error accessing webcam: ", err);
      }
    };

    getVideo();

    return () => {
      // Stop all webcam tracks when the component unmounts.
      if (videoRef.current && videoRef.current.srcObject) {
        videoRef.current.srcObject.getTracks().forEach(track => track.stop());
      }
    };
  }, []);

  return (
    <div>
      {/* Mirror the video horizontally so it behaves like a selfie camera. */}
      <video
        ref={videoRef}
        autoPlay
        width="640"
        height="480"
        style={{ transform: "scaleX(-1)" }}
      />
    </div>
  );
};

export default App;

In the code above, we render a <video> element, and once the app is mounted, we obtain the video stream from the user's webcam using navigator.mediaDevices.getUserMedia. This call will prompt the user to grant permission to access their camera. Once the user grants permission, the video stream is captured and rendered in the <video> element.
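
As a small refinement, getUserMedia also accepts resolution constraints, so we can ask the browser for a stream that roughly matches the 640×480 element we render. The helper below (getConstrainedStream is just an illustrative name, not part of the demo) is a minimal sketch using standard constraints; the demo itself keeps the simpler { video: true } call.

// Illustrative sketch: request a webcam resolution close to our 640x480 video element.
// "ideal" lets the browser pick the nearest size the camera actually supports.
const getConstrainedStream = () =>
  navigator.mediaDevices.getUserMedia({
    video: {
      width: { ideal: 640 },
      height: { ideal: 480 },
      facingMode: "user", // prefer the front-facing camera on mobile devices
    },
  });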

Installing packages

Next, let's add the necessary TensorFlow packages.

yarn add @tensorflow/tfjs-core @tensorflow/tfjs-converter @tensorflow-models/body-segmentation @mediapipe/selfie_segmentation

@tensorflow/tfjs-core is the core TensorFlow.js package, @tensorflow/tfjs-converter allows TensorFlow.js to load pre-trained models, @tensorflow-models/body-segmentation contains all the functions we need for body segmentation, and @mediapipe/selfie_segmentation is our pre-trained model.
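
If you prefer npm over Yarn, the equivalent command is:

npm install @tensorflow/tfjs-core @tensorflow/tfjs-converter @tensorflow-models/body-segmentation @mediapipe/selfie_segmentation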

Creating the body segmenter

The TensorFlow body segmentation package provides a pre-trained MediaPipeSelfieSegmentation model for segmenting the human body in images and videos. This model is specifically designed for the upper body. If our requirement involves the entire body, we may want to consider other models like BodyPix.
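
For comparison, here is a rough sketch of what creating a BodyPix segmenter could look like with the same package. This is illustrative only and not used later in this post: BodyPix runs on the TF.js runtime, so it would also need a backend such as @tensorflow/tfjs-backend-webgl installed, and the config values shown are common defaults rather than tuned settings.

import * as bodySegmentation from "@tensorflow-models/body-segmentation";
// BodyPix runs on the TF.js runtime, so a backend must be registered as well.
import "@tensorflow/tfjs-backend-webgl";

// Illustrative only: a full-body segmenter using BodyPix instead of
// MediaPipeSelfieSegmentation.
const createBodyPixSegmenter = async () => {
  const model = bodySegmentation.SupportedModels.BodyPix;
  const segmenterConfig = {
    architecture: "MobileNetV1",
    outputStride: 16,
    multiplier: 0.75,
    quantBytes: 2,
  };
  return bodySegmentation.createSegmenter(model, segmenterConfig);
};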

We need to load the MediaPipeSelfieSegmentation model to create a segmenter:

import * as bodySegmentation from "@tensorflow-models/body-segmentation";

const createSegmenter = async () => {
  const model = bodySegmentation.SupportedModels.MediaPipeSelfieSegmentation;
  const segmenterConfig = {
    runtime: "mediapipe",
    solutionPath: "https://cdn.jsdelivr.net/npm/@mediapipe/selfie_segmentation",
    modelType: "general",
  };
  return bodySegmentation.createSegmenter(model, segmenterConfig);
};

We load the model from a CDN, configure the runtime as mediapipe, and set the modelType to general. Then, we create the segmenter using the bodySegmentation.createSegmenter method.

// ./videoBackground.js
import * as bodySegmentation from "@tensorflow-models/body-segmentation";

const createSegmenter = async () => {
  const model = bodySegmentation.SupportedModels.MediaPipeSelfieSegmentation;
  const segmenterConfig = {
    runtime: "mediapipe",
    solutionPath: "https://cdn.jsdelivr.net/npm/@mediapipe/selfie_segmentation",
    modelType: "general",
  };
  return bodySegmentation.createSegmenter(model, segmenterConfig);
};

class VideoBackground {
  // Cached segmenter instance so the model is only loaded once.
  #segmenter;

  getSegmenter = async () => {
    if (!this.#segmenter) {
      this.#segmenter = await createSegmenter();
    }
    return this.#segmenter;
  };
}

const videoBackground = new VideoBackground();
export default videoBackground;

Here, we define a VideoBackground class and create an instance of it. Inside the class, the getSegmenter function ensures that the segmenter is created only once, so we don't have to recreate it each time.
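
Because the instance is cached, every later call reuses the already-loaded model. A quick illustration:

// Loading the model is the expensive part; subsequent calls resolve with the cached instance.
const first = await videoBackground.getSegmenter();
const second = await videoBackground.getSegmenter();
console.log(first === second); // true — the segmenter was only created once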

Blur the video background

Before we continue, let's update our demo app. Since we are going to modify the video, we need a <canvas/> to display the modified output. Let's add one to our demo app.

// rest of the code...
const App = () => {
  const canvasRef = useRef();
  // rest of the code...
  return (
    <div>
      <video
        ref={videoRef}
        autoPlay
        width="640"
        height="480"
        style={{ display: "none" }}
      />
      <canvas
        ref={canvasRef}
        width="640"
        height="480"
        style={{ transform: "scaleX(-1)" }}
      />
    </div>
  );
};

Also, hide the <video> element by setting display: "none" since we don't want to display the raw video.

Next, create a function within the VideoBackground class to blur the video.

// rest of the code...
class VideoBackground {
  // rest of the code...

  #animationId;

  stop = () => {
    cancelAnimationFrame(this.#animationId);
  };

  blur = async (canvas, video) => {
    const foregroundThreshold = 0.5;
    const edgeBlurAmount = 15;
    const flipHorizontal = false;
    const blurAmount = 5;
    const segmenter = await this.getSegmenter();

    const processFrame = async () => {
      const segmentation = await segmenter.segmentPeople(video);
      await bodySegmentation.drawBokehEffect(
        canvas,
        video,
        segmentation,
        foregroundThreshold,
        blurAmount,
        edgeBlurAmount,
        flipHorizontal
      );
      this.#animationId = requestAnimationFrame(processFrame);
    };
    this.#animationId = requestAnimationFrame(processFrame);
  };
}

The blur function takes the canvas and video references. It uses requestAnimationFrame to continuously draw the processed frames onto the canvas. First, it creates a body segmentation with the segmenter.segmentPeople function by passing in the video element. This tells us which pixels belong to the foreground and which to the background.

To achieve the blurred effect, we use the bodySegmentation.drawBokehEffect function, which applies a blur to the background pixels. This function accepts additional configurations like foregroundThreshold, blurAmount, and edgeBlurAmount, which we can adjust to customize the effect.

We've also added a stop function to halt video processing by canceling the recursive requestAnimationFrame calls.

import React, { useRef, useEffect, useState } from "react";

function App() {
  const [cameraReady, setCameraReady] = useState(false);
  // rest of the code...

  <video
    // rest of the code...
    onLoadedMetadata={() => setCameraReady(true)}
  />;
  // rest of the code...
}

Before calling the blur function, ensure the video is loaded by waiting for the onLoadedMetadata event to be triggered.

All set; let's blur the video background.

import React, { useRef, useEffect, useState } from "react";

import videoBackground from "./videoBackground";

function App() {
  const [cameraReady, setCameraReady] = useState(false);
  const videoRef = useRef(null);
  const canvasRef = useRef();

  useEffect(() => {
    async function getVideo() {
      try {
        const stream = await navigator.mediaDevices.getUserMedia({
          video: true,
        });
        if (videoRef.current) {
          videoRef.current.srcObject = stream;
        }
      } catch (err) {
        console.error("Error accessing webcam: ", err);
      }
    }

    getVideo();

    return () => {
      if (videoRef.current && videoRef.current.srcObject) {
        videoRef.current.srcObject.getTracks().forEach(track => track.stop());
      }
    };
  }, []);

  useEffect(() => {
    if (!cameraReady) return;
    videoBackground.blur(canvasRef.current, videoRef.current);
    return () => {
      videoBackground.stop();
    };
  }, [cameraReady]);

  return (
    <div className="App">
      <video
        ref={videoRef}
        autoPlay
        width="640"
        height="480"
        style={{ display: "none" }}
        onLoadedMetadata={() => setCameraReady(true)}
      />
      <canvas
        ref={canvasRef}
        width="640"
        height="480"
        style={{ transform: "scaleX(-1)" }}
      />
    </div>
  );
}

export default App;

Here, we added another useEffect that triggers when cameraReady is true. Inside this useEffect, we call the videoBackground.blur function, passing the canvas and video refs. When the component unmounts, we stop the video processing by calling the videoBackground.stop() function.

Replace with a virtual background

If blurring alone isn't enough and we want to completely replace the background, we need to remove the background from the video and place an <img/> behind the <canvas/>. To remove the background, we can use the bodySegmentation.toBinaryMask function. It returns an ImageData whose alpha channel is 255 for background pixels and 0 for foreground pixels. We can use this information to set the alpha of the corresponding background pixels in the original frame to 0, making them fully transparent.

// rest of the code...
class VideoBackground {
  // rest of the code...

  remove = async (canvas, video) => {
    const context = canvas.getContext("2d");
    const segmenter = await this.getSegmenter();

    const processFrame = async () => {
      // Draw the raw frame first so we can read its pixels back.
      context.drawImage(video, 0, 0);
      const segmentation = await segmenter.segmentPeople(video);
      const coloredPartImage = await bodySegmentation.toBinaryMask(segmentation);
      const imageData = context.getImageData(
        0,
        0,
        video.videoWidth,
        video.videoHeight
      );
      // imageData format: [R, G, B, A, R, G, B, A, ...]
      // The loop below visits only the alpha channel of each pixel.
      for (let i = 3; i < imageData.data.length; i += 4) {
        // In the binary mask, background pixels have an alpha of 255.
        if (coloredPartImage.data[i] === 255) {
          imageData.data[i] = 0; // make this background pixel fully transparent
        }
      }
      await bodySegmentation.drawMask(canvas, imageData);
      this.#animationId = requestAnimationFrame(processFrame);
    };
    this.#animationId = requestAnimationFrame(processFrame);
  };
}

Similar to the blurring process, inside processFrame we first draw the current video frame onto the canvas, create the segmentation using segmenter.segmentPeople, and convert it to a binary mask using bodySegmentation.toBinaryMask. We then read the original pixels back with context.getImageData and loop through them, making the background pixels transparent. Finally, we draw the result onto the canvas using bodySegmentation.drawMask.
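
As a side note, if we'd rather not layer a separate <img> behind the <canvas>, one possible alternative (not what this demo does) is to composite the background image directly onto the canvas once the person has been drawn with a transparent background, using the canvas "destination-over" composite mode. Here is a minimal sketch; paintVirtualBackground is a hypothetical helper, backgroundImage is assumed to be an already-loaded HTMLImageElement, and it could be called at the end of processFrame, right after bodySegmentation.drawMask.

// Hypothetical helper: paint `backgroundImage` behind whatever is already on the
// canvas (the person with a transparent background).
const paintVirtualBackground = (canvas, backgroundImage) => {
  const context = canvas.getContext("2d");
  context.save();
  // "destination-over" draws new content *behind* the existing, non-transparent pixels.
  context.globalCompositeOperation = "destination-over";
  context.drawImage(backgroundImage, 0, 0, canvas.width, canvas.height);
  context.restore();
};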

Before calling this function, let's modify our demo app. Rather than removing the blur functionality, we'll add an option to switch between none, blur, and image effects, and include a background image.

const BACKGROUND_OPTIONS = ["none", "blur", "image"];

function App() {
  const [backgroundType, setBackgroundType] = useState(BACKGROUND_OPTIONS[0]);
  // rest of the code...

  return (
    <div>
      {/* rest of the code... */}
      {backgroundType === "image" && (
        <img
          style={{
            position: "absolute",
            top: 0,
            bottom: 0,
            width: "640px",
            height: "480px",
          }}
          src="/bgImage.png"
          alt="virtual background"
        />
      )}
      {/* rest of the code... */}
      <div>
        <select
          value={backgroundType}
          onChange={e => setBackgroundType(e.target.value)}
        >
          {BACKGROUND_OPTIONS.map(option => (
            <option value={option} key={option}>
              {option}
            </option>
          ))}
        </select>
      </div>
    </div>
  );
}

Here, we added a <select> element to choose between none, blur, and image, and an <img> element to display the background image, which will serve as our virtual background.

All set. Now, let's update the useEffect.

useEffect(() => {
  if (!cameraReady || backgroundType === "none") return;

  const bgFn =
    backgroundType === "blur" ? videoBackground.blur : videoBackground.remove;

  bgFn(canvasRef.current, videoRef.current);

  return () => {
    videoBackground.stop();
  };
}, [cameraReady, backgroundType]);

Based on the selection, we will call either videoBackground.blur or videoBackground.remove.
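
Finally, if the goal is to feed the processed video into a call, the way Zoom or Meet do, the canvas itself can be turned into a MediaStream with the standard captureStream API. A minimal sketch; peerConnection stands in for an existing RTCPeerConnection and is only illustrative:

// Capture the processed canvas as a 30fps MediaStream.
const processedStream = canvasRef.current.captureStream(30);

// Illustrative: hand the processed video track to an existing RTCPeerConnection.
processedStream
  .getVideoTracks()
  .forEach(track => peerConnection.addTrack(track, processedStream));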

The full working example can be found in this GitHub repo.

If this blog was helpful, check out our full blog archive.
