
How to Integrate Bolt.new with TensorFlow


What you'll learn

  • How TensorFlow.js runs machine learning models directly in Bolt's WebContainer browser environment without any server or API key
  • How to load and use pre-trained TensorFlow.js models for image classification, pose detection, and text analysis
  • How to build real-time ML inference demos — camera feed analysis, image uploads, live text embedding — in React
  • How to optimize TensorFlow.js performance using the WebGL backend for GPU acceleration
  • When to use TensorFlow.js versus server-side AI APIs like OpenAI or Google Cloud Vision
Intermediate · 19 min read · 30 minutes · AI/ML · April 2026 · RapidDev Engineering Team
TL;DR

TensorFlow.js is one of the best AI integrations for Bolt.new because it runs entirely in the browser — no API keys, no server costs, no authentication setup. Install @tensorflow/tfjs, load a pre-trained model from a URL, and run inference directly in the WebContainer preview. Image classification, pose detection, object detection, and text embedding all work without deploying. This is the only major AI framework that works fully client-side in Bolt's browser-based runtime.

Run Machine Learning Models in the Browser with TensorFlow.js

TensorFlow.js stands apart from every other AI integration in this guide because it runs entirely inside the browser — including inside Bolt's WebContainer. While OpenAI, Vertex AI, and other cloud AI services require server-side API routes to protect credentials and make HTTP calls, TensorFlow.js needs neither. The inference computation happens locally in the user's browser using WebGL for GPU acceleration. No API key, no monthly bill based on API calls, no latency from network round-trips to a remote server. For applications where privacy matters — processing medical images, personal photos, sensitive documents — keeping computation client-side is a meaningful security benefit.

The TensorFlow.js ecosystem provides dozens of pre-trained models available as npm packages or loadable from CDN URLs. MobileNet classifies images into 1,000 ImageNet categories with 70-80% accuracy. COCO-SSD detects and locates multiple objects in an image. PoseNet and BlazePose detect human body keypoints. The community-maintained face-api.js library, built on top of TF.js, identifies faces and expressions. The Universal Sentence Encoder converts text into 512-dimensional embeddings for semantic similarity. All of these are pure JavaScript with WASM or WebGL backends — fully compatible with WebContainers.

The trade-offs are important to understand. TensorFlow.js models are smaller and less capable than large language models running on GPU clusters. MobileNet is accurate for common objects but misses nuanced categories. Text embeddings from Universal Sentence Encoder work well for semantic search but can't generate text like GPT. Model files (5MB-30MB) are loaded from CDN on first use, adding startup latency. For applications requiring state-of-the-art accuracy or complex reasoning, server-side AI APIs are the better choice. TensorFlow.js excels when you want zero ongoing API cost, real-time inference on a video stream, offline capability, or client-side processing of data that shouldn't leave the browser.

Integration method

Bolt Chat (fully client-side; no API route required)

TensorFlow.js runs ML models directly in the browser using WebGL for GPU acceleration or WASM for CPU — no server-side component required for inference. Install @tensorflow/tfjs and a pre-trained model package, load the model from a CDN URL, and run predictions on user input entirely client-side. This is a perfect architectural match for Bolt's WebContainer: no TCP sockets, no native binaries, no API credentials — just JavaScript and WebAssembly running in the browser tab.
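In code, the whole pattern is only a few lines. A minimal sketch, assuming the MobileNet wrapper package and a hypothetical image element with id "photo":

typescript
import * as tf from '@tensorflow/tfjs';
import * as mobilenet from '@tensorflow-models/mobilenet';

// Runs fully client-side: the only network traffic is the one-time
// download of the model weights from CDN.
async function classifyPhoto() {
  await tf.setBackend('webgl');          // prefer GPU acceleration
  const model = await mobilenet.load();  // downloads and caches weights
  const img = document.getElementById('photo') as HTMLImageElement; // assumed element
  const predictions = await model.classify(img);
  console.log(predictions); // [{ className, probability }, ...]
}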

Prerequisites

  • A Bolt.new project (Vite or Next.js — TensorFlow.js works with both)
  • Basic understanding of machine learning concepts (model, inference, classification, confidence score) — you don't need to train models, just use pre-trained ones
  • Modern browser with WebGL support (all current Chrome, Firefox, Safari, and Edge versions — verify with chrome://gpu in Chrome or about:support in Firefox)
  • Sufficient browser memory for the model you want to run (MobileNet needs ~100MB RAM, larger models may need 500MB+)

Step-by-step guide

1

Install TensorFlow.js and Load Your First Pre-Trained Model

TensorFlow.js provides two sets of packages: the core library (@tensorflow/tfjs) and pre-trained model packages that are installed separately. The core library provides tensor math, automatic differentiation, and backend management. Pre-trained models are wrapper packages that handle model downloading, caching, and preprocessing — they expose a clean API like model.classify(image) or model.detect(image) without requiring you to understand tensor shapes or preprocessing pipelines. Install both the core and a model package together.

When the component mounts, call the model's load() function — this triggers a CDN download of the model weights (5-30MB depending on the model). TensorFlow.js automatically caches the model in the browser's Cache API after the first download, so subsequent loads are nearly instant. The load() promise resolves when the model is ready — show a loading state while this happens.

Model initialization also involves detecting and initializing the backend. By default, TensorFlow.js tries WebGL (GPU) first for performance, falling back to CPU if WebGL is unavailable. You can explicitly set the backend with tf.setBackend('webgl') before calling load(). In Bolt's WebContainer, WebGL is available in the preview window — TensorFlow.js will use GPU acceleration automatically. The first inference call after loading is slightly slower (the model must compile its WebGL shaders) — this is normal and subsequent calls are fast.
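If first-impression latency matters, a common pattern (not included in the hook below) is to warm the model up with a throwaway inference right after loading, so shader compilation happens before the user's first real request. A sketch using the same MobileNet setup:

typescript
import * as tf from '@tensorflow/tfjs';
import * as mobilenet from '@tensorflow-models/mobilenet';

async function loadAndWarmUp() {
  await tf.setBackend('webgl');
  await tf.ready(); // wait until the backend is initialized
  const model = await mobilenet.load({ version: 2, alpha: 1.0 });

  // One throwaway inference forces WebGL shader compilation now,
  // so the first real classify() call is fast.
  const dummy = tf.zeros([224, 224, 3]) as tf.Tensor3D;
  await model.classify(dummy);
  dummy.dispose();

  return model;
}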

Bolt.new Prompt

Set up TensorFlow.js with MobileNet in my Bolt project. Install @tensorflow/tfjs and @tensorflow-models/mobilenet via npm. Create a React hook useMobileNet.ts in hooks/useMobileNet.ts that: loads the MobileNet model on mount using mobilenet.load(), tracks loading state and any errors, and exposes a classify(imageElement) function that returns the top 5 predictions as { className: string, probability: number }[]. Export the hook with loading, error, and classify. Show a loading skeleton in any component using this hook while the model loads.


hooks/useMobileNet.ts
// hooks/useMobileNet.ts
import { useState, useEffect, useCallback, useRef } from 'react';
import * as tf from '@tensorflow/tfjs';
import * as mobilenet from '@tensorflow-models/mobilenet';

interface Prediction {
  className: string;
  probability: number;
}

export function useMobileNet() {
  const modelRef = useRef<mobilenet.MobileNet | null>(null);
  const [loading, setLoading] = useState(true);
  const [error, setError] = useState<string | null>(null);

  useEffect(() => {
    let cancelled = false;

    async function loadModel() {
      try {
        // Prefer WebGL backend for GPU acceleration
        await tf.setBackend('webgl');
        const model = await mobilenet.load({ version: 2, alpha: 1.0 });
        if (!cancelled) {
          modelRef.current = model;
          setLoading(false);
        }
      } catch (err) {
        if (!cancelled) {
          setError(err instanceof Error ? err.message : 'Failed to load model');
          setLoading(false);
        }
      }
    }

    loadModel();
    return () => { cancelled = true; };
  }, []);

  const classify = useCallback(
    async (imageElement: HTMLImageElement | HTMLCanvasElement | HTMLVideoElement): Promise<Prediction[]> => {
      if (!modelRef.current) throw new Error('Model not loaded');
      const predictions = await modelRef.current.classify(imageElement, 5);
      return predictions;
    },
    []
  );

  return { loading, error, classify };
}

Pro tip: TensorFlow.js model files are cached in the browser after the first download. During development in Bolt's WebContainer, the model downloads fresh each session. In production, users experience the full download only once — subsequent visits use the browser cache for near-instant model loading.

Expected result: The useMobileNet hook loads the MobileNet v2 model from CDN when the component mounts. The loading state is false once the model is ready. Calling classify() with an image element returns 5 predictions with class names and confidence scores.

2

Build an Image Classifier with File Upload

Creating a working image classifier requires connecting a file input to an HTML image element that TensorFlow.js can process, then calling the model and displaying results. The key implementation detail is that mobilenet.classify() accepts an HTMLImageElement, HTMLCanvasElement, or HTMLVideoElement — not a File object or base64 string. Your React component must convert the uploaded file to an object URL, set it as the src of an HTML img element, wait for the image to load (the onLoad event), and only then call classify(). Trying to classify before the image has fully loaded will produce incorrect results or errors.

The useMobileNet hook's classify function returns predictions sorted by probability (highest first) — access them as an array. Display confidence as a percentage: (prediction.probability * 100).toFixed(1) + '%'. For visual polish, show a colored confidence bar where green is 60%+, yellow is 30-60%, and red is below 30% — this helps users quickly interpret prediction quality.

Important: after classifying, clean up the object URL using URL.revokeObjectURL() to avoid memory leaks — each createObjectURL() call allocates browser memory that's only freed when you explicitly revoke the URL or close the tab. Wrap the img element in a ref to access it directly from your classify callback without relying on DOM queries.
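To make the wait-for-onLoad requirement explicit, a small helper can resolve only once the image has decoded. This is a sketch; the component below instead classifies on a button click, by which point the rendered img element has already loaded:

typescript
// Resolve with a fully decoded image element, safe to pass to classify().
function loadImage(url: string): Promise<HTMLImageElement> {
  return new Promise((resolve, reject) => {
    const img = new Image();
    img.onload = () => resolve(img);
    img.onerror = () => reject(new Error('Image failed to load'));
    img.src = url;
  });
}

// Usage: const img = await loadImage(objectUrl); await classify(img);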

Bolt.new Prompt

Build an ImageClassifier React component that uses the useMobileNet hook. Include: a drag-and-drop file input that accepts images, a preview of the uploaded image, a Classify button that calls the hook's classify() function, and a results section showing the top 5 predictions as a list with class name and confidence bar. The confidence bar should be colored green (>60%), yellow (30-60%), or red (<30%). Show the MobileNet loading state with a skeleton and show 'Analyzing...' during classification. Handle image cleanup with URL.revokeObjectURL() when a new file is selected.


components/ImageClassifier.tsx
// components/ImageClassifier.tsx
'use client';
import { useState, useRef, useCallback } from 'react';
import { useMobileNet } from '@/hooks/useMobileNet';

interface Prediction { className: string; probability: number; }

export function ImageClassifier() {
  const { loading: modelLoading, error: modelError, classify } = useMobileNet();
  const [imageUrl, setImageUrl] = useState<string | null>(null);
  const [predictions, setPredictions] = useState<Prediction[]>([]);
  const [classifying, setClassifying] = useState(false);
  const imgRef = useRef<HTMLImageElement>(null);
  const prevUrlRef = useRef<string | null>(null);

  const handleFileChange = useCallback((file: File) => {
    // Revoke the previous object URL before allocating a new one
    if (prevUrlRef.current) URL.revokeObjectURL(prevUrlRef.current);
    const url = URL.createObjectURL(file);
    prevUrlRef.current = url;
    setImageUrl(url);
    setPredictions([]);
  }, []);

  const handleClassify = useCallback(async () => {
    if (!imgRef.current || !imageUrl) return;
    setClassifying(true);
    try {
      const results = await classify(imgRef.current);
      setPredictions(results);
    } finally {
      setClassifying(false);
    }
  }, [classify, imageUrl]);

  const confidenceColor = (p: number) =>
    p >= 0.6 ? 'bg-green-500' : p >= 0.3 ? 'bg-yellow-500' : 'bg-red-400';

  if (modelLoading) return <div className="p-6 text-center text-gray-500">Loading MobileNet model...</div>;
  if (modelError) return <div className="p-6 text-red-600">Model error: {modelError}</div>;

  return (
    <div className="max-w-lg mx-auto p-6 space-y-4">
      <h2 className="text-2xl font-bold">Image Classifier</h2>
      <p className="text-sm text-gray-500">Powered by TensorFlow.js MobileNet — runs entirely in your browser</p>

      {/* Drag-and-drop in addition to click-to-upload */}
      <label
        className="block border-2 border-dashed rounded-lg p-6 text-center cursor-pointer hover:bg-gray-50"
        onDragOver={(e) => e.preventDefault()}
        onDrop={(e) => {
          e.preventDefault();
          const file = e.dataTransfer.files?.[0];
          if (file) handleFileChange(file);
        }}
      >
        <span className="text-gray-600">Click to upload or drag an image here</span>
        <input type="file" accept="image/*" className="hidden"
          onChange={(e) => e.target.files?.[0] && handleFileChange(e.target.files[0])} />
      </label>

      {imageUrl && (
        <>
          <img ref={imgRef} src={imageUrl} alt="To classify"
            className="w-full rounded-lg max-h-64 object-contain" />
          <button onClick={handleClassify} disabled={classifying}
            className="w-full bg-blue-600 text-white py-2 rounded hover:bg-blue-700 disabled:opacity-50">
            {classifying ? 'Analyzing...' : 'Classify Image'}
          </button>
        </>
      )}

      {predictions.length > 0 && (
        <div className="space-y-2">
          <h3 className="font-semibold">Results</h3>
          {predictions.map((p) => (
            <div key={p.className}>
              <div className="flex justify-between text-sm mb-1">
                <span>{p.className}</span>
                <span>{(p.probability * 100).toFixed(1)}%</span>
              </div>
              <div className="w-full bg-gray-200 rounded-full h-2">
                <div className={`h-2 rounded-full ${confidenceColor(p.probability)}`}
                  style={{ width: `${p.probability * 100}%` }} />
              </div>
            </div>
          ))}
        </div>
      )}
    </div>
  );
}

Pro tip: TensorFlow.js classification runs entirely in the browser — no network request leaves the tab after the initial model download. This means the classification works offline and is near-instant (typically 20-100ms per image on a modern GPU). You can test it in Bolt's WebContainer preview with no deployment needed.
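To verify those latency numbers in your own project, you can time the call with performance.now(). A quick, hypothetical measurement helper (not part of the component above):

typescript
// Hypothetical helper: time a single classification call.
async function timeClassify<T>(
  classify: (el: HTMLImageElement) => Promise<T>,
  img: HTMLImageElement
): Promise<T> {
  const t0 = performance.now();
  const predictions = await classify(img);
  console.log(`Inference took ${(performance.now() - t0).toFixed(1)} ms`);
  return predictions;
}

// Usage inside the component: await timeClassify(classify, imgRef.current!)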

Expected result: Users can upload an image and click 'Classify Image' to see top-5 MobileNet predictions with confidence bars, all running locally in the browser with no API calls.

3

Build Real-Time Object Detection with Camera Feed

Real-time object detection requires connecting TensorFlow.js to a live video stream from the user's webcam. The COCO-SSD model (Common Objects in Context - Single Shot Detector) can detect 80 object types in real time: people, cars, animals, furniture, food, electronics, and more. The architecture uses two HTML elements side by side: a video element that displays the camera feed, and a canvas element overlaid on top where bounding boxes are drawn. A requestAnimationFrame loop runs continuously, calling model.detect(videoElement) on each frame and redrawing the canvas with detection results.

COCO-SSD returns an array of predictions: each prediction has class (the object name), score (confidence 0-1), and bbox ([x, y, width, height] in pixels relative to the video dimensions). Drawing requires scaling the bbox coordinates to match the actual displayed canvas size — the video's intrinsic dimensions may differ from its displayed size. Use videoElement.videoWidth/videoHeight for the actual video dimensions.

The getUserMedia API requires HTTPS in production — in Bolt's WebContainer preview, it works on localhost equivalents. In the deployed app, ensure your hosting provides HTTPS (Netlify and Bolt Cloud both do). Users will see a browser permission dialog asking for camera access on first use — handle the case where they deny permission with a graceful error message.

Bolt.new Prompt

Build a real-time object detection component using TensorFlow.js COCO-SSD. Install @tensorflow-models/coco-ssd. Create an ObjectDetector React component with: a Start Camera button that requests webcam access using getUserMedia, a video element showing the live feed, a canvas overlaid on the video for drawing bounding boxes, and a requestAnimationFrame loop that calls cocoSsd.detect(videoRef.current) every frame and draws colored bounding boxes with class labels and confidence scores on the canvas. Show a list of currently detected objects below the video. Include a Stop Camera button that stops the stream. Handle camera permission denial with a friendly error.


components/ObjectDetector.tsx
// components/ObjectDetector.tsx
'use client';
import { useState, useRef, useEffect, useCallback } from 'react';
import * as tf from '@tensorflow/tfjs';
import * as cocoSsd from '@tensorflow-models/coco-ssd';

const COLORS = ['#EF4444', '#3B82F6', '#10B981', '#F59E0B', '#8B5CF6', '#EC4899'];

export function ObjectDetector() {
  const videoRef = useRef<HTMLVideoElement>(null);
  const canvasRef = useRef<HTMLCanvasElement>(null);
  const modelRef = useRef<cocoSsd.ObjectDetection | null>(null);
  const rafRef = useRef<number>(0);
  const streamRef = useRef<MediaStream | null>(null);

  const [status, setStatus] = useState<'idle' | 'loading' | 'running' | 'error'>('idle');
  const [detections, setDetections] = useState<cocoSsd.DetectedObject[]>([]);
  const [errorMsg, setErrorMsg] = useState('');

  const drawDetections = useCallback((preds: cocoSsd.DetectedObject[]) => {
    const canvas = canvasRef.current;
    const video = videoRef.current;
    if (!canvas || !video) return;
    const ctx = canvas.getContext('2d')!;
    // Match the canvas to the video's intrinsic dimensions so bbox
    // pixel coordinates line up with the displayed frame
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    ctx.clearRect(0, 0, canvas.width, canvas.height);

    preds.forEach((pred, i) => {
      const [x, y, w, h] = pred.bbox;
      const color = COLORS[i % COLORS.length];
      ctx.strokeStyle = color;
      ctx.lineWidth = 3;
      ctx.strokeRect(x, y, w, h);
      ctx.fillStyle = color;
      ctx.fillRect(x, y - 22, w, 22);
      ctx.fillStyle = '#fff';
      ctx.font = '14px sans-serif';
      ctx.fillText(`${pred.class} ${(pred.score * 100).toFixed(0)}%`, x + 4, y - 6);
    });
  }, []);

  const detect = useCallback(async () => {
    if (!modelRef.current || !videoRef.current || videoRef.current.readyState < 2) {
      rafRef.current = requestAnimationFrame(detect);
      return;
    }
    const preds = await modelRef.current.detect(videoRef.current);
    drawDetections(preds);
    setDetections(preds);
    rafRef.current = requestAnimationFrame(detect);
  }, [drawDetections]);

  const startCamera = useCallback(async () => {
    setStatus('loading');
    try {
      await tf.setBackend('webgl');
      modelRef.current = await cocoSsd.load();
      const stream = await navigator.mediaDevices.getUserMedia({ video: true });
      streamRef.current = stream;
      videoRef.current!.srcObject = stream;
      await videoRef.current!.play();
      setStatus('running');
      rafRef.current = requestAnimationFrame(detect);
    } catch (err) {
      setStatus('error');
      setErrorMsg(err instanceof Error ? err.message : 'Camera or model error');
    }
  }, [detect]);

  const stopCamera = useCallback(() => {
    cancelAnimationFrame(rafRef.current);
    streamRef.current?.getTracks().forEach((t) => t.stop());
    streamRef.current = null;
    setStatus('idle');
    setDetections([]);
  }, []);

  // Stop the stream and detection loop on unmount
  useEffect(() => () => stopCamera(), [stopCamera]);

  return (
    <div className="max-w-2xl mx-auto p-6 space-y-4">
      <h2 className="text-2xl font-bold">Real-Time Object Detection</h2>
      <p className="text-sm text-gray-500">TensorFlow.js COCO-SSD runs in your browser, no API key needed</p>

      <div className="relative bg-black rounded-lg overflow-hidden" style={{ aspectRatio: '16/9' }}>
        <video ref={videoRef} className="w-full h-full object-cover" muted playsInline />
        <canvas ref={canvasRef} className="absolute inset-0 w-full h-full" />
        {status === 'idle' && (
          <div className="absolute inset-0 flex items-center justify-center">
            <button onClick={startCamera} className="bg-blue-600 text-white px-6 py-3 rounded-lg text-lg">
              Start Camera
            </button>
          </div>
        )}
        {status === 'loading' && (
          <div className="absolute inset-0 flex items-center justify-center bg-black/50">
            <p className="text-white">Loading COCO-SSD model...</p>
          </div>
        )}
      </div>

      {status === 'error' && <p className="text-red-600">Error: {errorMsg}</p>}

      {status === 'running' && (
        <button onClick={stopCamera} className="w-full border border-red-300 text-red-600 py-2 rounded hover:bg-red-50">
          Stop Camera
        </button>
      )}

      {detections.length > 0 && (
        <div>
          <h3 className="font-semibold mb-2">Detected ({detections.length})</h3>
          <div className="flex flex-wrap gap-2">
            {detections.map((d, i) => (
              <span key={i} className="px-3 py-1 bg-blue-100 text-blue-800 rounded-full text-sm">
                {d.class} {(d.score * 100).toFixed(0)}%
              </span>
            ))}
          </div>
        </div>
      )}
    </div>
  );
}

Pro tip: The COCO-SSD model file (~25MB) is cached after the first download. Real-time detection typically runs at 10-30 FPS on a modern laptop using the WebGL backend. On slower devices, increase the detection interval from every frame to every 5 frames using a frame counter to maintain smooth video without dropping the camera feed.
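A sketch of that frame-skipping pattern, adapted from the detect loop above (DETECT_EVERY is an assumed tuning constant):

typescript
import * as cocoSsd from '@tensorflow-models/coco-ssd';

const DETECT_EVERY = 5; // run the model on every 5th frame
let frameCount = 0;

async function detectLoop(
  model: cocoSsd.ObjectDetection,
  video: HTMLVideoElement,
  draw: (preds: cocoSsd.DetectedObject[]) => void
) {
  if (frameCount % DETECT_EVERY === 0 && video.readyState >= 2) {
    const preds = await model.detect(video);
    draw(preds); // boxes persist on the canvas between detections
  }
  frameCount++;
  requestAnimationFrame(() => detectLoop(model, video, draw));
}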

Expected result: Clicking 'Start Camera' requests webcam permission, loads COCO-SSD, and begins real-time object detection with colored bounding boxes drawn on the canvas overlay. Detected objects appear as tags below the video.

4

Add Custom Model Loading from URL or File

Beyond the pre-trained model packages, TensorFlow.js can load any model in TensorFlow SavedModel format (converted to TF.js format) or the native TFJS layers format from a URL. This enables you to use custom models trained in Python TensorFlow/Keras, models from TensorFlow Hub, or models from Hugging Face's TFJS model files — all without a server. Custom TFJS models are described by a model.json file plus one or more shard files (group1-shard1of1.bin, etc.). Host these on any static file server, CDN, or directly in your Bolt project's public/ folder.

Load with tf.loadLayersModel('path/to/model.json') for Keras Sequential and Functional models, or tf.loadGraphModel('path/to/model.json') for SavedModel-style conversions. If the model was trained in Python, convert it using the tensorflowjs_converter CLI tool (pip install tensorflowjs, then tensorflowjs_converter --input_format=tf_saved_model ./saved_model ./tfjs_model).

Custom model inference requires knowing your model's input shape and output format. Preprocess inputs to match the training data's normalization: if training images were normalized to [-1, 1], divide by 127.5 and subtract 1, as in the sketch below. The tf.tidy() function is critical for custom model inference — it automatically disposes intermediate tensors and prevents memory leaks in long-running TF.js applications. Without tf.tidy(), each inference call creates tensors that accumulate until the browser runs out of WebGL memory. Even pre-trained model packages benefit from wrapping their calls in tf.tidy() for long sessions.
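For image models, preprocessing typically means converting pixels to a tensor, resizing to the model's input resolution, normalizing, and adding a batch dimension. A sketch assuming a hypothetical model trained on 224x224 inputs normalized to [-1, 1] (your model's actual size and normalization may differ):

typescript
import * as tf from '@tensorflow/tfjs';

function preprocess(img: HTMLImageElement): tf.Tensor4D {
  return tf.tidy(() => {
    const pixels = tf.browser.fromPixels(img);                  // [h, w, 3], values 0-255
    const resized = tf.image.resizeBilinear(pixels, [224, 224]); // assumed input size
    const normalized = resized.div(127.5).sub(1);               // scale to [-1, 1]
    return normalized.expandDims(0) as tf.Tensor4D;             // [1, 224, 224, 3]
  });
}

// Usage, assuming a loaded model:
// const output = tf.tidy(() => model.predict(preprocess(img)) as tf.Tensor);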

Bolt.new Prompt

Add a custom model loader to my TensorFlow.js app. Create a component CustomModelRunner that: loads a TF.js model from a URL entered by the user (using tf.loadLayersModel(url)), shows model input/output shape information after loading, accepts a text input for entering feature values as comma-separated numbers, runs prediction on submit, and displays the raw output tensor as formatted JSON. Wrap all inference in tf.tidy() to prevent memory leaks. Show memory stats (tf.memory()) below the results.


components/CustomModelRunner.tsx
// components/CustomModelRunner.tsx
'use client';
import { useState, useRef } from 'react';
import * as tf from '@tensorflow/tfjs';

// tf.memory() reports numBytesInGPU only on the WebGL backend,
// so widen the type for display purposes.
type MemStats = tf.MemoryInfo & { numBytesInGPU?: number };

export function CustomModelRunner() {
  const modelRef = useRef<tf.LayersModel | null>(null);
  const [modelUrl, setModelUrl] = useState('');
  const [featureInput, setFeatureInput] = useState('');
  const [modelInfo, setModelInfo] = useState<string | null>(null);
  const [prediction, setPrediction] = useState<number[] | null>(null);
  const [memStats, setMemStats] = useState<MemStats | null>(null);
  const [loading, setLoading] = useState(false);

  const loadModel = async () => {
    if (!modelUrl) return;
    setLoading(true);
    try {
      await tf.setBackend('webgl');
      const model = await tf.loadLayersModel(modelUrl);
      modelRef.current = model;
      const inputShape = model.inputs[0].shape.join(' x ');
      const outputShape = model.outputs[0].shape.join(' x ');
      setModelInfo(`Input: [${inputShape}] | Output: [${outputShape}]`);
    } catch (err) {
      setModelInfo(`Load error: ${err instanceof Error ? err.message : 'Failed'}`);
    } finally {
      setLoading(false);
    }
  };

  const runPrediction = () => {
    if (!modelRef.current || !featureInput) return;
    try {
      const features = featureInput.split(',').map(Number);
      // Drop the batch dimension from the model's declared input shape
      const inputShape = modelRef.current.inputs[0].shape.slice(1) as number[];
      const result = tf.tidy(() => {
        const tensor = tf.tensor2d([features], [1, ...inputShape]);
        const output = modelRef.current!.predict(tensor) as tf.Tensor;
        return Array.from(output.dataSync());
      });
      setPrediction(result);
      setMemStats(tf.memory() as MemStats);
    } catch (err) {
      setPrediction(null);
      setModelInfo(`Prediction error: ${err instanceof Error ? err.message : 'Failed'}`);
    }
  };

  return (
    <div className="max-w-lg mx-auto p-6 space-y-4">
      <h2 className="text-2xl font-bold">Custom TF.js Model</h2>
      <div className="flex gap-2">
        <input value={modelUrl} onChange={(e) => setModelUrl(e.target.value)}
          placeholder="https://example.com/model.json" className="flex-1 border rounded px-3 py-2" />
        <button onClick={loadModel} disabled={loading}
          className="bg-blue-600 text-white px-4 py-2 rounded disabled:opacity-50">
          {loading ? 'Loading...' : 'Load'}
        </button>
      </div>
      {modelInfo && <p className="text-sm font-mono bg-gray-100 p-2 rounded">{modelInfo}</p>}
      <input value={featureInput} onChange={(e) => setFeatureInput(e.target.value)}
        placeholder="Feature values, comma-separated (e.g. 0.5, 1.2, 3.0)" className="w-full border rounded px-3 py-2" />
      <button onClick={runPrediction} disabled={!modelRef.current}
        className="w-full bg-green-600 text-white py-2 rounded disabled:opacity-50">
        Run Prediction
      </button>
      {prediction && (
        <div className="bg-gray-50 rounded p-3">
          <p className="font-semibold text-sm">Output:</p>
          <pre className="text-xs mt-1">{JSON.stringify(prediction, null, 2)}</pre>
          {memStats && <p className="text-xs text-gray-500 mt-2">Tensors: {memStats.numTensors} | GPU bytes: {(memStats.numBytesInGPU ?? 0).toLocaleString()}</p>}
        </div>
      )}
    </div>
  );
}

Pro tip: Always wrap TensorFlow.js inference code in tf.tidy() to automatically clean up intermediate tensors. Without it, each prediction call leaks WebGL memory — in a real-time detection loop this causes the browser to slow down and eventually crash after a few hundred inferences.

Expected result: Users can paste a URL to any TFJS model.json file, load it, enter comma-separated feature values, and run predictions. Model input/output shapes are displayed, and memory usage is shown after each prediction.

Common use cases

Real-Time Object Detection with Webcam

Build a live object detection demo that accesses the user's webcam and identifies objects in real time using the COCO-SSD model. Display bounding boxes and labels over the video feed. Runs entirely in the browser with no server calls — works in Bolt's WebContainer preview and in production without any backend.

Bolt.new Prompt

Build a real-time object detection app using TensorFlow.js and the COCO-SSD model. Install @tensorflow/tfjs and @tensorflow-models/coco-ssd. Create a React component with a webcam feed (using the browser's getUserMedia API) and an overlaid canvas. Every 200ms, run coco-ssd detection on the current video frame and draw bounding boxes with class labels and confidence scores on the canvas. Show a status message during model loading ('Loading model...'). Add a list below the video showing all currently detected objects and their confidence scores. Everything runs in the browser — no API keys needed.


Image Classification with File Upload

Build an image classification tool where users upload photos and the MobileNet model identifies what's in them from 1,000 ImageNet categories. Show the top 5 predictions with confidence bars. Users can analyze photos of objects, animals, plants, and more without any API costs — the model runs locally in their browser.

Bolt.new Prompt

Build an image classifier using TensorFlow.js MobileNet. Install @tensorflow/tfjs and @tensorflow-models/mobilenet. Create a React page with: a file upload button that accepts images, a preview of the uploaded image, and a results section showing the top 5 MobileNet predictions with class name and confidence as a percentage bar. Load MobileNet model using mobilenet.load() when the component mounts and show a loading indicator. When a user uploads an image, convert it to an HTMLImageElement and run model.classify(imgElement, 5) to get predictions. Show results instantly — everything runs client-side with no server or API key.


Semantic Text Search with Sentence Embeddings

Build a semantic search feature using Universal Sentence Encoder to convert text into vector embeddings and find similar content. Users type a search query; the app computes cosine similarity between the query embedding and pre-embedded documents, returning semantically similar results even when keywords don't match. Runs entirely client-side.
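A minimal sketch of the core similarity computation, assuming @tensorflow/tfjs and @tensorflow-models/universal-sentence-encoder are installed (the document list is hypothetical):

typescript
import * as tf from '@tensorflow/tfjs';
import * as use from '@tensorflow-models/universal-sentence-encoder';

const docs = [
  'How to train a puppy',
  'Best pasta recipes for beginners',
  'An introduction to machine learning',
];

async function semanticSearch(query: string) {
  const model = await use.load(); // in a real app, load once and cache
  const docEmb = await model.embed(docs);      // [nDocs, 512]
  const queryEmb = await model.embed([query]); // [1, 512]

  // Cosine similarity = dot product of L2-normalized vectors.
  const scores = tf.tidy(() => {
    const d = docEmb.div(docEmb.norm('euclidean', 1, true));
    const q = queryEmb.div(queryEmb.norm('euclidean', 1, true));
    return tf.matMul(q, d, false, true).dataSync(); // [1, nDocs]
  });
  docEmb.dispose();
  queryEmb.dispose();

  return docs
    .map((text, i) => ({ text, score: scores[i] }))
    .sort((a, b) => b.score - a.score);
}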

Bolt.new Prompt

Build a semantic search demo using TensorFlow.js Universal Sentence Encoder. Install @tensorflow/tfjs and @tensorflow-models/universal-sentence-encoder. Create a list of 10 example documents (sentences about different topics). When the app loads, use the USE model to embed all documents. When a user types in a search box, embed the query and compute cosine similarity against all document embeddings using tf.matMul. Show the top 3 most similar documents ranked by similarity score. Add a similarity score badge (0-100%) next to each result. Everything should run in the browser — no server required.


Troubleshooting

Model loading fails with 'Failed to fetch' or CORS error when loading from a URL

Cause: The server hosting your model files does not include CORS headers (Access-Control-Allow-Origin: *). TensorFlow.js loads models using fetch(), which enforces CORS. Third-party CDNs or servers without CORS configuration will fail.

Solution: Host model files on a CORS-enabled server. Options: (1) Place them in your Bolt project's public/ folder and load from '/models/model.json' — same-origin requests don't have CORS restrictions. (2) Use TensorFlow Hub (tfhub.dev) which supports CORS. (3) Use a CDN like jsDelivr or unpkg that adds CORS headers. (4) Copy model files into your project and import them.

typescript
// Load from your own project's public folder — no CORS issues
const model = await tf.loadLayersModel('/models/my-model/model.json');

// Or from TensorFlow Hub (CORS-enabled). TF Hub TFJS exports are graph
// models, so load them with loadGraphModel rather than loadLayersModel.
const mobileNet = await tf.loadGraphModel(
  'https://tfhub.dev/google/tfjs-model/imagenet/mobilenet_v2_100_224/feature_vector/2/default/1',
  { fromTFHub: true }
);

WebGL backend initialization fails or model runs very slowly in Bolt's WebContainer preview

Cause: WebGL may be disabled or limited in the WebContainer's browser context due to browser security settings, a virtualized GPU in a VM, or hardware acceleration being turned off in the browser's settings. Without WebGL, TensorFlow.js falls back to CPU computation, which is 10-50x slower.

Solution: Explicitly set and verify the backend after initialization. If WebGL is unavailable, fall back to WASM for better CPU performance than the pure JavaScript backend. Install @tensorflow/tfjs-backend-wasm for the WASM fallback.

typescript
import * as tf from '@tensorflow/tfjs';
import '@tensorflow/tfjs-backend-wasm';

try {
  await tf.setBackend('webgl');
  await tf.ready();
  console.log('Using WebGL backend:', tf.getBackend());
} catch {
  console.log('WebGL unavailable, falling back to WASM');
  await tf.setBackend('wasm');
  await tf.ready();
}

Memory usage grows continuously during real-time detection until the browser crashes

Cause: TensorFlow.js tensors created during inference are not being disposed. Each call to model.predict() or model.detect() creates intermediate tensors that must be explicitly cleaned up. Without disposal, tensors accumulate in WebGL memory.

Solution: Wrap all inference code in tf.tidy() — this automatically disposes all tensors created inside the callback except the returned value. For tensors you explicitly create, call .dispose() after use. Monitor memory with tf.memory().numTensors to verify cleanup.

typescript
// Correct: use tf.tidy() to auto-dispose intermediate tensors
const result = tf.tidy(() => {
  const input = tf.tensor2d(features, [1, features.length]);
  const output = model.predict(input) as tf.Tensor;
  // input and output tensors are disposed automatically when tidy returns
  return Array.from(output.dataSync());
});

// Wrong: tensors leak
const input = tf.tensor2d(features, [1, features.length]); // never disposed
const output = model.predict(input) as tf.Tensor; // never disposed
Best practices

  • Load TensorFlow.js models on component mount (not on demand) so they're ready when users interact — show a loading indicator during the initial model download and use the browser cache for subsequent visits
  • Always wrap TensorFlow.js inference code in tf.tidy() to prevent WebGL memory leaks — this is the single most important performance practice for long-running TF.js applications
  • Use tf.setBackend('webgl') explicitly before loading models to ensure GPU acceleration rather than relying on auto-detection, which may default to CPU in some environments
  • For real-time video detection, limit inference to every 3-5 frames (using a frame counter) rather than every requestAnimationFrame call — this maintains smooth video at 60fps while still running detection at 10-20fps
  • Choose the right model size for your use case — MobileNet V2 is fast and small; EfficientNet is more accurate but slower. Pre-trained COCO-SSD is sufficient for most object detection needs without custom training
  • Host model files in your project's public/ folder or on a CORS-enabled CDN to avoid cross-origin fetch failures when loading model weights
  • Use TensorFlow.js for use cases where privacy matters — medical image processing, personal photos, documents — since computation stays in the user's browser and data never leaves their device


Frequently asked questions

Does TensorFlow.js work in Bolt.new's WebContainer?

Yes — TensorFlow.js is one of the best AI integrations for Bolt.new because it runs entirely in the browser using WebGL (GPU) or WASM (CPU). Unlike other AI services that require server-side API routes, TensorFlow.js needs no credentials, no API calls, and no deployment to test. You can run image classification, object detection, and text analysis fully in the WebContainer preview.

What pre-trained TensorFlow.js models are available?

The @tensorflow-models/* package family provides: MobileNet (image classification, 1,000 categories), COCO-SSD (object detection, 80 objects), PoseNet and BlazePose (human body keypoints), BlazeFace (face detection), Universal Sentence Encoder (text embeddings for semantic search), Toxicity (text content classification), and Handpose (hand skeleton detection). All are installed via npm and load model weights from CDN on first use.

What is the difference between TensorFlow.js and calling the OpenAI API from Bolt?

TensorFlow.js runs inference in the browser — no API key, no server cost, works offline, and data never leaves the user's device. OpenAI's API runs powerful GPT models on OpenAI's servers — requires a secret API key (server-side only), charges per token, requires internet connectivity, and supports far more complex language understanding. Use TensorFlow.js for vision tasks, semantic search, or privacy-sensitive applications. Use OpenAI for natural language generation, reasoning, and complex question answering.

How do I prevent memory leaks in TensorFlow.js?

Wrap all inference code in tf.tidy() — it automatically disposes intermediate tensors created inside the callback. For tensors you create explicitly, call .dispose() when done. Monitor memory with tf.memory().numTensors to check for leaks. Real-time detection loops without tf.tidy() will exhaust WebGL memory within minutes and cause the browser to slow down or crash.

Can TensorFlow.js use GPU acceleration in Bolt.new?

Yes — TensorFlow.js uses the WebGL backend for GPU acceleration in browsers, which works in Bolt's WebContainer preview. Call tf.setBackend('webgl') before loading models to ensure GPU acceleration. You can verify the active backend with tf.getBackend(). On laptops and desktops with discrete GPUs, WebGL inference is typically 5-20x faster than CPU.
