To use MongoDB with Bolt.new, you cannot use the standard mongoose or mongodb native drivers — they require TCP sockets, which WebContainers don't support. The solution is the MongoDB Atlas Data API (HTTP-based REST endpoints) for development in Bolt's preview. Once you export and deploy to Netlify or Vercel, standard mongoose works normally. Alternatively, Supabase (PostgreSQL via HTTP) requires no workarounds.
The MongoDB WebContainer Problem — and How to Solve It
MongoDB is the world's most popular NoSQL database, but connecting to it from Bolt.new requires understanding a fundamental WebContainer constraint. The standard mongoose ODM and the official mongodb Node.js driver both establish TCP socket connections to your MongoDB Atlas cluster. Bolt's WebContainer — a Node.js environment running inside a browser tab — cannot open raw TCP sockets. When you try to install mongoose and connect, you'll see a timeout or a 'MongoNetworkError: connect ETIMEDOUT' because the connection attempt fails at the networking layer.
MongoDB Atlas provides a solution: the Data API, which exposes your database through HTTP REST endpoints instead of TCP. Every CRUD operation becomes a standard fetch() call with JSON request/response bodies. This HTTP-based approach works perfectly inside Bolt's WebContainer. The trade-off is a more verbose API compared to mongoose's elegant model-based interface — instead of User.find({age: {$gt: 18}}), you make a POST request to the Atlas Data API findMany endpoint with a filter object.
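As a concrete sketch of that trade-off, here is the same query expressed both ways. The helper below only assembles the JSON body a Data API 'find' call expects; the cluster name 'Cluster0' and database 'mydb' are placeholder assumptions, not values from this guide.

```typescript
// mongoose style (TCP, works only after deployment):
//   const adults = await User.find({ age: { $gt: 18 } });
//
// Data API style (HTTPS, works in Bolt's WebContainer):
// buildFindPayload builds the body for POST {baseUrl}/action/find.
function buildFindPayload(
  collection: string,
  filter: Record<string, unknown>
): Record<string, unknown> {
  return {
    dataSource: 'Cluster0', // placeholder cluster name
    database: 'mydb',       // placeholder database name
    collection,
    filter,
  };
}

const payload = buildFindPayload('users', { age: { $gt: 18 } });
console.log(JSON.stringify(payload));
// → {"dataSource":"Cluster0","database":"mydb","collection":"users","filter":{"age":{"$gt":18}}}
```

The query semantics are identical; only the transport and verbosity differ.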
This dev-vs-deploy distinction is important to understand: the TCP limitation applies only during development in Bolt's WebContainer. Once you export your project and deploy to Netlify, Vercel, or any real server, the full mongoose driver works normally via TCP. So you have two options: implement the Data API for development and keep it for production (simpler), or implement the Data API for development and migrate to mongoose after deployment (more powerful, better TypeScript types). Alternatively, if you don't have an existing MongoDB dataset to migrate, Supabase provides PostgreSQL with an HTTP-based client that works seamlessly in Bolt without any workarounds.
Integration method
Standard MongoDB drivers (mongoose, mongodb) use TCP sockets and fail in Bolt's WebContainer during development. MongoDB Atlas provides an HTTP-based Data API that works in WebContainers through standard fetch calls. You create a Next.js API route that proxies Data API requests, keeping your Atlas API key server-side. After deployment, you can optionally migrate to the standard mongoose driver since deployed servers support TCP connections.
Prerequisites
- A MongoDB Atlas account with a free M0 cluster (the free tier supports the Data API)
- The MongoDB Atlas Data API enabled in your cluster settings (Atlas UI → Data API tab)
- An Atlas Data API key generated from the Data API settings page
- Your Atlas cluster's App ID (shown in the Data API settings, format: data-abcde)
- A Bolt.new project using Next.js for the API routes that will proxy Data API requests
Step-by-step guide
Enable MongoDB Atlas Data API
The Atlas Data API is not enabled by default; you need to activate it for your cluster before you can make HTTP requests.
1. Log into MongoDB Atlas at cloud.mongodb.com.
2. In the left sidebar, click 'Data API' under the Services section.
3. On the Data API page, click 'Enable the Data API,' then choose which data sources (clusters) to enable it for and select your cluster.
4. Once enabled, you'll see your App Services URL — this is the base URL for all Data API requests, in the format https://data.mongodb-api.com/app/{App-ID}/endpoint/data/v1.
5. Create an API key by clicking 'Create API Key.' Give it a name (like 'bolt-dev') and copy the key immediately — it's shown only once.
6. Store both your App ID and API key somewhere safe, and note the exact database and collection names you plan to query; the Data API requires them in every request.
Pro tip: The MongoDB Atlas free tier (M0) supports the Data API with up to 1 million reads and 500K writes per day — more than enough for development and most production apps.
Expected result: The Atlas Data API is enabled. You have your App Services URL (containing your App ID) and an API key. You can see your data source linked to the Data API in the Atlas dashboard.
Configure Environment Variables
Store your Atlas Data API credentials in your Bolt project's .env file. These are server-side credentials that must never be exposed to the browser. In your Next.js project, environment variables without a NEXT_PUBLIC_ prefix are only accessible in API routes and server components — they never get bundled into client JavaScript. You'll need three variables: the Data API URL (which includes your App ID), your API key, and the database name. The collection name can be hardcoded in your API routes or stored as an additional environment variable if you have multiple collections. After adding these to .env, prompt Bolt to create a MongoDB utility module that initializes a reusable API client — this prevents you from repeating the Atlas URL and API key in every API route.
Create a .env file with MONGODB_DATA_API_URL, MONGODB_API_KEY, and MONGODB_DATABASE_NAME variables. Then create a lib/mongodb.ts utility file that exports a mongoDBClient object with methods for findMany, findOne, insertOne, updateOne, and deleteOne. Each method should call the Atlas Data API using fetch with the correct headers and body format. Include TypeScript generics so the methods return typed results.
Paste this in Bolt.new chat
```
# .env
MONGODB_DATA_API_URL=https://data.mongodb-api.com/app/data-abcde/endpoint/data/v1
MONGODB_API_KEY=your-atlas-api-key-here
MONGODB_DATABASE_NAME=your-database-name
```
Expected result: The .env file exists with your real Atlas credentials. The lib/mongodb.ts utility module compiles without TypeScript errors.
Build the MongoDB Data API Client
The Atlas Data API communicates via JSON over HTTPS. Each operation (find, insert, update, delete) corresponds to a specific HTTP endpoint. Every request includes your API key in the apiKey header, and the request body specifies the data source, database, and collection. The response is JSON containing a document or documents array. Building a typed TypeScript client that wraps these HTTP calls makes your API routes much cleaner and reusable across your application. The client you build here works identically in the Bolt WebContainer preview and in production — it's just fetch calls, nothing environment-specific.
```typescript
// lib/mongodb.ts
const DATA_API_URL = process.env.MONGODB_DATA_API_URL!;
const API_KEY = process.env.MONGODB_API_KEY!;
const DATABASE = process.env.MONGODB_DATABASE_NAME!;

const headers = {
  'Content-Type': 'application/json',
  'apiKey': API_KEY,
};

async function dataApiRequest<T>(
  action: string,
  collection: string,
  body: Record<string, unknown>
): Promise<T> {
  const response = await fetch(`${DATA_API_URL}/action/${action}`, {
    method: 'POST',
    headers,
    body: JSON.stringify({
      dataSource: 'Cluster0', // your cluster name
      database: DATABASE,
      collection,
      ...body,
    }),
  });
  if (!response.ok) {
    const error = await response.text();
    throw new Error(`MongoDB Data API error: ${error}`);
  }
  return response.json();
}

export const mongodb = {
  async findMany<T>(
    collection: string,
    filter = {},
    options: { sort?: Record<string, 1 | -1>; limit?: number } = {}
  ): Promise<T[]> {
    const result = await dataApiRequest<{ documents: T[] }>('find', collection, {
      filter,
      sort: options.sort,
      limit: options.limit,
    });
    return result.documents;
  },

  async findOne<T>(collection: string, filter: Record<string, unknown>): Promise<T | null> {
    const result = await dataApiRequest<{ document: T | null }>('findOne', collection, { filter });
    return result.document;
  },

  async insertOne(collection: string, document: Record<string, unknown>): Promise<{ insertedId: string }> {
    return dataApiRequest('insertOne', collection, { document });
  },

  async updateOne(
    collection: string,
    filter: Record<string, unknown>,
    update: Record<string, unknown>
  ): Promise<{ matchedCount: number; modifiedCount: number }> {
    return dataApiRequest('updateOne', collection, { filter, update });
  },

  async deleteOne(collection: string, filter: Record<string, unknown>): Promise<{ deletedCount: number }> {
    return dataApiRequest('deleteOne', collection, { filter });
  },
};
```
Pro tip: Replace 'Cluster0' with your actual cluster name — it's visible in the Atlas dashboard and in the Data API settings page.
Expected result: The lib/mongodb.ts file compiles without errors. The typed methods are available for use in API routes with IDE autocomplete.
Create CRUD API Routes
Now build the API routes that your React frontend will call. Each route uses the mongodb utility from lib/mongodb.ts to perform database operations. Because the utility handles the Atlas Data API HTTP calls internally, your route code looks clean and focused on business logic. The routes run server-side within Bolt's WebContainer Node.js process, so they can access environment variables and make outbound HTTP calls to Atlas without any CORS restrictions. Your React components on the client side call these API routes using normal fetch requests to relative URLs like /api/products. This two-layer architecture (client → your API route → Atlas Data API) properly keeps credentials server-side and gives you a place to add authentication, validation, and business logic.
Using the mongodb utility from lib/mongodb.ts, create a Next.js API route at app/api/items/route.ts with GET (list all items, with optional ?category= query param filter) and POST (create new item) handlers. Also create app/api/items/[id]/route.ts with GET (single item by _id), PUT (update item), and DELETE handlers. Use proper HTTP status codes and error handling.
Paste this in Bolt.new chat
```typescript
// app/api/items/route.ts
import { NextResponse } from 'next/server';
import { mongodb } from '@/lib/mongodb';

export async function GET(request: Request) {
  try {
    const { searchParams } = new URL(request.url);
    const category = searchParams.get('category');
    const filter = category ? { category } : {};

    const items = await mongodb.findMany('items', filter, {
      sort: { createdAt: -1 },
      limit: 100,
    });
    return NextResponse.json({ items });
  } catch (error) {
    return NextResponse.json({ error: 'Failed to fetch items' }, { status: 500 });
  }
}

export async function POST(request: Request) {
  try {
    const body = await request.json();
    const result = await mongodb.insertOne('items', {
      ...body,
      createdAt: new Date().toISOString(),
    });
    return NextResponse.json(result, { status: 201 });
  } catch (error) {
    return NextResponse.json({ error: 'Failed to create item' }, { status: 500 });
  }
}
```
Pro tip: The Atlas Data API speaks EJSON, not native BSON ObjectId types. When filtering by _id, wrap the string ID in an $oid object: { _id: { $oid: id } }.
Expected result: GET /api/items returns a JSON array of documents from your MongoDB collection. POST /api/items with a JSON body creates a new document and returns the insertedId.
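On the client side, a React component would call this route with a plain fetch to a relative URL. The sketch below is illustrative: buildItemsUrl is a hypothetical helper (only the /api/items path and the ?category= query param come from this guide), shown separately so the URL construction is easy to verify.

```typescript
// Build the relative URL the client fetches; the category filter is optional.
function buildItemsUrl(category?: string): string {
  const params = new URLSearchParams();
  if (category) params.set('category', category);
  const qs = params.toString();
  return qs ? `/api/items?${qs}` : '/api/items';
}

// Hypothetical client-side loader; same-origin, so no CORS concerns.
async function loadItems(category?: string): Promise<unknown[]> {
  const res = await fetch(buildItemsUrl(category));
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  const { items } = await res.json();
  return items;
}

console.log(buildItemsUrl('books')); // → /api/items?category=books
```

A component would typically call loadItems from an effect or an event handler and store the result in state.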
After Deployment: Optional Migration to Mongoose
Once your app is deployed to a real server (Netlify serverless functions, Vercel serverless functions, or a Node.js server), you're no longer constrained by the WebContainer's TCP limitation. You can optionally migrate from the Atlas Data API to mongoose for a richer development experience: schema validation, middleware hooks, populate for references, a cleaner query API, and better TypeScript integration through typed schemas. The migration involves installing mongoose, creating model files, and replacing the Data API fetch calls in lib/mongodb.ts with mongoose model methods. Your API routes barely change since they're already abstracted behind the mongodb utility. Whether to migrate is a product decision: if your Data API implementation is working well in production, there's no urgent need to change it. Migrate if you need features like mongoose middleware, complex aggregations, or you're onboarding other developers who know mongoose better.
```typescript
// lib/mongoose.ts — alternative to Data API for deployed environments
import mongoose from 'mongoose';

const MONGODB_URI = process.env.MONGODB_URI!; // Full connection string with TCP

if (!MONGODB_URI) {
  throw new Error('Please define MONGODB_URI in .env');
}

// Cache the connection on the Node global so it survives serverless warm
// starts and Next.js hot reloads. (In strict TypeScript, declare the global
// or cast as below.)
let cached = (global as any).mongoose;
if (!cached) {
  cached = (global as any).mongoose = { conn: null, promise: null };
}

export async function connectDB() {
  if (cached.conn) return cached.conn;
  if (!cached.promise) {
    cached.promise = mongoose.connect(MONGODB_URI);
  }
  cached.conn = await cached.promise;
  return cached.conn;
}
```
Pro tip: Only use this mongoose connection approach in deployed environments. The standard mongoose TCP driver will time out and fail in Bolt's WebContainer during development.
Expected result: After deploying and adding the MONGODB_URI connection string to your hosting environment variables, mongoose connects successfully. Your existing MongoDB data is accessible through mongoose models with full type safety.
Common use cases
Migrating an Existing MongoDB App to Bolt
You have an existing application with MongoDB Atlas as its database and want to rebuild the frontend in Bolt.new while keeping your existing data. The Atlas Data API lets you connect to your existing cluster during Bolt development, query your existing collections, and prototype the new UI without changing your database structure.
I have a MongoDB Atlas database with a collection called 'products' containing documents with fields: name (string), price (number), category (string), inStock (boolean), and description (string). Create an API route at /api/products that uses the MongoDB Atlas Data API to fetch all products with optional filtering by category. Also create a products listing page that displays the products in a grid with category filter buttons.
Copy this prompt to try it in Bolt.new
Flexible Schema Content Management
Build a content management system where different content types have different fields — MongoDB's schema-less documents are perfect for this. Blog posts, portfolio items, and case studies can all live in one collection with different fields without database migrations.
Create a simple CMS with a MongoDB Atlas backend using the Data API. The app should have two sections: an admin page where you can create content items with a title, content type (dropdown: blog-post, portfolio-item, case-study), and a flexible JSON fields section for type-specific data. Use /api/content as the API route wrapping Atlas Data API calls. Show a public-facing page that renders each content type differently based on its type field.
Copy this prompt to try it in Bolt.new
Real-Time Activity Feed
Store user activity events as MongoDB documents with timestamps and event metadata. MongoDB's document model is ideal for storing heterogeneous event data where each event type has different fields. Query recent events efficiently using Atlas Data API's sorting and limiting capabilities.
Add a user activity feed to my app using MongoDB Atlas. Create an /api/activity API route that can POST new activity events (with fields: userId, eventType, metadata object, timestamp) and GET recent activities for a user sorted by timestamp descending, limited to 50. Activity types include: 'login', 'purchase', 'profile-update', 'content-view'. Show the activity feed in a scrollable list with icons for each event type.
Copy this prompt to try it in Bolt.new
Troubleshooting
MongoNetworkError: connect ETIMEDOUT or connection refused when using mongoose in Bolt
Cause: The mongoose and mongodb native drivers use TCP sockets to connect to Atlas. Bolt's WebContainer cannot open raw TCP connections — this is a fundamental architectural constraint, not a configuration error.
Solution: Switch to the MongoDB Atlas Data API (HTTP-based) for development in Bolt. The Data API is the only supported way to use MongoDB during Bolt development. Alternatively, switch to Supabase which provides HTTP-based PostgreSQL access with zero configuration required in Bolt.
Atlas Data API returns 401 Unauthorized: 'Invalid API key'
Cause: The MONGODB_API_KEY environment variable is incorrect, expired, or not being passed in the API key header.
Solution: Verify the API key in Atlas → Data API → API Keys. Make sure the header in your fetch calls uses the exact key name 'apiKey' (not 'Authorization' or 'x-api-key'). Regenerate the key if needed and update your .env file. Restart the Bolt dev server after editing .env.
```typescript
// Correct header format for Atlas Data API:
const headers = {
  'Content-Type': 'application/json',
  'apiKey': process.env.MONGODB_API_KEY!, // exact header name: 'apiKey'
};
```
Data API returns 'cannot filter on _id' or ObjectId-related errors
Cause: MongoDB's _id field is a BSON ObjectId, not a plain string. The Data API requires EJSON extended JSON format to handle ObjectId values in filters.
Solution: When filtering by _id, wrap the ID value in an $oid object using EJSON format. Do not pass the raw string ID directly in the _id field.
```typescript
// Wrong:
const item = await mongodb.findOne('items', { _id: id });

// Correct — use EJSON $oid for ObjectId filtering:
const item = await mongodb.findOne('items', { _id: { $oid: id } });
```
Webhooks from MongoDB Atlas triggers never arrive during Bolt development
Cause: Atlas Database Triggers can call HTTP endpoints when documents change. Bolt's WebContainer has no public URL, so Atlas cannot send trigger events to your dev environment.
Solution: Deploy your app first, then configure Atlas Database Triggers to call your deployed endpoint URL. Use polling (periodic /api/sync calls) during development to simulate trigger behavior, then switch to actual Atlas Triggers after deployment.
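The polling fallback can be sketched with a small generic helper. This is a hedged sketch: the /api/sync path comes from the troubleshooting note above, while the poll helper, its maxTicks bound, and the renderFeed callback are illustrative assumptions.

```typescript
// Repeatedly call a fetcher and hand each result to a callback. maxTicks
// bounds the loop so the sketch terminates; in a real app you would poll
// until the component unmounts or the user navigates away.
async function poll<T>(
  fetchOnce: () => Promise<T>,
  onUpdate: (data: T) => void,
  intervalMs = 5000,
  maxTicks = Number.POSITIVE_INFINITY
): Promise<void> {
  for (let tick = 0; tick < maxTicks; tick++) {
    onUpdate(await fetchOnce());
    if (tick < maxTicks - 1) {
      await new Promise((resolve) => setTimeout(resolve, intervalMs));
    }
  }
}

// During development (hypothetical endpoint, stands in for Atlas Triggers):
// poll(() => fetch('/api/sync').then((r) => r.json()), renderFeed, 5000);
```

After deployment, delete the poll call and point an Atlas Database Trigger at your production endpoint instead.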
Best practices
- Use the Atlas Data API only for Bolt development — it has higher latency than native drivers (150-300ms per request vs 5-30ms for TCP). After deployment, evaluate migrating to mongoose for better performance
- Centralize all Atlas Data API calls in lib/mongodb.ts rather than writing raw fetch calls in each API route — this makes a future migration to mongoose much easier
- Always validate and sanitize user inputs before passing them to MongoDB queries to prevent NoSQL injection attacks
- Use MongoDB Atlas's IP Access List to restrict database access to only your deployed server IPs after going live — remove the 0.0.0.0/0 open access used during development
- Consider Supabase (PostgreSQL) instead of MongoDB for new projects in Bolt — it has native HTTP support via PostgREST, better TypeScript integration, and no Data API workarounds needed
- Store only the Atlas Data API credentials (URL + API key) in environment variables, not your full MongoDB connection string — the connection string is only needed for post-deployment mongoose migration
- Use MongoDB Atlas's built-in search indexes for full-text search rather than regex queries — regex over the Data API is slow and expensive
- Handle ObjectId conversion consistently using the $oid EJSON format for all _id field filters to avoid type mismatch errors
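The input-validation bullet above can be sketched as a small sanitizer that strips operator keys from user-supplied filter values, so JSON like { "$gt": "" } cannot smuggle MongoDB operators into a query. This is an illustrative minimum, assuming untrusted input arrives as parsed JSON; it is not a complete defense.

```typescript
// Recursively drop any object key beginning with '$' (a MongoDB operator)
// from user-supplied data before it reaches a query filter.
function sanitizeFilterValue(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(sanitizeFilterValue);
  if (value && typeof value === 'object') {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>)
        .filter(([key]) => !key.startsWith('$'))
        .map(([key, v]) => [key, sanitizeFilterValue(v)])
    );
  }
  return value;
}

console.log(JSON.stringify(sanitizeFilterValue({ name: 'a', price: { $gt: 0 } })));
// → {"name":"a","price":{}}
```

Apply this to request bodies in your API routes before building a filter; server-constructed operators (like the $oid wrapper for _id) should be added after sanitization, never taken from the client.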
Alternatives
Firestore is also a document database that works via HTTPS in Bolt, with no Data API workaround needed — the Firebase JS SDK uses HTTP and WebSocket natively.
MySQL faces the same TCP limitation in Bolt's WebContainer as MongoDB, but PlanetScale's MySQL-compatible HTTP API provides a similar workaround to Atlas Data API.
Supabase provides PostgreSQL access through its HTTP-based PostgREST API and is the most seamlessly integrated database for Bolt.new with native support built in.
Frequently asked questions
Does mongoose work in Bolt.new?
No. Mongoose requires a TCP socket connection to connect to MongoDB Atlas, and Bolt's WebContainer cannot open raw TCP connections. You'll see a MongoNetworkError or connection timeout when attempting to use mongoose in the Bolt preview. The workaround is MongoDB Atlas Data API (HTTP-based) for development. After deploying to a real server like Netlify or Vercel, mongoose works normally via TCP.
Is the MongoDB Atlas Data API available on the free tier?
Yes. The Atlas Data API is available on M0 (free tier) clusters with generous limits — up to 1 million reads and 500K writes per day. This is sufficient for development and most production workloads. The Data API is enabled per data source (cluster) in the Atlas console under the Data API section.
Should I keep the Data API in production or migrate to mongoose?
Both options are valid. The Data API has higher latency (150-300ms vs 5-30ms for native TCP) but requires no additional dependencies. If your app isn't latency-sensitive or handles modest traffic, staying with the Data API in production is fine. Migrate to mongoose if you need lower latency, mongoose middleware hooks, complex aggregations, or strong TypeScript model typing.
Is Supabase a better choice than MongoDB for Bolt.new projects?
For most new Bolt projects, yes. Supabase's @supabase/supabase-js client is built on HTTP and works natively in Bolt without any Data API workarounds. It provides PostgreSQL with realtime subscriptions, auth, storage, and edge functions — all deeply integrated with Bolt. Choose MongoDB if you have existing MongoDB data to migrate, need its flexible document model, or prefer NoSQL semantics.
Can MongoDB Atlas triggers and change streams work with Bolt?
Atlas Database Triggers (which call HTTP endpoints) require a deployed URL to deliver events — Bolt's WebContainer has no public address. Change streams via the native driver won't work at all in WebContainers due to the TCP limitation. For real-time updates during development, use polling. After deployment, configure Atlas Triggers with your production API route URL.