Docker cannot run inside Bolt.new's WebContainer browser environment, but you can containerize your Bolt project after exporting it. Push your Bolt project to GitHub, clone it locally, and add a Dockerfile with a multi-stage build. Run the container locally to verify it works, then deploy the Docker image to AWS ECS, Google Cloud Run, Railway, or any container platform. The exported project is a standard Node.js app that works with Docker without modification.
Containerizing Bolt.new Projects with Docker for Production Deployment
Bolt.new's WebContainer is itself a containerization technology of sorts — it isolates your development environment inside a browser tab. But Docker containers are fundamentally different: they package a complete OS filesystem, application runtime, and code into a portable image that can be deployed to any server. You cannot run Docker inside a browser, and you cannot run Bolt's WebContainer on most Docker deployment platforms. The two technologies are complementary, not competing.
The workflow is clean and practical. You use Bolt for what it excels at — rapid AI-assisted UI development, component generation, and integration prototyping. When the app reaches a state ready for production-grade deployment, you export the code (via Bolt's GitHub integration), add Docker configuration files, and build a container image. The container can then be pushed to Docker Hub, AWS ECR, Google Container Registry, or any other container registry, and deployed to platforms like Railway, Render, Google Cloud Run, AWS ECS, or a self-managed Kubernetes cluster.
The key insight is that Bolt generates standard Vite or Next.js projects — well-understood frameworks with established Docker patterns. A multi-stage Dockerfile that builds the app in one stage and serves it in a smaller production stage is the standard approach. For Vite apps (which produce static files), you serve the output with nginx. For Next.js apps (which can include server-side rendering), you run a Node.js server. Both patterns are well-documented and the configurations below work with Bolt-generated code without modification.
Integration method
Docker operates entirely outside Bolt.new's browser-based environment — you cannot run Docker inside Bolt's WebContainer. The integration is a post-export workflow: use Bolt for AI-assisted development, push the code to GitHub, clone it locally, and add Docker configuration files. Docker then packages your Bolt-generated app for deployment to any container-compatible cloud platform. This approach gives Bolt projects access to the entire Docker ecosystem: container registries, orchestration platforms, and cloud deployment services.
Prerequisites
- A Bolt.new project pushed to a GitHub repository (use Bolt's Git panel to push to GitHub first)
- Docker Desktop installed on your local machine (download at docker.com/products/docker-desktop for Mac and Windows, or Docker Engine for Linux)
- Git installed locally to clone the repository from GitHub
- Basic understanding of terminal commands (cd, ls, running commands)
- A cloud platform account for deployment: Railway (railway.app), Render (render.com), Google Cloud, or AWS — all have free tiers that support Docker containers
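Before starting, you can confirm the local toolchain is in place with a quick check — each required tool is printed with its version, or flagged as missing:

```shell
# Verify the local toolchain before cloning the project.
for tool in git docker node npm; do
  if command -v "$tool" >/dev/null 2>&1; then
    printf '%s: %s\n' "$tool" "$("$tool" --version 2>/dev/null | head -n 1)"
  else
    printf '%s: NOT FOUND -- install it before continuing\n' "$tool"
  fi
done
```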
Step-by-step guide
Push Your Bolt Project to GitHub and Clone Locally
Docker configuration files need to be added to your project locally — you cannot run Docker commands from Bolt's WebContainer browser environment. The first step is getting your Bolt project onto your local machine through GitHub. In Bolt, open the Git panel and push your project to a GitHub repository if you have not already. Once the push completes, open a terminal on your local machine, clone the repository, navigate into the project directory, and install dependencies to verify the project is intact.

Next, confirm that the project builds successfully in a clean local environment by running the build command. This check matters because Bolt's WebContainer and a local Node.js installation can behave differently — TypeScript strict mode settings, peer dependency requirements, and environment variable handling may surface issues that were hidden in Bolt's environment. Fix any build errors before proceeding to Docker configuration.

Note a critical architectural difference that Docker resolves: in Bolt's WebContainer, native Node.js modules that require compilation (like bcrypt, sharp, or canvas) fail because WebContainers enforce a no-addons flag. In a Docker container running real Node.js on Linux, these modules compile and run normally. If your Bolt project had to use JavaScript alternatives (e.g., bcryptjs instead of bcrypt), you can switch back to the native version after adding Docker. If the app uses Supabase and only makes HTTP calls, no changes are needed — the exported code works as-is.
```shell
# Clone and set up the project locally:
git clone https://github.com/yourusername/your-bolt-project.git
cd your-bolt-project
npm install

# Verify the project builds without errors:
npm run build

# If build succeeds, you should see a dist/ directory (Vite)
# or a .next/ directory (Next.js)
ls -la
```

Pro tip: If npm run build fails locally but succeeded in Bolt's preview, the most common causes are: missing environment variables (create a local .env file with placeholder values), TypeScript strict mode errors that Bolt's dev server was tolerating, or Node.js version mismatches (check the package.json engines field and use nvm to switch versions).
Expected result: The project cloned successfully, npm install completed, and npm run build produced a dist/ (Vite) or .next/ (Next.js) directory. The project is ready for Docker configuration.
Create a Dockerfile for Your Bolt Project
The Dockerfile defines how to package your Bolt app into a container image. The multi-stage build pattern is the best practice for production containers: a separate build stage holds all the development dependencies, and the final stage contains only what is needed at runtime, producing a smaller, more secure image.

For Vite projects (Bolt's default when you ask for a React app), the output is a folder of static HTML, CSS, and JavaScript files. The most efficient way to serve these is with nginx, a high-performance web server. The build stage uses Node.js to run npm run build, and the final stage copies only the built files into an nginx container. The resulting image is typically 20-40 MB — much smaller than a Node.js-based server.

For Next.js projects (Bolt generates these when you request Next.js or server-side features), the app needs to run as a Node.js server since it may include server-side rendering or API routes. The multi-stage build installs dependencies, runs npm run build with standalone output mode enabled in next.config.js, and the final stage runs only the standalone server output.

The Vite Dockerfile below is production-ready for Bolt-generated projects; the Next.js version follows the same multi-stage structure. Choose based on your project type — check whether the project has a vite.config.ts (Vite) or a next.config.js (Next.js) file.
```dockerfile
# Dockerfile for Vite/React Bolt projects (static output served by nginx):

# Build stage
FROM node:18-alpine AS builder

WORKDIR /app

# Copy package files first for better layer caching
COPY package*.json ./
# npm ci installs devDependencies by default, which the build needs
RUN npm ci

# Copy source files and build
COPY . .
RUN npm run build

# Production stage
FROM nginx:alpine AS production

# Copy built files from build stage
COPY --from=builder /app/dist /usr/share/nginx/html

# Copy nginx config for SPA routing
COPY nginx.conf /etc/nginx/conf.d/default.conf

EXPOSE 80

CMD ["nginx", "-g", "daemon off;"]
```

Pro tip: Add a .dockerignore file to your project root to prevent node_modules/, .next/, dist/, and .env files from being copied into the Docker build context — this makes builds significantly faster and prevents accidentally including secrets.
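For Next.js projects, here is a sketch of the equivalent multi-stage Dockerfile, following the standard Next.js standalone Docker pattern. It assumes `output: 'standalone'` is enabled in next.config.js; the port and the presence of a public/ folder are assumptions about your project.

```dockerfile
# Dockerfile for Next.js Bolt projects (Node.js server, standalone output):

# Build stage
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
# Requires `output: 'standalone'` in next.config.js
RUN npm run build

# Production stage -- only the self-contained standalone server is copied
FROM node:18-alpine AS production
WORKDIR /app
ENV NODE_ENV=production
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
# Drop this line if your project has no public/ folder
COPY --from=builder /app/public ./public

EXPOSE 3000
CMD ["node", "server.js"]
```

Build it the same way (`docker build -t my-bolt-app .`); the container listens on port 3000 instead of nginx's port 80.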
Expected result: A Dockerfile exists in the project root. Running 'docker build -t my-bolt-app .' should complete without errors and produce a container image.
Add Supporting Configuration Files
The Dockerfile alone is not quite complete — Vite SPAs need a special nginx configuration for client-side routing to work, and all projects benefit from a .dockerignore file to speed up builds.

For Vite Bolt projects using React Router or Wouter for navigation, nginx needs to be configured to serve index.html for all routes, not just the root path. Without this, navigating directly to a URL like /dashboard returns a 404 from nginx. The nginx.conf below fixes this with a try_files directive that falls back to index.html for any unresolved path.

The .dockerignore file is conceptually similar to .gitignore — it prevents the listed directories and files from being sent to the Docker daemon as part of the build context. Including node_modules/ (typically 200-500 MB) in the build context makes every docker build command take minutes to upload, even if nothing changed. Excluding it ensures Docker only processes your source files.

For Next.js Bolt projects, you also need to update next.config.js to enable standalone output mode. This causes Next.js to emit a self-contained server in .next/standalone that can be deployed without installing node_modules — making the Docker image significantly smaller and faster to start.

After adding these files, commit them to your repository and push to GitHub. Bolt will see the new files the next time you pull from GitHub, so the Docker configuration becomes part of the Bolt project going forward.
```nginx
# nginx.conf for Vite SPA routing:
server {
    listen 80;
    server_name _;
    root /usr/share/nginx/html;
    index index.html;

    # Serve static files directly, fallback to index.html for SPA routing
    location / {
        try_files $uri $uri/ /index.html;
    }

    # Cache static assets for 1 year
    location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
    }

    # Don't cache HTML
    location ~* \.html$ {
        expires -1;
        add_header Cache-Control "no-store";
    }
}
```

```
# .dockerignore:
node_modules
dist
.next
build
.env
.env.local
.env.*.local
npm-debug.log*
.DS_Store
.git
.gitignore
README.md
*.md
```

Pro tip: Test the nginx configuration locally before deploying. Run 'docker build -t test-app .', then 'docker run -p 8080:80 test-app', and visit http://localhost:8080. Try navigating to a deep route like /dashboard to confirm the SPA fallback works.
Expected result: The nginx.conf and .dockerignore files are in the project root. Docker builds now skip node_modules and complete in 30-60 seconds. Vite SPA routes work correctly inside the container.
Add docker-compose for Local Development with a Database
If your Bolt project uses Supabase in production but you want to run a real database locally (outside Bolt's WebContainer limitations), docker-compose makes this straightforward. You can run your Bolt app container alongside a PostgreSQL container on your local machine, replicating the production architecture without any cloud services.

Important context: in Bolt's WebContainer, you cannot use raw TCP database connections — the standard pg npm package that connects directly to PostgreSQL fails because WebContainers cannot open TCP sockets. Bolt works around this by using Supabase's HTTP-based PostgREST interface. Once you have exported your project and are running it in Docker, real TCP connections work perfectly. Locally with Docker, you can therefore run a PostgreSQL container and connect to it with the pg package or Prisma directly — a workflow impossible in Bolt's browser environment.

The docker-compose.yml below defines two services: your Bolt app (built from the Dockerfile) and a PostgreSQL database. The app service depends on the database, so docker-compose starts PostgreSQL first. Environment variables pass the database connection string to the app, and a named volume persists the database data between container restarts so you do not lose your development data.

Note that this docker-compose setup is for local development only — it is not the recommended production deployment pattern. For production, use a managed database (Supabase, Neon, PlanetScale) and deploy the app container separately to your chosen cloud platform.
```yaml
# docker-compose.yml for local development:
version: '3.8'

services:
  app:
    build: .
    ports:
      - '3000:80'      # Vite nginx: 80
      # - '3000:3000'  # Next.js: 3000
    environment:
      - NODE_ENV=development
      - VITE_SUPABASE_URL=http://localhost:54321  # Local Supabase alternative
      # Or use Postgres directly (requires code changes from the Supabase HTTP client):
      - DATABASE_URL=postgresql://postgres:postgres@db:5432/myapp
    depends_on:
      db:
        condition: service_healthy
    volumes:
      - ./src:/app/src  # Mount src for hot-reloading (dev mode)

  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    ports:
      - '5432:5432'  # Expose for direct database client access
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U postgres']
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  postgres_data:
```

Pro tip: Run 'docker-compose up --build' to build and start both containers, or 'docker-compose up -d' to run in the background. Use 'docker-compose logs -f app' to tail the application logs. Stop everything with 'docker-compose down'.
Expected result: Running 'docker-compose up' starts both the app and database containers. The app is accessible at http://localhost:3000 and can make real TCP database connections to the PostgreSQL container.
Deploy the Docker Image to a Cloud Platform
With a working Dockerfile, deploying to cloud container platforms is straightforward. The most developer-friendly options for Bolt projects are Railway and Render, which connect directly to your GitHub repository and build the Docker image automatically on every push. This gives you the same automation as Bolt's publish button, but via Docker and with full container-platform capabilities.

For Railway: create a new project at railway.app, click 'Deploy from GitHub repo,' and select the repository; Railway automatically detects the Dockerfile and builds it. Add environment variables in Railway's 'Variables' tab (SUPABASE_URL, SUPABASE_ANON_KEY, and any other keys your app needs). Railway provides a public URL with automatic HTTPS, and the Starter plan is free with limited usage.

For Google Cloud Run (for higher traffic or team use): build and push the Docker image to Google Container Registry, then create a Cloud Run service pointing to that image. Cloud Run scales to zero when not in use, making it cost-effective for Bolt prototypes that do not receive constant traffic.

Regarding the WebContainer limitation and deployment: Bolt's browser-based development environment cannot receive incoming webhooks, run persistent background processes, or use raw TCP sockets. Once your app is deployed as a Docker container, all of these work normally. If your Bolt app integrates with services that require webhooks (Stripe, GitHub Apps, Twilio), register your Docker deployment URL as the webhook endpoint — not the Bolt preview URL. Deploy first, then configure the webhook URLs.
```shell
# Build and push Docker image to Docker Hub (then deploy anywhere):
docker build -t yourusername/your-bolt-app:latest .
docker push yourusername/your-bolt-app:latest

# Or build and deploy to Google Cloud Run:
gcloud auth login
gcloud config set project your-project-id

docker build -t gcr.io/your-project-id/bolt-app:latest .
docker push gcr.io/your-project-id/bolt-app:latest

gcloud run deploy bolt-app \
  --image gcr.io/your-project-id/bolt-app:latest \
  --platform managed \
  --region us-central1 \
  --allow-unauthenticated \
  --set-env-vars SUPABASE_URL=https://yourproject.supabase.co
```

Pro tip: For the fastest iteration cycle, use Railway's GitHub integration rather than manually building and pushing images. Railway watches your repository, detects the Dockerfile, and rebuilds automatically on every push — including pushes from Bolt's Git panel.
Expected result: Your Bolt app is running as a Docker container in the cloud with a public HTTPS URL. The deployment works independently of Bolt's browser environment, with full access to TCP networking, persistent storage, and webhook reception.
Common use cases
Deploying a Bolt App to Railway or Render
Add a Dockerfile to a Bolt project and connect the GitHub repository to Railway or Render for automated container deployments. Every push from Bolt to GitHub triggers a new Docker build and deployment, giving you a production-grade URL with automatic HTTPS and environment variable management.
Local Development with a Real Database Using docker-compose
Create a docker-compose.yml that runs your Bolt app container alongside a PostgreSQL container for local development. This replicates production conditions locally, letting you test database interactions without relying on a remote Supabase instance or worrying about the WebContainer's inability to use raw TCP database drivers.
Consistent Deployment Environment Across Team Members
Use Docker to ensure all team members and CI/CD pipelines build and run the Bolt app in an identical environment. Eliminate 'works on my machine' issues caused by different Node.js versions or local dependencies by defining the exact runtime in the Dockerfile.
Troubleshooting
Docker build fails with 'exec /usr/local/bin/docker-entrypoint.sh: exec format error' on M1/M2 Mac when deploying to Linux servers
Cause: The Docker image was built on Apple Silicon (ARM64 architecture) but the deployment target is an x86_64 Linux server. The two architectures are incompatible without explicit multi-platform builds.
Solution: Use Docker's buildx command to build for the target architecture (or for several architectures at once): 'docker buildx build --platform linux/amd64 -t your-image:latest --push .' creates an image compatible with standard x86_64 Linux cloud servers. Railway and Render typically avoid the problem entirely because they build the image in their own Linux environment from your Dockerfile.
```shell
# Multi-platform build for cloud deployment from Mac:
docker buildx create --use
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t yourusername/bolt-app:latest \
  --push .
```

The containerized Vite app returns 404 for all routes except the root path /
Cause: The default nginx configuration does not have a fallback for SPA routing. When a user navigates directly to /dashboard, nginx tries to find a file at that path, cannot find it, and returns 404 instead of serving index.html and letting React Router handle the route.
Solution: Add the nginx.conf file from Step 3 with the try_files $uri $uri/ /index.html directive. This tells nginx to serve index.html for any path that does not match a static file, allowing the React Router to handle client-side routing.
```nginx
# nginx.conf - add this location block:
location / {
    try_files $uri $uri/ /index.html;
}
```

Environment variables are undefined in the Bolt app after containerization, even though they are set in docker-compose.yml
Cause: Vite embeds VITE_* environment variables at build time (when npm run build runs), not at container runtime. Variables set in docker-compose.yml environment are available at runtime but Vite has already finished its build — so VITE_* vars must be present during the Docker build step, not just when the container starts.
Solution: For Vite projects: pass VITE_* variables as Docker build arguments (ARG in Dockerfile, --build-arg in docker build command) so they are available when Vite processes them during the build stage. For Next.js NEXT_PUBLIC_* variables, the same applies. Server-side environment variables (without a prefix) can be set at runtime in docker-compose.yml or on the cloud platform.
```dockerfile
# Dockerfile with build args for Vite variables:
FROM node:18-alpine AS builder
WORKDIR /app

# Declare build arguments
ARG VITE_SUPABASE_URL
ARG VITE_SUPABASE_ANON_KEY

# Make them available as environment variables during build
ENV VITE_SUPABASE_URL=$VITE_SUPABASE_URL
ENV VITE_SUPABASE_ANON_KEY=$VITE_SUPABASE_ANON_KEY

COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Build with args:
# docker build \
#   --build-arg VITE_SUPABASE_URL=https://xyz.supabase.co \
#   --build-arg VITE_SUPABASE_ANON_KEY=your-key \
#   -t my-bolt-app .
```

Best practices
- Always use multi-stage Dockerfile builds for Bolt projects — the builder stage with all node_modules (200-500 MB) is discarded, leaving a production image of 20-50 MB that starts faster and has a smaller attack surface.
- Add a .dockerignore file before your first docker build to exclude node_modules/ and .next/ from the build context — without it, the first build can take 5+ minutes just uploading files to the Docker daemon.
- Store sensitive environment variables (API keys, database passwords) as secrets in your cloud platform's environment configuration rather than baking them into the Docker image with ARG/ENV instructions.
- Use Railway or Render for Bolt project deployments if you are new to Docker — they handle the docker build and push automatically from GitHub without requiring local Docker commands.
- Test the Docker container locally with 'docker run -p 3000:80 your-image' before pushing to a cloud platform to catch configuration issues early, especially nginx routing for Vite SPAs.
- Pin specific versions in your Dockerfile base images (FROM node:18.17-alpine instead of FROM node:18-alpine) to prevent unexpected behavior when base images update.
- Commit the Dockerfile and .dockerignore to the GitHub repository that Bolt pushes to — this keeps the Docker configuration versioned alongside the Bolt-generated code and visible in Bolt's file tree.
Alternatives
CircleCI automates the Docker build and push process as part of a CI/CD pipeline, so you do not need to run docker build commands manually every time you push from Bolt to GitHub.
Jenkins can orchestrate Docker builds and deployments with more customization than cloud platforms, but requires self-hosted infrastructure and more configuration than Railway or Render.
For Vite Bolt apps that produce only static files, hosting on S3 with CloudFront is simpler and cheaper than Docker — no container infrastructure needed for static sites.
GitHub connects Bolt to Docker-based deployment platforms via repository webhooks — platforms like Railway and Render watch your GitHub repo and auto-build from the Dockerfile on every push.
Frequently asked questions
Can I run Docker inside Bolt.new's editor?
No — Docker requires a real operating system kernel to manage containers, and Bolt.new's WebContainer runs inside a browser tab. WebContainers virtualize Node.js using WebAssembly but cannot run Docker daemon processes or build container images. Docker is a post-export technology: use Bolt for development, then containerize the exported project for production deployment.
Will native Node.js modules like bcrypt or sharp work in a Docker container after failing in Bolt?
Yes — Docker runs on real Linux (or via Docker Desktop on your local machine), so native modules that require C/C++ compilation via node-gyp install and run correctly. Bolt's WebContainer enforces a --no-addons flag that blocks these modules. If you used JavaScript alternatives (bcryptjs, jimp) in your Bolt project to work around this limitation, you can switch to the native versions after containerization if performance is a concern.
Can I use Docker for webhooks that do not work in Bolt's preview?
Yes — once your Bolt app is running as a Docker container with a public URL, incoming webhooks work normally. Bolt's WebContainer cannot receive webhooks because it runs inside a browser tab with no public internet address. Deploy your Docker container to a cloud platform first, then register the deployment URL as the webhook endpoint in Stripe, GitHub, Twilio, or whichever service you are integrating with.
How do I update the Docker deployment after making changes in Bolt?
The recommended workflow is: make changes in Bolt, push to GitHub using Bolt's Git panel, and then either let your CI/CD pipeline (CircleCI, Railway auto-deploy) rebuild the image automatically, or run docker build and push manually. Railway and Render can be configured to automatically rebuild and redeploy the Docker image on every GitHub push, creating a seamless Bolt-to-container pipeline.
What is the difference between running my Bolt app in Docker locally versus in Bolt's preview?
Bolt's preview uses WebContainers (Node.js in a browser via WebAssembly) while Docker runs real Linux Node.js. The practical differences: Docker allows raw TCP database connections (so you can use pg or mongoose directly), supports native modules (bcrypt, sharp), can receive incoming webhooks if exposed, persists files between restarts with volumes, and runs the exact same environment as production. Bolt's preview is faster to iterate in but has the WebContainer constraints described above.