To back up a Supabase database, use pg_dump with your project's connection string to create a full SQL dump. Pro plan users also get automatic daily backups through the Dashboard, with point-in-time recovery available as an add-on. For scheduled backups, set up a cron job that runs pg_dump and stores the output in a secure location. Always test restoring from your backups to verify they work before you need them.
Backing Up Your Supabase Database with pg_dump and Dashboard Tools
Your Supabase project is a full PostgreSQL database, which means you can use standard PostgreSQL backup tools like pg_dump. This tutorial covers three backup strategies: manual pg_dump for on-demand backups, Supabase's built-in automatic backups for Pro+ plans, and scheduled cron jobs for automated recurring backups. You will also learn how to verify your backups work by testing a restore.
Prerequisites
- A Supabase project with data you want to back up
- PostgreSQL client tools installed (pg_dump, psql) — install via brew install libpq or apt-get install postgresql-client
- Your database connection string from Dashboard > Settings > Database
- Terminal/command line access
Step-by-step guide
Find your database connection string
Go to your Supabase Dashboard and navigate to Settings > Database. Copy the connection string under the Connection string section. Select the URI format. This string contains your host, port, database name, and credentials. For pg_dump, use a session-level connection on port 5432 (the direct connection or the session-mode pooler), not the transaction-mode pooler on port 6543, which does not support the session state that pg_dump relies on.
```bash
# Connection string format:
# postgresql://postgres.[project-ref]:[password]@aws-0-[region].pooler.supabase.com:5432/postgres

# Find it in Dashboard > Settings > Database > Connection string > URI
```

Expected result: You have the direct database connection string copied and ready to use.
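For orientation, Supabase exposes several connection variants that differ only in host and port. The hostnames below are the common patterns at the time of writing and are illustrative; always copy the exact string from your own Dashboard:

```shell
# Direct connection (IPv6 by default); works with pg_dump:
#   postgresql://postgres:[password]@db.[project-ref].supabase.co:5432/postgres

# Session-mode pooler (IPv4-friendly); also works with pg_dump:
#   postgresql://postgres.[project-ref]:[password]@aws-0-[region].pooler.supabase.com:5432/postgres

# Transaction-mode pooler; do NOT use with pg_dump:
#   postgresql://postgres.[project-ref]:[password]@aws-0-[region].pooler.supabase.com:6543/postgres
```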
Create a manual backup with pg_dump
Run pg_dump with your connection string to create a full SQL dump of your database. The --clean flag adds DROP statements before the CREATE statements, so the dump can be restored over a database that already contains the objects. The --if-exists flag prevents errors when those objects do not exist during restore. Pipe the output to a file with a timestamp for easy identification.
```bash
# Full database backup to a timestamped SQL file
pg_dump "postgresql://postgres.[ref]:[password]@aws-0-[region].pooler.supabase.com:5432/postgres" \
  --clean \
  --if-exists \
  --no-owner \
  --no-privileges \
  > backup_$(date +%Y%m%d_%H%M%S).sql

# Compressed backup (recommended for large databases)
pg_dump "postgresql://postgres.[ref]:[password]@aws-0-[region].pooler.supabase.com:5432/postgres" \
  --clean \
  --if-exists \
  --no-owner \
  --no-privileges \
  | gzip > backup_$(date +%Y%m%d_%H%M%S).sql.gz

# Schema-only backup (no data)
pg_dump "postgresql://postgres.[ref]:[password]@aws-0-[region].pooler.supabase.com:5432/postgres" \
  --schema-only \
  > schema_$(date +%Y%m%d_%H%M%S).sql
```

Expected result: A .sql or .sql.gz backup file is created in your current directory.
Back up specific schemas or tables
You can back up specific schemas or tables instead of the entire database. This is useful when you only need to back up your application data (public schema) without system schemas. You can also back up individual tables if you need a quick snapshot of specific data.
```bash
# Back up only the public schema
pg_dump "postgresql://postgres.[ref]:[password]@aws-0-[region].pooler.supabase.com:5432/postgres" \
  --schema=public \
  --clean \
  --if-exists \
  > public_schema_backup.sql

# Back up a specific table
pg_dump "postgresql://postgres.[ref]:[password]@aws-0-[region].pooler.supabase.com:5432/postgres" \
  --table=public.orders \
  --data-only \
  > orders_data_backup.sql

# Back up storage metadata (bucket configs and policies)
pg_dump "postgresql://postgres.[ref]:[password]@aws-0-[region].pooler.supabase.com:5432/postgres" \
  --schema=storage \
  > storage_schema_backup.sql
```

Expected result: Targeted backup files are created for the specified schemas or tables.
Use automatic daily backups on Pro plans
Supabase Pro plan and above includes automatic daily backups with 7-day retention. Go to Dashboard > Database > Backups to view available backups and their timestamps. Point-in-time recovery (PITR) is available as a paid add-on on Pro plans and above, and lets you restore your database to any point within its retention window. These backups are managed by Supabase and require no configuration.
```bash
# No code needed — automatic backups are managed by Supabase
# View backups: Dashboard > Database > Backups
# PITR: available as a paid add-on on Pro plans and above
# To restore: use the Dashboard restore button or contact Supabase support
```

Expected result: Automatic backups are visible in the Dashboard with timestamps and restore options.
Schedule automated backups with cron
For production systems, set up a cron job that runs pg_dump on a schedule. This script creates compressed backups, retains only the last 7 days of backups, and can optionally upload to cloud storage. Save the script and add it to your system's crontab.
```bash
#!/bin/bash
# backup-supabase.sh — Run daily via cron

DB_URL="postgresql://postgres.[ref]:[password]@aws-0-[region].pooler.supabase.com:5432/postgres"
BACKUP_DIR="/home/user/backups/supabase"
RETENTION_DAYS=7

mkdir -p "$BACKUP_DIR"

# Create compressed backup
pg_dump "$DB_URL" --clean --if-exists --no-owner --no-privileges \
  | gzip > "$BACKUP_DIR/backup_$(date +%Y%m%d_%H%M%S).sql.gz"

# Remove backups older than retention period
find "$BACKUP_DIR" -name "backup_*.sql.gz" -mtime +"$RETENTION_DAYS" -delete

echo "Backup completed: $(date)"

# Add to crontab (run daily at 2 AM):
# crontab -e
# 0 2 * * * /home/user/scripts/backup-supabase.sh >> /home/user/logs/backup.log 2>&1
```

Expected result: Automated backups run daily, creating compressed SQL files with automatic cleanup of old backups.
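The find expression that enforces retention is easy to get subtly wrong, so it is worth sanity-checking it against dummy files before pointing it at real backups. Everything below runs in a throwaway temp directory (touch -d is GNU coreutils; on macOS use touch -t instead):

```shell
#!/usr/bin/env bash
# Dry-run the retention expression on dummy files instead of real backups.
set -euo pipefail

TEST_DIR=$(mktemp -d)
RETENTION_DAYS=7

# Simulate one fresh backup and one 10-day-old backup
touch "$TEST_DIR/backup_fresh.sql.gz"
touch -d "10 days ago" "$TEST_DIR/backup_old.sql.gz"

# Same expression as the cron script
find "$TEST_DIR" -name "backup_*.sql.gz" -mtime +"$RETENTION_DAYS" -delete

ls "$TEST_DIR"   # only backup_fresh.sql.gz should remain
rm -rf "$TEST_DIR"
```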
Complete working example
```bash
#!/bin/bash
# Supabase Database Backup Script
# Schedule with cron: 0 2 * * * /path/to/backup-supabase.sh

set -euo pipefail

# Configuration
DB_URL="${SUPABASE_DB_URL:?'Set SUPABASE_DB_URL environment variable'}"
BACKUP_DIR="${BACKUP_DIR:-/home/user/backups/supabase}"
RETENTION_DAYS="${RETENTION_DAYS:-7}"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)

# Create backup directory
mkdir -p "$BACKUP_DIR"

echo "[$(date)] Starting Supabase backup..."

# Full compressed backup
pg_dump "$DB_URL" \
  --clean \
  --if-exists \
  --no-owner \
  --no-privileges \
  --format=custom \
  --file="$BACKUP_DIR/backup_${TIMESTAMP}.dump"

# Verify the backup file is not empty
BACKUP_SIZE=$(stat -f%z "$BACKUP_DIR/backup_${TIMESTAMP}.dump" 2>/dev/null || stat --printf="%s" "$BACKUP_DIR/backup_${TIMESTAMP}.dump")
if [ "$BACKUP_SIZE" -lt 1000 ]; then
  echo "[$(date)] ERROR: Backup file suspiciously small ($BACKUP_SIZE bytes)"
  exit 1
fi

echo "[$(date)] Backup created: backup_${TIMESTAMP}.dump ($BACKUP_SIZE bytes)"

# Clean up old backups
DELETED=$(find "$BACKUP_DIR" -name "backup_*.dump" -mtime +"$RETENTION_DAYS" -delete -print | wc -l)
echo "[$(date)] Cleaned up $DELETED old backup(s)"

# Optional: Upload to S3 or cloud storage
# aws s3 cp "$BACKUP_DIR/backup_${TIMESTAMP}.dump" \
#   "s3://my-backups/supabase/backup_${TIMESTAMP}.dump"

echo "[$(date)] Backup complete"
```

Common mistakes when backing up a Supabase Database
Common mistake: Using the transaction pooler URL (port 6543) for pg_dump instead of a session-level connection (port 5432)
How to avoid: pg_dump requires a session-level connection to PostgreSQL. Use port 5432, not 6543. Check your connection string in Dashboard > Settings > Database.
Common mistake: Never testing backup restores, then discovering the backup is corrupted or incomplete when you need it
How to avoid: Regularly test restores to a local Supabase instance (supabase start, then psql < backup.sql) or a separate test project. A backup you have not tested is not a real backup.
Common mistake: Storing backup files or database credentials in a public or unencrypted location
How to avoid: Store backups in an encrypted location with restricted access. Encrypt backup files with gpg if storing on disk. Never commit backup files or connection strings to version control.
Best practices
- Run pg_dump with --clean --if-exists --no-owner --no-privileges for portable, restorable backups
- Use compressed format (--format=custom or gzip) for large databases to save storage space
- Maintain your own pg_dump backups even if you have Supabase's automatic daily backups on Pro plans
- Test restoring from backups regularly to verify integrity — at least once per month
- Store backups in a different location than your database (separate cloud provider, different region)
- Use environment variables for database credentials in backup scripts, never hardcode them
- Keep a minimum of 7 days of backup retention, with longer retention for critical production databases
- Log backup operations with timestamps so you can audit when backups ran and whether they succeeded
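The logging recommendation above can be met with a small helper function so every line in the cron log carries a sortable UTC timestamp. The function name log is just a suggested convention:

```shell
#!/usr/bin/env bash
# Prefix every log line with a UTC ISO-8601 timestamp.
log() {
  printf '[%s] %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$*"
}

log "Backup started"
# ... run pg_dump here ...
log "Backup finished"
```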
Still stuck?
Copy one of these prompts to get a personalized, step-by-step explanation.
I need to set up automated daily backups for my Supabase PostgreSQL database. Show me a bash script that runs pg_dump, compresses the output, uploads it to AWS S3, and cleans up backups older than 30 days. Include error handling and logging.
Create a backup strategy for my Supabase project that includes: a pg_dump script for daily full backups, a separate schema-only backup for version control, and instructions for testing the restore process on a local Supabase instance.
Frequently asked questions
Does Supabase automatically back up my database?
Yes, but only on Pro plans and above. Free plan projects do not have automatic backups. Pro plan includes daily backups with 7-day retention; point-in-time recovery is available as a paid add-on. Regardless of your plan, you should maintain your own pg_dump backups.
Can I use pg_dump on the free plan?
Yes. pg_dump works with any Supabase plan because it connects directly to your PostgreSQL database. The connection string is available in Dashboard > Settings > Database regardless of your plan.
How large can a pg_dump backup file get?
It depends on your data. A database with 1 million rows might produce a 500 MB SQL file. Using --format=custom or gzip compression typically reduces the file size by 70-90%. For very large databases, consider backing up individual schemas or tables.
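The compression figure is easy to reproduce: SQL dumps are repetitive text, so gzip removes most of the redundancy. A self-contained illustration with synthetic INSERT statements (real ratios depend on your data):

```shell
#!/usr/bin/env bash
# Compare raw vs gzipped size of a synthetic, repetitive SQL dump.
set -euo pipefail

SAMPLE=$(mktemp)
for i in $(seq 1 5000); do
  echo "INSERT INTO public.orders (id, status, total) VALUES ($i, 'pending', 19.99);"
done > "$SAMPLE"

RAW=$(wc -c < "$SAMPLE")
GZ=$(gzip -c "$SAMPLE" | wc -c)
echo "raw: $RAW bytes, gzipped: $GZ bytes"
rm -f "$SAMPLE"
```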
Does pg_dump back up RLS policies and functions?
Yes. pg_dump includes table definitions, RLS policies, functions, triggers, indexes, and all other database objects. Use --schema-only if you want structure without data.
Can I restore a backup to a different Supabase project?
Yes. Use psql with the target project's connection string to restore a plain SQL dump: psql 'postgresql://...' < backup.sql. For custom-format dumps, use pg_restore --dbname 'postgresql://...' backup.dump instead. Create the backup with --no-owner and --no-privileges to avoid permission issues.
What about backing up Supabase Storage files?
pg_dump backs up storage metadata (bucket configurations, policies) but not the actual files stored in S3. For file backups, use the storage API to list and download files programmatically, or use the Supabase CLI.
Can RapidDev set up automated backups for my Supabase project?
Yes. RapidDev can configure automated backup pipelines including scheduled pg_dump scripts, cloud storage uploads, backup verification, and disaster recovery procedures for your Supabase database.
Talk to an Expert
Our team has built 600+ apps. Get personalized help with your project.
Book a free consultation