FlutterFlow does not have a built-in database backup button — backups are a Firebase infrastructure responsibility. For Firestore, the best setup is a daily automated backup using Cloud Scheduler and a Cloud Function that calls admin.firestore().exportDocuments() to a Cloud Storage bucket. Enable Firestore's Point-in-Time Recovery (PITR) as an additional safety net. Never rely only on manual exports — the gap between your last manual backup and a disaster is the data you permanently lose.
Automated daily Firestore exports, PITR, and verified restore procedures
Every FlutterFlow app with user data needs a database backup strategy. Firestore does not auto-backup by default — you need to set this up yourself. This tutorial covers four backup approaches in order of protection level: manual one-time export (good for migrations), automated daily Cloud Function export (essential baseline), Firestore PITR (premium safety net), and collection-level exports (for surgical recovery). It also covers the step most developers skip: testing the restore process. A backup you have never tested restoring is not a backup — it is hope.
Prerequisites
- Firebase project on the Blaze (pay-as-you-go) plan — Cloud Functions and Cloud Scheduler require Blaze
- Firebase Console access with Owner or Editor role on the project
- A Cloud Storage bucket in the same Firebase project — create one at console.firebase.google.com → Storage if not already present
- Basic familiarity with the Firebase Console interface
Step-by-step guide
Perform a manual Firestore export from the Google Cloud Console
Firestore's import/export UI lives in the Google Cloud Console rather than the Firebase Console. Log into console.cloud.google.com and select the Google Cloud project that backs your Firebase app. In the left navigation, open Firestore, then click Import/Export. Click Export. Choose a Cloud Storage destination: for a one-off export, the default Firebase bucket works (shown as gs://your-project.appspot.com, or gs://your-project.firebasestorage.app on newer projects). Add a folder path like backups/manual/2026-03-29. Leave Collection IDs blank to export the entire database. Click Export. The export runs as an asynchronous operation — you can monitor progress on the same Import/Export page under recent operations. A full export of a typical FlutterFlow app (under 100,000 documents) takes 1-5 minutes. The exported files appear in your Cloud Storage bucket as a folder containing a metadata file and data files.
Expected result: A backup folder appears in your Cloud Storage bucket containing Firestore export files, and the Import/Export page in the Google Cloud Console shows the operation completed successfully.
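The same manual export can be run from the command line. This is a minimal sketch using the gcloud CLI; the project id "your-project-id" and the default bucket name derived from it are placeholders you should substitute.

```shell
# Manual Firestore export via the gcloud CLI (project id is a placeholder).
PROJECT_ID="your-project-id"
DATE=$(date -u +%F)   # e.g. 2026-03-29
DEST="gs://${PROJECT_ID}.appspot.com/backups/manual/${DATE}"
echo "Export destination: ${DEST}"

# Runs only where gcloud is installed and authenticated (e.g. Cloud Shell):
if command -v gcloud >/dev/null 2>&1; then
  gcloud firestore export "${DEST}" --project="${PROJECT_ID}"
fi
```

The command returns immediately with an operation name; the export itself continues in the background, just like the console flow.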
Create a Cloud Storage bucket dedicated to backups
Using your default Firebase Storage bucket for backups mixes user-uploaded files with database exports, complicating lifecycle management. Create a separate bucket: go to Google Cloud Console at console.cloud.google.com → Cloud Storage → Buckets → Create. Name it your-project-id-firestore-backups. Set the location type to the same region as your Firestore database (check Firestore → Data for your database location). Set the default storage class to Nearline (designed for infrequently accessed data, with lower storage cost). Enabling object versioning is optional. Set a lifecycle rule to delete objects older than 30 days — this automatically prunes old backups without manual cleanup. Click Create. Finally, grant write access to the account that performs the export: give your Cloud Function's runtime service account (PROJECT_ID@appspot.gserviceaccount.com for gen1 functions, or the Compute Engine default service account for gen2) the Storage Admin role on this bucket, plus the Cloud Datastore Import Export Admin role on the project, via IAM & Admin.
Expected result: A dedicated backup Cloud Storage bucket exists with a 30-day lifecycle deletion rule, and the service account that runs the export has write access.
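The bucket and lifecycle rule can also be created from the CLI. A sketch, assuming project id "your-project-id" and region "us-central1" (substitute your own values and your actual Firestore location):

```shell
# Create the dedicated backup bucket with a 30-day deletion lifecycle rule.
PROJECT_ID="your-project-id"
BUCKET="gs://${PROJECT_ID}-firestore-backups"
REGION="us-central1"   # match your Firestore database location

# Lifecycle rule: delete backup objects older than 30 days.
cat > lifecycle.json <<'EOF'
{
  "rule": [
    { "action": { "type": "Delete" }, "condition": { "age": 30 } }
  ]
}
EOF

# Runs only where gcloud is installed and authenticated:
if command -v gcloud >/dev/null 2>&1; then
  gcloud storage buckets create "$BUCKET" \
    --project="$PROJECT_ID" \
    --location="$REGION" \
    --default-storage-class=NEARLINE
  gcloud storage buckets update "$BUCKET" --lifecycle-file=lifecycle.json
fi
```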
Build the automated daily backup Cloud Function with Cloud Scheduler
Create a Firebase Cloud Function named firestoreDailyBackup using the Cloud Scheduler trigger. Set the schedule to '0 2 * * *' (runs at 2:00 AM UTC daily, during low-traffic hours). In the function, build the destination URI using today's date for an organized folder structure: gs://your-project-id-firestore-backups/daily/YYYY-MM-DD. Call the Firestore Admin client's exportDocuments({ outputUriPrefix: destinationUri, collectionIds: [] }), where an empty collectionIds array means all collections. After the export call, write a document to a backup_logs collection in Firestore with the status, timestamp, destination URI, and any error message. Finally, set up an alert: if a backup_logs document shows status 'error', send an email notification via Firebase Extensions or a Firestore-triggered Cloud Function.
```javascript
const { onSchedule } = require('firebase-functions/v2/scheduler');
const admin = require('firebase-admin');

if (!admin.apps.length) admin.initializeApp();

exports.firestoreDailyBackup = onSchedule(
  {
    schedule: '0 2 * * *',
    timeZone: 'UTC',
    retryCount: 3
  },
  async () => {
    const db = admin.firestore();
    const now = new Date();
    const dateStr = now.toISOString().split('T')[0]; // YYYY-MM-DD

    // GCLOUD_PROJECT may be unset in some runtimes; fall back to the app config
    const projectId = process.env.GCLOUD_PROJECT || admin.app().options.projectId;
    const bucket = `${projectId}-firestore-backups`;
    const outputUri = `gs://${bucket}/daily/${dateStr}`;

    let status = 'success';
    let errorMessage = null;

    try {
      const client = new admin.firestore.v1.FirestoreAdminClient();
      const databaseName = client.databasePath(projectId, '(default)');
      const [operation] = await client.exportDocuments({
        name: databaseName,
        outputUriPrefix: outputUri,
        collectionIds: [] // empty = export all collections
      });
      console.log(`Backup started: operation ${operation.name}`);
    } catch (err) {
      status = 'error';
      errorMessage = err.message;
      console.error('Backup failed:', err);
    }

    // Log backup result to Firestore for monitoring
    await db.collection('backup_logs').add({
      date: dateStr,
      destinationUri: outputUri,
      status,
      errorMessage,
      createdAt: admin.firestore.FieldValue.serverTimestamp()
    });
  }
);
```
Expected result: The Cloud Function deploys successfully. The next morning after 2:00 AM UTC, a new folder appears in the backup bucket and a backup_logs document in Firestore confirms the backup status.
Enable Firestore Point-in-Time Recovery (PITR)
Firestore PITR is a managed feature (available on the Blaze plan) that retains document versions for the past 7 days, letting you recover data as of any point in that window without maintaining separate export files. To enable it, open the Google Cloud Console → Firestore → select your database → Disaster Recovery and turn on Point-in-Time Recovery (or use the gcloud CLI). PITR increases storage cost because Firestore retains the changed data for the retention window; the retained versions are billed at standard Firestore storage rates. Once enabled, you can recover a single document, a collection, or the entire database to any timestamp within the retention window using stale reads or a PITR export via the gcloud CLI and the Firestore API. PITR is not a replacement for daily exports — it only covers 7 days, and a ransomware attack or gradual data corruption discovered on day 8 is not recoverable. Use PITR and daily exports together.
Expected result: PITR shows as enabled in the database's Disaster Recovery settings, with the 7-day retention window active.
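PITR can also be enabled from the CLI. A sketch, assuming the "(default)" database and a placeholder project id:

```shell
# Enable Point-in-Time Recovery on the default Firestore database.
PROJECT_ID="your-project-id"
DB="(default)"
echo "Enabling PITR on ${DB} in ${PROJECT_ID}"

# Runs only where gcloud is installed and authenticated:
if command -v gcloud >/dev/null 2>&1; then
  gcloud firestore databases update \
    --database="$DB" \
    --enable-pitr \
    --project="$PROJECT_ID"
  # The describe output includes the retention settings and the earliest
  # version time you can currently read back to.
  gcloud firestore databases describe --database="$DB" --project="$PROJECT_ID"
fi
```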
Verify backups by restoring to a test project
A backup is only valuable if it restores successfully. At least once per quarter, test the restore process. Create a new Firebase project (free) named your-app-restore-test. In the Google Cloud Console for the test project, go to Firestore → Import/Export → Import. Enter the source path: gs://your-project-id-firestore-backups/daily/[most recent date]. (For a cross-project import, first grant the test project's service accounts read access to the backup bucket.) Click Import. After the import completes, verify in the test project's Firestore data browser that key collections (users, orders, etc.) contain the expected data with reasonable document counts. This confirms your backup files are valid and the restore process works. Delete the test project after verification to avoid ongoing costs. Document the restore procedure in a runbook so that in a real incident, the team can restore quickly under pressure.
Expected result: Test Firebase project successfully imports from the backup export. Key collections show document counts matching the production database snapshot.
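The console import above can also be driven from the CLI. A sketch with placeholder project ids; substitute the date of the export you actually want to restore:

```shell
# Restore a daily export into a separate test project via the gcloud CLI.
# Cross-project imports require the test project's service accounts to have
# read access to the backup bucket (set this up in IAM first).
PROD_PROJECT="your-project-id"
TEST_PROJECT="your-app-restore-test"
DATE=$(date -u +%F)   # substitute the date of the export to restore
SRC="gs://${PROD_PROJECT}-firestore-backups/daily/${DATE}"
echo "Import source: ${SRC}"

# Runs only where gcloud is installed and authenticated:
if command -v gcloud >/dev/null 2>&1; then
  gcloud firestore import "$SRC" --project="$TEST_PROJECT"
fi
```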
Complete working example
```javascript
const { onSchedule } = require('firebase-functions/v2/scheduler');
const admin = require('firebase-admin');

if (!admin.apps.length) admin.initializeApp();

// Automated daily Firestore backup
// Runs at 2:00 AM UTC every day
// Exports to gs://{project-id}-firestore-backups/daily/YYYY-MM-DD
exports.firestoreDailyBackup = onSchedule(
  {
    schedule: '0 2 * * *',
    timeZone: 'UTC',
    retryCount: 3,
    memory: '256MiB'
  },
  async () => {
    const db = admin.firestore();
    // GCLOUD_PROJECT may be unset in some runtimes; fall back to the app config
    const projectId = process.env.GCLOUD_PROJECT || admin.app().options.projectId;
    const now = new Date();
    const dateStr = now.toISOString().split('T')[0];

    const backupBucket = `${projectId}-firestore-backups`;
    const outputUri = `gs://${backupBucket}/daily/${dateStr}`;

    let status = 'success';
    let errorMessage = null;
    let operationName = null;

    try {
      const client = new admin.firestore.v1.FirestoreAdminClient();
      const databaseName = client.databasePath(projectId, '(default)');

      const [operation] = await client.exportDocuments({
        name: databaseName,
        outputUriPrefix: outputUri,
        collectionIds: [] // empty array = export ALL collections
      });

      operationName = operation.name;
      console.log(`Backup initiated: ${operationName}`);
      console.log(`Destination: ${outputUri}`);
    } catch (err) {
      status = 'error';
      errorMessage = err.message;
      console.error('Backup failed:', err.message);
    }

    // Log to Firestore for monitoring dashboard
    await db.collection('backup_logs').add({
      date: dateStr,
      destinationUri: outputUri,
      operationName,
      status,
      errorMessage,
      createdAt: admin.firestore.FieldValue.serverTimestamp()
    });
  }
);
```
Common mistakes when backing up your FlutterFlow database (Firestore)
Mistake: Only doing manual exports and going weeks or months between backups
Why it's a problem: Everything written after your last export is permanently lost if disaster strikes.
How to avoid: Automate backups on day one of production traffic. The Cloud Scheduler + Cloud Function setup in Step 3 takes under an hour to implement and runs indefinitely without intervention. Check the backup_logs collection weekly with a simple Firestore query to confirm backups are succeeding.
Mistake: Treating PITR as a full backup solution and not maintaining export files
Why it's a problem: PITR only covers the last 7 days, so any loss discovered later is unrecoverable.
How to avoid: Use PITR as a supplement for recent accidental changes (a document deleted 2 hours ago) and daily export files as the primary backup for disaster recovery. The two protect against different failure modes, and export files stored in Cloud Storage can be retained for months or years.
Mistake: Storing backup exports in the same Firebase project as the production database
Why it's a problem: If the project is deleted or compromised, the backups go down with it.
How to avoid: Export backups to a Cloud Storage bucket in a separate Google Cloud project. This requires setting up cross-project IAM permissions for the service account that runs the export. For maximum protection, place the backup bucket in a different geographic region than your production Firestore.
Best practices
- Test your backup restore process at least once before you need it — never assume a backup works without verifying the restore in a separate test project
- Build a simple monitoring dashboard in FlutterFlow on an admin page that queries the backup_logs collection to show the last 7 days of backup status — gives instant visibility without checking Cloud Console
- Set Cloud Storage lifecycle rules on the backup bucket to delete exports older than 90 days and automatically transition to Coldline storage after 30 days for cost optimization
- Schedule backups during off-peak hours (2-4 AM in your primary user timezone) to minimize impact on production read/write performance during the export operation
- After each backup, log the estimated document count from the Firestore database size metrics alongside the backup log — makes it easy to detect if a backup captured significantly fewer documents than expected
- Keep at least 3 months of daily backups in retention — regulatory requirements in many industries (healthcare, finance, legal) mandate specific data retention periods beyond what PITR covers
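The lifecycle policy recommended above (transition exports to Coldline after 30 days, delete after 90) can be expressed as a single JSON file and applied with gcloud. A sketch; the bucket name is a placeholder:

```shell
# Lifecycle policy: Coldline after 30 days, delete after 90 days.
cat > backup-lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": { "type": "SetStorageClass", "storageClass": "COLDLINE" },
      "condition": { "age": 30 }
    },
    {
      "action": { "type": "Delete" },
      "condition": { "age": 90 }
    }
  ]
}
EOF

# Runs only where gcloud is installed and authenticated:
if command -v gcloud >/dev/null 2>&1; then
  gcloud storage buckets update gs://your-project-id-firestore-backups \
    --lifecycle-file=backup-lifecycle.json
fi
```

Note that this replaces the simpler 30-day deletion rule from Step 2; pick one policy per bucket.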
Still stuck?
Copy one of these prompts to get a personalized, step-by-step explanation.
I have a FlutterFlow app with Firestore as the database. Write me: (1) A Firebase Cloud Function using the v2 scheduler API that runs daily at 2 AM UTC and exports the entire Firestore database to a Cloud Storage bucket, (2) How to set up a lifecycle policy on the backup bucket to delete exports older than 30 days, (3) How to enable Firestore Point-in-Time Recovery (PITR), and (4) How to test restoring from a Firestore export file to a separate Firebase test project.
Create an admin page in my FlutterFlow app that shows a table of the last 14 days of database backup results from the backup_logs Firestore collection. Each row should show the date, status (green check for success, red X for error), and destination URI. Add a refresh button that re-queries the collection.
Frequently asked questions
How much does automated Firestore backup storage cost?
Firestore export files stored in Google Cloud Storage cost approximately $0.02 per GB per month for Standard storage, $0.01 per GB for Nearline (30-day minimum), and $0.004 per GB for Coldline (90-day minimum). A typical FlutterFlow app database with 100,000 documents and 1KB average document size is about 100MB exported — roughly $0.002 per month for Standard storage. For 90 days of daily backups at 100MB each, total storage is 9GB, costing about $0.18/month. The export operation itself incurs standard Firestore read costs ($0.06/100K reads).
Does FlutterFlow's Supabase integration have automatic backups?
Mostly, yes. The Supabase Pro plan ($25/month) includes daily automated database backups with 7-day retention; point-in-time recovery is offered as a paid add-on rather than bundled with Pro. For FlutterFlow projects using Supabase instead of Firestore, daily backups are therefore handled automatically. You can access them in the Supabase Dashboard → Settings → Database → Backups. No Cloud Functions are needed for Supabase backup.
Can I back up specific collections instead of the entire Firestore database?
Yes. In the exportDocuments call, pass an array of collection IDs to the collectionIds parameter instead of an empty array. For example: collectionIds: ['users', 'orders', 'products'] exports only those three collections. This is useful when your database has large collections of logs or analytics data that do not need to be backed up with the same frequency as business-critical data.
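The same collection filtering is available from the CLI via the --collection-ids flag. A sketch with placeholder project id and example collection names:

```shell
# Export only selected collections (collection ids here are examples).
PROJECT_ID="your-project-id"
DATE=$(date -u +%F)
DEST="gs://${PROJECT_ID}-firestore-backups/partial/${DATE}"
echo "Partial export destination: ${DEST}"

# Runs only where gcloud is installed and authenticated:
if command -v gcloud >/dev/null 2>&1; then
  gcloud firestore export "$DEST" \
    --collection-ids='users','orders','products' \
    --project="$PROJECT_ID"
fi
```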
How long does a Firestore export take?
Export time depends on database size. A small database (under 10,000 documents) exports in under 30 seconds. A medium database (100,000-1,000,000 documents) takes 2-15 minutes. A large database (10+ million documents) can take over an hour. The export operation runs asynchronously — the Cloud Function initiates it and it continues running even after the function returns. Monitor completion in the Google Cloud Console under Firestore → Import/Export → recent operations.
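Long-running exports can also be watched from the CLI. A sketch with a placeholder project id:

```shell
# List recent Firestore import/export operations and their state.
PROJECT_ID="your-project-id"
echo "Listing Firestore operations for ${PROJECT_ID}"

# Runs only where gcloud is installed and authenticated:
if command -v gcloud >/dev/null 2>&1; then
  # Each entry shows the operation name, its state, and the
  # outputUriPrefix it is writing to.
  gcloud firestore operations list --project="$PROJECT_ID" --limit=10
fi
```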
What is the difference between a Firestore export and a Firestore backup?
In Firebase/Google Cloud terminology, 'export' (via exportDocuments) creates a point-in-time snapshot of your database written to Cloud Storage files you manage. 'Backup' (via PITR) is a managed Google service that stores change history automatically, allowing rollback to any point within the retention window without manual file management. Exports give you portable files you can import anywhere. PITR is faster for recent accidental changes but cannot be exported to another project.
Can RapidDev help set up automated backups and a monitoring dashboard for my FlutterFlow app?
Yes. The backup infrastructure setup (Cloud Scheduler, dedicated backup bucket with lifecycle rules, backup monitoring dashboard in FlutterFlow) is something RapidDev implements as a standard production readiness package for FlutterFlow apps moving from development to production. This includes configuring alerts when backups fail and documenting the restore procedure for your team.
Talk to an Expert
Our team has built 600+ apps. Get personalized help with your project.
Book a free consultation