Build a bi-directional data tool in FlutterFlow by picking CSV files with the file_picker package, parsing them in a Custom Action, mapping fields to your Firestore schema, and writing rows in batch. Export to CSV or JSON via a Cloud Function. Batch writes cut 1000-row imports from 30+ seconds to under 5 seconds.
Why a Custom Import/Export Tool Matters
Most FlutterFlow apps start with data already living somewhere else — spreadsheets, legacy systems, or another app. Manually re-entering that data is error-prone and slow. A proper import tool lets non-technical users upload a CSV, preview the mapping, and push hundreds of records into Firestore in one tap. The export side closes the loop, letting users download their data at any time. FlutterFlow has no built-in file import widget, so you will build this with Custom Actions that call Dart packages directly, wired into a clean UI you assemble visually.
Prerequisites
- A FlutterFlow project on the Pro plan with code export enabled
- A Firebase project with Firestore configured and connected to FlutterFlow
- Basic familiarity with FlutterFlow Custom Actions and adding Dart dependencies
- A Cloud Functions environment set up (Node.js 18) for the export side
Step-by-step guide
Add file_picker and csv to your Dart dependencies
Open your FlutterFlow project and navigate to Settings > Pubspec Dependencies. Add file_picker: ^6.1.1 and csv: ^6.0.0 as new entries. Click Save. FlutterFlow will trigger a package resolution — wait for the green checkmark before continuing. These two packages do all the heavy lifting: file_picker opens the native file browser on iOS and Android, while csv parses the raw file bytes into a List of rows you can iterate over in your Custom Action. Without adding them here, your Custom Action code will fail to compile.
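For reference, the two dependency entries correspond to the following lines in the generated pubspec.yaml (versions are the ones used in this guide; newer releases may also work):

```yaml
dependencies:
  file_picker: ^6.1.1
  csv: ^6.0.0
```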
Expected result: Both packages appear in your pubspec.yaml with green resolution status in FlutterFlow's dependency panel.
Build the import UI — upload button, preview table, and field mapper
Create a new page called ImportExportPage. Add a Column at the top with a Button labeled 'Choose CSV File' (this triggers the file picker) and a Text widget that shows the filename once picked. Below that, add a DataTable widget to preview the first 5 rows — bind its columns to a Page State variable called previewRows (type: JSON). Below the DataTable, add a Column of Dropdown widgets, one per expected Firestore field (e.g., 'Name', 'Email', 'Amount'). Each Dropdown will be populated with the CSV header columns detected at parse time, stored in a Page State variable called csvHeaders (type: List of String). This mapper lets users tell the app which CSV column corresponds to which database field without requiring them to rename their spreadsheet first.
Expected result: The page renders with an upload button, an empty preview table, and a set of dropdowns ready to be populated after a file is chosen.
Write the parseCSV Custom Action
Go to Custom Code > Custom Actions and create a new action called parseCSV. Set its return type to JSON. Inside the action, call FilePicker.platform.pickFiles() with type: FileType.custom, allowedExtensions: ['csv'], and withData: true (required so the file bytes are available, especially on web). Check that the result is not null and that bytes are available. Pass the decoded bytes to the CsvToListConverter from the csv package and parse the result. Extract the first row as headers (csvHeaders) and the remaining rows as data (csvRows). Store both in the return JSON. Wire this action to the 'Choose CSV File' button's On Tap trigger. On completion, use Set Page State actions to populate the previewRows and csvHeaders variables so the UI refreshes automatically.
```dart
import 'dart:convert';

import 'package:file_picker/file_picker.dart';
import 'package:csv/csv.dart';

Future<dynamic> parseCSV() async {
  final result = await FilePicker.platform.pickFiles(
    type: FileType.custom,
    allowedExtensions: ['csv'],
    withData: true,
  );
  if (result == null || result.files.single.bytes == null) return null;

  final bytes = result.files.single.bytes!;
  // Decode as UTF-8 so non-ASCII characters survive intact.
  final csvString = utf8.decode(bytes);
  final rows = const CsvToListConverter().convert(csvString);

  if (rows.isEmpty) return null;

  final headers = rows.first.map((e) => e.toString()).toList();
  final dataRows = rows.skip(1).map((row) {
    return Map.fromIterables(headers, row.map((e) => e.toString()));
  }).toList();

  return {
    'headers': headers,
    'rows': dataRows,
    'rowCount': dataRows.length,
  };
}
```
Expected result: Tapping 'Choose CSV File' opens the native file browser. After selecting a .csv file, the preview table populates with the first 5 rows and the dropdowns list the detected column headers.
Write the batchImportToFirestore Custom Action
Create a second Custom Action called batchImportToFirestore. It accepts two parameters: mappedRows (JSON array where each object has your Firestore field names as keys) and collectionPath (String, e.g., 'contacts'). Inside the action, get a reference to FirebaseFirestore.instance. Create a WriteBatch. Loop over mappedRows — for each row, call batch.set() with a new document reference (using .doc() with no argument to auto-generate IDs). Every 499 rows, commit the current batch and start a new one (Firestore's hard limit is 500 operations per batch). After the loop, commit the final batch. Wire this action to an 'Import' button that only becomes enabled when at least one field mapping dropdown has been set. Show a CircularProgressIndicator in a Dialog while import is running.
```dart
import 'package:cloud_firestore/cloud_firestore.dart';

Future<int> batchImportToFirestore(
  List<dynamic> mappedRows,
  String collectionPath,
) async {
  final firestore = FirebaseFirestore.instance;
  final collection = firestore.collection(collectionPath);
  WriteBatch batch = firestore.batch();
  int count = 0;
  int totalImported = 0;

  for (final row in mappedRows) {
    final docRef = collection.doc();
    batch.set(docRef, Map<String, dynamic>.from(row as Map));
    count++;
    totalImported++;

    if (count == 499) {
      await batch.commit();
      batch = firestore.batch();
      count = 0;
    }
  }

  if (count > 0) {
    await batch.commit();
  }

  return totalImported;
}
```
Expected result: Tapping 'Import' shows a progress dialog. After completion, a Snackbar reports '342 records imported' (or however many rows were in the file).
Create the Cloud Function for CSV/JSON export
In your Firebase project's functions/index.js file, write an HTTPS Callable Function called exportCollection. It accepts collectionPath and format ('csv' or 'json') as parameters. The function queries the entire collection using admin.firestore().collection(collectionPath).get(), converts the documents to an array of plain objects, then serializes them to CSV (using the json2csv npm package) or JSON. Upload the result to a Firebase Storage bucket, generate a signed URL valid for 1 hour, and return the URL to the caller. Back in FlutterFlow, create a Custom Action that calls this function using FirebaseFunctions.instance.httpsCallable('exportCollection'), then opens the returned URL in an in-app web view or triggers a browser download. Add an 'Export as CSV' and 'Export as JSON' button to the ImportExportPage.
```javascript
// functions/index.js
const functions = require('firebase-functions');
const admin = require('firebase-admin');
const { Parser } = require('json2csv');
const { v4: uuidv4 } = require('uuid');
admin.initializeApp();

exports.exportCollection = functions.https.onCall(async (data, context) => {
  if (!context.auth) throw new functions.https.HttpsError('unauthenticated', 'Login required');
  const { collectionPath, format } = data;
  const snapshot = await admin.firestore().collection(collectionPath).get();
  const docs = snapshot.docs.map(d => ({ id: d.id, ...d.data() }));

  let content, contentType, ext;
  if (format === 'csv') {
    const parser = new Parser();
    content = parser.parse(docs);
    contentType = 'text/csv';
    ext = 'csv';
  } else {
    content = JSON.stringify(docs, null, 2);
    contentType = 'application/json';
    ext = 'json';
  }

  const filename = `exports/${uuidv4()}.${ext}`;
  const bucket = admin.storage().bucket();
  const file = bucket.file(filename);
  await file.save(content, { contentType });
  const [url] = await file.getSignedUrl({ action: 'read', expires: Date.now() + 3600000 });
  return { url };
});
```
Expected result: Tapping 'Export as CSV' calls the function and opens a download link. The file contains all Firestore documents in CSV format with headers matching field names.
Add import progress indicator and template download
For large imports, users need feedback beyond a spinner. Add a Page State variable importProgress (type: double, 0.0 to 1.0). Modify batchImportToFirestore to accept a progress callback and call it after each batch commit. Update importProgress with the ratio of committed rows to total rows. In FlutterFlow, add a LinearProgressIndicator widget bound to the importProgress state variable — it will animate from 0 to 100% as batches complete. Also add a 'Download Template' button that downloads a CSV with just the header row (matching your expected field names) so users know exactly how to format their data before uploading. Store this template CSV in Firebase Storage and generate a public URL when the page loads.
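Generating the header-only template file before uploading it to Storage is straightforward. A minimal sketch in Node (the field names Name, Email, and Amount are illustrative, taken from the mapper example earlier; match them to your own Firestore schema):

```javascript
// Build a header-only CSV template that users download, fill in,
// and re-upload. Quoting each header keeps names containing commas
// or spaces intact.
function buildTemplateCsv(fields) {
  const quoted = fields.map(f => `"${f.replace(/"/g, '""')}"`);
  return quoted.join(',') + '\n';
}

const template = buildTemplateCsv(['Name', 'Email', 'Amount']);
console.log(template); // prints "Name","Email","Amount"
```

Write the resulting string to a file in your exports bucket once, then serve its URL from the 'Download Template' button.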
Expected result: A progress bar visually fills as batches are committed. Users can download the template CSV before preparing their data, reducing format errors significantly.
Complete working example
```dart
import 'dart:convert';

import 'package:file_picker/file_picker.dart';
import 'package:csv/csv.dart';
import 'package:cloud_firestore/cloud_firestore.dart';

// --- STEP 1: Parse CSV file picked from device ---
Future<Map<String, dynamic>?> parseCSV() async {
  final result = await FilePicker.platform.pickFiles(
    type: FileType.custom,
    allowedExtensions: ['csv'],
    withData: true,
  );
  if (result == null || result.files.single.bytes == null) return null;

  final bytes = result.files.single.bytes!;
  // Decode as UTF-8 so non-ASCII characters survive intact.
  final csvString = utf8.decode(bytes);
  final rows = const CsvToListConverter().convert(csvString);
  if (rows.isEmpty) return null;

  final headers = rows.first.map((e) => e.toString()).toList();
  final dataRows = rows.skip(1).map((row) {
    return Map.fromIterables(headers, row.map((e) => e.toString()));
  }).toList();

  return {
    'headers': headers,
    'rows': dataRows,
    'rowCount': dataRows.length,
  };
}

// --- STEP 2: Apply field mapping from user selections ---
List<Map<String, dynamic>> applyFieldMapping(
  List<dynamic> rawRows,
  Map<String, String> mapping, // { firestoreField: csvHeader }
) {
  return rawRows.map((row) {
    final mapped = <String, dynamic>{};
    final rowMap = Map<String, dynamic>.from(row as Map);
    mapping.forEach((firestoreField, csvHeader) {
      mapped[firestoreField] = rowMap[csvHeader] ?? '';
    });
    mapped['importedAt'] = DateTime.now().toIso8601String();
    return mapped;
  }).toList();
}

// --- STEP 3: Batch write to Firestore (max 499 per batch) ---
Future<int> batchImportToFirestore(
  List<Map<String, dynamic>> mappedRows,
  String collectionPath,
  Function(double progress)? onProgress,
) async {
  final firestore = FirebaseFirestore.instance;
  final collection = firestore.collection(collectionPath);
  WriteBatch batch = firestore.batch();
  int batchCount = 0;
  int totalImported = 0;
  final total = mappedRows.length;

  for (final row in mappedRows) {
    batch.set(collection.doc(), row);
    batchCount++;
    totalImported++;

    if (batchCount == 499) {
      await batch.commit();
      onProgress?.call(totalImported / total);
      batch = firestore.batch();
      batchCount = 0;
    }
  }

  if (batchCount > 0) {
    await batch.commit();
    onProgress?.call(1.0);
  }

  return totalImported;
}
```
Common mistakes when building a Custom Data Import/Export Tool in FlutterFlow
Mistake: Importing CSV rows one at a time with individual Firestore add() calls
How to avoid: Use WriteBatch and commit every 499 rows. This reduces 1000 individual calls to just 3 batch commits, completing in under 5 seconds on a normal connection.
Mistake: Loading the entire Firestore collection into memory before exporting
How to avoid: Use Firestore query cursors to paginate through the collection in chunks of 500 documents, streaming each chunk to the output file as you go instead of accumulating the whole collection in memory.
Mistake: Not validating CSV data types before import
How to avoid: In the parseCSV action, add a validation pass that checks each mapped value against the expected type and surfaces a row-level error list before committing any data to Firestore.
Mistake: Exposing the export Cloud Function without authentication checks
How to avoid: Always check context.auth in the Cloud Function and throw an HttpsError('unauthenticated') if the caller is not logged in.
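The row-level validation pass mentioned above can be sketched as a plain function that returns an error list before anything is committed. A sketch in Node (the expectedTypes shape and the field names are illustrative, not part of any FlutterFlow or Firestore API):

```javascript
// Validate each mapped row against expected field types and collect
// human-readable, row-level errors. An empty result means the import
// may proceed.
function validateRows(rows, expectedTypes) {
  const errors = [];
  rows.forEach((row, i) => {
    for (const [field, type] of Object.entries(expectedTypes)) {
      const value = row[field];
      if (value === undefined || value === '') {
        errors.push(`Row ${i + 1}: ${field} field is empty`);
      } else if (type === 'number' && isNaN(Number(value))) {
        errors.push(`Row ${i + 1}: ${field} is not a number ("${value}")`);
      } else if (type === 'email' && !/^[^@\s]+@[^@\s]+$/.test(value)) {
        errors.push(`Row ${i + 1}: ${field} is not a valid email ("${value}")`);
      }
    }
  });
  return errors;
}

const errors = validateRows(
  [{ Name: 'Ada', Email: 'ada@example.com', Amount: '12' }],
  { Name: 'string', Email: 'email', Amount: 'number' },
);
console.log(errors); // → [] (no errors, safe to import)
```

Surface the returned messages in the UI and block the Import button until the list is empty.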
Best practices
- Always chunk Firestore writes into batches of 499 or fewer — never write rows one at a time in a loop
- Provide a downloadable CSV template so users know the exact column order and naming before they upload
- Show row-level validation errors (e.g., 'Row 14: email field is empty') before committing any data
- Add an importedAt timestamp to every imported document so you can audit and roll back a bad import
- Store export files in Firebase Storage with a short-lived signed URL rather than streaming the file directly from the function
- Require user authentication before allowing any import or export operation
- Limit export to collections the authenticated user owns — never expose full collection exports to all users
Still stuck?
Copy one of these prompts to get a personalized, step-by-step explanation.
I am building a FlutterFlow app and need to import CSV files into Firestore. Write a Dart Custom Action that uses file_picker to open a CSV, parses it with the csv package, and writes all rows using WriteBatch in chunks of 499. Include a progress callback.
In my FlutterFlow project, add a Custom Action called batchImportToFirestore that accepts a JSON array of mapped row objects and a Firestore collection path string. Use WriteBatch to write all rows in groups of 499. Return the total count of imported records.
Frequently asked questions
Does FlutterFlow have a built-in CSV import widget?
No. FlutterFlow has no native file-import widget as of March 2026. You must use Custom Actions with the file_picker and csv Dart packages, which requires a Pro plan with code export enabled.
What is Firestore's batch write limit and why does it matter for imports?
Firestore enforces a hard limit of 500 operations per WriteBatch commit. If you try to add more than 500 documents in one batch call, the commit will fail with an error. Split your rows into groups of 499 (leave one operation of headroom) and commit each group separately.
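The chunking arithmetic is independent of Firestore and easy to check in isolation. A minimal sketch in Node:

```javascript
// Split rows into groups of at most `size` (499 leaves one operation
// of headroom under Firestore's 500-op batch limit). The number of
// chunks equals the number of batch commits the import will make.
function chunkRows(rows, size = 499) {
  const chunks = [];
  for (let i = 0; i < rows.length; i += size) {
    chunks.push(rows.slice(i, i + size));
  }
  return chunks;
}

const rows = Array.from({ length: 1000 }, (_, i) => ({ n: i }));
console.log(chunkRows(rows).length); // 3 — a 1000-row import needs 3 commits
```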
Can I import Excel (.xlsx) files instead of CSV?
Yes, but it requires an additional Dart package such as excel: ^4.0.3. The process is similar — pick the file, parse it with the excel package, convert rows to maps, then batch-write to Firestore. CSV is simpler and recommended unless your users specifically need Excel format.
How do I handle duplicate records during import?
Derive the document ID from a unique field in each row (e.g., the email or an external ID) and call batch.set(collection.doc(stableId), row) instead of batch.set(collection.doc(), row). This way, re-importing the same CSV overwrites duplicates rather than creating new documents.
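A minimal sketch of deriving such a stable ID (in Node, mirroring the Cloud Function side; the helper name is illustrative). Firestore document IDs must not contain '/' and must not be exactly '.' or '..', so those cases are sanitized:

```javascript
// Turn a unique field (email, external ID) into a stable Firestore
// document ID so re-imports overwrite instead of duplicating.
function docIdFromKey(key) {
  const cleaned = key.trim().toLowerCase().replace(/\//g, '_');
  // '.' and '..' are reserved as document IDs; pad them to stay valid.
  return cleaned === '.' || cleaned === '..' ? `_${cleaned}_` : cleaned;
}

console.log(docIdFromKey('Jane.Doe@Example.com')); // jane.doe@example.com
```

Normalizing case and whitespace matters: 'Jane@x.com' and 'jane@x.com ' should map to the same document.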
Will this work on both iOS and Android?
Yes. The file_picker package supports iOS, Android, and web. However, web support for reading file bytes requires the withData: true parameter, which is already included in the example code above.
How large a CSV file can I import?
The practical limit depends on device memory. Files up to 10MB (roughly 50,000 rows) work reliably on most modern devices. For larger datasets, consider chunking the file server-side or using a Cloud Function to process the upload rather than parsing on-device.