Build a business card scanner in FlutterFlow: capture a card image with the camera, send it to Google Cloud Vision TEXT_DETECTION through a Cloud Function, parse the raw OCR text for name, title, company, phone, and email with regex, and store the structured contact in Firestore. Add vCard (.vcf) file export via the share_plus package and optional device contact sync with the contacts_service package. Always post-process the OCR result in the Cloud Function — Vision returns raw text blocks, not labeled fields, so regex parsing is required to extract structured contact data.
Scan business cards to contacts using OCR, regex parsing, and Firestore storage
A business card scanner automates the tedious process of manually entering contact information after networking events. The workflow: user points their camera at a business card, the app captures the image, sends it to Google Cloud Vision for OCR text extraction, parses the raw text to identify names, phone numbers, email addresses, job titles, and company names, then saves the structured contact to Firestore. The challenge is in the parsing step — OCR returns raw text blocks, not labeled fields. A Cloud Function applies regex patterns and contextual rules to identify what each piece of text represents. Users can review and edit the parsed contact before saving.
Prerequisites
- A FlutterFlow Pro+ project (Custom Actions required for camera capture and contact export)
- Firebase project with Cloud Functions enabled (Blaze plan required)
- Google Cloud project with Cloud Vision API enabled
- Camera permissions configured in FlutterFlow Settings → App Details → Permissions
Step-by-step guide
Capture the business card image with the camera
In FlutterFlow, create a Custom Action named captureBusinessCard. Add the image_picker package via Pubspec Dependencies. The action calls ImagePicker().pickImage(source: ImageSource.camera, imageQuality: 85, maxWidth: 1024). After capture, compress the image and convert it to base64 for transmission to the Cloud Function. Alternatively, add the cunning_document_scanner or document_scanner_flutter package for auto-edge detection and perspective correction — it straightens skewed card photos automatically. Return the base64-encoded image string to a Page State variable named cardImageBase64. Show a preview of the captured image and a Retake button so users can recapture if the image is blurry.
Expected result: The camera opens, the user captures a business card photo, and the base64-encoded image is stored in Page State ready for OCR processing.
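One gotcha when passing the image along: Cloud Vision expects raw base64 with no `data:image/...;base64,` prefix, which some capture paths (notably web) prepend. A minimal sketch of normalizing the payload — `toRawBase64` is a hypothetical helper shown in the Cloud Function's Node.js, not part of image_picker; the same stripping can equally be done in the Dart action:

```javascript
// Hypothetical helper: normalize a base64 payload before it reaches Vision.
// Strips a data-URI prefix if present, plus any stray whitespace/newlines.
function toRawBase64(input) {
  return input
    .replace(/^data:image\/[a-zA-Z+.-]+;base64,/, '')
    .replace(/\s+/g, '');
}

// Both forms normalize to the same raw payload.
console.log(toRawBase64('data:image/jpeg;base64,/9j/4AAQSkZJRg==')); // "/9j/4AAQSkZJRg=="
console.log(toRawBase64('/9j/4AAQSkZJRg=='));                        // "/9j/4AAQSkZJRg=="
```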
Create the OCR and contact parsing Cloud Function
Create a Cloud Function named parseBusinessCard. It accepts the base64-encoded card image. Call Google Cloud Vision: POST https://vision.googleapis.com/v1/images:annotate with request body containing the base64 image and features: [{type: TEXT_DETECTION}]. Vision returns the full text and individual text blocks. Extract all text: response.data.responses[0].fullTextAnnotation.text. Now parse it with regex: email = text.match(/[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/), phone = text.match(/(\+?1?[-.\s]?\(?[0-9]{3}\)?[-.\s]?[0-9]{3}[-.\s]?[0-9]{4})/), website = text.match(/(?:https?:\/\/)?(?:www\.)?[a-zA-Z0-9-]+\.[a-zA-Z]{2,}/). For name extraction, use the first line of text (typically the person's name on cards). Title and company require heuristics based on common business card layouts.
```javascript
const functions = require('firebase-functions');
const axios = require('axios');
const { GoogleAuth } = require('google-auth-library');

exports.parseBusinessCard = functions.https.onRequest(async (req, res) => {
  res.set('Access-Control-Allow-Origin', '*');
  if (req.method === 'OPTIONS') return res.status(204).send('');

  const { imageBase64 } = req.body;
  if (!imageBase64) return res.status(400).json({ error: 'imageBase64 required' });

  const auth = new GoogleAuth({ scopes: 'https://www.googleapis.com/auth/cloud-platform' });
  const client = await auth.getClient();
  const token = await client.getAccessToken();

  const visionRes = await axios.post(
    'https://vision.googleapis.com/v1/images:annotate',
    { requests: [{ image: { content: imageBase64 }, features: [{ type: 'TEXT_DETECTION' }] }] },
    { headers: { Authorization: `Bearer ${token.token}` } }
  );

  const text = visionRes.data.responses[0]?.fullTextAnnotation?.text || '';
  const lines = text.split('\n').map((l) => l.trim()).filter(Boolean);

  const email = (text.match(/[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/) || [])[0];
  const phone = (text.match(/(\+?1?[-.\s]?\(?[0-9]{3}\)?[-.\s]?[0-9]{3}[-.\s]?[0-9]{4})/) || [])[0];
  const website = (text.match(/(?:www\.)[a-zA-Z0-9-]+\.[a-zA-Z]{2,}/) || [])[0];

  // Name heuristic: first non-empty line not matching email/phone/website
  const name = lines.find((l) => !l.match(/@|\.|[0-9]{3}/) && l.length > 2 && l.length < 50);

  res.json({ name: name || '', email: email || '', phone: phone || '', website: website || '', rawText: text });
});
```

Expected result: The Cloud Function returns structured contact fields extracted from the card image — name, email, phone, and website — ready for the user to review and save.
Build the contact review and save flow in FlutterFlow
After the OCR Cloud Function returns results, populate a review form. Create Page State variables for each contact field: parsedName, parsedTitle, parsedCompany, parsedEmail, parsedPhone, parsedWebsite. Bind the Cloud Function response fields to these Page State variables via the Action Flow. Show a form with pre-filled TextFields bound to each Page State variable so users can correct parsing errors before saving. Add a Save Contact button. In its Action Flow: (1) Create Document in Firestore scanned_contacts collection with all fields plus cardImageUrl (upload the image to Firebase Storage and store the URL), userId (current user's UID), and scannedAt (server timestamp). (2) Show a success Snackbar. (3) Navigate to the contact detail page or back to the contacts list.
Expected result: Users see pre-filled contact fields from the OCR result, can edit any incorrect values, and save the final contact to Firestore with one tap.
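The shape of the Firestore document created by the Save action can be sketched as a plain object builder. This is a hypothetical helper for illustration only — in the real app, `scannedAt` should be a Firestore server timestamp (`FieldValue.serverTimestamp()`), and a plain ISO string stands in here so the sketch is runnable:

```javascript
// Hypothetical sketch of the scanned_contacts document shape described above.
// In production, use FieldValue.serverTimestamp() for scannedAt instead of a string.
function buildContactDoc(fields, cardImageUrl, userId) {
  return {
    name: fields.name || '',
    title: fields.title || '',
    company: fields.company || '',
    email: fields.email || '',
    phone: fields.phone || '',
    website: fields.website || '',
    cardImageUrl, // Firebase Storage download URL of the original card photo
    userId,       // current user's UID — used to filter the contacts list query
    scannedAt: new Date().toISOString(),
  };
}

const doc = buildContactDoc(
  { name: 'Ada Lovelace', email: 'ada@example.com' },
  'https://storage.example.com/cards/abc.jpg',
  'user_123'
);
console.log(doc.name);  // "Ada Lovelace"
console.log(doc.title); // "" (missing fields default to empty strings)
```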
Build the scanned contacts list with search
Create a ScannedContacts page. Add a Backend Query loading the scanned_contacts collection filtered by userId == currentUser.uid, ordered by scannedAt descending. Add a search TextField bound to a Page State variable named searchQuery. Since Firestore does not support full-text search, implement client-side filtering instead of a second filtered Backend Query: add a Custom Function named filterContacts that takes the full contacts list and the search query and returns only contacts whose name, company, or email contains the search string (case-insensitive). Bind the ListView to the filtered result. Each contact card shows the card image thumbnail, name, title, company, and email.
Expected result: The contacts list shows all scanned contacts sorted by most recent. The search TextField filters contacts by name, company, or email in real time.
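The client-side filtering logic is a pure function over the already-fetched list. A minimal sketch, shown in JavaScript for illustration — in FlutterFlow the filterContacts Custom Function would be the Dart equivalent of the same logic:

```javascript
// Case-insensitive substring filter over name, company, and email —
// mirrors the filterContacts Custom Function described above.
function filterContacts(contacts, searchQuery) {
  const q = searchQuery.trim().toLowerCase();
  if (!q) return contacts; // empty query → show the full list
  return contacts.filter((c) =>
    [c.name, c.company, c.email]
      .filter(Boolean) // skip missing fields
      .some((field) => field.toLowerCase().includes(q))
  );
}

const contacts = [
  { name: 'Ada Lovelace', company: 'Analytical Engines', email: 'ada@example.com' },
  { name: 'Grace Hopper', company: 'US Navy', email: 'grace@example.com' },
];
console.log(filterContacts(contacts, 'engines').length); // 1
console.log(filterContacts(contacts, '').length);        // 2
```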
Add vCard export and device contact sync
On the contact detail page, add an Export Contact button. Create a Custom Action named exportVCard. Generate the vCard format string: BEGIN:VCARD\nVERSION:3.0\nFN:{name}\nORG:{company}\nTITLE:{title}\nTEL:{phone}\nEMAIL:{email}\nURL:{website}\nEND:VCARD. Use the path_provider package to write this string to a temporary file (e.g., contact.vcf in the temp directory). Then use share_plus to share the file: Share.shareXFiles([XFile(path)], subject: 'Contact: {name}'). This opens the native iOS/Android share sheet where users can share to Mail, Messages, WhatsApp, or save to their contacts app. For direct device contact sync without the share sheet, add the contacts_service or flutter_contacts package and call ContactsService.addContact() with the parsed fields.
Expected result: Tapping Export Contact opens the native share sheet with the vCard file. Users can save it directly to their device contacts, share it via messages, or email it to themselves.
Complete working example
```dart
import 'dart:io';
import 'package:path_provider/path_provider.dart';
import 'package:share_plus/share_plus.dart';

// Custom Action: exportVCard
// Parameters: name, title, company, phone, email, website (all String)
Future<void> exportVCard(
  String name,
  String title,
  String company,
  String phone,
  String email,
  String website,
) async {
  final vCard = [
    'BEGIN:VCARD',
    'VERSION:3.0',
    if (name.isNotEmpty) 'FN:$name',
    if (company.isNotEmpty) 'ORG:$company',
    if (title.isNotEmpty) 'TITLE:$title',
    if (phone.isNotEmpty) 'TEL;TYPE=WORK,VOICE:$phone',
    if (email.isNotEmpty) 'EMAIL;TYPE=INTERNET:$email',
    if (website.isNotEmpty) 'URL:$website',
    'END:VCARD',
  ].join('\n');

  final dir = await getTemporaryDirectory();
  final safeName = name.replaceAll(RegExp(r'[^\w]'), '_');
  final file = File('${dir.path}/${safeName}_contact.vcf');
  await file.writeAsString(vCard);

  await Share.shareXFiles(
    [XFile(file.path, mimeType: 'text/vcard')],
    subject: 'Contact: $name',
  );
}
```

Common mistakes
Mistake: Displaying raw OCR text to the user instead of structured contact fields
How to avoid: Always run the OCR output through the regex parsing logic in the Cloud Function. Present structured fields (name, title, company, phone, email) in editable TextFields so users can correct any parsing errors before saving.
Mistake: Calling the Google Cloud Vision API directly from the FlutterFlow app with an API key in the request headers
How to avoid: Always proxy Cloud Vision calls through a Cloud Function. The Cloud Function authenticates using Application Default Credentials (no key file needed in Cloud Functions environment) and is never exposed to client code.
Mistake: Assuming OCR text order always matches the business card layout (name first, title second)
How to avoid: Use regex to extract email, phone, and website first (unique patterns), then use proximity heuristics or a dedicated business card API (like Mindee or Azure Form Recognizer) for name and title extraction. Always show parsed results for user review and correction.
Best practices
- Always show parsed contact fields in an editable review form before saving — OCR parsing is imperfect and users must be able to correct mistakes
- Upload the original card image to Firebase Storage and store the URL on the contact document so users can refer to the original card later
- Add duplicate detection before saving — check if a contact with the same email already exists in scanned_contacts and prompt the user to update rather than create a duplicate
- Add note-taking capability on each contact for context from the networking event: 'Met at FlutterFlow Summit 2026, interested in B2B pricing'
- Implement a tag system for contacts (e.g., conference name, industry) to enable filtering and segmentation of the contact list
- Consider using Mindee's Business Card API or Azure Form Recognizer instead of raw Vision API for higher-accuracy structured extraction — they are trained specifically on business card layouts
- Request camera permission with an explanation dialog before the user taps the scan button — on iOS, if the user denies the first permission prompt, they can only re-enable camera access manually in Settings
Still stuck?
Copy one of these prompts to get a personalized, step-by-step explanation.
I am building a business card scanner in FlutterFlow. Write a Firebase Cloud Function in Node.js that: (1) accepts a base64-encoded business card image, (2) calls Google Cloud Vision TEXT_DETECTION to extract all text, (3) uses regex to extract email address, phone number, website URL, and LinkedIn profile URL, (4) uses heuristics to identify the person's name (typically the largest text or first prominent line) and company name, and (5) returns a structured JSON object with name, title, company, phone, email, website, and linkedin fields. Handle cases where fields are missing.
Add a business card scanner to my FlutterFlow app. Create a Scan Card button that opens the camera, captures the image, encodes it to base64, calls my parseBusinessCard Cloud Function API, and populates a review form with the returned contact fields. Add a Save button that creates a document in my scanned_contacts Firestore collection and shows a success message.
Frequently asked questions
How accurate is business card OCR in FlutterFlow?
Google Cloud Vision TEXT_DETECTION achieves 95%+ accuracy on clear, well-lit business card photos with standard fonts. Accuracy drops for cards with decorative fonts, dark backgrounds, glare, or low image quality. The parsing step (identifying which text is name vs. title vs. company) is the main source of errors, not the OCR itself. Always show parsed results for user review and correction.
Can I scan business cards in multiple languages?
Google Cloud Vision handles 50+ languages automatically — you do not need to specify the language. For Japanese, Chinese, Arabic, and other non-Latin scripts, OCR accuracy is slightly lower but generally usable. The regex parsing step for email and phone works universally across languages since those formats are standardized.
How do I add scanned contacts to the user's phone address book?
Add the flutter_contacts package via FlutterFlow's Pubspec Dependencies. Create a Custom Action that calls FlutterContacts.requestPermission(), then creates a Contact object with the parsed fields and calls FlutterContacts.insertContact(contact). This adds the contact directly to the native iOS or Android Contacts app without requiring any share action from the user.
Can I use this to scan QR code business cards?
Yes, but QR codes require a different approach. Use the google_mlkit_barcode_scanning Flutter package instead of Cloud Vision. QR code business cards often encode contact information in vCard format directly in the QR data, so no regex parsing is needed — the decoded QR text is already structured. Add a mode toggle on the scan screen: Camera (for printed cards) and QR (for digital QR cards).
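When the QR payload is already a vCard, the decoded text just needs to be mapped to the same review-form fields. A minimal sketch, shown in JavaScript — it handles only simple `KEY:value` and `KEY;params:value` lines, not the full RFC 6350 grammar:

```javascript
// Minimal vCard field extractor for QR-decoded business cards.
// A sketch for common single-line properties, not a full RFC 6350 parser.
function parseVCard(vcardText) {
  const fields = {};
  for (const line of vcardText.split(/\r?\n/)) {
    const idx = line.indexOf(':');
    if (idx === -1) continue;
    // "TEL;TYPE=WORK" → "TEL": drop parameters after the first semicolon
    const key = line.slice(0, idx).split(';')[0].toUpperCase();
    const value = line.slice(idx + 1).trim();
    if (key === 'FN') fields.name = value;
    if (key === 'ORG') fields.company = value;
    if (key === 'TITLE') fields.title = value;
    if (key === 'TEL') fields.phone = value;
    if (key === 'EMAIL') fields.email = value;
    if (key === 'URL') fields.website = value;
  }
  return fields;
}

const qr = 'BEGIN:VCARD\nVERSION:3.0\nFN:Ada Lovelace\nORG:Analytical Engines\nTEL;TYPE=WORK:+1-555-123-4567\nEND:VCARD';
console.log(parseVCard(qr).name);  // "Ada Lovelace"
console.log(parseVCard(qr).phone); // "+1-555-123-4567"
```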
How do I prevent duplicate contacts when scanning the same card twice?
After OCR extraction, before saving to Firestore, query the scanned_contacts collection for a document where email == parsedEmail (if available) or phone == parsedPhone. If a match exists, show a dialog: 'You already have a contact for {name}. Update the existing contact or save as new?' This prevents duplicates from accidental double-scans.
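The matching rule itself can be sketched as a pure function over already-fetched contacts — a simplified stand-in for the Firestore query described above (in the real app, the email and phone comparisons would each be a `where` clause on scanned_contacts):

```javascript
// Simplified duplicate detection: prefer an exact email match (case-insensitive),
// fall back to a phone match with formatting characters stripped.
function findDuplicate(existingContacts, parsed) {
  const normPhone = (p) => (p || '').replace(/[^0-9+]/g, '');
  return (
    existingContacts.find((c) => {
      if (parsed.email && c.email && c.email.toLowerCase() === parsed.email.toLowerCase()) return true;
      if (parsed.phone && c.phone && normPhone(c.phone) === normPhone(parsed.phone)) return true;
      return false;
    }) || null
  );
}

const existing = [{ name: 'Ada Lovelace', email: 'ada@example.com', phone: '(555) 123-4567' }];
console.log(findDuplicate(existing, { email: 'ADA@example.com' })?.name); // "Ada Lovelace"
console.log(findDuplicate(existing, { phone: '555.123.4567' })?.name);    // "Ada Lovelace"
console.log(findDuplicate(existing, { email: 'x@y.com' }));               // null
```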
What if I need more accurate business card parsing than regex alone?
For production-grade business card scanning with higher accuracy, use dedicated APIs like Mindee's Business Card extraction (mindee.com) or Azure Form Recognizer which are trained on millions of business card layouts and return structured fields with confidence scores. These require Cloud Function integration but provide significantly better results than regex parsing on raw OCR text. RapidDev can integrate these APIs into your FlutterFlow app.