Face recognition authentication in FlutterFlow uses a two-phase approach: during enrollment, capture a photo and send it to a Cloud Function that calls the Google Cloud Vision API to extract a face embedding vector, which is stored in Firestore. During login, capture a live photo, extract a new embedding, and compare it with the stored vector using cosine similarity — accept if the score is at least 0.85.
Two-Phase Face Recognition System
Building face recognition in FlutterFlow requires combining the device camera, Firebase Cloud Functions, and the Google Cloud Vision API. The system has two distinct phases: enrollment (run once per user at signup) and authentication (run on every login). The critical insight is that you should never store raw face images for comparison — instead, store mathematical face embeddings (arrays of 128 floating-point numbers). Embeddings are more privacy-respecting, far smaller to store, and designed for fast numerical comparison.
Prerequisites
- FlutterFlow project connected to Firebase (Firestore and Authentication enabled)
- Firebase project with Cloud Functions enabled (Blaze plan required)
- Google Cloud Vision API enabled in your Google Cloud Console project
- Basic familiarity with FlutterFlow's Action Flows and Custom Actions
Step-by-step guide
Set Up the Firestore Schema for Face Data
In your Firebase Console, open Firestore and navigate to your users collection (or create one). Each user document needs two new fields: 'faceEmbedding' (type: Array of numbers, storing 128 floats) and 'faceEnrolled' (type: Boolean, default false). In FlutterFlow, open Settings → Firebase → Firestore Schema and add these fields to your User data type. The faceEmbedding array will hold the numerical representation of the user's face. Never store the actual face photo URL for authentication — the embedding is the authentication credential. Set Firestore security rules to ensure only the authenticated user can read their own faceEmbedding field.
Expected result: Your Firestore users collection has faceEmbedding (array) and faceEnrolled (boolean) fields visible in the FlutterFlow schema editor.
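The security-rule requirement above can be sketched as follows — a minimal fragment, assuming user documents are keyed by the Firebase Auth UID; adapt it to your existing ruleset:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Only the signed-in owner may read or update their own user document,
    // which includes the faceEmbedding field.
    match /users/{userId} {
      allow read, update: if request.auth != null && request.auth.uid == userId;
    }
  }
}
```

If other parts of your app need broader access to user documents, tighten this further by moving the embedding into a subcollection with its own rules.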
Deploy the Face Embedding Cloud Function
This Cloud Function receives a base64-encoded photo, calls Google Cloud Vision API's face detection endpoint, extracts landmark coordinates, and returns a normalized embedding vector. The function runs server-side so your Cloud Vision API key never touches the client app. Deploy it to Firebase Cloud Functions using the Firebase CLI from your terminal. In FlutterFlow, go to the API Manager (left sidebar, cloud icon) and add the function as an authenticated API call. The function endpoint will be something like 'https://us-central1-yourproject.cloudfunctions.net/extractFaceEmbedding'. Set the Authorization header to use the Firebase ID token from the current user.
```javascript
// Firebase Cloud Function: extractFaceEmbedding
// Deploy via: firebase deploy --only functions:extractFaceEmbedding

const functions = require('firebase-functions');
const vision = require('@google-cloud/vision');

const visionClient = new vision.ImageAnnotatorClient();

exports.extractFaceEmbedding = functions.https.onCall(async (data, context) => {
  if (!context.auth) {
    throw new functions.https.HttpsError('unauthenticated', 'Login required');
  }

  const { imageBase64 } = data;
  if (!imageBase64) {
    throw new functions.https.HttpsError('invalid-argument', 'imageBase64 required');
  }

  const [result] = await visionClient.faceDetection({
    image: { content: imageBase64 },
  });

  const faces = result.faceAnnotations;
  if (!faces || faces.length === 0) {
    throw new functions.https.HttpsError('not-found', 'No face detected in image');
  }
  if (faces.length > 1) {
    throw new functions.https.HttpsError('invalid-argument', 'Multiple faces detected');
  }

  const face = faces[0];
  // Build a 128-element embedding from landmark positions
  const landmarks = face.landmarks || [];
  const embedding = [];
  for (const landmark of landmarks) {
    embedding.push(landmark.position.x / 1000.0);
    embedding.push(landmark.position.y / 1000.0);
    embedding.push(landmark.position.z / 1000.0);
  }
  // Pad or truncate to exactly 128 dimensions
  while (embedding.length < 128) embedding.push(0.0);
  const normalized = embedding.slice(0, 128);

  return { embedding: normalized };
});
```

Expected result: The Cloud Function deploys successfully and returns a 128-element array when given a photo containing exactly one face.
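If you wire the function through FlutterFlow's API Manager rather than the callable SDK, note that callable functions use a small wrapper protocol: the JSON body nests your payload under a `data` key, and responses come back under `result` (or `error`). A sketch of the request shape (the token value is a placeholder):

```javascript
// Build the raw HTTPS request for a Firebase callable function.
// Callable protocol: POST a JSON body of {"data": {...}} with a Firebase
// ID token; success responses look like {"result": {...}} and failures
// like {"error": {...}}.
function buildCallableRequest(idToken, imageBase64) {
  return {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${idToken}`,
    },
    body: JSON.stringify({ data: { imageBase64 } }),
  };
}

// Unwrap the callable response envelope.
function unwrapCallableResponse(json) {
  if (json.error) throw new Error(json.error.message);
  return json.result;
}
```

In the API Manager this maps to a POST call with those two headers and a JSON body containing your `imageBase64` variable under `data`.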
Build the Enrollment Custom Action
In FlutterFlow, go to Custom Code → Custom Actions → '+'. Name it 'enrollFace'. This action will: (1) capture a photo using the camera, (2) convert it to base64, (3) call the Cloud Function, (4) save the returned embedding to the user's Firestore document. Use the 'image_picker' package (enable in Settings → Pubspec Dependencies) to open the camera. Convert the image bytes to base64 with dart:convert. Call the Cloud Function using the Firebase callable functions SDK. On success, update the Firestore user document with the returned embedding array and set faceEnrolled to true. Wire this action to an 'Enroll Face' button on your profile or signup page.
```dart
// Custom Action: enrollFace
// Packages required: image_picker, cloud_functions, firebase_auth,
// cloud_firestore (plus dart:convert for base64Encode)
// Return type: String (success message or error)

Future<String> enrollFace() async {
  final picker = ImagePicker();
  final XFile? photo = await picker.pickImage(
    source: ImageSource.camera,
    preferredCameraDevice: CameraDevice.front,
    imageQuality: 85,
  );

  if (photo == null) return 'Cancelled';

  final bytes = await photo.readAsBytes();
  final base64Image = base64Encode(bytes);

  final functions = FirebaseFunctions.instance;
  final callable = functions.httpsCallable('extractFaceEmbedding');

  try {
    final result = await callable.call({'imageBase64': base64Image});
    final embedding = List<double>.from(
      (result.data['embedding'] as List).map((e) => (e as num).toDouble()),
    );

    final uid = FirebaseAuth.instance.currentUser!.uid;
    await FirebaseFirestore.instance.collection('users').doc(uid).update({
      'faceEmbedding': embedding,
      'faceEnrolled': true,
      'faceEnrolledAt': FieldValue.serverTimestamp(),
    });

    return 'Face enrolled successfully';
  } on FirebaseFunctionsException catch (e) {
    return 'Enrollment failed: ${e.message}';
  }
}
```

Expected result: After tapping 'Enroll Face' and taking a selfie, the user's Firestore document shows faceEnrolled: true and a faceEmbedding array with 128 values.
Build the Authentication Custom Action
Create a second Custom Action named 'authenticateWithFace'. This action: (1) opens the front camera, (2) captures a photo, (3) calls the Cloud Function to get a live embedding, (4) fetches the stored embedding from Firestore, (5) computes cosine similarity between the two vectors, (6) returns true if similarity is at least 0.85. Cosine similarity is the dot product of the two vectors divided by the product of their magnitudes (equivalently, the dot product after normalizing both to unit length) — a score of 1.0 means identical direction, and 0.85 is a reasonable threshold balancing security and usability. Wire this action to your login page's 'Login with Face' button. Chain it with a conditional: if the action returns true, navigate to the home page; if false, show a Snackbar error and increment a failed attempts counter in App State.
```dart
// Custom Action: authenticateWithFace
// Return type: bool
// Needs dart:convert (base64Encode) and dart:math (sqrt) imported

Future<bool> authenticateWithFace() async {
  final picker = ImagePicker();
  final XFile? photo = await picker.pickImage(
    source: ImageSource.camera,
    preferredCameraDevice: CameraDevice.front,
    imageQuality: 85,
  );

  if (photo == null) return false;

  final bytes = await photo.readAsBytes();
  final base64Image = base64Encode(bytes);

  final functions = FirebaseFunctions.instance;
  final callable = functions.httpsCallable('extractFaceEmbedding');

  try {
    final result = await callable.call({'imageBase64': base64Image});
    final liveEmbedding = List<double>.from(
      (result.data['embedding'] as List).map((e) => (e as num).toDouble()),
    );

    final uid = FirebaseAuth.instance.currentUser!.uid;
    final doc = await FirebaseFirestore.instance
        .collection('users')
        .doc(uid)
        .get();

    final storedEmbedding = List<double>.from(
      (doc.data()!['faceEmbedding'] as List).map((e) => (e as num).toDouble()),
    );

    final similarity = _cosineSimilarity(liveEmbedding, storedEmbedding);
    return similarity >= 0.85;
  } catch (e) {
    return false;
  }
}

double _cosineSimilarity(List<double> a, List<double> b) {
  double dot = 0, normA = 0, normB = 0;
  for (int i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  if (normA == 0 || normB == 0) return 0;
  return dot / (sqrt(normA) * sqrt(normB));
}
```

Expected result: The action returns true when the live face matches the enrolled face (similarity >= 0.85) and false otherwise.
Add Lockout Logic After Failed Attempts
In FlutterFlow's App State, create two variables: 'faceFailedAttempts' (integer, initial value 0) and 'faceLockoutUntil' (DateTime, nullable). In the Action Flow for the 'Login with Face' button, first check if faceLockoutUntil is set and in the future — if so, show a Snackbar with the remaining lockout time and stop. If authenticateWithFace returns false, increment faceFailedAttempts. Add a conditional: if faceFailedAttempts >= 3, set faceLockoutUntil to 30 minutes from now using a Custom Function that returns DateTime.now().add(Duration(minutes: 30)), then reset faceFailedAttempts to 0. On successful authentication, reset both variables. This prevents brute-force photo attacks.
Expected result: After 3 failed face authentication attempts, the login button shows a '30-minute lockout' message and the authentication action is blocked.
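The lockout rules above boil down to a small piece of pure logic. Here is a sketch in Node so the same check could later move server-side (see Best practices); the field names mirror the App State variables and are otherwise arbitrary:

```javascript
const MAX_ATTEMPTS = 3;
const LOCKOUT_MS = 30 * 60 * 1000; // 30 minutes

// Given the current state and the time of a failed attempt, return the
// next state. lockoutUntil is a millisecond timestamp or null.
function recordFailedAttempt(state, nowMs) {
  if (state.lockoutUntil && state.lockoutUntil > nowMs) {
    return state; // still locked out; the attempt should have been blocked
  }
  const attempts = state.failedAttempts + 1;
  if (attempts >= MAX_ATTEMPTS) {
    // Third failure: start the lockout window and reset the counter
    return { failedAttempts: 0, lockoutUntil: nowMs + LOCKOUT_MS };
  }
  return { failedAttempts: attempts, lockoutUntil: null };
}

// On successful authentication, clear both variables.
function recordSuccess() {
  return { failedAttempts: 0, lockoutUntil: null };
}
```

In FlutterFlow the equivalent branching lives in the Action Flow conditionals; keeping the rule this simple makes it easy to mirror on both client and server.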
Complete working example
```dart
// ============================================================
// FlutterFlow Face Recognition Authentication — Complete
// ============================================================
// Requires packages: image_picker, cloud_functions, firebase_auth,
// cloud_firestore, plus dart:convert and dart:math

// --- ENROLLMENT ---
Future<String> enrollFace() async {
  final picker = ImagePicker();
  final XFile? photo = await picker.pickImage(
    source: ImageSource.camera,
    preferredCameraDevice: CameraDevice.front,
    imageQuality: 85,
  );
  if (photo == null) return 'Cancelled';

  final bytes = await photo.readAsBytes();
  final base64Image = base64Encode(bytes);

  try {
    final callable = FirebaseFunctions.instance
        .httpsCallable('extractFaceEmbedding');
    final result = await callable.call({'imageBase64': base64Image});
    final embedding = List<double>.from(
      (result.data['embedding'] as List).map((e) => (e as num).toDouble()),
    );
    final uid = FirebaseAuth.instance.currentUser!.uid;
    await FirebaseFirestore.instance.collection('users').doc(uid).update({
      'faceEmbedding': embedding,
      'faceEnrolled': true,
      'faceEnrolledAt': FieldValue.serverTimestamp(),
    });
    return 'Enrolled successfully';
  } on FirebaseFunctionsException catch (e) {
    return 'Error: ${e.message}';
  }
}

// --- AUTHENTICATION ---
Future<bool> authenticateWithFace() async {
  final picker = ImagePicker();
  final XFile? photo = await picker.pickImage(
    source: ImageSource.camera,
    preferredCameraDevice: CameraDevice.front,
    imageQuality: 85,
  );
  if (photo == null) return false;

  final bytes = await photo.readAsBytes();
  final base64Image = base64Encode(bytes);

  try {
    final callable = FirebaseFunctions.instance
        .httpsCallable('extractFaceEmbedding');
    final result = await callable.call({'imageBase64': base64Image});
    final liveEmbedding = List<double>.from(
      (result.data['embedding'] as List).map((e) => (e as num).toDouble()),
    );
    final uid = FirebaseAuth.instance.currentUser!.uid;
    final doc = await FirebaseFirestore.instance
        .collection('users').doc(uid).get();
    final storedEmbedding = List<double>.from(
      (doc.data()!['faceEmbedding'] as List)
          .map((e) => (e as num).toDouble()),
    );
    return _cosineSimilarity(liveEmbedding, storedEmbedding) >= 0.85;
  } catch (_) {
    return false;
  }
}

double _cosineSimilarity(List<double> a, List<double> b) {
  double dot = 0, normA = 0, normB = 0;
  for (int i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  if (normA == 0 || normB == 0) return 0;
  return dot / (sqrt(normA) * sqrt(normB));
}
```

Common mistakes
Mistake: Storing raw face images in Firestore instead of embedding vectors
How to avoid: Always extract and store only the mathematical embedding — an array of 128 floating-point numbers. This is tiny (about 1KB), privacy-respecting, and designed for fast similarity comparison. The Cloud Function extracts the embedding and discards the image.
Mistake: Setting the similarity threshold too low (e.g., 0.6) to reduce false rejections
How to avoid: Keep the threshold at 0.85 or higher for authentication use cases. If users frequently fail with legitimate faces, improve the enrollment photo quality (better lighting instructions) rather than lowering the threshold.
Mistake: Calling the Cloud Vision API directly from the Flutter client app
How to avoid: Always call the API from a Firebase Cloud Function. The client authenticates to the Cloud Function using a Firebase ID token, and the Cloud Function uses the API key from its server environment. The key never reaches the device.
Mistake: Not handling the 'no face detected' error from Cloud Vision API
How to avoid: Wrap the Cloud Function call in a try/catch block. Show a user-friendly message like 'No face detected — please try again in better lighting' when the Cloud Function returns a not-found error.
Mistake: Skipping camera permission requests on Android before calling image_picker
How to avoid: Use the permission_handler package in a Custom Action to request camera permission before calling image_picker. Check the permission status and show an explanation dialog if the user previously denied it.
Best practices
- Always combine face recognition with at least one other factor (password or OTP) — face recognition alone is not sufficient for high-security use cases due to spoofing risks with printed photos.
- Provide clear lighting guidance to users during enrollment: 'Face the screen directly, ensure your face is well-lit, remove glasses or hats'.
- Allow users to re-enroll their face from the settings screen — face appearance changes over time and embeddings become stale.
- Log all authentication attempts (success/failure timestamp, attempt count) to Firestore for security auditing, without logging the actual embedding or photo.
- Implement a server-side lockout in Firestore, not just client-side App State — client state can be reset by killing and relaunching the app.
- Use front camera only (preferredCameraDevice: CameraDevice.front) for consistent enrollment and authentication conditions.
- Test on multiple devices and in varied lighting conditions before launching — face recognition accuracy varies significantly by camera quality and ambient light.
- If you use RapidDev for custom face recognition implementations, ensure your Cloud Function architecture is reviewed for the specific GDPR and CCPA obligations that apply to biometric data in your jurisdiction.
Still stuck?
Copy one of these prompts to get a personalized, step-by-step explanation.
I am building a face recognition authentication system for a FlutterFlow app using Firebase Cloud Functions and Google Cloud Vision API. The system has an enrollment phase and an authentication phase using cosine similarity on face embeddings. Write the Firebase Cloud Function in Node.js that accepts a base64 image, calls Cloud Vision face detection, extracts a normalized 128-element embedding from facial landmarks, and returns it to the Flutter client.
Write a FlutterFlow Custom Action in Dart called authenticateWithFace that opens the front camera, captures a photo, calls a Firebase callable function named extractFaceEmbedding, retrieves the stored embedding from the user's Firestore document, computes cosine similarity, and returns true if similarity is 0.85 or above.
Frequently asked questions
How accurate is face recognition via Google Cloud Vision API compared to dedicated face recognition services?
Cloud Vision provides face detection and landmark positions, not a dedicated face embedding model like FaceNet or DeepFace. The landmark-based embeddings in this tutorial have moderate accuracy (suitable for low-risk authentication) but not the 99.9%+ accuracy of dedicated services. For high-security applications, use a Cloud Run container running a FaceNet model instead.
Does face recognition work offline in FlutterFlow apps?
No, this implementation requires an internet connection to call the Firebase Cloud Function which calls Cloud Vision. For offline face recognition, you would need to integrate ML Kit's on-device face detection, but ML Kit does not provide face embeddings — only landmark positions and bounding boxes.
Is it legal to store face embeddings in my app?
This depends on your jurisdiction. In the EU (GDPR), face embeddings are considered biometric data and require explicit user consent, a legitimate purpose, and a Data Processing Agreement with Firebase/Google. In Illinois (BIPA), strict requirements apply. Always consult a legal professional before launching biometric authentication to users.
What threshold should I use for cosine similarity in face authentication?
0.85 is a good starting point for the landmark-based embeddings from Cloud Vision. If you see too many false rejections (legitimate users failing), increase lighting quality guidance rather than lowering the threshold. If using FaceNet embeddings, the threshold for Euclidean distance comparison (not cosine) is different — typically below 1.0 for a match.
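For intuition about the threshold, the same similarity math used in the Custom Action can be exercised on toy vectors (a Node sketch; the numbers are illustrative, not real embeddings):

```javascript
// Same cosine similarity as the Dart _cosineSimilarity helper.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  if (normA === 0 || normB === 0) return 0;
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

const enrolled  = [0.42, 0.18, 0.05, 0.61];
const sameFace  = [0.43, 0.17, 0.06, 0.60]; // small per-landmark drift
const otherFace = [0.10, 0.70, 0.50, 0.05]; // very different geometry

console.log(cosineSimilarity(enrolled, enrolled));  // ~1.0 (identical)
console.log(cosineSimilarity(enrolled, sameFace));  // well above 0.85
console.log(cosineSimilarity(enrolled, otherFace)); // well below 0.85
```

Note that landmark-derived embeddings are all-positive coordinates, which compresses the similarity range compared with learned embeddings like FaceNet; validate your chosen threshold against real enrollment data before launch.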
Can users spoof this system with a photo on their phone screen?
Yes — the basic implementation in this tutorial has no liveness detection, meaning a printed photo or screen photo could potentially match. For higher security, add liveness detection (blink detection, head movement) or use a dedicated anti-spoofing model. See the 'How to Implement Facial Recognition for Enhanced Security' tutorial for the hardened implementation.
How do I test the face recognition system without deploying to a real device?
You cannot test camera capture in FlutterFlow's browser-based Run Mode. Export the project code (Pro plan required), open it in Android Studio or VS Code, and run it on a connected physical device or emulator with camera emulation enabled. The Cloud Function can be tested independently using the Firebase Emulator Suite.