
How to Implement a Virtual Try-On Feature for Products in FlutterFlow

Build a virtual try-on experience in FlutterFlow using google_mlkit_face_detection to detect face landmarks in a live camera feed. Overlay product images at the correct facial anchor points (eyes for glasses, forehead for hats) using a Stack widget with Custom Widgets. Capture a screenshot using the screenshot package and share it. This approach works reliably for accessories — not full-body clothing.

What you'll learn

  • How to use google_mlkit_face_detection to get face landmark coordinates in real time
  • How to overlay product images at accurate facial anchor points using a Custom Widget
  • How to implement real-time face tracking so the overlay moves with the user's head
  • How to capture a screenshot of the try-on result and share it using the share_plus package
Beginner · 10 min read · 50-60 min build time · FlutterFlow Pro+ (code export required for Custom Widgets and camera access) · March 2026 · RapidDev Engineering Team

What AR Try-On Can Do in FlutterFlow

Virtual try-on can meaningfully reduce return rates for accessory products: industry studies commonly report 30-40% fewer returns when customers can visualize items before purchase. For glasses, hats, earrings, and face-applied cosmetics, Google's ML Kit face detection provides landmark coordinates for key facial points (eyes, ears, nose, cheeks, and mouth); a denser 468-point face mesh exists in a separate, Android-only ML Kit package if you need it. In FlutterFlow, this is implemented via a Custom Widget that renders a live camera preview, runs face detection, and draws product overlays using a CustomPainter or Stack-based approach. The result is a native iOS and Android try-on feature built without any third-party AR SDK fees.

Prerequisites

  • A FlutterFlow project with code export capability (Pro plan or higher)
  • Physical iOS or Android test device — the iOS Simulator and Android Emulator do not reliably support camera capture or on-device ML Kit inference
  • Product images with transparent backgrounds (PNG format) hosted in Firebase Storage
  • Basic familiarity with Flutter Custom Widgets in FlutterFlow

Step-by-step guide

Step 1: Add required packages to your exported Flutter project

Export your FlutterFlow project and open it in VS Code or Android Studio. In pubspec.yaml, add these dependencies: google_mlkit_face_detection (^0.11.0), camera (^0.10.5), screenshot (^2.1.0), share_plus (^9.0.0), cached_network_image (^3.3.1), and path_provider (^2.1.0, used later to save the captured screenshot). For iOS, open ios/Podfile and set the platform to the minimum your google_mlkit_face_detection version requires (recent releases need iOS 15.5 or higher). In ios/Runner/Info.plist, add NSCameraUsageDescription with a value such as 'Used for virtual try-on'. For Android, add the CAMERA permission in android/app/src/main/AndroidManifest.xml. Run flutter pub get to install all packages before proceeding.

Expected result: flutter pub get completes without errors and all packages appear in pubspec.lock.
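The resulting dependency block in pubspec.yaml would look roughly like this (version constraints as suggested above; pin to whatever resolves cleanly for your Flutter SDK — note path_provider, which Step 5 uses to write the captured PNG to a temp directory):

```yaml
dependencies:
  flutter:
    sdk: flutter
  google_mlkit_face_detection: ^0.11.0
  camera: ^0.10.5
  screenshot: ^2.1.0
  share_plus: ^9.0.0
  cached_network_image: ^3.3.1
  path_provider: ^2.1.0  # used in Step 5 to save the captured PNG
```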

Step 2: Create the FaceDetectionCamera Custom Widget

In FlutterFlow's Custom Code panel, create a new Custom Widget named 'FaceDetectionCamera'. This widget accepts two parameters: productImageUrl (String) and productType (String — 'glasses', 'hat', or 'earrings'). The widget initializes the device camera using the camera package, creates a CameraController, and streams frames to the ML Kit FaceDetector. The FaceDetector should be configured with: FaceDetectorOptions(enableLandmarks: true, performanceMode: FaceDetectorMode.accurate). On each frame, detect faces and extract the key landmark coordinates. Use a Stack to layer the camera preview, a CustomPainter overlay for debugging landmark positions, and the product image positioned using the landmark data.

face_detection_camera_widget.dart
// Key portion of the FaceDetectionCamera Custom Widget
import 'package:camera/camera.dart';
import 'package:flutter/material.dart';
import 'package:google_mlkit_face_detection/google_mlkit_face_detection.dart';

class FaceDetectionCameraWidget extends StatefulWidget {
  final String productImageUrl;
  final String productType;
  const FaceDetectionCameraWidget({
    Key? key,
    required this.productImageUrl,
    required this.productType,
  }) : super(key: key);

  @override
  State<FaceDetectionCameraWidget> createState() =>
      _FaceDetectionCameraWidgetState();
}

class _FaceDetectionCameraWidgetState
    extends State<FaceDetectionCameraWidget> {
  CameraController? _controller;
  List<Face> _faces = [];
  bool _isDetecting = false; // guards against overlapping detector calls
  final FaceDetector _detector = FaceDetector(
    options: FaceDetectorOptions(
      enableLandmarks: true,
      performanceMode: FaceDetectorMode.accurate,
    ),
  );

  @override
  void initState() {
    super.initState();
    _initCamera();
  }

  Future<void> _initCamera() async {
    final cameras = await availableCameras();
    final front = cameras.firstWhere(
      (c) => c.lensDirection == CameraLensDirection.front,
    );
    _controller = CameraController(front, ResolutionPreset.high);
    await _controller!.initialize();
    _controller!.startImageStream(_processFrame);
    if (mounted) setState(() {});
  }

  void _processFrame(CameraImage image) async {
    if (_isDetecting) return; // drop frames while a detection is in flight
    _isDetecting = true;
    // Convert CameraImage to InputImage then run detector
    // (full conversion code omitted for brevity)
    final faces = await _detector.processImage(inputImage);
    if (mounted) setState(() => _faces = faces);
    _isDetecting = false;
  }

  @override
  void dispose() {
    _controller?.dispose();
    _detector.close();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    if (_controller == null || !_controller!.value.isInitialized) {
      return const Center(child: CircularProgressIndicator());
    }
    return Stack(children: [
      CameraPreview(_controller!),
      if (_faces.isNotEmpty)
        ProductOverlay(
          face: _faces.first,
          productImageUrl: widget.productImageUrl,
          productType: widget.productType,
        ),
    ]);
  }
}

Expected result: The Custom Widget renders in FlutterFlow's canvas as a placeholder; on a real device, it shows the live front camera feed.

Step 3: Calculate product overlay position from face landmarks

Create a ProductOverlay widget that takes a Face object and product parameters. For glasses, use the FaceLandmarkType.leftEye and FaceLandmarkType.rightEye landmarks to determine the eye midpoint X and Y coordinates. A good starting overlay width is 2.5x the distance between the two eyes (the interpupillary distance, scaled up to cover the full frame width). For hats, use the leftEar and rightEar landmarks for width, and position the overlay vertically from the top of face.boundingBox (ML Kit's face detector has no top-of-head landmark, so estimate from the bounding box). For earrings, anchor near the leftEar and rightEar landmarks. Use a Positioned widget inside a Stack to place the product image at the calculated coordinates. Apply a Transform.rotate using the face's headEulerAngleZ value so the overlay tilts with head rotation.

product_overlay.dart
// ProductOverlay positions product image on face landmarks
import 'dart:math' as math;
import 'package:cached_network_image/cached_network_image.dart';
import 'package:flutter/material.dart';
import 'package:google_mlkit_face_detection/google_mlkit_face_detection.dart';

class ProductOverlay extends StatelessWidget {
  final Face face;
  final String productImageUrl;
  final String productType;
  const ProductOverlay({
    Key? key,
    required this.face,
    required this.productImageUrl,
    required this.productType,
  }) : super(key: key);

  @override
  Widget build(BuildContext context) {
    final leftEye = face.landmarks[FaceLandmarkType.leftEye];
    final rightEye = face.landmarks[FaceLandmarkType.rightEye];
    if (leftEye == null || rightEye == null) return const SizedBox();

    // NOTE: landmark positions are in camera-image coordinates; scale
    // them to the preview's screen coordinates before production use.
    final eyeDist = (rightEye.position.x - leftEye.position.x).abs();
    final overlayWidth = eyeDist * 2.5;
    final midX = (leftEye.position.x + rightEye.position.x) / 2;
    final midY = (leftEye.position.y + rightEye.position.y) / 2;
    final tiltAngle = (face.headEulerAngleZ ?? 0) * (math.pi / 180);

    return Positioned(
      left: midX - overlayWidth / 2,
      top: midY - overlayWidth * 0.25,
      child: Transform.rotate(
        angle: -tiltAngle,
        child: CachedNetworkImage(
          imageUrl: productImageUrl,
          width: overlayWidth,
          fit: BoxFit.contain,
        ),
      ),
    );
  }
}

Expected result: The glasses or hat image appears anchored to the face in the camera preview and tracks head movement in real time.
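One detail the overlay code glosses over: ML Kit returns landmark positions in the coordinate space of the camera image, not the screen, so the raw values must be scaled before they line up with the CameraPreview. A minimal sketch of that mapping (assuming the preview is stretched to fill the screen and the front-camera preview is mirrored horizontally; scaleLandmark is a name invented here, not a library function):

```dart
/// Maps a landmark point from camera-image coordinates to screen
/// coordinates. Assumes the preview fills the screen; front-camera
/// previews are mirrored, so X is flipped by default.
(double, double) scaleLandmark(
  double imageX,
  double imageY, {
  required double imageWidth,
  required double imageHeight,
  required double screenWidth,
  required double screenHeight,
  bool mirror = true,
}) {
  final x = imageX * (screenWidth / imageWidth);
  final y = imageY * (screenHeight / imageHeight);
  return (mirror ? screenWidth - x : x, y);
}

void main() {
  // A landmark at the center of a 720x1280 camera image should land at
  // the center of a 375x667 screen, mirrored or not.
  final p = scaleLandmark(360, 640,
      imageWidth: 720, imageHeight: 1280,
      screenWidth: 375, screenHeight: 667);
  print(p); // (187.5, 333.5)
}
```

If you letterbox or crop the preview (BoxFit.cover), also account for the cropped margins before scaling.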

Step 4: Add product selector row below the camera view

Below the FaceDetectionCamera widget in your page layout, add a horizontal ListView or SingleChildScrollView containing product thumbnail cards. Each card shows a product image and name. When a user taps a product, update a Page State variable 'selectedProductUrl' (String) which is passed into the FaceDetectionCamera widget's productImageUrl parameter. Rebuild the widget when this value changes so the overlay updates to the new product immediately. Also add a 'productType' selector (using SegmentedButton for 'Glasses', 'Hats', 'Earrings') that updates a 'selectedProductType' Page State variable.

Expected result: Tapping a product thumbnail switches the overlay on the face to that product image within one frame refresh.

Step 5: Implement screenshot capture and sharing

Wrap the camera Stack in the screenshot package's Screenshot widget, driven by a ScreenshotController (the package manages the underlying RepaintBoundary for you). Add a 'Capture and Share' button below the product selector. On button tap, call a Custom Action named 'captureAndShare'. This action uses the ScreenshotController to capture the view as a PNG byte array, saves it to the device's temporary directory using path_provider, then calls Share.shareXFiles from share_plus with the saved file path. Add a brief 3-second countdown, shown via a Page State variable, before the screenshot is taken so users can pose.

capture_and_share.dart
// Custom Action: captureAndShare
import 'dart:io';
import 'package:path_provider/path_provider.dart';
import 'package:screenshot/screenshot.dart';
import 'package:share_plus/share_plus.dart';

Future<void> captureAndShare(
  ScreenshotController screenshotController,
) async {
  // Capture the Screenshot widget's subtree at 2x resolution.
  final imageBytes = await screenshotController.capture(pixelRatio: 2.0);
  if (imageBytes == null) return;

  // Write the PNG to a temp file so it can be handed to the share sheet.
  final dir = await getTemporaryDirectory();
  final file = await File(
    '${dir.path}/try_on_${DateTime.now().millisecondsSinceEpoch}.png',
  ).writeAsBytes(imageBytes);

  await Share.shareXFiles(
    [XFile(file.path)],
    text: 'Check out how I look with this!',
  );
}

Expected result: Tapping 'Capture and Share' saves a PNG of the try-on view and opens the native share sheet on iOS and Android.

Complete working example

virtual_try_on_page.dart
// Virtual Try-On Page — exported Flutter code
// This is the scaffold for the full try-on page
import 'package:flutter/material.dart';
import 'package:screenshot/screenshot.dart';

class VirtualTryOnPage extends StatefulWidget {
  const VirtualTryOnPage({Key? key}) : super(key: key);

  @override
  State<VirtualTryOnPage> createState() => _VirtualTryOnPageState();
}

class _VirtualTryOnPageState extends State<VirtualTryOnPage> {
  final ScreenshotController _screenshotController = ScreenshotController();
  String _selectedProductUrl = '';
  String _selectedProductType = 'glasses';
  int _countdown = 0;

  final List<Map<String, String>> _products = [
    {'url': 'https://example.com/glasses1.png', 'type': 'glasses', 'name': 'Classic Frames'},
    {'url': 'https://example.com/glasses2.png', 'type': 'glasses', 'name': 'Aviators'},
    {'url': 'https://example.com/hat1.png', 'type': 'hat', 'name': 'Baseball Cap'},
    {'url': 'https://example.com/hat2.png', 'type': 'hat', 'name': 'Beanie'},
  ];

  void _selectProduct(Map<String, String> product) {
    setState(() {
      _selectedProductUrl = product['url']!;
      _selectedProductType = product['type']!;
    });
  }

  Future<void> _startCountdownAndCapture() async {
    for (int i = 3; i > 0; i--) {
      setState(() => _countdown = i);
      await Future.delayed(const Duration(seconds: 1));
    }
    setState(() => _countdown = 0);
    await captureAndShare(_screenshotController); // Custom Action from Step 5
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      backgroundColor: Colors.black,
      body: Column(
        children: [
          Expanded(
            child: Screenshot(
              controller: _screenshotController,
              child: Stack(
                children: [
                  FaceDetectionCameraWidget(
                    productImageUrl: _selectedProductUrl,
                    productType: _selectedProductType,
                  ),
                  if (_countdown > 0)
                    Center(
                      child: Text(
                        '$_countdown',
                        style: const TextStyle(
                          color: Colors.white,
                          fontSize: 96,
                          fontWeight: FontWeight.bold,
                        ),
                      ),
                    ),
                ],
              ),
            ),
          ),
          SizedBox(
            height: 120,
            child: ListView.builder(
              scrollDirection: Axis.horizontal,
              itemCount: _products.length,
              itemBuilder: (ctx, i) => GestureDetector(
                onTap: () => _selectProduct(_products[i]),
                child: Container(
                  margin: const EdgeInsets.all(8),
                  padding: const EdgeInsets.all(4),
                  decoration: BoxDecoration(
                    border: Border.all(
                      color: _selectedProductUrl == _products[i]['url']
                          ? Colors.white
                          : Colors.transparent,
                      width: 2,
                    ),
                    borderRadius: BorderRadius.circular(8),
                  ),
                  child: Image.network(_products[i]['url']!, height: 80),
                ),
              ),
            ),
          ),
          Padding(
            padding: const EdgeInsets.all(16),
            child: ElevatedButton.icon(
              onPressed: _startCountdownAndCapture,
              icon: const Icon(Icons.camera_alt),
              label: const Text('Capture and Share'),
            ),
          ),
        ],
      ),
    );
  }
}

Common mistakes when implementing a Virtual Try-On Feature for Products in FlutterFlow

Mistake: Trying to implement full-body clothing try-on using Flutter packages

How to avoid: Limit the try-on feature to face-worn accessories — glasses, hats, earrings, face paint, makeup — where ML Kit's face landmark data provides accurate anchor points. For clothing try-on, integrate a third-party service like Zakeke or Perfect Corp via their REST API.

Mistake: Running face detection on every single camera frame without throttling

How to avoid: Implement a simple frame skip counter — process every 3rd or 4th frame for detection, but always render the latest camera frame. This keeps detection responsive while dramatically reducing CPU/GPU load.
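The frame-skip counter described above takes only a few lines (a minimal illustration; FrameThrottler and the interval of 3 are names and values chosen here, not from any package):

```dart
/// Lets through only every [interval]-th frame for ML Kit processing,
/// while the camera preview keeps rendering every frame.
class FrameThrottler {
  final int interval;
  int _frameCount = 0;
  FrameThrottler({this.interval = 3});

  /// Returns true only for every [interval]-th call.
  bool shouldProcess() {
    _frameCount = (_frameCount + 1) % interval;
    return _frameCount == 0;
  }
}

void main() {
  final throttler = FrameThrottler(interval: 3);
  final decisions = List.generate(6, (_) => throttler.shouldProcess());
  print(decisions); // [false, false, true, false, false, true]
}
```

In the camera widget, call shouldProcess() at the top of the image-stream callback and return early when it is false.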

Mistake: Loading product overlay images from the network on every frame rebuild

How to avoid: Use CachedNetworkImage for all product overlay images. Pre-cache the selected product image using precacheImage() when the user selects it, before the try-on view opens.
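The pre-cache step can live directly in the product tap handler; a minimal helper sketch (precacheProductImage is a name invented here — precacheImage and CachedNetworkImageProvider are the real Flutter and cached_network_image APIs):

```dart
import 'package:cached_network_image/cached_network_image.dart';
import 'package:flutter/material.dart';

/// Warms the image cache for a product before the try-on overlay needs it.
Future<void> precacheProductImage(BuildContext context, String url) {
  return precacheImage(CachedNetworkImageProvider(url), context);
}
```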

Best practices

  • Always test try-on on physical devices — emulators do not support the camera hardware required for ML Kit face detection.
  • Pre-process all product images to have transparent backgrounds and consistent anchor points (e.g., nose bridge centered at 50% width for glasses) before upload to Firebase Storage.
  • Add a 'No face detected' message overlay when the ML Kit detector returns zero faces, guiding users to position their face correctly.
  • Implement frame throttling to process only every 3rd or 4th camera frame for ML Kit detection, keeping the UI rendering smoothly while detection runs at a lower effective rate (roughly 15fps when the preview runs at 60fps).
  • Respect user privacy — do not send camera frames or face detection results to any server. Process all ML Kit inference on-device.
  • Offer both a live try-on mode and a photo upload mode (select from gallery) for users in low-light conditions or on older devices.
  • Use a minimum face size filter in FaceDetectorOptions to ignore very small detected faces at the edges of frame, which produce inaccurate landmark positions.
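The minimum-face-size filter maps directly to FaceDetectorOptions (minFaceSize is a fraction of the image width; 0.15 here is an illustrative value, not a recommendation from the package):

```dart
import 'package:google_mlkit_face_detection/google_mlkit_face_detection.dart';

// Ignore faces smaller than 15% of the image width — small faces at the
// edges of the frame produce unreliable landmark positions.
final detector = FaceDetector(
  options: FaceDetectorOptions(
    enableLandmarks: true,
    minFaceSize: 0.15,
    performanceMode: FaceDetectorMode.accurate,
  ),
);
```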

Still stuck?

Copy one of these prompts to get a personalized, step-by-step explanation.

ChatGPT Prompt

I'm building a virtual try-on feature for glasses in a Flutter app (exported from FlutterFlow). I'm using google_mlkit_face_detection and the camera package. I have the face detection working and can print out landmark positions. How do I correctly calculate the position and size of a glasses PNG overlay using the LEFT_EYE and RIGHT_EYE landmark coordinates, accounting for head tilt using headEulerAngleZ?

FlutterFlow Prompt

In my FlutterFlow exported Flutter project, I have a Custom Widget with a CameraController and ML Kit FaceDetector. The face detection is working. Now I need to position a product PNG image (glasses) over the face using a Stack and Positioned widget. The camera preview is 375x667 points on screen, and the ML Kit coordinates are in the original camera image resolution. How do I scale the landmark coordinates to screen coordinates?

Frequently asked questions

Does virtual try-on work on both iOS and Android in FlutterFlow?

Yes, once you export to Flutter and add the required packages. Google ML Kit supports both iOS (64-bit devices; recent releases require iOS 15.5+) and Android (API 21+). The camera package works on both platforms. FlutterFlow's Custom Widget system handles both platforms from a single Dart codebase.

How accurate is the face landmark detection for small accessories like earrings?

ML Kit's face detector exposes leftEar and rightEar landmarks (there is no dedicated ear-tip landmark in the face detection API; the 468-point face mesh belongs to the separate, Android-only face mesh package). Accuracy depends on lighting, face angle, and device camera quality. In good lighting with the face clearly visible, landmark positions are typically accurate to within a few pixels. For earrings, the effect works best when the user faces the camera directly — side angles lose landmark accuracy quickly.

Can I implement try-on using just a static photo instead of a live camera feed?

Yes. ML Kit can process a static InputImage from a gallery photo. Use the image_picker package to select a photo, convert it to an InputImage, run face detection, and overlay the product on the resulting image using Flutter's canvas API. This is simpler to implement than live camera try-on and works on all devices including simulators.
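A sketch of the static-photo path (assumes image_picker and google_mlkit_face_detection are in pubspec; error handling omitted for brevity):

```dart
import 'package:google_mlkit_face_detection/google_mlkit_face_detection.dart';
import 'package:image_picker/image_picker.dart';

/// Picks a photo from the gallery and returns any detected faces.
Future<List<Face>> detectFacesFromGallery() async {
  final picked = await ImagePicker().pickImage(source: ImageSource.gallery);
  if (picked == null) return [];

  final detector = FaceDetector(
    options: FaceDetectorOptions(enableLandmarks: true),
  );
  try {
    // InputImage.fromFilePath avoids the CameraImage conversion entirely.
    return await detector.processImage(InputImage.fromFilePath(picked.path));
  } finally {
    detector.close();
  }
}
```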

How do I handle users who don't give camera permission?

Use the permission_handler package to check camera permission status before initializing CameraController. If denied, show an informational dialog explaining why the feature needs camera access, with a button that opens the device's app settings using openAppSettings(). Never crash or show an error without context.
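The permission flow above can be sketched like this (a minimal version; your UI should show the explanatory dialog before calling request()):

```dart
import 'package:permission_handler/permission_handler.dart';

/// Returns true when camera access is granted; routes the user to the
/// app's settings page when access was permanently denied.
Future<bool> ensureCameraPermission() async {
  final status = await Permission.camera.request();
  if (status.isGranted) return true;
  if (status.isPermanentlyDenied) {
    // The system dialog can no longer appear; send users to Settings.
    await openAppSettings();
  }
  return false;
}
```

Call this before constructing the CameraController in _initCamera and show a fallback message when it returns false.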

Can I save the try-on image to the device's photo library?

Yes. Instead of sharing directly, use the image_gallery_saver or gal package to save the captured PNG bytes to the device's camera roll. On iOS you need the NSPhotoLibraryAddUsageDescription permission in Info.plist. On Android, WRITE_EXTERNAL_STORAGE is required for API levels below 29.
