Create a feature flag system using a Firestore feature_flags collection with toggle state, rollout percentage, and variant definitions. On app load, a Custom Action fetches all flags, assigns each user to a consistent variant using a hash of userId plus flagName, and stores the active flags in App State. UI elements use Conditional Visibility to show or hide features based on flag state. Track impressions and conversions in Firestore to measure A/B test results. An admin panel lets you toggle flags, adjust rollout percentages, and view conversion rates without redeploying the app.
Building Feature Flags and A/B Testing in FlutterFlow
Feature flags let you enable or disable features remotely without redeploying your app. Combined with A/B testing, they let you show different UI variants to different users and measure which performs better. This tutorial builds a complete system with Firestore-backed flags, consistent variant assignment, conversion tracking, and an admin dashboard.
Prerequisites
- A FlutterFlow project with Firestore and Firebase Authentication configured
- Basic familiarity with FlutterFlow App State, Custom Actions, and Conditional Visibility
- At least two test user accounts for verifying variant assignment
Step-by-step guide
Set up the Firestore data model for feature flags and analytics
Create a feature_flags collection with the following fields:
- flagName (String, unique)
- isEnabled (Boolean)
- rolloutPercentage (Integer, 0-100)
- variants (Array of Maps, e.g. [{name: 'A', weight: 50}, {name: 'B', weight: 50}])
- targetAudience (String: 'all', 'new_users', or 'premium')
- description (String)
- createdAt (Timestamp)

Create a flag_impressions collection with userId (String), flagName (String), variant (String), and timestamp (Timestamp). Create a flag_conversions collection with the same fields plus conversionType (String). Finally, add an App State variable activeFlags as a JSON map storing {flagName: variantName} for the current user.
Expected result: Firestore has feature_flags, flag_impressions, and flag_conversions collections ready for the feature flag system.
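As a concrete reference, here is what one feature_flags document might look like when read into a Dart map inside a Custom Action. The 'new_checkout' flag name and its field values are illustrative, not part of the tutorial's required setup:

```dart
// Example of a single feature_flags document as the loadFeatureFlags
// Custom Action would read it (field names match the data model above).
final Map<String, dynamic> exampleFlag = {
  'flagName': 'new_checkout',
  'isEnabled': true,
  'rolloutPercentage': 20, // start low, increase gradually
  'variants': [
    {'name': 'A', 'weight': 50},
    {'name': 'B', 'weight': 50},
  ],
  'targetAudience': 'all',
  'description': 'Redesigned checkout button test',
  'createdAt': DateTime.now(), // stored as a Timestamp in Firestore
};

void main() {
  // Variant weights should sum to 100 so the 0-99 hash buckets
  // used later for assignment are fully covered.
  final total = (exampleFlag['variants'] as List)
      .fold<int>(0, (sum, v) => sum + (v['weight'] as int));
  assert(total == 100);
}
```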
Create the Custom Action for deterministic variant assignment
Write a Custom Action loadFeatureFlags that runs on app startup. It queries all feature_flags documents where isEnabled == true. For each flag, it checks rolloutPercentage: hash the userId to get a number 0-99 and compare against rolloutPercentage. If the user is in the rollout, assign a variant using consistent hashing: compute a hash of (userId + flagName), mod by 100, then map to a variant based on cumulative weights. Store the result in App State activeFlags map. The key insight is CONSISTENT hashing: the same userId + flagName always produces the same variant, so users never flip between A and B across sessions.
```dart
// Custom Action: loadFeatureFlags
import 'dart:convert';

import 'package:crypto/crypto.dart';

/// Hashes userId + flagName to a stable bucket in 0-99.
int _consistentHash(String userId, String flagName) {
  final bytes = utf8.encode('$userId:$flagName');
  final digest = md5.convert(bytes);
  return digest.bytes.fold(0, (a, b) => a + b) % 100;
}

/// Maps the user's bucket to a variant using cumulative weights.
String _assignVariant(
  String userId,
  String flagName,
  List<Map<String, dynamic>> variants,
) {
  final hash = _consistentHash(userId, flagName);
  int cumulative = 0;
  for (final v in variants) {
    cumulative += v['weight'] as int;
    if (hash < cumulative) return v['name'] as String;
  }
  return variants.last['name'] as String;
}

// Called on app startup, returns Map<String, String>.
// Usage: for each enabled flag, call _assignVariant and
// store the result in App State activeFlags.
```

Expected result: On app load, each user is deterministically assigned to feature flag variants stored in App State.
Gate UI features with Conditional Visibility based on flags
Wherever you want to show a feature-flagged element, wrap it in a Container with Conditional Visibility. The condition checks App State activeFlags. For a simple on/off flag: Conditional Visibility where activeFlags contains the flagName key. For A/B variants: create two Containers side by side. Container A has Conditional Visibility: activeFlags[flagName] == 'A'. Container B has Conditional Visibility: activeFlags[flagName] == 'B'. Only one will be visible. For example, to A/B test a new checkout button: variant A shows the original button, variant B shows a redesigned button with different text and color.
Expected result: Features appear or hide based on the user's flag assignment, with A/B variants showing different UI to different users.
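The visibility conditions above can be sketched as small FlutterFlow Custom Functions. The function names are illustrative, and activeFlags stands in for the App State map populated by loadFeatureFlags:

```dart
// Condition for an A/B variant Container: true only when the user
// was assigned this specific variant.
bool isInVariant(
  Map<String, String> activeFlags,
  String flagName,
  String variant,
) {
  // A user absent from the map is outside the rollout: hide the feature.
  return activeFlags[flagName] == variant;
}

// Condition for a simple on/off flag: the feature is active if the
// user was assigned any variant at all.
bool isFlagActive(Map<String, String> activeFlags, String flagName) {
  return activeFlags.containsKey(flagName);
}

void main() {
  final flags = {'new_checkout': 'B'};
  assert(isInVariant(flags, 'new_checkout', 'B'));
  assert(!isInVariant(flags, 'new_checkout', 'A')); // Container A hidden
  assert(!isFlagActive(flags, 'old_feature')); // not rolled out to this user
}
```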
Track impressions and conversions for A/B test measurement
When a feature-flagged element becomes visible, log an impression. Add an On Page Load action that creates a flag_impressions document with userId, flagName, variant (read from App State activeFlags), and timestamp. For conversions, when the user completes the desired action (e.g., taps the checkout button or completes a purchase), create a flag_conversions document with the same fields plus conversionType. Use a Custom Action that checks whether an impression has already been logged this session to avoid duplicate counts. The conversion rate for each variant is count(conversions for variant) / count(impressions for variant).
Expected result: Impressions and conversions are tracked per variant, enabling calculation of A/B test conversion rates.
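A minimal sketch of the session-level de-duplication, assuming a session-local loggedImpressions Set held in App State. Writing the actual Firestore document is left to the surrounding action chain:

```dart
// Session-local record of which flag impressions have been written.
final Set<String> loggedImpressions = {};

// Returns true only the first time a given user/flag/variant combination
// is seen this session, so the Firestore write happens at most once.
bool shouldLogImpression(String userId, String flagName, String variant) {
  final key = '$userId:$flagName:$variant';
  // Set.add returns false when the key is already present.
  return loggedImpressions.add(key);
}

void main() {
  assert(shouldLogImpression('user1', 'new_checkout', 'B')); // first view: log
  assert(!shouldLogImpression('user1', 'new_checkout', 'B')); // duplicate: skip
  assert(shouldLogImpression('user1', 'other_flag', 'A')); // new flag: log
}
```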
Build the admin panel for managing flags and viewing results
Create an AdminFlagsPage (gated behind admin role check). Add a ListView with Backend Query on feature_flags ordered by createdAt descending. Each row shows: flagName Text, description Text, isEnabled Switch, rolloutPercentage Slider (0-100), and variant weight display. On Switch toggle, update the isEnabled field. On Slider change, update rolloutPercentage. Add a 'View Results' Button per flag that navigates to a FlagResultsPage showing: total impressions per variant, total conversions per variant, conversion rate percentage per variant, and a winner indicator (highest conversion rate highlighted green). Add a 'Create Flag' FloatingActionButton that opens a form for creating new flags.
Expected result: An admin dashboard for toggling flags, adjusting rollout, and viewing A/B test conversion results.
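The math behind the FlagResultsPage can be sketched in plain Dart. In the app, the counts would come from queries on flag_impressions and flag_conversions; the maps here are illustrative stand-ins:

```dart
// Computes per-variant conversion rates from impression and conversion counts.
Map<String, double> conversionRates(
  Map<String, int> impressions,
  Map<String, int> conversions,
) {
  return impressions.map((variant, imp) {
    final conv = conversions[variant] ?? 0;
    // Guard against division by zero for variants with no impressions yet.
    return MapEntry(variant, imp == 0 ? 0.0 : conv / imp);
  });
}

// Variant with the highest conversion rate (the green "winner" indicator).
String winner(Map<String, double> rates) =>
    rates.entries.reduce((a, b) => a.value >= b.value ? a : b).key;

void main() {
  final rates = conversionRates(
    {'A': 1000, 'B': 1000}, // impressions per variant
    {'A': 42, 'B': 57}, // conversions per variant
  );
  assert(rates['B']! > rates['A']!);
  assert(winner(rates) == 'B');
}
```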
Complete working example
```
FIRESTORE DATA MODEL:
  feature_flags/{flagName}
    flagName: String
    isEnabled: Boolean
    rolloutPercentage: Integer (0-100)
    variants: [{name: "A", weight: 50}, {name: "B", weight: 50}]
    targetAudience: "all" | "new_users" | "premium"
    description: String
    createdAt: Timestamp

  flag_impressions/{docId}
    userId: String
    flagName: String
    variant: String
    timestamp: Timestamp

  flag_conversions/{docId}
    userId: String
    flagName: String
    variant: String
    conversionType: String
    timestamp: Timestamp

APP STATE:
  activeFlags: JSON Map {flagName: variantName}
  loggedImpressions: Set<String> (session-local)

APP STARTUP FLOW:
  1. Custom Action: loadFeatureFlags()
     - Query feature_flags where isEnabled == true
     - For each flag:
       a. Hash userId → rollout check (0-99 < rolloutPercentage?)
       b. If in rollout: hash(userId + flagName) → assign variant
       c. Store in App State activeFlags
  2. Continue to home page

UI GATING:
  Container (Variant A content)
    Conditional Visibility: activeFlags['new_checkout'] == 'A'
  Container (Variant B content)
    Conditional Visibility: activeFlags['new_checkout'] == 'B'

ANALYTICS LOGGING:
  On Page Load (if flagged element visible):
    → Create flag_impressions doc (once per session per flag)
  On Conversion Action:
    → Create flag_conversions doc

ADMIN PANEL:
  ListView (feature_flags)
  ├── Text (flagName + description)
  ├── Switch (isEnabled)
  ├── Slider (rolloutPercentage)
  └── Button ("View Results")
      → FlagResultsPage
          impressions per variant
          conversions per variant
          conversion rate = conversions / impressions
```

Common mistakes when creating a Feature Flag System in FlutterFlow for A/B Testing
Mistake: Using random variant assignment on every app open instead of consistent hashing.
Why it's a problem: Users flip between variants A and B across sessions, which contaminates your test data and makes the experience feel inconsistent.
How to avoid: Use a deterministic hash of (userId + flagName) so each user always sees the same variant for a given flag. The hash is computed fresh each time but always produces the same result for the same inputs.
Mistake: Gating features only in the UI without checking flags in backend operations.
Why it's a problem: Hiding a widget does not disable the behavior behind it; backend logic can still run for users who should be in the control group.
How to avoid: For flags that affect backend behavior, pass the variant from App State to Cloud Functions and check it server-side as well.
Mistake: Not tracking impressions alongside conversions.
Why it's a problem: Raw conversion counts favor whichever variant was shown to more users; without impressions you cannot compute a rate.
How to avoid: Log an impression document when a flagged element is displayed (deduplicated per session), and use conversion rate (conversions / impressions) as the primary metric.
Best practices
- Use consistent hashing (userId + flagName) for deterministic variant assignment across sessions
- Store flag configuration in Firestore for remote management without app redeployment
- Track both impressions and conversions to calculate meaningful conversion rates
- Log only one impression per user per flag per session to avoid inflating counts
- Start with a low rollout percentage (10-20%) and increase gradually to limit blast radius
- Gate the admin panel behind role-based access control
- Clean up completed A/B tests by archiving flag data and removing conditional UI code
Still stuck?
Copy one of these prompts to get a personalized, step-by-step explanation.
I want to build a feature flag system with A/B testing in FlutterFlow. Show me the Firestore data model for feature flags with variants and rollout percentages, a Custom Action for consistent variant assignment using hashing, how to use Conditional Visibility for feature gating, impression and conversion tracking, and an admin panel for managing flags.
Create an admin page with a list of feature flags. Each row has a toggle switch, a slider for rollout percentage, and a button to view results. Add a floating action button to create new flags.
Frequently asked questions
How long should I run an A/B test before declaring a winner?
Run until you have at least 100 conversions per variant for statistical significance. For low-traffic apps, this might take 2-4 weeks. Use a sample size calculator to determine the exact number based on your expected conversion rate and desired confidence level.
Can I target flags to specific user segments?
Yes. The targetAudience field supports segment-based targeting. In the loadFeatureFlags Custom Action, check the user's attributes (signup date for new_users, subscription tier for premium) and only assign the flag if the user matches the target audience.
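A minimal sketch of that audience check inside loadFeatureFlags. The signupDate and isPremium attributes, and the 30-day definition of a "new" user, are assumptions about how your Users collection is modeled:

```dart
// Returns true if the current user falls inside the flag's target audience.
bool matchesAudience(
  String targetAudience, {
  required DateTime signupDate,
  required bool isPremium,
}) {
  switch (targetAudience) {
    case 'new_users':
      // Assumption: "new" means signed up within the last 30 days.
      return DateTime.now().difference(signupDate).inDays <= 30;
    case 'premium':
      return isPremium;
    case 'all':
    default:
      return true;
  }
}

void main() {
  assert(matchesAudience('all', signupDate: DateTime(2020), isPremium: false));
  assert(!matchesAudience('premium',
      signupDate: DateTime(2020), isPremium: false));
}
```

Only run the variant-assignment hash for flags whose audience matches; users outside the audience simply never enter activeFlags.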
What happens when I disable a flag mid-test?
Users who load the app after the flag is disabled will not see either variant. Existing sessions with cached App State will continue showing their assigned variant until they restart the app. Analytics already collected remain valid.
Can I run multiple A/B tests simultaneously?
Yes. Each flag is independent. A user can be in variant A for one flag and variant B for another. The consistent hashing is per-flag, so variant assignments do not interfere with each other.
How do I avoid the overhead of querying all flags on every app load?
Cache the flag configuration in App State with a timestamp. On app load, check if the cache is less than 1 hour old. If so, use the cached flags. Otherwise, re-query Firestore. This reduces reads while keeping flags reasonably fresh.
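A minimal sketch of the freshness check, assuming App State also stores the time of the last successful fetch alongside activeFlags:

```dart
// Flags older than this are re-fetched from Firestore.
const cacheTtl = Duration(hours: 1);

// Returns true if the cached flag configuration is still usable.
bool isCacheFresh(DateTime? lastFetched, DateTime now) {
  if (lastFetched == null) return false; // never fetched: query Firestore
  return now.difference(lastFetched) < cacheTtl;
}

void main() {
  final now = DateTime(2024, 1, 1, 12, 0);
  assert(isCacheFresh(now.subtract(const Duration(minutes: 30)), now));
  assert(!isCacheFresh(now.subtract(const Duration(hours: 2)), now));
  assert(!isCacheFresh(null, now));
}
```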
Can RapidDev help implement advanced experimentation?
Yes. RapidDev can build a full experimentation platform with multivariate testing, statistical significance calculations, automatic winner selection, gradual rollout, and integration with analytics tools like Mixpanel or Amplitude.
Talk to an Expert
Our team has built 600+ apps. Get personalized help with your project.
Book a free consultation