Run A/B tests in your Bubble app by randomly assigning users to test variants, tracking conversion events per variant, and comparing results to determine which version performs better. This tutorial covers building a simple split testing system using custom states, database tracking, and conversion analytics.
Overview: Setting Up A/B Testing in Bubble
A/B testing lets you compare two versions of a page element to see which drives more conversions. This tutorial shows you how to build a split testing system natively in Bubble — no external tools required. You will randomly assign users to variants, show different content, and track which variant leads to more signups, clicks, or purchases.
Prerequisites
- A Bubble account with an app ready to edit
- A specific element or page you want to test (e.g., a headline, button color, or layout)
- Basic understanding of custom states and workflows
- At least 100 expected visitors per variant for meaningful results
Step-by-step guide
Create the experiment data model
Go to the Data tab and create a Data Type called Experiment with fields: name (text), variant_a_label (text), variant_b_label (text), is_active (yes/no), and start_date (date). Create another Data Type called ExperimentAssignment with fields: experiment (Experiment), user (User), variant (text — 'A' or 'B'), converted (yes/no), and assigned_date (date). This tracks which variant each user sees and whether they converted.
Expected result: Two data types ready to manage experiments and track user assignments.
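For reference, the two data types can be sketched as plain JavaScript objects. In Bubble these are defined in the Data tab, not in code; all field values below are illustrative placeholders.

```javascript
// Sketch of the Experiment data type (example values are assumptions)
const experiment = {
  name: "Homepage CTA color",
  variant_a_label: "Blue button",
  variant_b_label: "Green button",
  is_active: true,
  start_date: new Date("2024-01-15"),
};

// Sketch of the ExperimentAssignment data type: one record per user per test
const assignment = {
  experiment: experiment, // link to the Experiment record
  user: "user_123",       // in Bubble, a reference to a User
  variant: "A",           // "A" or "B"
  converted: false,       // flipped to true when the goal action fires
  assigned_date: new Date(),
};
```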
Randomly assign users to variants on page load
On the page you want to test, add a Page is loaded workflow with the condition Only when Do a search for ExperimentAssignments (where user is Current User and experiment is your test) returns empty, so each user is assigned only once. Then add a Create a new ExperimentAssignment action. To choose the variant, use the Calculate formula expression to generate a random number: if it is less than 0.5, set variant to A; otherwise set it to B. Store the result in a custom state called user_variant on the page. Finally, add a second Page is loaded workflow for returning visitors: Only when an assignment already exists, set user_variant to the stored assignment's variant so users always see the same version.
Pro tip: Use a 50/50 split for simple tests. For more advanced tests, adjust the random threshold (e.g., 0.7 for 70/30 split).
Expected result: Each new visitor is randomly assigned to variant A or B, and the assignment is saved permanently.
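The assignment step boils down to a single threshold check against a random number. A minimal sketch in JavaScript (the function name and threshold parameter are illustrative, not Bubble expressions):

```javascript
// Assign a variant from a random value in [0, 1).
// `threshold` is the share of traffic sent to variant A:
// 0.5 gives a 50/50 split, 0.7 gives a 70/30 split.
function assignVariant(randomValue, threshold = 0.5) {
  return randomValue < threshold ? "A" : "B";
}

// In practice you would call it with Math.random():
// const variant = assignVariant(Math.random());
```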
Show different content based on variant
Create two versions of the element you are testing. For example, two Group elements: Group Variant A with a blue call-to-action button and Group Variant B with a green button. Set conditional visibility: Group Variant A is visible when page's user_variant is A. Group Variant B is visible when page's user_variant is B. Enable Collapse when hidden on both groups so they do not take up empty space.
Expected result: Users assigned to variant A see the blue button; variant B users see the green button.
Track conversion events
When the user performs the desired action (e.g., clicks the CTA button, completes signup), add a workflow action: Make changes to the ExperimentAssignment where user is Current User and experiment is the active test. Set converted to yes. This records that the user completed the goal action under their assigned variant.
Expected result: Conversions are tracked per user and linked to their assigned variant.
Build a results dashboard
Create an admin page called ab-results. Add text elements showing: Variant A conversions (Do a search for ExperimentAssignments where variant is A and converted is yes, then count), Variant A total (search where variant is A, then count), and the conversion rate ((conversions / total) * 100). Repeat for Variant B and display both rates side by side so you can compare performance.
Pro tip: Wait until each variant has at least 100 assignments before drawing conclusions. Small sample sizes produce unreliable results.
Expected result: A dashboard showing conversion rates for both variants with total counts.
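The dashboard formula is simply conversions divided by total, times 100. A small sketch of that calculation, with a guard for variants that have no assignments yet (function name is illustrative):

```javascript
// Conversion rate as a percentage: (conversions / total) * 100.
// Returns 0 when a variant has no assignments, to avoid dividing by zero.
function conversionRate(conversions, total) {
  if (total === 0) return 0;
  return (conversions / total) * 100;
}
```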
End the experiment and apply the winner
Once you have statistically significant results (as a rule of thumb, at least 200 assignments per variant), set the experiment's is_active field to no. Remove the losing variant's group and keep the winning variant as the permanent version. Optionally, delete or archive the ExperimentAssignment records to clean up your database.
Expected result: The winning variant becomes the permanent version and the experiment data is archived.
Complete working example
A/B TESTING — WORKFLOW SUMMARY
================================

DATA MODEL
  Experiment:
    - name (text)
    - variant_a_label (text)
    - variant_b_label (text)
    - is_active (yes/no)
    - start_date (date)

  ExperimentAssignment:
    - experiment (Experiment)
    - user (User)
    - variant (text: A or B)
    - converted (yes/no)
    - assigned_date (date)

WORKFLOW: Assign Variant (Page is loaded)
  Only when: No existing assignment for this user + experiment
  Step 1: Create ExperimentAssignment
    - experiment: current test
    - user: Current User
    - variant: if random < 0.5 then A else B
    - converted: no
    - assigned_date: Current date/time
  Step 2: Set state → user_variant = Result's variant

WORKFLOW: Load Existing Assignment (Page is loaded)
  Only when: Assignment exists for this user
  Step 1: Set state → user_variant = existing assignment's variant

UI CONDITIONALS
  Group Variant A: visible when user_variant is A
  Group Variant B: visible when user_variant is B

WORKFLOW: Track Conversion
  Trigger: CTA button clicked (or goal action)
  Step 1: Make changes to user's ExperimentAssignment
    - converted: yes

DASHBOARD FORMULAS
  Variant A rate: (A converted count / A total count) * 100
  Variant B rate: (B converted count / B total count) * 100

Common mistakes when running A/B testing in Bubble
Not persisting variant assignments in the database
Why it's a problem: Users may see a different version on each visit, contaminating your conversion data.
How to avoid: Always save the variant assignment to the database and load it on subsequent visits.
Drawing conclusions from too few visitors
Why it's a problem: Small samples produce differences that are likely due to chance rather than the variant.
How to avoid: Wait for at least 100-200 assignments per variant before comparing conversion rates.
Running multiple A/B tests on the same page simultaneously
Why it's a problem: Overlapping tests make it impossible to attribute a change in conversions to a single variable.
How to avoid: Run one test per page at a time. If you must test multiple elements, use multivariate testing with a single experiment tracking all combinations.
Best practices
- Persist variant assignments in the database so users always see the same version
- Wait for statistical significance (200+ per variant) before choosing a winner
- Test only one variable at a time for clear attribution
- Run tests for at least 1-2 weeks to account for day-of-week variations
- Use a 50/50 split for most tests to reach significance faster
- Archive experiment data after completion to keep the database clean
- Document what you tested and the results for future reference
Still stuck?
Copy one of these prompts to get a personalized, step-by-step explanation.
I want to A/B test my Bubble.io landing page headline and CTA button. How do I randomly assign visitors to variants, show different content based on assignment, track conversions, and compare results?
Create an A/B testing system for my landing page. Add Experiment and ExperimentAssignment data types. On page load, randomly assign users to variant A or B. Show different headline text based on variant. Track when users click the signup button as a conversion.
Frequently asked questions
Can I A/B test without requiring user login?
For logged-out users, store the variant in a browser cookie using the Toolbox plugin's JavaScript action. However, cookies reset if the user clears their browser, so logged-in testing is more reliable.
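A sketch of what that Toolbox "Run javascript" action could look like. The cookie name ab_variant and both helper names are assumptions, not part of the plugin:

```javascript
// Extract a stored variant from a cookie string like "ab_variant=B; other=1".
// Takes the string as a parameter so the parsing logic is easy to test.
function getVariantFromCookie(cookieString) {
  const match = cookieString.split("; ").find(c => c.startsWith("ab_variant="));
  return match ? match.split("=")[1] : null;
}

// Read the visitor's variant, or assign and persist a new one.
// Intended to run in the browser via the Toolbox plugin.
function assignAndStoreVariant() {
  let variant = getVariantFromCookie(document.cookie);
  if (!variant) {
    variant = Math.random() < 0.5 ? "A" : "B";
    // Persist for one year so returning visitors see the same variant
    document.cookie = "ab_variant=" + variant + "; max-age=31536000; path=/";
  }
  return variant;
}
```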
How do I know when my test has enough data?
A rough rule: each variant needs at least 100 conversions for reliable results. Use an online significance calculator to check if the difference between variants is statistically significant.
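If you would rather check significance yourself than use an online calculator, the standard approach is a two-proportion z-test: an absolute z value above 1.96 indicates significance at the 95% confidence level. A minimal sketch (function name is illustrative):

```javascript
// Two-proportion z-test for comparing conversion rates of two variants.
// convA/convB: number of conversions; totalA/totalB: number of assignments.
// Returns the z statistic; |z| > 1.96 means p < 0.05 (two-tailed).
function twoProportionZ(convA, totalA, convB, totalB) {
  const pA = convA / totalA;
  const pB = convB / totalB;
  // Pooled proportion under the null hypothesis that both rates are equal
  const pooled = (convA + convB) / (totalA + totalB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / totalA + 1 / totalB));
  return (pA - pB) / se;
}
```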
Can I test more than two variants?
Yes. Instead of A/B, create A/B/C tests by dividing the random number into thirds (0-0.33 = A, 0.33-0.66 = B, 0.66-1.0 = C). This requires more total traffic for significance.
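The three-way split described above can be sketched as follows, using exact thirds for an even split (function name is illustrative):

```javascript
// Map a random value in [0, 1) to one of three variants with equal traffic.
function assignThreeWay(randomValue) {
  if (randomValue < 1 / 3) return "A";
  if (randomValue < 2 / 3) return "B";
  return "C";
}
```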
Should I use an external tool like Google Optimize instead?
Google Optimize was sunset in 2023. Building A/B tests natively in Bubble gives you more control and avoids external dependencies. For advanced statistical analysis, export your data to a spreadsheet.
How do I handle users who visit on multiple devices?
If users log in, their assignment follows them across devices via the database. For logged-out users, each device gets an independent assignment.
Can RapidDev help set up advanced experimentation in my Bubble app?
Yes. RapidDev can build sophisticated testing frameworks including multivariate tests, feature flags, gradual rollouts, and statistical significance calculators.
Talk to an Expert
Our team has built 600+ apps. Get personalized help with your project.
Book a free consultation