RapidDev - Software Development Agency

How to Integrate AI-Based Predictive Analytics in FlutterFlow

What you'll learn

  • How to call a hosted ML model endpoint from a Firebase Cloud Function
  • How to structure prediction requests with feature data from Firestore
  • How to display predictions with confidence scores and trend charts in FlutterFlow
  • How to schedule periodic predictions and store results in Firestore for dashboard display
Beginner · 10 min read · 60-90 min · FlutterFlow Free+ · March 2026 · RapidDev Engineering Team
TL;DR

Integrate AI-based predictive analytics in FlutterFlow by connecting to a hosted ML model endpoint (Google Vertex AI, AWS SageMaker, or a custom REST API) via a Cloud Function. The Cloud Function sends feature data to the model and returns predictions with confidence scores. FlutterFlow displays the predictions in dashboards with charts and confidence indicators. Never expose ML model endpoints directly from client code — always proxy through a Cloud Function.

Connect trained ML models to your FlutterFlow app for business forecasts and predictions

Predictive analytics adds a layer of intelligence to your FlutterFlow app — forecasting next month's revenue, scoring users by likelihood to churn, or predicting demand for inventory management. The ML model itself is trained and hosted outside FlutterFlow (Google Vertex AI, AWS SageMaker, or a simple FastAPI server). FlutterFlow connects to it via a Cloud Function that prepares the feature data, calls the model endpoint, and returns the prediction. A scheduled Cloud Function runs predictions on a regular basis and stores results in Firestore, so the app dashboard shows current predictions instantly without waiting for an ML call on every page load.

Prerequisites

  • A FlutterFlow project with Firebase configured (Settings → Project Setup → Firebase)
  • A hosted ML model endpoint — this tutorial uses Google Vertex AI Prediction, but the pattern applies to any REST endpoint
  • Cloud Functions enabled on Firebase (Blaze plan required)
  • Historical data in Firestore that will serve as input features for predictions

Step-by-step guide

1

Identify your prediction use case and required input features

Predictive analytics only works if you have historical data to train on and a clear target to predict. Common use cases for FlutterFlow apps:

  • Churn prediction: inputs days_since_last_open, total_purchases, subscription_tier, support_tickets; output churn_probability (0-1)
  • Demand forecasting: inputs historical_sales_by_day, day_of_week, is_holiday; output predicted_units_next_week
  • Content recommendation: inputs user_category_preferences, recency, click_history; output relevance_score per content item

Decide which use case fits your app, identify the features available in your Firestore data, and choose a pre-trained model or train a simple model with Vertex AI AutoML Tabular, which requires no ML expertise: just upload a CSV of historical data.
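Before writing any backend code, it helps to pin down the feature contract. A minimal JavaScript sketch for the churn case, assuming this tutorial's Firestore field names and zero-defaults for missing values (both assumptions you should check against your own schema and training data):

```javascript
// Sketch: map a Firestore user document to the churn model's feature vector.
// Field names follow the tutorial's churn example; defaulting missing fields
// to 0 is an assumption and should match what the model saw during training.
function extractChurnFeatures(userDoc) {
  return {
    days_since_last_open: userDoc.daysSinceLastOpen ?? 0,
    total_purchases: userDoc.totalPurchases ?? 0,
    subscription_tier: userDoc.subscriptionTier === 'pro' ? 1 : 0, // encode tier as 0/1
    support_tickets: userDoc.supportTickets ?? 0,
  };
}
```

Writing this mapping down first makes it easy to verify that the app collects every field the model expects.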

Expected result: You have a defined prediction goal, know which Firestore fields to use as features, and have a model endpoint URL to call.

2

Create the prediction Cloud Function

In Firebase Console → Functions, create an HTTPS Cloud Function named getPrediction. It accepts a userId (to look up feature data from Firestore) and a predictionType (e.g., 'churn'). The function: reads the user's feature data from Firestore (last_active, purchase_count, etc.), formats it as the model's expected input JSON, calls the Vertex AI prediction endpoint with the service account Bearer token (use Google Auth Library for token generation), and returns the prediction result. For Vertex AI, the endpoint is: https://{region}-aiplatform.googleapis.com/v1/projects/{project}/locations/{region}/endpoints/{endpoint}:predict. The request body is { instances: [{ feature1: value, feature2: value }] }. The response contains predictions and deployed_model_id.

get_prediction.js
// Cloud Function: getPrediction — proxies a Vertex AI prediction request
const functions = require('firebase-functions');
const admin = require('firebase-admin');
const { GoogleAuth } = require('google-auth-library');
const axios = require('axios');

admin.initializeApp();

const ENDPOINT = functions.config().vertex.endpoint_url;

exports.getPrediction = functions.https.onRequest(async (req, res) => {
  res.set('Access-Control-Allow-Origin', '*');
  if (req.method === 'OPTIONS') return res.status(204).send('');

  const { userId } = req.query;
  if (!userId) return res.status(400).json({ error: 'userId required' });

  try {
    const userDoc = await admin.firestore().collection('users').doc(userId).get();
    if (!userDoc.exists) return res.status(404).json({ error: 'User not found' });

    // Map Firestore fields to the model's expected feature names
    const d = userDoc.data();
    const features = {
      days_since_last_open: d.daysSinceLastOpen || 0,
      total_purchases: d.totalPurchases || 0,
      subscription_tier: d.subscriptionTier === 'pro' ? 1 : 0,
      support_tickets: d.supportTickets || 0,
    };

    // Application Default Credentials supply the service account token
    const auth = new GoogleAuth({ scopes: 'https://www.googleapis.com/auth/cloud-platform' });
    const client = await auth.getClient();
    const token = await client.getAccessToken();

    const { data } = await axios.post(ENDPOINT, { instances: [features] }, {
      headers: { Authorization: `Bearer ${token.token}` },
    });

    const score = data.predictions[0][0];
    res.json({ churnProbability: score, confidence: score > 0.5 ? 'high' : 'low' });
  } catch (err) {
    console.error('Prediction failed:', err);
    res.status(500).json({ error: 'Prediction failed' });
  }
});

Expected result: The Cloud Function returns a prediction score for the given user along with a confidence indicator.

3

Set up a scheduled prediction run and store results in Firestore

Real-time predictions on every page load are slow and expensive. Instead, run predictions on a schedule and store results. Create a Cloud Scheduler trigger for your prediction function to run daily at midnight: schedule: every 24 hours. The scheduled function queries all users where lastPredictionRun is older than 24 hours, calls the prediction endpoint for each, and writes the result to their Firestore user document: churnScore (Double), churnLabel (low/medium/high), predictionUpdatedAt (Timestamp). For users with churnScore > 0.7, also write to a high_risk_users collection so admins can take action. This batch approach means prediction data is always fresh without on-demand latency.
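The core of that batch step can be sketched as pure functions, with the Firestore reads/writes and the Cloud Scheduler wiring (a `functions.pubsub.schedule('every 24 hours')` trigger) omitted. The label thresholds and the 0.7 high-risk cutoff follow the values used in this tutorial; the result shapes are illustrative:

```javascript
// Label thresholds follow the tutorial: < 0.3 low, 0.3-0.7 medium, >= 0.7 high.
function labelForScore(score) {
  if (score < 0.3) return 'low';
  if (score < 0.7) return 'medium';
  return 'high';
}

// Given [{ userId, score }] prediction results, build the per-user Firestore
// update payloads plus the subset to mirror into high_risk_users (> 0.7).
function buildBatchWrites(results, now) {
  const userUpdates = results.map(({ userId, score }) => ({
    userId,
    update: {
      churnScore: score,
      churnLabel: labelForScore(score),
      predictionUpdatedAt: now,
    },
  }));
  const highRisk = results.filter((r) => r.score > 0.7);
  return { userUpdates, highRisk };
}
```

Keeping the labeling and batching logic pure like this makes the scheduled function easy to unit-test without a Firestore emulator.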

Expected result: All users have a churnScore field updated daily. High-risk users appear in a separate collection for targeted retention actions.

4

Build the prediction dashboard in FlutterFlow

Create an Analytics page. Add a Backend Query loading the current user's document — their churnScore and prediction fields are available. Add a Container at the top showing a risk indicator: Conditional Value on background color (green if churnScore < 0.3, orange if 0.3-0.7, red if > 0.7). Add a Text showing the score as a percentage: 'Churn Risk: 73%'. Below, add a Custom Widget using fl_chart showing a line chart of churnScore over time (query the predictions_history subcollection). Add a Text below the chart explaining what factors contribute to the score — bind this to a factors field you populate in the Cloud Function alongside the score. For admins: add a ListView showing all high-risk users from the high_risk_users collection.
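The chart widget itself is written in Dart inside FlutterFlow, but the data shaping is worth spelling out. A sketch of turning predictions_history documents into sorted chart points, using plain numeric timestamps as a stand-in for Firestore Timestamp objects:

```javascript
// Sketch: convert predictions_history docs into (x, y) points for a line
// chart, oldest first. Document shape follows the tutorial's schema; numeric
// recordedAt values here stand in for Firestore Timestamps.
function toChartPoints(historyDocs) {
  return historyDocs
    .slice() // avoid mutating the query result
    .sort((a, b) => a.recordedAt - b.recordedAt)
    .map((doc) => ({ x: doc.recordedAt, y: doc.score }));
}
```

In the Dart Custom Widget, the same transform feeds each point into fl_chart's `FlSpot` list.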

Expected result: The analytics dashboard shows the current churn risk score with color coding, a trend chart over time, and contributing factors. Admins see a list of high-risk users.

5

Display predictions with confidence scores and actionable thresholds

Raw probability numbers like 0.7234 are meaningless to most users. Convert predictions to actionable labels and confidence indicators. Create a Custom Function named categorizePrediction that takes a Double score and returns a Map with label (Low Risk / Medium Risk / High Risk), color (green/orange/red), and recommendation (e.g., 'Send a retention discount coupon' or 'Schedule a check-in call'). Bind the label and recommendation to Text widgets on the prediction card. Also show the confidence interval if your model returns it (most Vertex AI models include a lower_bound and upper_bound) — display as a range: 'Churn Risk: 65%-78%'. If the confidence interval is wide (> 20 percentage points), show a note that the prediction has low confidence due to limited data for this user.
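FlutterFlow Custom Functions are written in Dart, but the categorizePrediction logic is simple enough to sketch in JavaScript so the thresholds can be checked; the recommendation strings are example copy, and the optional bounds parameters assume your model returns a confidence interval:

```javascript
// Sketch of the categorizePrediction logic. Thresholds and the 20-point
// "wide interval" rule follow the tutorial; recommendations are examples.
function categorizePrediction(score, lowerBound, upperBound) {
  const label = score < 0.3 ? 'Low Risk' : score < 0.7 ? 'Medium Risk' : 'High Risk';
  const color = score < 0.3 ? 'green' : score < 0.7 ? 'orange' : 'red';
  const recommendation =
    score < 0.3 ? 'User is engaged. No action needed.'
    : score < 0.7 ? 'Send a retention discount coupon.'
    : 'Schedule a check-in call.';
  const result = { label, color, recommendation };
  if (lowerBound != null && upperBound != null) {
    // Display the interval as a range, e.g. 'Churn Risk: 65%-78%'
    result.range = `${Math.round(lowerBound * 100)}%-${Math.round(upperBound * 100)}%`;
    // A spread over 20 percentage points signals low confidence
    result.lowConfidence = upperBound - lowerBound > 0.2;
  }
  return result;
}
```

Porting this to Dart in FlutterFlow's Custom Function editor is a near line-for-line translation.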

Expected result: Predictions are displayed with clear labels, color coding, actionable recommendations, and confidence context rather than raw decimal numbers.

Complete working example

prediction_dashboard_schema.txt
Firestore Schema:
users/{userId}
  churnScore: Double (0.0-1.0)
  churnLabel: String (low / medium / high)
  churnFactors: Map
    topFactor: String
    factorDetails: String
  predictionUpdatedAt: Timestamp

  predictions_history (subcollection)
    {predictionId}
      score: Double
      label: String
      predictionType: String
      recordedAt: Timestamp

high_risk_users/{userId}
  userId: String
  churnScore: Double
  churnLabel: String
  lastActive: Timestamp
  markedAt: Timestamp

FlutterFlow Page: AnalyticsDashboard
  Backend Query: users/{currentUserId}

  Prediction Card
    Container (bg: Conditional on churnScore)
    Text: 'Churn Risk: ' + (churnScore * 100).toStringAsFixed(0) + '%'
    Text: churnLabel (Low/Medium/High Risk)
    Text: churnFactors.topFactor

  Custom Widget: fl_chart LineChart
    Backend Query: predictions_history ordered by recordedAt

  Recommendation Card
    Text: getRecommendation(churnScore)

Custom Function: getRecommendation
  if score < 0.3: 'User is engaged. No action needed.'
  if score < 0.7: 'Send a re-engagement email with a discount.'
  if score >= 0.7: 'High churn risk. Schedule a personal outreach.'

Common mistakes

Mistake: Showing raw ML confidence scores like 0.7234 without context

Why it's a problem: Raw probabilities are meaningless to most users and prompt no action.

How to avoid: Convert scores to human-readable labels with thresholds: below 30% = Low Risk (green), 30-70% = Medium Risk (orange), above 70% = High Risk (red). Include a one-sentence recommendation for each tier.

Mistake: Calling the ML model endpoint on every page load for every user

Why it's a problem: Real-time prediction calls are slow and expensive, and every dashboard visit pays that latency and cost.

How to avoid: Run predictions in a scheduled Cloud Function (nightly, or hourly for high-frequency apps), write results to Firestore, and read from Firestore in FlutterFlow. The dashboard loads in milliseconds.

Mistake: Calling the Vertex AI endpoint directly from FlutterFlow without a Cloud Function

Why it's a problem: It exposes your model endpoint and credentials in client code.

How to avoid: Always proxy Vertex AI calls through a Cloud Function. The Cloud Function authenticates with Application Default Credentials (automatically available in Cloud Functions) without needing explicit key files.

Best practices

  • Start with Google AutoML Tabular or Vertex AI AutoML — they train models from CSV data without writing ML code, making predictive analytics accessible without a data science team
  • Always display the prediction update timestamp so users know how fresh the data is — 'Last updated 3 hours ago' sets expectations correctly
  • Build a feedback mechanism — let users mark predictions as correct or incorrect to collect labeled data that can retrain the model over time
  • Store prediction history over time (not just the current score) so you can show trend charts and measure whether interventions based on predictions are working
  • Segment predictions by user type, plan, or cohort to identify patterns — aggregate dashboards for admins are as valuable as individual user scores
  • Test your model's accuracy on a holdout set before deploying — a model that is accurate only 55% of the time provides no actionable signal
  • Document the model's features and thresholds in the app's admin view so business users understand what drives the predictions they are acting on

Still stuck?

Copy one of these prompts to get a personalized, step-by-step explanation.

ChatGPT Prompt

I am building a FlutterFlow app with AI-based churn prediction. Write a Firebase Cloud Function in Node.js that: (1) accepts a userId parameter, (2) loads the user's feature data from Firestore (days_since_last_open, total_purchases, subscription_tier, support_tickets), (3) authenticates with Google Cloud using the google-auth-library, (4) calls a Vertex AI Prediction endpoint with these features as input, and (5) returns the churn probability score with a human-readable label (low/medium/high) and a one-sentence recommendation. Also write a second scheduled function that runs this for all users daily and updates their Firestore documents.

FlutterFlow Prompt

Create an analytics dashboard page in my FlutterFlow app that reads a user's churnScore (Double 0-1) from their Firestore user document. Show a colored card (green below 30%, orange 30-70%, red above 70%), the score as a percentage, a recommendation text based on the score, and a line chart showing churn score history from a predictions_history subcollection using an fl_chart Custom Widget.

Frequently asked questions

Do I need a data science background to add predictive analytics to my FlutterFlow app?

Not anymore. Google Vertex AI AutoML Tabular lets you upload a CSV of historical data (e.g., user behavior and whether they churned), train a model with one click, and deploy it as a REST endpoint — no ML code required. You just need historical labeled data and a Cloud Function to call the endpoint.

What kind of data do I need to make predictions?

You need historical data with both input features (behaviors, attributes) and known outcomes. For churn prediction: past users' activity patterns plus whether they actually churned. For demand forecasting: past sales by day plus the quantity sold. The more historical records you have (ideally 1,000+), the more accurate the model. Store behavioral data in Firestore from day one even if you do not use it for ML immediately.

How much does Vertex AI prediction cost?

Vertex AI Prediction charges per 1,000 prediction requests. Costs are typically $0.10-$0.50 per 1,000 predictions depending on model size and region. With a daily batch job for 10,000 users, that works out to roughly $1-$5 per day, far cheaper than querying on every page load. Use the Google Cloud pricing calculator for an accurate estimate based on your model type.

Can I use OpenAI instead of Vertex AI for predictions?

OpenAI's GPT models can make predictions from text descriptions (e.g., 'Given this user profile, what is their churn risk?'), but they are not specialized for tabular ML tasks and are far more expensive and slower than a dedicated prediction endpoint. Use OpenAI for natural language tasks and Vertex AI or SageMaker for structured tabular predictions.

How do I show users why the prediction was made (explainability)?

Vertex AI supports feature attributions (Shapley values) that show which features contributed most to a prediction. Enable Explanation when deploying the model endpoint. The prediction response then includes a featureAttributions object. Surface the top 2-3 features in your FlutterFlow UI: 'Main factors: 45 days inactive, 0 purchases last month, 2 support tickets.' This dramatically increases user trust in the prediction.
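A sketch of the ranking step, assuming the response's attributions have already been flattened into a map of feature name to contribution value (the actual featureAttributions shape varies by model and deployment, so treat this as illustrative):

```javascript
// Sketch: rank features by the magnitude of their attribution and keep the
// top n, for display as 'Main factors: ...' in the UI. The flat
// name -> contribution map is an assumed, simplified input shape.
function topFactors(attributions, n = 3) {
  return Object.entries(attributions)
    .sort((a, b) => Math.abs(b[1]) - Math.abs(a[1])) // largest |contribution| first
    .slice(0, n)
    .map(([name]) => name);
}
```

Sorting by absolute value matters because a strongly negative attribution (a feature pushing the score down) is just as informative as a positive one.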

What if I need help setting up the full ML prediction pipeline for my app?

Connecting Vertex AI to Firestore, scheduling batch predictions, building the dashboard, and ensuring data quality all require careful architecture. RapidDev has implemented AI-powered features across production FlutterFlow apps and can design the complete prediction pipeline from data collection to dashboard display.

RapidDev

Talk to an Expert

Our team has built 600+ apps. Get personalized help with your project.

Book a free consultation
