
How to Implement a Virtual Assistant with AI in FlutterFlow

Build a conversational AI assistant in FlutterFlow by storing chat history in Firestore and sending only the last 10 messages to a Cloud Function that calls an AI API (OpenAI or Anthropic). Add a system prompt that defines the assistant's persona, implement tool use for querying app data (orders, status), and show a typing indicator while the AI responds. Never send the full unbounded history — it exceeds token limits.

What you'll learn

  • Store conversation history in Firestore and send only a context window to the AI API
  • Write a Cloud Function that calls an AI API with system prompt and message history
  • Implement tool use so the assistant can query Firestore for app-specific data
  • Add a typing indicator and suggested prompts to the chat UI in FlutterFlow
Beginner · 12 min read · 50-65 min build · FlutterFlow Pro+ (Cloud Functions required for AI API calls) · March 2026 · RapidDev Engineering Team

Why a Windowed Context Approach Is Essential

A common mistake when building AI chat features is saving every message to Firestore and then sending the entire history array to the AI API on each request. A conversation with 100 messages can easily exceed 50,000 tokens — well beyond the context limits of many models and expensive to process. The solution is to send only a sliding window of the last 10-20 messages to the AI, which covers the immediately relevant context for almost all conversational needs. The full history is still stored in Firestore for display in the UI, but the AI only sees the recent window. Combined with a strong system prompt and structured tool use for fetching specific data, this produces a capable assistant without context limit errors.
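The windowing step itself is small. A minimal sketch in plain JavaScript, using the same message shape and window size as the Cloud Function in this guide:

```javascript
// Build the message array sent to the AI from the full stored history.
// Only the last CONTEXT_WINDOW messages are included, no matter how long
// the conversation has grown.
const CONTEXT_WINDOW = 10;

function buildContext(systemPrompt, fullHistory, newUserMessage) {
  const recent = fullHistory.slice(-CONTEXT_WINDOW); // sliding window
  return [
    { role: 'system', content: systemPrompt },
    ...recent,
    { role: 'user', content: newUserMessage },
  ];
}

// A 100-message history still yields only 12 entries for the API call.
const history = Array.from({ length: 100 }, (_, i) => ({
  role: i % 2 === 0 ? 'user' : 'assistant',
  content: `message ${i}`,
}));
const messages = buildContext('You are a helpful assistant.', history, 'Hi');
console.log(messages.length); // 12
```

The full history stays in Firestore for the UI; only this trimmed array ever reaches the API.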

Prerequisites

  • FlutterFlow project with Firebase Authentication and Firestore enabled
  • An API key for OpenAI or Anthropic Claude (stored as a Firebase Functions secret)
  • FlutterFlow Pro plan for Cloud Functions and API integrations
  • Basic understanding of Firestore collections and Cloud Function deployment

Step-by-step guide

Step 1: Create the Firestore `conversations` and `messages` collections

In FlutterFlow, create a `conversations` collection with fields: `userId` (String), `title` (String), `createdAt` (Timestamp), `lastMessageAt` (Timestamp), `messageCount` (Integer). Create a `messages` subcollection (under each conversation document) with fields: `role` (String: `user`, `assistant`, or `tool`), `content` (String), `toolName` (String, nullable), `toolResult` (String, nullable), `createdAt` (Timestamp). Storing messages as a subcollection lets you fetch a single conversation's messages with a plain subcollection query; a `collectionGroup` query is only needed if you want to search across all conversations at once. This structure scales to many conversations per user without performance degradation.

Expected result: Both collections appear in the FlutterFlow Firestore schema panel.

Step 2: Build the chat UI with message list and input bar

Create a page called `AssistantChat`. Add a Column that fills the screen. At the top, add a Text widget showing the conversation title. In the middle, add a ListView that queries the `messages` subcollection for the current conversation, ordered by `createdAt` ascending, with live updates enabled. Each message tile should show a chat bubble — align right and use a primary color background for `role == user` messages, left-align with a grey background for `role == assistant`. At the bottom, add a Row with a TextField (multi-line, sends on Return key), a Send icon button, and a typing indicator row above the input that is conditionally visible. Add Page State variables: `isTyping` (Boolean, default false) and `conversationId` (String).

Expected result: The chat page shows an empty message list and an input bar at the bottom.

Step 3: Write the Cloud Function that calls the AI API with a context window

Create a Firebase Cloud Function named `chatWithAssistant` that accepts `conversationId` and `userMessage` and reads the user's ID from the authenticated request context. The function fetches the last 10 messages from the `messages` subcollection ordered by `createdAt` descending, reverses the array into chronological order, prepends a system prompt message, appends the new user message, calls the OpenAI or Anthropic API with the message array, saves the assistant's response to Firestore as a new `messages` document, updates `lastMessageAt` on the conversation, and returns the assistant's text. The key design decision is fetching only 10 messages, not the full conversation, to stay within token limits.

functions/index.js
```javascript
const { onCall, HttpsError } = require('firebase-functions/v2/https');
const { defineSecret } = require('firebase-functions/params');
const { getFirestore } = require('firebase-admin/firestore');
const { initializeApp } = require('firebase-admin/app');

initializeApp();
const openaiKey = defineSecret('OPENAI_API_KEY');

const SYSTEM_PROMPT = `You are a helpful assistant for our app.
You can help users check order status, find products, and get support.
Always be concise and friendly. If you cannot answer, say so.`;

const CONTEXT_WINDOW = 10;

exports.chatWithAssistant = onCall(
  { secrets: [openaiKey] },
  async (request) => {
    const { conversationId, userMessage } = request.data;
    const uid = request.auth?.uid;
    if (!uid) throw new HttpsError('unauthenticated', 'Sign in required');

    const db = getFirestore();

    // Verify the caller owns this conversation
    const convDoc = await db.collection('conversations').doc(conversationId).get();
    if (!convDoc.exists || convDoc.data().userId !== uid) {
      throw new HttpsError('permission-denied', 'Not your conversation');
    }

    const messagesRef = db
      .collection('conversations').doc(conversationId)
      .collection('messages');

    // Fetch last N messages for the context window
    const recent = await messagesRef
      .orderBy('createdAt', 'desc')
      .limit(CONTEXT_WINDOW)
      .get();

    const history = recent.docs
      .reverse()
      .map((d) => ({ role: d.data().role, content: d.data().content }));

    const messages = [
      { role: 'system', content: SYSTEM_PROMPT },
      ...history,
      { role: 'user', content: userMessage },
    ];

    // Save user message
    await messagesRef.add({
      role: 'user', content: userMessage,
      createdAt: new Date(),
    });

    // Call OpenAI
    const response = await fetch('https://api.openai.com/v1/chat/completions', {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${openaiKey.value()}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ model: 'gpt-4o-mini', messages }),
    });
    if (!response.ok) {
      throw new HttpsError('internal', `AI API error: ${response.status}`);
    }
    const data = await response.json();
    const assistantText = data.choices[0].message.content;

    // Save assistant response
    await messagesRef.add({
      role: 'assistant', content: assistantText,
      createdAt: new Date(),
    });

    await db.collection('conversations').doc(conversationId)
      .update({ lastMessageAt: new Date() });

    return { response: assistantText };
  }
);
```

Expected result: Deploying the function succeeds. A test call from the Firebase console returns a valid AI response.

Step 4: Wire the Send button to call the Cloud Function with typing state

In FlutterFlow, select the Send button. Add an Action chain: first, set page state `isTyping` to true (this shows the typing indicator). Second, run the Call Cloud Function action pointing to `chatWithAssistant`, passing `conversationId` from page state and the TextField content as `userMessage`. Third, clear the TextField content. Fourth, set page state `isTyping` to false. The typing indicator (an AnimatedWidget or a Row of three dots) is conditionally visible when `isTyping == true`. Because Firestore messages use live queries, the assistant's reply appears in the list automatically when the Cloud Function writes it — you do not need to manually refresh the list.

Expected result: Clicking Send shows the typing indicator, the user message appears in the list, and after 1-3 seconds the assistant's reply appears automatically.
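Expressed as plain async JavaScript, the action chain behaves like this (`state`, `textField`, and `callCloudFunction` stand in for FlutterFlow's page state, form field, and Call Cloud Function action; the names are illustrative):

```javascript
// Sketch of the send-button action chain. try/finally guarantees the typing
// indicator is hidden even when the Cloud Function call fails.
async function onSendPressed(state, textField, callCloudFunction) {
  const text = textField.value.trim();
  if (!text) return; // ignore empty sends
  state.isTyping = true; // show the typing indicator
  try {
    await callCloudFunction('chatWithAssistant', {
      conversationId: state.conversationId,
      userMessage: text,
    });
    textField.value = ''; // clear the input on success
  } finally {
    state.isTyping = false; // hide the indicator even on error
  }
}
```

The try/finally mirrors putting the "set `isTyping` to false" action on both the success and failure paths, so a failed call never leaves the indicator spinning.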

Step 5: Add suggested prompts that pre-fill the input

Above the input bar, add a horizontal ScrollView containing a Row of Chip widgets. Each chip shows a suggested prompt like `Check my last order`, `How do I reset my password?`, or `What are your support hours?`. Tapping a chip sets the TextField content to the chip's prompt text using a Set Form Field Value action. Optionally, immediately trigger the send flow on tap (chain the Cloud Function call after setting the field). Define the suggested prompts as a constant list in a Custom Function so they can be easily updated without modifying the UI. Show the suggestion chips only when the message list is empty — use Conditional Visibility on the chip row.

Expected result: New conversations show 3-4 suggested prompt chips. Tapping a chip populates the input and the chips disappear after the first message is sent.
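FlutterFlow custom functions are written in Dart; the constant-list idea looks like this, sketched in JavaScript to match the rest of this guide:

```javascript
// Suggested prompts live in one constant so copy changes never touch the UI.
const SUGGESTED_PROMPTS = [
  'Check my last order',
  'How do I reset my password?',
  'What are your support hours?',
];

// Cap how many chips the UI renders.
function getSuggestedPrompts(max = 4) {
  return SUGGESTED_PROMPTS.slice(0, max);
}
```

The chip row binds to this list, so marketing or support can change the prompts without any layout edits.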

Step 6: Implement tool use to query app data from the assistant

Tool use lets the assistant answer questions like 'What is the status of my order?' by fetching real data from Firestore. In the Cloud Function, define a `tools` array with schemas for functions like `getOrderStatus` (parameter: `orderId`) and `getUserAccount` (no parameters). Pass `tools` to the OpenAI API call. After the model responds, check if `response.choices[0].finish_reason == 'tool_calls'`. If so, extract the tool name and parameters, execute the corresponding Firestore query in the Cloud Function, append the tool result as a `tool` role message, make a second API call with the result included, and return the final response. This keeps all data access server-side so the AI cannot expose data to the wrong user.

Expected result: Asking 'What is the status of order 123?' returns a response that includes actual data from Firestore, not a generic 'I cannot access order data' reply.

Complete working example

functions/index.js
```javascript
const { onCall, HttpsError } = require('firebase-functions/v2/https');
const { defineSecret } = require('firebase-functions/params');
const { initializeApp } = require('firebase-admin/app');
const { getFirestore } = require('firebase-admin/firestore');

initializeApp();
const openaiKey = defineSecret('OPENAI_API_KEY');

const SYSTEM_PROMPT =
  'You are a helpful customer support assistant. ' +
  'Be concise, friendly, and accurate. ' +
  'Use the getOrderStatus tool when users ask about orders.';

const TOOLS = [
  {
    type: 'function',
    function: {
      name: 'getOrderStatus',
      description: 'Get the status of a specific order by ID',
      parameters: {
        type: 'object',
        properties: { orderId: { type: 'string', description: 'The order ID' } },
        required: ['orderId'],
      },
    },
  },
];

async function executeToolCall(toolName, toolArgs, userId) {
  const db = getFirestore();
  if (toolName === 'getOrderStatus') {
    const orderDoc = await db.collection('orders').doc(toolArgs.orderId).get();
    // Ownership check: never return another user's order data
    if (!orderDoc.exists || orderDoc.data().userId !== userId) {
      return 'Order not found or access denied.';
    }
    const { status, estimatedDelivery, items } = orderDoc.data();
    return JSON.stringify({ status, estimatedDelivery, itemCount: items.length });
  }
  return 'Tool not found.';
}

exports.chatWithAssistant = onCall(
  { secrets: [openaiKey] },
  async (request) => {
    const { conversationId, userMessage } = request.data;
    const uid = request.auth?.uid;
    if (!uid) throw new HttpsError('unauthenticated', 'Sign in required');

    const db = getFirestore();

    // Verify the caller owns this conversation
    const convDoc = await db.collection('conversations').doc(conversationId).get();
    if (!convDoc.exists || convDoc.data().userId !== uid) {
      throw new HttpsError('permission-denied', 'Not your conversation');
    }

    const msgsRef = db
      .collection('conversations').doc(conversationId)
      .collection('messages');

    // Save user message
    await msgsRef.add({ role: 'user', content: userMessage, createdAt: new Date() });

    // Fetch last 10 messages as context window (includes the message just saved)
    const recent = await msgsRef
      .orderBy('createdAt', 'desc').limit(10).get();
    const history = recent.docs.reverse()
      .map((d) => ({ role: d.data().role, content: d.data().content }));

    const messages = [{ role: 'system', content: SYSTEM_PROMPT }, ...history];

    // First API call
    let response = await fetch('https://api.openai.com/v1/chat/completions', {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${openaiKey.value()}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ model: 'gpt-4o-mini', messages, tools: TOOLS }),
    });
    if (!response.ok) throw new HttpsError('internal', `AI API error: ${response.status}`);
    let data = await response.json();
    let choice = data.choices[0];

    // Handle tool calls
    if (choice.finish_reason === 'tool_calls') {
      const toolCall = choice.message.tool_calls[0];
      const toolResult = await executeToolCall(
        toolCall.function.name,
        JSON.parse(toolCall.function.arguments),
        uid
      );
      messages.push(choice.message);
      messages.push({
        role: 'tool',
        tool_call_id: toolCall.id,
        content: toolResult,
      });
      // Second API call with tool result
      response = await fetch('https://api.openai.com/v1/chat/completions', {
        method: 'POST',
        headers: {
          Authorization: `Bearer ${openaiKey.value()}`,
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({ model: 'gpt-4o-mini', messages }),
      });
      if (!response.ok) throw new HttpsError('internal', `AI API error: ${response.status}`);
      data = await response.json();
      choice = data.choices[0];
    }

    const assistantText = choice.message.content;
    await msgsRef.add({ role: 'assistant', content: assistantText, createdAt: new Date() });
    await db.collection('conversations').doc(conversationId)
      .update({ lastMessageAt: new Date() });

    return { response: assistantText };
  }
);
```

Common mistakes when implementing a Virtual Assistant with AI in FlutterFlow

Mistake: Sending the entire conversation history (100+ messages) to the AI API on every request

Why it's a problem: A long history can blow past model context limits, and every extra message makes each request slower and more expensive.

How to avoid: Fetch only the last 10 messages from Firestore to use as the context window. Store the full history in Firestore for display, but send only the recent slice to the AI. Increase the window to 20 only if you need longer conversational memory.

Mistake: Putting the AI API key directly in FlutterFlow as an API call with the key in headers

Why it's a problem: Any key shipped to the client can be extracted from the app's network traffic and abused at your expense.

How to avoid: Always call AI APIs from a Cloud Function. Store the key as a Firebase Function Secret. The Cloud Function is the only place the key ever appears.

Mistake: Not saving the assistant's response to Firestore before returning it to the client

Why it's a problem: The reply never appears in the live message list, and the stored history is incomplete on the next request.

How to avoid: Always write the assistant response to the `messages` subcollection in the Cloud Function before returning the text to FlutterFlow.

Best practices

  • Limit the context window to 10-20 messages per API call — store full history in Firestore but only send the recent slice to the AI.
  • Store AI API keys exclusively in Firebase Function Secrets, never in FlutterFlow API Manager headers or Firestore documents.
  • Write a specific, detailed system prompt that defines the assistant's persona, capabilities, and limitations — vague system prompts produce inconsistent behavior.
  • Implement tool use server-side in the Cloud Function with ownership verification — never trust the AI to enforce data access rules.
  • Save both user messages and assistant responses to Firestore before returning to the client so full history persists across sessions.
  • Add a `conversationId` to each new session and let users start fresh conversations — this prevents very old context from confusing the assistant.
  • Track token usage from the API response and store it in Firestore to monitor per-user costs and enforce usage limits.
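The last point can be as simple as reading the `usage` object the Chat Completions API returns with every response. A sketch (the `totals` object stands in for a per-user Firestore document):

```javascript
// Fold the usage block from an OpenAI chat completion response into a
// running per-user total. The usage field names match the API response.
function recordUsage(totals, apiResponse) {
  const { prompt_tokens = 0, completion_tokens = 0 } = apiResponse.usage ?? {};
  return {
    promptTokens: totals.promptTokens + prompt_tokens,
    completionTokens: totals.completionTokens + completion_tokens,
    replies: totals.replies + 1,
  };
}

let totals = { promptTokens: 0, completionTokens: 0, replies: 0 };
totals = recordUsage(totals, { usage: { prompt_tokens: 2100, completion_tokens: 140 } });
```

Write the updated totals back to Firestore inside the Cloud Function, then enforce a cap before making the API call.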

Still stuck?

Copy one of these prompts to get a personalized, step-by-step explanation.

ChatGPT Prompt

I am building an AI assistant Cloud Function in Firebase (Node.js v2). It calls the OpenAI Chat API with a sliding window of the last 10 messages from a Firestore subcollection. I need to implement tool use where the assistant can call a `getOrderStatus` function that queries Firestore. The tool should only return order data if the requesting user is the order owner. Write the complete Cloud Function including the tool definition, tool execution, and the second API call with the tool result.

FlutterFlow Prompt

In FlutterFlow, I have a chat page with a TextField and Send button. When Send is tapped, I need to: set a page state `isTyping` to true, call a Firebase Cloud Function passing the TextField value and a conversationId from page state, then set `isTyping` back to false when the function returns. The message list refreshes automatically via a live Firestore query. How do I build this action chain so the typing indicator shows during the Cloud Function call?

Frequently asked questions

Which AI model should I use — OpenAI GPT-4o or Anthropic Claude?

For a customer support assistant, GPT-4o-mini is a good default — it is fast, cheap, and handles tool use well. Anthropic Claude Haiku is a similar cost tier. Use GPT-4o or Claude Sonnet if you need better reasoning for complex queries. Start with the mini/haiku tier and upgrade only if response quality is insufficient.

How do I handle the case where the AI response takes more than 30 seconds?

Firebase Cloud Functions v2 have a default timeout of 60 seconds (configurable up to 3600). Most AI API calls complete in 2-10 seconds. If you hit timeouts, use streaming responses — the Cloud Function returns a stream and FlutterFlow reads it via an HTTP streaming API call. This is more complex to implement but gives users a word-by-word response instead of waiting for the full text.

Can the assistant remember context across multiple conversations?

Within a single conversation, yes — the sliding window provides recent context. Across conversations, you need to implement a summary mechanism: after each conversation ends, use the AI to generate a short summary and store it in a `userMemory` Firestore document. Include the user's memory summary in the system prompt of new conversations.
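A sketch of the prompt assembly, assuming the summary text has already been loaded from the `userMemory` document described above:

```javascript
// Prepend a stored per-user memory summary to the system prompt of a new
// conversation. A missing summary leaves the base prompt untouched.
function buildSystemPrompt(basePrompt, memorySummary) {
  if (!memorySummary) return basePrompt;
  return `${basePrompt}\n\nKnown context about this user:\n${memorySummary}`;
}
```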

How much does this cost to run per month?

With GPT-4o-mini at roughly $0.15 per million input tokens and $0.60 per million output tokens, a 10-message context window of average length (200 tokens/message) costs about $0.0003 in input tokens per assistant reply, plus a smaller output-token cost. At 1,000 replies per day that works out to roughly $9-13/month in AI costs plus Firebase Cloud Function costs, which are typically under $5/month at that scale.
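A quick sanity check of that arithmetic (the rates are the assumed prices quoted in the text; verify current pricing before budgeting):

```javascript
// Assumed per-token rates; real pricing changes over time.
const INPUT_PRICE = 0.15 / 1e6;   // dollars per input token
const OUTPUT_PRICE = 0.60 / 1e6;  // dollars per output token

const inputTokens = 10 * 200;  // 10-message window at ~200 tokens each
const outputTokens = 200;      // assumed average reply length

const inputCostPerReply = inputTokens * INPUT_PRICE; // the ~$0.0003 figure
const perReply = inputCostPerReply + outputTokens * OUTPUT_PRICE;
const monthly = perReply * 1000 * 30; // 1,000 replies/day for 30 days
```

Input alone reproduces the $0.0003 per reply; counting output tokens as well lands the monthly AI bill in the low teens of dollars under these assumptions.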

What if I want the assistant to not answer questions outside its scope?

Include explicit refusal instructions in the system prompt: 'If asked about topics unrelated to our app (e.g., politics, personal advice, code generation), politely decline and redirect the user to relevant app features.' The model will follow such instructions reliably in most cases, but a system prompt is not a hard guarantee. You can also add a content moderation API call before sending to the AI to filter clearly inappropriate inputs.

How do I test the assistant without spending real API credits?

Mock the API call during development: have the Cloud Function return a canned response when a development flag is set, so no real credits are spent. Alternatively, route to a local Ollama instance (llama3.2 model), which exposes an OpenAI-compatible endpoint, by making the API base URL configurable via environment variable. Switch to the real OpenAI endpoint only in the production Cloud Function environment.
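One way to wire that up, assuming `AI_BASE_URL` and `AI_MODEL` environment variables (names are illustrative) and Ollama's OpenAI-compatible endpoint:

```javascript
// Resolve the chat endpoint and model from the environment so development
// can target a local Ollama server and production the real OpenAI API.
function resolveAiConfig(env) {
  const baseUrl = env.AI_BASE_URL || 'https://api.openai.com/v1';
  const model = env.AI_MODEL || 'gpt-4o-mini';
  return { url: `${baseUrl}/chat/completions`, model };
}

// Development: point at Ollama. Production: fall back to OpenAI defaults.
const dev = resolveAiConfig({ AI_BASE_URL: 'http://localhost:11434/v1', AI_MODEL: 'llama3.2' });
const prod = resolveAiConfig({});
```

In the Cloud Function, pass `process.env` in and use the resolved `url` and `model` in the `fetch` call instead of the hard-coded values.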
