RapidDev - Software Development Agency

How to build AI-powered suggestions in Bubble

Building an AI-powered suggestion system in Bubble involves connecting to the OpenAI or Claude API via the API Connector, sending user context and preferences as prompts, displaying personalized AI recommendations in your app interface, and caching results in your database to reduce API costs and improve response times for repeat queries.

What you'll learn

  • How to connect to OpenAI or Claude API via the API Connector
  • How to send user context for personalized suggestions
  • How to display AI recommendations in your app
  • How to cache AI responses to reduce costs
Beginner · 5 min read · 20-25 min build time · All Bubble plans · March 2026 · RapidDev Engineering Team
Overview: Building an AI-Powered Suggestion System in Bubble

This tutorial shows you how to add AI-powered personalized suggestions to your Bubble app. You will connect to an LLM API, build prompts with user context, display recommendations, and implement caching for efficiency.

Prerequisites

  • A Bubble app with the API Connector plugin installed
  • An OpenAI API key or Anthropic (Claude) API key
  • User data in your database that provides context for suggestions
  • Basic understanding of the API Connector in Bubble

Step-by-step guide

1

Set up the OpenAI API in the API Connector

Go to Plugins → API Connector → Add another API and name it 'OpenAI'. Set Authentication to Private key in header, then add two shared headers:

  • Authorization = Bearer [your_api_key] (check Private)
  • Content-Type = application/json

Create a call named 'Get Suggestions' with method POST and URL https://api.openai.com/v1/chat/completions, and set Use as to Action. Add the request body with model, messages array, and temperature parameters. The messages array should include a system message defining the AI's role and a user message containing the context and request.

OpenAI API request body (JSON)

{
  "model": "gpt-4o",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful product recommendation assistant. Suggest 3 items based on the user's preferences. Return JSON array with name, reason, and score fields."
    },
    {
      "role": "user",
      "content": "<user_context_parameter>"
    }
  ],
  "temperature": 0.7,
  "max_tokens": 500
}

Expected result: The OpenAI API is configured in the API Connector and ready to receive prompts.

2

Build the user context from your database

Create a workflow that assembles user context before calling the API. Gather relevant data: the user's past purchases, viewed items, saved preferences, or browsing history. Construct a context string like: 'User preferences: [category interests]. Recent purchases: [last 5 items]. Looking for: [current search or page context].' Pass this as the user_context_parameter to the API call. The richer the context, the more relevant the suggestions.

Expected result: A contextual prompt is built from user data and ready to send to the AI API.
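The context-assembly step can be sketched as plain logic. This is a minimal illustration, not Bubble workflow syntax; the field names (`category_interests`, `recent_purchases`, `current_page`) are hypothetical stand-ins for whatever your own User data type stores:

```python
def build_user_context(category_interests, recent_purchases, current_page):
    """Assemble the context string described in Step 2 from user data."""
    return (
        f"User preferences: {', '.join(category_interests)}. "
        f"Recent purchases: {', '.join(recent_purchases[-5:])}. "
        f"Looking for: {current_page}."
    )

# Example: a hiking enthusiast browsing sleeping bags
context = build_user_context(
    ["hiking", "camping"],
    ["trail shoes", "water filter", "headlamp"],
    "sleeping bags under $150",
)
```

In Bubble itself this would be a series of "Search for" expressions joined with text operators, but the shape of the final string is the same.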

3

Display AI suggestions in your app

When the API responds, parse the AI's response text. If you instructed the AI to return JSON, extract the structured data. Display suggestions in a visually appealing card layout — each card showing the suggestion name, the AI's reasoning, and a relevance score. Add a Refresh Suggestions button that calls the API again with updated context. Show a loading indicator while waiting for the API response using a custom state 'is_loading' on the suggestions group.

Pro tip: For complex AI features like this, RapidDev can help architect the full system including prompt engineering, response parsing, caching, and fallback handling.

Expected result: AI-generated suggestions display in attractive cards with reasoning and refresh capability.
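The parse-with-fallback logic from this step can be sketched as follows. It assumes the AI was instructed to return a JSON array (or, in JSON mode, an object wrapping a `suggestions` array); the fallback list stands in for the "popular items" safety net mentioned under common mistakes:

```python
import json

def parse_suggestions(api_response_text, fallback=None):
    """Parse the model's reply into suggestion dicts; fall back to
    popular/recent items if the reply is not valid JSON."""
    try:
        data = json.loads(api_response_text)
        # JSON mode may wrap the array in an object, e.g. {"suggestions": [...]}
        if isinstance(data, dict):
            data = data.get("suggestions", [])
        # Keep only entries with the fields the cards will display
        return [s for s in data if "name" in s and "reason" in s]
    except (json.JSONDecodeError, TypeError):
        return fallback or []

reply = '[{"name": "Trail shoes", "reason": "Matches hiking interest", "score": 0.9}]'
suggestions = parse_suggestions(reply)
```

In Bubble you would express the same branching with a conditional workflow: if the API call's raw text parses, display it; otherwise show the fallback repeating group.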

4

Cache AI responses to reduce costs

Create a Suggestion Data Type with fields: user (User), context_hash (text — a hash of the context string), suggestions_json (text), created_date (date). Before calling the API, search for an existing Suggestion where user = Current User and context_hash matches and created_date is within the last 24 hours. If found, display the cached result. If not, call the API and save the response as a new Suggestion record. This dramatically reduces API costs for users who revisit the same context.

Expected result: Repeat requests serve cached suggestions without making additional API calls.
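The cache-before-call pattern from this step looks like this in sketch form. The in-memory dict stands in for the Suggestion data type, and `call_api` stands in for the Get Suggestions action; in Bubble the lookup is a "Do a search for Suggestions" with constraints on user, context_hash, and created_date:

```python
import hashlib
from datetime import datetime, timedelta

cache = {}  # stands in for the Suggestion data type

def context_hash(context):
    """Hash the context string so identical contexts produce identical keys."""
    return hashlib.sha256(context.encode()).hexdigest()

def get_suggestions(user_id, context, call_api):
    key = (user_id, context_hash(context))
    entry = cache.get(key)
    if entry and datetime.now() - entry["created"] < timedelta(hours=24):
        return entry["json"]              # cache hit: no API call
    result = call_api(context)            # cache miss: call the model
    cache[key] = {"json": result, "created": datetime.now()}
    return result

# Demo: the second identical request is served from cache
calls = {"n": 0}
def fake_api(context):
    calls["n"] += 1
    return '[{"name": "Trail shoes", "reason": "Matches hiking interest"}]'

first = get_suggestions("user_1", "hiking context", fake_api)
second = get_suggestions("user_1", "hiking context", fake_api)
```

Hashing the context (rather than storing the raw string) keeps the search constraint short and makes "identical context" an exact-match comparison.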

5

Add feedback to improve suggestion quality

Add thumbs up and thumbs down buttons on each suggestion card. Store feedback in a SuggestionFeedback Data Type with fields: suggestion (Suggestion), user (User), item_index (number), is_positive (yes/no). Include this feedback history in future prompts: 'User previously liked: [positive items]. User previously disliked: [negative items].' This creates a feedback loop that improves suggestion relevance over time.

Expected result: Users can provide feedback on suggestions, which improves future recommendation quality.
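Folding the stored feedback into the next prompt can be sketched like this; the `(item_name, is_positive)` pairs stand in for records pulled from the SuggestionFeedback data type:

```python
def context_with_feedback(base_context, feedback):
    """Append liked/disliked history to the context string.
    feedback: list of (item_name, is_positive) pairs."""
    liked = [name for name, positive in feedback if positive]
    disliked = [name for name, positive in feedback if not positive]
    parts = [base_context]
    if liked:
        parts.append(f"User previously liked: {', '.join(liked)}.")
    if disliked:
        parts.append(f"User previously disliked: {', '.join(disliked)}.")
    return " ".join(parts)

fb = [("Trail shoes", True), ("Tent", False), ("Headlamp", True)]
result = context_with_feedback("User preferences: hiking.", fb)
```

Note that adding feedback changes the context string, and therefore the context hash, so the cache from Step 4 will correctly treat it as a fresh request.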

Complete working example

API Connector payload

{
  "API Name": "OpenAI",
  "Authentication": "Private key in header",
  "Headers": {
    "Authorization": "Bearer [api_key] (Private)",
    "Content-Type": "application/json"
  },
  "Call: Get Suggestions": {
    "Method": "POST",
    "URL": "https://api.openai.com/v1/chat/completions",
    "Use as": "Action",
    "Body": {
      "model": "gpt-4o",
      "messages": [
        {"role": "system", "content": "Recommend 3 items as JSON array. Fields: name, reason, score."},
        {"role": "user", "content": "<user_context>"}
      ],
      "temperature": 0.7,
      "max_tokens": 500
    }
  },
  "Caching Data Type": {
    "Suggestion": {
      "user": "User",
      "context_hash": "text",
      "suggestions_json": "text",
      "created_date": "date"
    }
  }
}

Common mistakes when building AI-powered suggestions in Bubble

Mistake: Calling the AI API on every page load without caching

How to avoid: Cache results in your database and serve cached suggestions for identical contexts within 24 hours

Mistake: Sending minimal context in the AI prompt

How to avoid: Include user preferences, past behavior, and current context in the prompt for truly personalized recommendations

Mistake: Not handling API errors or timeouts

How to avoid: Show fallback content (popular or recent items) when the API fails, and log errors for monitoring

Best practices

  • Cache AI responses in your database to reduce API costs
  • Include rich user context in prompts for better personalization
  • Request structured JSON output from the AI for easier parsing
  • Show loading indicators during API calls (2-10 second wait)
  • Add user feedback mechanisms to improve suggestion quality over time
  • Set max_tokens to limit response length and cost per call

Still stuck?

Copy one of these prompts to get a personalized, step-by-step explanation.

ChatGPT Prompt

I want to add AI-powered product recommendations to my Bubble.io e-commerce app. I need to send user purchase history to OpenAI and display personalized suggestions. How do I set up the API Connector, build the prompt, and cache results?

Bubble Prompt

Set up an OpenAI API call in the API Connector that takes a user context string and returns 3 product suggestions as JSON. Display the suggestions in cards with the item name and reasoning. Cache the response in a Suggestion Data Type for 24 hours.

Frequently asked questions

How much does it cost to run AI suggestions?

Using GPT-4o, a typical suggestion request costs $0.01-0.05 per call depending on context length. With caching, you can reduce costs by 80-90% for returning users.
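The arithmetic behind that estimate can be sketched as below. The per-token prices are illustrative assumptions only; check OpenAI's current pricing page, since rates change:

```python
# Assumed prices for illustration (USD per 1K tokens) -- not authoritative.
INPUT_PRICE_PER_1K = 0.0025
OUTPUT_PRICE_PER_1K = 0.01

def call_cost(input_tokens, output_tokens):
    """Cost of one chat completion at the assumed rates."""
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K \
         + (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

# A typical suggestion request: ~800 tokens of context, a 500-token reply
per_call = call_cost(800, 500)

# 10,000 monthly requests with an 85% cache-hit rate: only misses pay
monthly = per_call * 10_000 * (1 - 0.85)
```

Longer context strings raise the input-token term, which is why richer prompts cost more per call and why caching pays off so quickly.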

Should I use OpenAI or Claude for suggestions?

Both work well. OpenAI GPT-4o is slightly cheaper for short responses. Claude excels at following structured output instructions. Choose based on your existing API access and cost preferences.

Can I make suggestions without an AI API?

Yes. For simpler recommendations, use Bubble database queries: find items in the same category as the user's recent purchases, or items popular among similar users. AI adds natural language reasoning on top.

How do I ensure the AI returns valid JSON?

Include explicit instructions in the system message: 'Return ONLY a JSON array, no additional text.' Use OpenAI's JSON mode by adding response_format: {type: 'json_object'} to the request body.
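Assuming OpenAI's Chat Completions JSON mode (which also requires the word "JSON" to appear in one of the messages), the request body from Step 1 would gain a response_format field, sketched below. Note that json_object mode returns an object rather than a bare array, so it helps to ask for an object wrapping a suggestions array:

```json
{
  "model": "gpt-4o",
  "response_format": { "type": "json_object" },
  "messages": [
    {"role": "system", "content": "Return ONLY a JSON object with a 'suggestions' array. No additional text."},
    {"role": "user", "content": "<user_context>"}
  ]
}
```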

Can RapidDev help build an AI suggestion system?

Yes. RapidDev can architect and build complete AI-powered recommendation systems including prompt engineering, response parsing, caching, feedback loops, and fallback handling in your Bubble app.

Need help with your project?

Our experts have built 600+ apps and can accelerate your development. Book a free consultation — no strings attached.

Book a free consultation
