RapidDev - Software Development Agency

How to deploy machine learning models within a Bubble.io application: Step-by-Step Guide

Deploy machine learning models in your Bubble app by calling hosted ML prediction APIs through the API Connector plugin. Connect to services like Hugging Face Inference API, Google Cloud AI, or AWS SageMaker endpoints — send user data to the model, receive predictions, and display results in your app. Bubble handles the frontend while the ML model runs on a specialized cloud service.

What you'll learn

  • How to connect Bubble to a hosted ML model API via the API Connector
  • How to send user input to prediction endpoints and parse results
  • How to display ML predictions in Bubble UI elements
  • How to choose between Hugging Face, Google Cloud AI, and custom endpoints
Beginner · 6 min read · 20-25 min to complete · All Bubble plans · March 2026 · RapidDev Engineering Team
Deploying ML Models in a Bubble Application

Machine learning can power smart features like sentiment analysis, image classification, product recommendations, and text generation in your app. Since Bubble cannot run ML models directly, you connect to hosted prediction APIs that do the heavy lifting. This tutorial shows you how to use the API Connector to call ML endpoints, send data, and display predictions — no data science background needed.

Prerequisites

  • A Bubble account with an app open in the editor
  • An account with an ML hosting service (Hugging Face free tier works)
  • The API Connector plugin installed
  • Basic understanding of API calls in Bubble

Step-by-step guide

1

Choose and set up an ML hosting service

For the simplest setup, use Hugging Face Inference API (free tier available). Sign up at huggingface.co and generate an API token from Settings → Access Tokens. Hugging Face hosts thousands of pre-trained models for text classification, sentiment analysis, summarization, image recognition, and more. For custom models, you would use Google Cloud AI Platform, AWS SageMaker, or Replicate — each provides API endpoints for hosted models.

Pro tip: Start with Hugging Face's free Inference API for prototyping. When you need production reliability and speed, upgrade to their paid Inference Endpoints (starting around $0.06/hr) or move to AWS SageMaker.

Expected result: You have an API token and know which model endpoint you want to call.

2

Configure the ML API in Bubble's API Connector

Go to Plugins → API Connector → Add another API. Name it 'MLService.' Set Authentication to 'Private key in header' with Key name 'Authorization' and Key value 'Bearer YOUR_HUGGING_FACE_TOKEN' (check Private). Add a new call named 'predict_sentiment' with Use as: Action, Method: POST, URL: https://api-inference.huggingface.co/models/distilbert-base-uncased-finetuned-sst-2-english. Add header Content-Type: application/json. In the Body, add a parameter 'inputs' (the text to analyze).

API Connector body

{
  "inputs": "<user_text>"
}

Expected result: The ML API call is configured with authentication and ready to accept dynamic input text.
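To make the configuration concrete, here is a small Python sketch of the HTTP request the API Connector issues for this call. The model URL and the 'inputs' body parameter come from the step above; `build_sentiment_request` is an illustrative helper, not part of Bubble or Hugging Face.

```python
# Sketch of the request the 'predict_sentiment' call sends.
# build_sentiment_request is a hypothetical helper for illustration.

SENTIMENT_URL = (
    "https://api-inference.huggingface.co/models/"
    "distilbert-base-uncased-finetuned-sst-2-english"
)

def build_sentiment_request(token: str, user_text: str) -> dict:
    """Return the method, URL, headers, and JSON body Bubble sends."""
    return {
        "method": "POST",
        "url": SENTIMENT_URL,
        "headers": {
            # The 'Private key in header' authentication from the step above
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        # The dynamic 'inputs' body parameter
        "json": {"inputs": user_text},
    }
```

Seeing the raw request makes it easier to debug with a tool like curl or Postman if the call fails inside Bubble.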

3

Initialize the call and map the response

Enter a test value for the inputs parameter, such as 'I love this product, it works perfectly!' Click 'Initialize call.' The Hugging Face API returns a JSON response with labels (POSITIVE/NEGATIVE) and confidence scores. Bubble maps these fields automatically. You will see the response structure showing an array of results, each with a 'label' and 'score' field. Save the call after reviewing the mapped response.

Expected result: The API response is mapped, showing sentiment labels and confidence scores that you can reference in Bubble.
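The response shape matters when you reference it in workflows. A sketch of picking the top result, assuming the nested-list shape described above (`[[{"label": ..., "score": ...}, ...]]`); `top_prediction` is an illustrative helper that mirrors Bubble's "Result of step 1's first item" expression:

```python
# The sentiment endpoint typically returns a nested list, e.g.
# [[{"label": "POSITIVE", "score": 0.9998},
#   {"label": "NEGATIVE", "score": 0.0002}]]

def top_prediction(response: list) -> dict:
    """Return the highest-confidence label/score pair from the response."""
    # Unwrap the outer list if the API nested the results
    candidates = response[0] if response and isinstance(response[0], list) else response
    return max(candidates, key=lambda item: item["score"])
```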

4

Build the prediction interface in your app

On your page, add a Multiline Input for users to enter text (e.g., a product review). Add an 'Analyze Sentiment' button. Create a workflow: When button is clicked → Plugins → MLService - predict_sentiment with inputs = Multiline Input's value. Below the input, add a Group to display results. After the API call, display: Result of step 1's first item's label (shows POSITIVE or NEGATIVE) and Result of step 1's first item's score formatted as a percentage (shows confidence). Add conditionals to color-code the group: green background for POSITIVE, red for NEGATIVE.

Expected result: Users enter text, click Analyze, and see the sentiment prediction with confidence score displayed in the app.
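The display logic in this step can be summarized in a few lines. This sketch is illustrative only (in Bubble it is a ":formatted as %" expression plus two conditionals); `format_prediction` is a hypothetical helper:

```python
# Mirror of the step-4 display logic: format the confidence as a
# percentage and pick a background colour per label.

def format_prediction(label: str, score: float) -> tuple:
    """Return (display text, background colour) for a sentiment result."""
    colour = "green" if label == "POSITIVE" else "red"
    return f"{label} ({score:.1%})", colour
```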

5

Add image classification using a different model

Add a second API call named 'classify_image' in the same MLService API. Set Method: POST, URL: https://api-inference.huggingface.co/models/google/vit-base-patch16-224. For image classification, the body is the raw image file (not JSON). Set the body type to 'File' and the parameter to accept an image file. In your app, add a File Uploader element, and when the user uploads an image, call this endpoint with the uploaded file. Display the top classification result and score.

Expected result: Users can upload an image and receive an AI classification with the predicted category and confidence.
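The key difference from the sentiment call is the body: raw image bytes instead of JSON. A sketch under that assumption, with `build_image_request` as an illustrative helper:

```python
# Sketch of the 'classify_image' request. The body is the raw file,
# matching the 'File' body type in the API Connector, so no
# Content-Type: application/json header is set.

IMAGE_MODEL_URL = (
    "https://api-inference.huggingface.co/models/google/vit-base-patch16-224"
)

def build_image_request(token: str, image_bytes: bytes) -> dict:
    """Return the request Bubble sends for image classification."""
    return {
        "method": "POST",
        "url": IMAGE_MODEL_URL,
        "headers": {"Authorization": f"Bearer {token}"},
        "data": image_bytes,  # raw file body, not JSON
    }
```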

6

Cache predictions to reduce API costs and improve speed

Create a 'Prediction' Data Type with fields: input_hash (text — a hash or truncated version of the input), model_name (text), result_label (text), result_score (number), timestamp (date). Before calling the ML API, search for an existing Prediction with matching input_hash. If found and less than 24 hours old, display the cached result. Otherwise, call the API and save a new Prediction record. This prevents redundant API calls for identical inputs.

Expected result: Repeated predictions for the same input use cached results, reducing API costs and response time.
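The caching pattern above can be sketched in Python. Here an in-memory dict stands in for the Prediction data type, and a SHA-256 digest plays the role of the input_hash field; `predict_with_cache` and `call_api` are illustrative names, not Bubble APIs:

```python
import hashlib
import time

CACHE_TTL_SECONDS = 24 * 60 * 60  # 24 hours, as in the tutorial

# Stand-in for the 'Prediction' data type: input_hash -> {result, timestamp}
_cache = {}

def predict_with_cache(text: str, call_api) -> dict:
    """Return a cached prediction if fresh, otherwise call the API and store it."""
    key = hashlib.sha256(text.encode()).hexdigest()
    entry = _cache.get(key)
    if entry and time.time() - entry["timestamp"] < CACHE_TTL_SECONDS:
        return entry["result"]  # cache hit: no API call
    result = call_api(text)     # cache miss: call the model
    _cache[key] = {"result": result, "timestamp": time.time()}
    return result
```

In Bubble, the same flow is a "Do a search for Predictions" constrained on input_hash and timestamp, branching with "Only when" conditions.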

Complete working example

API Connector payload

{
  "api_name": "MLService",
  "authentication": {
    "type": "Private key in header",
    "key_name": "Authorization",
    "key_value": "Bearer [YOUR_HUGGING_FACE_TOKEN]"
  },
  "calls": [
    {
      "name": "predict_sentiment",
      "use_as": "Action",
      "method": "POST",
      "url": "https://api-inference.huggingface.co/models/distilbert-base-uncased-finetuned-sst-2-english",
      "headers": {"Content-Type": "application/json"},
      "body": {"inputs": "[user_text]"}
    },
    {
      "name": "summarize_text",
      "use_as": "Action",
      "method": "POST",
      "url": "https://api-inference.huggingface.co/models/facebook/bart-large-cnn",
      "headers": {"Content-Type": "application/json"},
      "body": {"inputs": "[long_text]", "parameters": {"max_length": 130}}
    },
    {
      "name": "classify_image",
      "use_as": "Action",
      "method": "POST",
      "url": "https://api-inference.huggingface.co/models/google/vit-base-patch16-224",
      "body_type": "File"
    }
  ],
  "cache_data_type": {
    "name": "Prediction",
    "fields": {
      "input_hash": "text",
      "model_name": "text",
      "result_label": "text",
      "result_score": "number",
      "timestamp": "date"
    }
  }
}

Common mistakes when deploying machine learning models within a Bubble.io application

Mistake: Calling ML APIs on every page load instead of on user action

How to avoid: Only call ML APIs when the user explicitly clicks a button. Never put ML calls in page load events.

Mistake: Not handling model loading delays on the Hugging Face free tier

How to avoid: Show a loading indicator while waiting for the response. Add error handling for timeouts. Consider using Hugging Face's dedicated Inference Endpoints for consistent response times.

Mistake: Exposing ML API tokens on the client side

How to avoid: Always check the Private box for API keys in the API Connector. ML APIs often charge per call, so leaked keys can result in unexpected bills.
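On the model-loading mistake: the free-tier Inference API can answer with an error status and a JSON body indicating the model is still warming up, in which case a short wait-and-retry usually succeeds. The sketch below assumes a 503 status and an `estimated_time` field in the body, which matches Hugging Face's documented warm-up behavior but should be verified for your model; `loading_retry_delay` is an illustrative helper:

```python
# Decide whether (and how long) to wait before retrying a failed call.

def loading_retry_delay(status_code: int, body: dict):
    """Return seconds to wait before retrying, or None if a retry won't help."""
    if status_code == 503 and "estimated_time" in body:
        # Cap the wait so the user is never stuck behind a long warm-up
        return min(float(body["estimated_time"]), 30.0)
    return None  # other errors: surface a friendly message instead
```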

Best practices

  • Always mark ML API tokens as Private in the API Connector to prevent exposure
  • Cache prediction results in the database to avoid redundant API calls
  • Show loading indicators during ML inference — models can take 1-5 seconds to respond
  • Start with pre-trained models on Hugging Face before investing in custom model training
  • Handle API errors gracefully — show user-friendly messages when the model is unavailable
  • Use appropriate models for each task: sentiment analysis, classification, summarization, etc.

Still stuck?

Copy one of these prompts to get a personalized, step-by-step explanation.

ChatGPT Prompt

I want to add AI/ML features to my Bubble.io app. How do I connect to Hugging Face's Inference API via the API Connector to do sentiment analysis and image classification? Show me the API setup, how to send data, and how to display predictions.

Bubble Prompt

Add AI-powered sentiment analysis to my app. Set up the API Connector to call Hugging Face's sentiment model. Let users enter text, analyze it, and display the result (POSITIVE/NEGATIVE with confidence). Cache results to reduce API calls.

Frequently asked questions

Do I need to know machine learning to use ML APIs?

No. Hosted services like Hugging Face provide pre-trained models with simple API endpoints. You just send data and receive predictions — no model training or data science knowledge required.

How much do ML API calls cost?

Hugging Face free tier offers 30,000 characters/month for text models. Paid Inference Endpoints start at $0.06/hour. Google Cloud AI and AWS SageMaker have pay-per-prediction pricing that varies by model complexity.

Can I train my own custom model and connect it to Bubble?

Yes. Train your model using any ML framework, deploy it to Hugging Face, AWS SageMaker, or Google Cloud AI, and connect the endpoint via Bubble's API Connector. The Bubble integration is the same regardless of the model.

What types of ML features can I add to my Bubble app?

Common features include: sentiment analysis (positive/negative text), text summarization, image classification, language translation, chatbots, product recommendations, and fraud detection — all via API calls to hosted models.

Can RapidDev help integrate ML features into my Bubble app?

Yes. RapidDev helps founders integrate AI and ML capabilities into Bubble apps, including model selection, API configuration, caching strategies, and production optimization.

RapidDev

Talk to an Expert

Our team has built 600+ apps. Get personalized help with your project.

Book a free consultation
