RapidDev - Software Development Agency

How to use Bubble.io to run backend data processing jobs: Step-by-Step Guide

Bubble's backend workflows let you process large datasets in the background without blocking the user interface. Using Schedule API Workflow on a List, you can iterate through thousands of records, perform calculations or updates, and avoid timeout errors by chunking work into manageable batches. This tutorial covers setting up batch processing, recursive workflows, and monitoring job progress.

What you'll learn

  • How to create backend API workflows for data processing
  • How to use Schedule API Workflow on a List for batch operations
  • How to implement recursive workflows with termination conditions
  • How to monitor job progress and handle errors
Difficulty: Beginner · Reading time: 6 min · Time to complete: 20-25 min · Plan: All Bubble plans (Growth plan+ for backend workflows) · Updated: March 2026 · Author: RapidDev Engineering Team

Overview: Running Backend Data Processing Jobs in Bubble

When you need to process hundreds or thousands of database records — recalculating scores, sending bulk emails, cleaning data, or generating reports — you cannot do it in a frontend workflow without timing out. This tutorial shows how to use Bubble's backend workflows to process data in the background, chunk large operations into batches, and track progress.

Prerequisites

  • A Bubble app with data that needs batch processing
  • Understanding of backend workflows (Settings → API tab)
  • Familiarity with Data Types and Workflows
  • Growth plan or higher for scheduled backend workflows

Step-by-step guide

Step 1: Enable and create a backend workflow

Go to Settings → API tab and check Enable Workflow API. Then go to the Workflow tab, click the pages dropdown, and select Backend workflows. Click New API Workflow. Name it process-single-record. Add a parameter called record_id of type text (or the specific data type you are processing). Inside the workflow, add actions to process a single record: for example, Make changes to a thing with calculated field updates, or call an external API for each record.

Pro tip: Design backend workflows to process ONE record at a time. Use Schedule API Workflow on a List to call this workflow for each record in the batch.

Expected result: A backend workflow exists that can process a single record when called with its ID.
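Once Enable Workflow API is checked, Bubble also exposes each backend workflow as an HTTP endpoint at /api/1.1/wf/&lt;workflow-name&gt;, which is useful for triggering or testing process-single-record from outside the app. A minimal sketch of building such a request (the app domain and record ID below are placeholders, not from this tutorial):

```python
import json

def build_workflow_request(app_domain, workflow_name, params):
    """Build the URL and JSON body for a POST to Bubble's Workflow API.

    Bubble exposes each backend workflow at /api/1.1/wf/<workflow-name>
    once "Enable Workflow API" is checked in Settings -> API.
    """
    url = f"https://{app_domain}/api/1.1/wf/{workflow_name}"
    body = json.dumps(params)  # workflow parameters travel as a JSON body
    headers = {"Content-Type": "application/json"}
    return url, headers, body

# Hypothetical app domain and record ID, for illustration only.
url, headers, body = build_workflow_request(
    "myapp.bubbleapps.io",
    "process-single-record",
    {"record_id": "1689541234567x123456789012345678"},
)
print(url)  # https://myapp.bubbleapps.io/api/1.1/wf/process-single-record
```

Sending the request (with requests, curl, or another app's API Connector) then runs the workflow once per call, exactly as Schedule API Workflow on a List does internally.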

Step 2: Set up batch processing with Schedule API Workflow on a List

Create a frontend workflow trigger (a button or scheduled event). In this workflow, search for the records you want to process (e.g., Do a search for Users where needs_update = yes). Add the action Schedule API Workflow on a List. Select your process-single-record workflow, set the list to your search results, and map the record_id parameter to the current item's unique id. Bubble queues one workflow execution per item in the list and processes them sequentially in the background.

Expected result: A batch job is triggered that schedules individual processing for each record in the list.

Step 3: Implement chunking for very large datasets

For datasets larger than 10,000 records, process in chunks to avoid memory issues. Create a backend workflow called process-chunk with a number parameter offset. Constrain the search with :items from offset and :items until offset + 100, then process that chunk using Schedule API Workflow on a List. At the end, check whether more records remain: if the search's count is greater than offset + 100, schedule process-chunk again with offset = offset + 100. Always include the termination condition: if the search returns empty results, do not reschedule.

Expected result: Large datasets are processed in manageable chunks of 100 records each.
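The recursive pattern is easier to reason about in plain code. This is only a Python simulation of the Bubble logic, not anything you paste into the editor; the empty-chunk check plays the role of the termination condition:

```python
def process_chunk(records, offset, chunk_size=100, processed=None):
    """Simulate Bubble's recursive process-chunk pattern.

    Each call handles records[offset : offset + chunk_size], then
    "reschedules" itself with offset + chunk_size. The empty-chunk
    check is the termination condition that stops the recursion.
    """
    if processed is None:
        processed = []
    chunk = records[offset:offset + chunk_size]
    if not chunk:                  # no records left -> do not reschedule
        return processed
    for record in chunk:
        processed.append(record)   # stand-in for the per-record workflow
    return process_chunk(records, offset + chunk_size, chunk_size, processed)

done = process_chunk(list(range(250)), offset=0, chunk_size=100)
print(len(done))  # 250 — three chunks of 100, 100, and 50
```

Dropping the empty-chunk check here loops forever, which is exactly what happens in Bubble when a recursive workflow reschedules itself unconditionally.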

Step 4: Add progress tracking

Create a ProcessingJob data type with fields: name (text), total_records (number), processed_count (number), status (text: Queued, Running, Completed, Failed), started_at (date), completed_at (date), error_message (text). Before starting the batch, create a ProcessingJob record with the total count. In your process-single-record workflow, increment the processed_count by 1 (Make changes to ProcessingJob). Display progress on an admin page: processed_count / total_records * 100 as a percentage bar.

Expected result: Job progress is tracked in real time and visible on an admin dashboard.
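The percentage shown on the admin bar is just the ratio from the step above; one detail worth handling is a job created with zero records, which would otherwise divide by zero. A small sketch:

```python
def progress_percent(processed_count, total_records):
    """Progress value for the admin bar: processed / total * 100.

    Guards against division by zero for a job created with an
    empty record list.
    """
    if total_records == 0:
        return 0.0
    return round(processed_count / total_records * 100, 1)

print(progress_percent(250, 1000))  # 25.0
```

In Bubble the same guard is a conditional on the progress bar element: only compute the expression when the ProcessingJob's total_records is greater than 0.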

Step 5: Handle errors and implement retry logic

In your process-single-record workflow, use the Include errors in response option in API Connector calls. If an action fails, log the error by creating an ErrorLog record with the record ID, error message, and timestamp. For retries, create a separate backend workflow called retry-failed that searches for ErrorLog records and re-processes them. Schedule this to run after the main batch completes. For complex data processing pipelines with error recovery and monitoring, RapidDev can help design robust backend architectures.

Expected result: Errors are logged rather than stopping the entire batch, and failed records can be retried.
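To make the retry pass concrete, here is the retry-failed logic as a Python sketch. The dicts mirror the ErrorLog data type and reprocess stands in for re-running process-single-record; none of this is literal Bubble configuration:

```python
def retry_failed(error_logs, reprocess):
    """Sketch of the retry-failed pass.

    Walks ErrorLog entries that have not been retried, re-runs the
    per-record workflow for each, and marks the entry retried on
    success. Returns how many records were successfully retried.
    """
    retried = 0
    for entry in error_logs:
        if entry["is_retried"]:
            continue                      # already handled in a prior pass
        if reprocess(entry["record_id"]):  # re-run process-single-record
            entry["is_retried"] = True
            retried += 1
    return retried

logs = [
    {"record_id": "a1", "is_retried": False},
    {"record_id": "b2", "is_retried": True},
]
print(retry_failed(logs, lambda rid: True))  # 1
```

Marking entries with is_retried (rather than deleting them) keeps an audit trail and makes the pass safe to run more than once.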

Complete working example

Workflow summary
BACKEND DATA PROCESSING WORKFLOW SUMMARY
========================================

DATA TYPES:
  ProcessingJob: name, total_records, processed_count,
    status, started_at, completed_at, error_message
  ErrorLog: job (ProcessingJob), record_id, error_message,
    timestamp, is_retried

BACKEND WORKFLOW: process-single-record
  Parameters: record_id (text), job_id (text)
  1. Find the record by ID
  2. Perform processing (calculations, API calls, updates)
  3. Increment ProcessingJob's processed_count +1
  4. On error: Create ErrorLog, continue

FRONTEND TRIGGER:
  1. Create ProcessingJob (status=Running, total=count)
  2. Search for records to process
  3. Schedule API Workflow on a List
       Workflow: process-single-record
       List: search results
       Map: record_id = Current item's unique id
  4. Display progress bar on admin page

CHUNKED PROCESSING:
  Backend: process-chunk (offset parameter)
  1. Search records :items from offset :items until offset+100
  2. Schedule process-single-record for each
  3. If more records exist: Schedule process-chunk (offset+100)
  4. If no more records: Update job status=Completed

RETRY LOGIC:
  Backend: retry-failed
  1. Search ErrorLog where is_retried = no
  2. For each: re-run process-single-record
  3. Update ErrorLog is_retried = yes

Common mistakes when running backend data processing jobs in Bubble.io

Mistake: Processing all records in a single frontend workflow

How to avoid: Always use backend workflows for batch processing. They run server-side and are not affected by browser timeouts.

Mistake: Not adding a termination condition to recursive workflows

How to avoid: Always check if there are remaining records before scheduling the next iteration. Stop when the search returns empty results.

Mistake: Searching the full dataset in every chunk iteration

How to avoid: Use offset-based pagination or mark records as processed (add an is_processed flag) so each chunk only handles new records.

Best practices

  • Design backend workflows to process one record at a time for reliability
  • Use Schedule API Workflow on a List for batch processing
  • Chunk large datasets (10,000+) into batches of 100-500 records
  • Always include a termination condition in recursive workflows
  • Track job progress with a ProcessingJob data type
  • Log errors instead of stopping the entire batch
  • Implement retry logic for failed records
  • Monitor workload unit consumption for batch jobs in Settings → Metrics

Still stuck?

Copy one of these prompts to get a personalized, step-by-step explanation.

ChatGPT Prompt

I need to process 50,000 database records in my Bubble.io app — recalculating a field on each one. How do I set up backend batch processing with chunking, progress tracking, and error handling without timing out?

Bubble Prompt

Help me create a backend data processing system. I need to iterate through thousands of records, update a calculated field on each, track progress with a percentage bar, and handle errors with retry logic. Use Schedule API Workflow on a List.

Frequently asked questions

How many records can Schedule API Workflow on a List handle?

It works reliably with lists up to about 10,000 items. For larger lists, use chunking with offset-based pagination to process in batches.

How fast does Bubble process backend workflows?

Bubble processes approximately 100 database rows per second. A batch of 10,000 records takes roughly 100 seconds. Performance varies based on workflow complexity.
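As a back-of-envelope check of those figures (the 100 rows/second rate is this article's estimate, not a platform guarantee):

```python
def estimate_runtime_seconds(record_count, rows_per_second=100):
    """Rough batch runtime at an assumed processing rate."""
    return record_count / rows_per_second

print(estimate_runtime_seconds(10_000))  # 100.0 seconds at ~100 rows/sec
```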

Will batch processing consume workload units?

Yes. Each workflow execution and database operation costs workload units. A batch processing 10,000 records might use 5,000-20,000 WUs depending on complexity. Monitor usage in Settings → Metrics.

Can I cancel a running batch job?

There is no built-in cancel button. As a workaround, add a check at the start of each process-single-record workflow: if the ProcessingJob's status is Cancelled, skip processing. Then set the status to Cancelled from the admin page.

Can I run multiple batch jobs simultaneously?

Yes, but be cautious. Multiple simultaneous jobs compete for the same workload unit allocation and may slow each other down. Process one large job at a time for best performance.

Can RapidDev help with complex data processing?

Yes. RapidDev can help design efficient data processing pipelines, optimize batch operations for workload units, and implement advanced patterns like parallel processing and external compute offloading.
