
How to Integrate Lovable with Backblaze B2 Cloud Storage

Connect Backblaze B2 to Lovable using its S3-compatible API through a Supabase Edge Function. Store your B2 Application Key ID and Application Key in Lovable's Cloud Secrets, then use the standard AWS S3 SDK with B2's S3-compatible endpoint in the Edge Function. B2 costs $0.006/GB stored versus S3's $0.023/GB — about 4 times cheaper — with the same API surface for file upload, download, and management.

What you'll learn

  • How to create a Backblaze B2 bucket and obtain Application Key credentials
  • How B2's S3-compatible endpoint works and how it differs from native AWS S3
  • How to build a Supabase Edge Function that performs file operations against B2's S3 endpoint
  • How to generate pre-signed URLs from B2 for secure file downloads
  • How to compare B2 versus AWS S3 costs for your specific Lovable app use case
Intermediate · 15 min read · 40 minutes · Storage · March 2026 · RapidDev Engineering Team

Backblaze B2 Integration in Lovable: Budget Object Storage

Backblaze B2 is the cost-focused alternative to AWS S3 for applications where storage and bandwidth costs are a significant consideration. At $0.006/GB stored and $0.01/GB egress (with free egress to Cloudflare partner networks), B2 offers substantial savings for media-heavy apps, backup systems, and large file repositories compared to S3's $0.023/GB and $0.09/GB egress rates.

The key technical advantage of B2 for Lovable integrations is its S3-compatible API endpoint. B2 supports the same request format, authentication method (AWS Signature v4), and SDK libraries as AWS S3, so the same Edge Function code that works with S3 works with B2 after changing only the endpoint URL and credentials. If you already have an S3 integration in another app, migrating it to B2 means swapping those two values — not rewriting the code.

Backblaze B2 also offers a 10 GB free tier with no time limit — sufficient for development, testing, and small production apps. This makes it a practical choice for early-stage Lovable projects where minimizing costs during the initial build phase matters, with a clear path to scale at predictable pricing as the app grows.

Integration method

Edge Function Integration

Backblaze B2 integrates with Lovable via a Supabase Edge Function that uses B2's S3-compatible API endpoint. Store B2 Application Key ID and Application Key in Lovable's Cloud Secrets, then use AWS SDK v3 or standard fetch calls with S3-compatible authentication in the Edge Function. The frontend calls the Edge Function — never B2 directly — keeping credentials server-side and eliminating CORS issues.

Prerequisites

  • A Backblaze account at backblaze.com (free to create, 10 GB free storage with no time limit)
  • An active Lovable project with Lovable Cloud/Supabase enabled
  • Access to Lovable's Cloud tab to add secrets
  • Basic understanding of object storage concepts (buckets, objects, keys)

Step-by-step guide

1

Create a B2 bucket and generate Application Key credentials

Log in to your Backblaze account and navigate to the B2 Cloud Storage section from the left sidebar. Click 'Create a Bucket' to set up your storage container. Choose a globally unique bucket name (bucket names must be unique across all B2 users), set the privacy to 'Private' (public buckets allow unauthenticated access — always use private for application storage and control access through your Edge Function), and leave the default settings for file versioning and lifecycle rules unless you have specific requirements.

After creating the bucket, note your Bucket Name and the Endpoint URL shown in the bucket details. B2 endpoints follow the format s3.us-west-004.backblazeb2.com, where the region identifier varies based on your bucket's location. You will need this exact endpoint URL for the Edge Function configuration. B2 buckets are assigned to regions at creation and cannot be moved.

Next, create Application Key credentials for API access. In the B2 console sidebar, go to 'App Keys' and click 'Add a New Application Key'. Give it a descriptive name (e.g., lovable-app-key), select 'Allow access to Bucket' and choose your specific bucket (this is the least-privilege approach — the key can only access that one bucket). Grant the capabilities you need: readFiles (download), writeFiles (upload), deleteFiles (delete), listFiles (list objects), and listAllBucketNames (needed so the S3-compatible API can resolve bucket names). Set a file prefix if you want to further restrict the key to a specific folder path within the bucket.

After clicking 'Create New Key', Backblaze shows the Key ID and Application Key values. The Application Key is shown only once — copy both values immediately to a secure location before closing this page. The Key ID is the equivalent of an AWS Access Key ID, and the Application Key is the equivalent of an AWS Secret Access Key.

Pro tip: Create a separate Application Key for each environment (development, staging, production). This lets you revoke development credentials without affecting production, and scope each key to a different bucket prefix for isolation.

Expected result: You have a private B2 bucket with its S3-compatible endpoint URL noted, and Application Key credentials (Key ID and Application Key) copied and ready to store in Lovable's Cloud Secrets.

2

Store B2 credentials in Lovable Cloud Secrets

B2 credentials must never appear in source code — they belong exclusively in Lovable's Cloud Secrets panel. Open your Lovable project and click the '+' button next to the Preview area to open the Cloud tab, then navigate to Secrets and add the following entries:

  • B2_KEY_ID — your Application Key ID (the shorter identifier that looks like 001abc123def)
  • B2_APPLICATION_KEY — the Application Key value (the longer random string)
  • B2_BUCKET_NAME — your bucket name (e.g., my-app-uploads)
  • B2_ENDPOINT — the S3-compatible endpoint URL for your bucket region (e.g., https://s3.us-west-004.backblazeb2.com)
  • B2_REGION — just the region identifier extracted from the endpoint (e.g., us-west-004 — required for AWS signature computation)

To add each secret, click the '+' or 'Add secret' button, type the key name exactly as shown above (the Edge Function code references these exact names via Deno.env.get()), paste the value, and save. Key names are case-sensitive.

Storing the endpoint and region as separate secrets rather than hardcoding them in the Edge Function source is good practice: if you ever need to move the bucket to a different B2 region (which requires creating a new bucket), you only update the secrets instead of changing and redeploying the function code. Lovable's platform holds SOC 2 Type II certification and encrypts all secrets at rest.
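To keep B2_REGION and B2_ENDPOINT from drifting apart, the region can be derived from the endpoint hostname. The sketch below is a hypothetical helper (not part of the generated Edge Function) that assumes the standard s3.<region>.backblazeb2.com endpoint format:

```typescript
// Hypothetical helper: derive the B2 region identifier from the
// S3-compatible endpoint, e.g. https://s3.us-west-004.backblazeb2.com
function regionFromEndpoint(endpoint: string): string {
  const host = new URL(endpoint).hostname
  const match = host.match(/^s3\.([a-z0-9-]+)\.backblazeb2\.com$/)
  if (!match) throw new Error(`Unexpected B2 endpoint: ${endpoint}`)
  return match[1] // e.g. us-west-004
}
```

A check like this could run once at function startup and fail fast if the derived region differs from the stored B2_REGION, instead of surfacing later as a hard-to-debug SignatureDoesNotMatch error.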

Lovable Prompt

I've added my Backblaze B2 credentials to Cloud Secrets: B2_KEY_ID, B2_APPLICATION_KEY, B2_BUCKET_NAME, B2_ENDPOINT, and B2_REGION are all set. Please create a Supabase Edge Function called 'backblaze-storage' that uses these to provide file upload, listing, and download functionality using B2's S3-compatible API.


Pro tip: Double-check that B2_REGION matches exactly the region in your B2_ENDPOINT. For example, if the endpoint is s3.us-west-004.backblazeb2.com, the region is us-west-004. A mismatch causes AWS signature validation failures that are difficult to debug.

Expected result: Five secrets are stored in Cloud Secrets: B2_KEY_ID, B2_APPLICATION_KEY, B2_BUCKET_NAME, B2_ENDPOINT, and B2_REGION. No credential values appear in any source file.

3

Build the B2 S3-compatible Edge Function

Backblaze B2's S3-compatible API uses the same AWS Signature Version 4 authentication and the same request format as AWS S3, just pointed at B2's endpoint instead of amazonaws.com. In a Deno Edge Function, you can either use AWS SDK v3 (which Deno supports via the npm: specifier) with B2's endpoint configured, or manually construct the signed requests. Using AWS SDK v3 is the most reliable approach — the SDK handles signature computation, retry logic, and error parsing. In Deno, import with the npm: prefix: import { S3Client, PutObjectCommand, GetObjectCommand } from 'npm:@aws-sdk/client-s3'.

The S3Client is configured with the B2 endpoint, region, and credentials from environment variables. All standard S3 SDK operations (PutObject for upload, ListObjectsV2 for listing, DeleteObject for delete, GetObject for download) work against B2 without modification.

For generating pre-signed download URLs (the primary way to give users temporary access to private B2 files), use the @aws-sdk/s3-request-presigner package with the same S3Client configured for B2. The getSignedUrl function generates a time-limited URL that allows direct browser download without routing each download through the Edge Function — important for large files, where streaming through the Edge Function would hit size limits.

The Edge Function follows the same action-based dispatch pattern as other storage integrations: list, upload, get-download-url, and delete actions, each making the appropriate S3-compatible API call against B2.

Lovable Prompt

Create a Supabase Edge Function at supabase/functions/backblaze-storage/index.ts using B2's S3-compatible API. Use npm:@aws-sdk/client-s3 and npm:@aws-sdk/s3-request-presigner to handle the S3 operations. The function should read B2_KEY_ID, B2_APPLICATION_KEY, B2_BUCKET_NAME, B2_ENDPOINT, and B2_REGION from environment variables and configure an S3Client pointed at the B2 endpoint. Handle actions: 'list' (list files with optional prefix), 'upload' (upload a file from base64), 'get-download-url' (generate a pre-signed URL valid for 1 hour), 'delete' (delete a file by key). Include CORS headers and error handling.


supabase/functions/backblaze-storage/index.ts
import { serve } from 'https://deno.land/std@0.168.0/http/server.ts'
import {
  S3Client,
  PutObjectCommand,
  GetObjectCommand,
  ListObjectsV2Command,
  DeleteObjectCommand,
} from 'npm:@aws-sdk/client-s3'
import { getSignedUrl } from 'npm:@aws-sdk/s3-request-presigner'

const corsHeaders = {
  'Access-Control-Allow-Origin': '*',
  'Access-Control-Allow-Headers': 'authorization, x-client-info, apikey, content-type',
  'Access-Control-Allow-Methods': 'POST, OPTIONS',
}

serve(async (req) => {
  if (req.method === 'OPTIONS') {
    return new Response('ok', { headers: corsHeaders })
  }

  const endpoint = Deno.env.get('B2_ENDPOINT')
  const region = Deno.env.get('B2_REGION')
  const keyId = Deno.env.get('B2_KEY_ID')
  const appKey = Deno.env.get('B2_APPLICATION_KEY')
  const bucket = Deno.env.get('B2_BUCKET_NAME')

  if (!endpoint || !region || !keyId || !appKey || !bucket) {
    return new Response(
      JSON.stringify({ error: 'Missing B2 environment variables' }),
      { status: 500, headers: { ...corsHeaders, 'Content-Type': 'application/json' } }
    )
  }

  const s3Client = new S3Client({
    endpoint,
    region,
    credentials: { accessKeyId: keyId, secretAccessKey: appKey },
    forcePathStyle: true, // Required: B2 expects the bucket name in the URL path
  })

  try {
    const { action, key, prefix, content, contentType } = await req.json()

    if (action === 'list') {
      const cmd = new ListObjectsV2Command({ Bucket: bucket, Prefix: prefix || '' })
      const result = await s3Client.send(cmd)
      return new Response(
        JSON.stringify({ files: result.Contents ?? [], prefixes: result.CommonPrefixes ?? [] }),
        { headers: { ...corsHeaders, 'Content-Type': 'application/json' } }
      )
    } else if (action === 'upload') {
      // Decode the base64 payload sent by the frontend into raw bytes
      const fileBytes = Uint8Array.from(atob(content), (c) => c.charCodeAt(0))
      const cmd = new PutObjectCommand({
        Bucket: bucket,
        Key: key,
        Body: fileBytes,
        ContentType: contentType || 'application/octet-stream',
      })
      await s3Client.send(cmd)
      return new Response(
        JSON.stringify({ success: true, key }),
        { headers: { ...corsHeaders, 'Content-Type': 'application/json' } }
      )
    } else if (action === 'get-download-url') {
      const cmd = new GetObjectCommand({ Bucket: bucket, Key: key })
      const url = await getSignedUrl(s3Client, cmd, { expiresIn: 3600 }) // 1 hour
      return new Response(
        JSON.stringify({ url }),
        { headers: { ...corsHeaders, 'Content-Type': 'application/json' } }
      )
    } else if (action === 'delete') {
      const cmd = new DeleteObjectCommand({ Bucket: bucket, Key: key })
      await s3Client.send(cmd)
      return new Response(
        JSON.stringify({ success: true }),
        { headers: { ...corsHeaders, 'Content-Type': 'application/json' } }
      )
    }

    return new Response(
      JSON.stringify({ error: `Unknown action: ${action}` }),
      { status: 400, headers: { ...corsHeaders, 'Content-Type': 'application/json' } }
    )
  } catch (error) {
    return new Response(
      JSON.stringify({ error: (error as Error).message }),
      { status: 500, headers: { ...corsHeaders, 'Content-Type': 'application/json' } }
    )
  }
})

Pro tip: The forcePathStyle: true option is required for B2's S3-compatible API. Without it, the AWS SDK constructs bucket-prefixed hostnames (my-bucket.s3.endpoint.com) that B2 does not support — B2 requires the bucket name in the URL path instead.
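To make the difference concrete, here is a minimal illustration of the two URL styles, assuming a hypothetical bucket named my-bucket in us-west-004:

```typescript
const bucket = 'my-bucket'
const host = 's3.us-west-004.backblazeb2.com'

// Virtual hosted-style (the AWS SDK default), which B2 rejects:
const virtualHosted = `https://${bucket}.${host}/photo.jpg`

// Path-style (what forcePathStyle: true produces), which B2 requires:
const pathStyle = `https://${host}/${bucket}/photo.jpg`
```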

Expected result: The backblaze-storage Edge Function is deployed in Lovable Cloud. You can test it with a list action and see your B2 bucket contents returned as JSON. File upload and pre-signed download URL generation work correctly.

4

Build the React file management UI for B2

With the Edge Function deployed, build the React components for file management. The frontend pattern is the same as with other storage providers: use supabase.functions.invoke() to call the backblaze-storage Edge Function with the appropriate action and parameters, then update component state based on the response.

For file listings, the component calls the list action with an optional prefix to show files in a virtual folder structure. B2 (like S3) does not have real folders — it uses key prefixes that simulate folder structure. A file at path uploads/user123/photo.jpg appears in a listing of the uploads/user123/ prefix. The list action returns both files (Contents) and common prefixes (virtual folders) for navigation.

For file uploads, convert the selected File object to base64 using FileReader.readAsDataURL(), strip the data URL prefix to get raw base64, and send it to the Edge Function with the target key (path) and content type. Organize keys with a structure that includes the user ID to prevent cross-user access issues: uploads/{userId}/{filename}, or by category: docs/{userId}/{filename}.

For downloads, get a pre-signed URL from the Edge Function's get-download-url action and open it in a new browser tab. Pre-signed URLs from B2 expire in 1 hour by default (configurable). The user downloads the file directly from B2 — not through your Edge Function — which avoids the Edge Function's request size limits.

For production apps, store file metadata (key, original filename, size, content type, upload timestamp, user ID) in a Supabase table alongside the B2 object key. This lets you build search, filtering, and user-specific file listings entirely from Supabase queries without calling B2 for metadata operations.
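The upload flow described above can be sketched as follows. This is a hedged sketch, not generated code: stripDataUrlPrefix and uploadToB2 are hypothetical names, and the Supabase client is passed in rather than imported.

```typescript
// Strip the "data:<mime>;base64," prefix that FileReader.readAsDataURL prepends
function stripDataUrlPrefix(dataUrl: string): string {
  const comma = dataUrl.indexOf(',')
  return comma === -1 ? dataUrl : dataUrl.slice(comma + 1)
}

// Read a File as raw base64 (browser-only: uses FileReader)
function fileToBase64(file: File): Promise<string> {
  return new Promise((resolve, reject) => {
    const reader = new FileReader()
    reader.onload = () => resolve(stripDataUrlPrefix(reader.result as string))
    reader.onerror = () => reject(reader.error)
    reader.readAsDataURL(file)
  })
}

// Upload via the Edge Function; `client` is the generated Supabase client
async function uploadToB2(
  client: { functions: { invoke: (name: string, opts: { body: unknown }) => Promise<{ data: unknown; error: unknown }> } },
  file: File,
  userId: string,
) {
  const content = await fileToBase64(file)
  const key = `uploads/${userId}/${file.name}` // per-user key prefix
  const { data, error } = await client.functions.invoke('backblaze-storage', {
    body: { action: 'upload', key, content, contentType: file.type },
  })
  if (error) throw error
  return data
}
```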

Lovable Prompt

Build a file manager component for Backblaze B2 storage using the backblaze-storage Edge Function. Show: 1) A list of uploaded files with name, size in KB/MB, and upload date. 2) An upload button that accepts any file type, shows upload progress, and calls the Edge Function with the file as base64. 3) A download button for each file that gets a pre-signed URL and opens it. 4) A delete button with confirmation dialog. After each upload or delete, refresh the file list. Store file metadata (key, name, size, type, uploaded_at, user_id) in a Supabase files table so the listing uses Supabase rather than calling B2 directly for metadata.


Pro tip: Always store file metadata in Supabase when uploading to B2. This decouples your file listing queries from B2 API calls, making the list view fast and allowing filtering/search without additional B2 requests. The Supabase metadata row is the source of truth for display; B2 is only queried for actual file access.
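As a sketch of the metadata row that pattern implies — the column names below match the prompt above but are assumptions, not a schema Lovable will necessarily generate:

```typescript
// Hypothetical shape of a row in the Supabase 'files' metadata table
function fileMetadataRow(
  key: string,
  file: { name: string; size: number; type: string },
  userId: string,
) {
  return {
    key,                       // B2 object key, e.g. uploads/user123/photo.jpg
    name: file.name,           // original filename for display
    size: file.size,           // bytes
    type: file.type || 'application/octet-stream',
    uploaded_at: new Date().toISOString(),
    user_id: userId,
  }
}
```

A row like this would be inserted (e.g. via supabase.from('files').insert(...)) immediately after a successful upload action, so the list view never has to call B2.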

Expected result: A working file manager displays in your Lovable app. Files upload to B2 via the Edge Function, metadata is stored in Supabase, the file list renders from Supabase, and downloads use pre-signed B2 URLs. Operations complete without CORS errors or credential exposure.

Common use cases

Store user-uploaded media files at low cost for a content platform

A media-heavy Lovable app (portfolio site, photo gallery, video library, podcast platform) generates significant storage and bandwidth costs. Using B2 instead of S3 reduces storage cost by 75% and eliminates egress costs when using Cloudflare's CDN (B2 and Cloudflare have a bandwidth alliance). Upload user media to B2 and serve it through Cloudflare for near-zero content delivery cost.

Lovable Prompt

Build a media upload component that stores files in Backblaze B2 using the backblaze-storage Edge Function. Users can upload images and videos up to 100 MB. Show an upload progress indicator. After successful upload, store the B2 file path in Supabase and display the file in a gallery grid. Files should be served via their B2 download URL. Add a delete button that removes the file from both B2 and Supabase.


Build a document storage system with cost-effective long-term retention

Business applications often need to store large volumes of documents — invoices, contracts, reports, attachments — that accumulate over time. B2's low per-GB cost makes it far more economical than S3 for high-volume document archives. Build a document library in Lovable that stores files in B2, maintains metadata in Supabase, and generates expiring download links for secure access.

Lovable Prompt

Create a document storage system using B2. Users can upload PDF, Word, and Excel files. Store file metadata (name, size, upload date, uploader user ID, B2 path) in a Supabase documents table. Show a searchable list of documents filtered by type and date. Each document row has a 'Download' button that generates a pre-signed URL valid for 1 hour. Admins can delete documents, which removes them from both B2 and Supabase.


Replace Lovable Cloud Storage with B2 for cost savings at scale

As a Lovable app grows, Lovable Cloud Storage costs can increase. Migrating to B2 for large file storage (videos, high-res images, large attachments) while keeping small files and database records in Lovable Cloud reduces the overall storage bill significantly. B2 works as a drop-in replacement since both support S3-compatible APIs.

Lovable Prompt

Migrate the file upload feature in my app from Lovable Cloud Storage to Backblaze B2. Create a new Edge Function called backblaze-storage that handles file uploads and pre-signed download URL generation using B2's S3-compatible API. Update the file upload component to call this Edge Function instead of uploading to Supabase Storage. Keep all file metadata (path, size, mime type) in the existing Supabase files table.


Troubleshooting

S3 client throws 'SignatureDoesNotMatch' or 'InvalidAccessKeyId' when calling B2

Cause: The B2_KEY_ID or B2_APPLICATION_KEY values are wrong, contain whitespace, or the B2_REGION does not match the actual region of the B2 bucket. AWS Signature v4 authentication is very sensitive to exact credential and region matching.

Solution: Go to Cloud → Secrets and verify all five B2 secrets are set correctly. Delete and re-add B2_KEY_ID and B2_APPLICATION_KEY to eliminate whitespace. Verify B2_REGION matches exactly the region identifier in B2_ENDPOINT (e.g., if endpoint is s3.us-west-004.backblazeb2.com, region must be us-west-004). If still failing, generate new Application Key credentials in the B2 console — the old key may have been revoked.

File list returns empty even though files exist in the B2 bucket

Cause: The list action uses a Prefix parameter. If the prefix does not match the actual path prefix of stored files, no objects are returned. Also occurs if the Application Key was scoped to a specific path prefix that does not match the files' actual paths.

Solution: Call the list action with prefix: '' (empty string) to list all objects at the bucket root. If files are stored under a path like uploads/, use prefix: 'uploads/'. Check the Application Key's path prefix restriction in the B2 console — if the key was created with a restrictive prefix, it can only list objects at that prefix or deeper.
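One way to avoid prefix mismatches is to normalize folder paths before calling the list action. toPrefix below is a hypothetical helper, not part of the generated function:

```typescript
// Normalize a folder path into the key prefix the list action expects:
// no leading slash, exactly one trailing slash, or '' for the bucket root
function toPrefix(folder: string): string {
  const trimmed = folder.replace(/^\/+/, '').replace(/\/+$/, '')
  return trimmed === '' ? '' : trimmed + '/'
}
```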

Pre-signed download URLs generated by the Edge Function return 403 Access Denied

Cause: Pre-signed URLs for private B2 buckets use the S3-compatible signing method which works slightly differently from B2's native pre-signing. The forcePathStyle: true flag in the S3Client configuration is required. Also occurs when the Application Key lacks the readFiles permission.

Solution: Verify the S3Client is configured with forcePathStyle: true. Check the Application Key in the B2 console has the readFiles capability. Test the pre-signed URL generation in isolation by calling the get-download-url action and attempting to access the returned URL directly. If URLs work in testing but fail in production, check whether the signed URL contains the correct B2 endpoint hostname.

Best practices

  • Use forcePathStyle: true in the S3Client configuration — B2 requires path-style URLs, not virtual hosted-style URLs that the AWS SDK uses by default.
  • Store file metadata in Supabase alongside B2 object keys so file listing, search, and filtering use fast Supabase queries rather than B2 API calls.
  • Scope Application Keys to specific bucket prefixes per environment (uploads/dev/, uploads/prod/) rather than the entire bucket root.
  • Implement file size validation in the React component before encoding to base64 — reject files over 4 MB and route larger files through a chunked upload approach.
  • Use B2's lifecycle rules to automatically delete old files or move them to Backblaze's even cheaper cold storage tier for archive use cases.
  • Pair B2 with Cloudflare as a CDN — Backblaze and Cloudflare have a bandwidth alliance that eliminates egress fees for traffic routed through Cloudflare, making B2 effectively zero-egress-cost for CDN-served content.
  • Generate fresh pre-signed download URLs on demand rather than caching them — the 1-hour expiry exists for security reasons and caching URLs defeats their purpose.
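The size-validation bullet above can be sketched like this; the 4 MB cutoff mirrors the recommendation, and the names are illustrative:

```typescript
const MAX_DIRECT_UPLOAD_BYTES = 4 * 1024 * 1024 // 4 MB, per the guideline above

// Reject larger files before base64-encoding them for the Edge Function
function canUploadDirectly(fileSize: number): boolean {
  return fileSize <= MAX_DIRECT_UPLOAD_BYTES
}

// Base64 encodes every 3 input bytes as 4 output characters (~33% larger),
// which is why the JSON payload is bigger than the file itself
function base64PayloadBytes(fileSize: number): number {
  return Math.ceil(fileSize / 3) * 4
}
```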

Frequently asked questions

How much cheaper is Backblaze B2 compared to AWS S3?

B2 charges $0.006/GB stored per month versus S3's $0.023/GB — about 74% cheaper. For egress, B2 charges $0.01/GB versus S3's $0.09/GB, and B2 egress to Cloudflare CDN is free. For a Lovable app storing 100 GB with 50 GB monthly downloads via Cloudflare, B2 costs about $0.60/month (100 GB × $0.006) versus S3's roughly $6.80 (100 GB × $0.023 + 50 GB × $0.09). The savings grow proportionally with scale.
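The arithmetic above works out as follows. The rates are the list prices quoted in this article; check both providers' current pricing pages before relying on them:

```typescript
// Per-GB monthly list prices as quoted in this article (USD)
const RATES = {
  b2: { storagePerGB: 0.006, egressPerGB: 0.01 },
  s3: { storagePerGB: 0.023, egressPerGB: 0.09 },
}

function monthlyCost(
  provider: 'b2' | 's3',
  storedGB: number,
  egressGB: number,
  egressViaCloudflare = false,
): number {
  const r = RATES[provider]
  // B2 egress routed through Cloudflare is free under the bandwidth alliance
  const egress = provider === 'b2' && egressViaCloudflare ? 0 : egressGB * r.egressPerGB
  return storedGB * r.storagePerGB + egress
}

// 100 GB stored, 50 GB downloaded per month:
// monthlyCost('b2', 100, 50, true) is about $0.60
// monthlyCost('s3', 100, 50) is about $6.80
```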

Is Backblaze B2 reliable enough for production apps?

Yes. Backblaze is an established cloud storage company with over a decade of operation. B2 offers 99.9% uptime SLA and 11 nines of durability for stored objects (same as S3). The S3-compatible API has been in production since 2018. Many companies use B2 for production workloads specifically because of its pricing. For Lovable apps, B2 is a solid choice for any file storage use case.

Can I migrate existing files from Lovable Cloud Storage to B2?

Yes, but it requires a migration script. You would list all files in Supabase Storage, download each file, and re-upload it to B2 via the Edge Function, then update your Supabase metadata records with the new B2 keys. This is a one-time operation. After migration, update the file upload component to call the B2 Edge Function instead of Supabase Storage. RapidDev can help with migration planning for large file sets.

Does B2 support the same features as AWS S3 for Lovable integrations?

B2's S3-compatible API supports the most common S3 operations: PutObject, GetObject, DeleteObject, ListObjectsV2, CopyObject, and pre-signed URLs. It does not support some advanced S3 features like S3 Select, S3 Transfer Acceleration, Object Lock, or native S3 event notifications. For Lovable apps doing file upload, download, listing, and deletion, B2's feature set is fully sufficient.
