RapidDev - Software Development Agency

How to Handle 403 Forbidden Errors in Supabase Storage

A 403 Forbidden error in Supabase Storage means your request was blocked by either a missing or incorrect RLS policy on the storage.objects table, a private bucket without proper authentication, or a mismatch between the bucket name in your policy and your code. Fix it by enabling RLS on storage.objects and adding the correct policy for your operation.

What you'll learn

  • Why Supabase Storage returns 403 errors and how to diagnose the cause
  • How to write RLS policies on storage.objects for upload, download, and delete
  • How to configure public vs private buckets correctly
  • How to test storage permissions using the Supabase Dashboard
Intermediate · 8 min read · 15-20 min to complete · Supabase (all plans), @supabase/supabase-js v2+ · March 2026 · RapidDev Engineering Team

Diagnosing and Fixing 403 Forbidden Errors in Supabase Storage

Supabase Storage uses Row Level Security on the storage.objects table to control who can upload, download, and delete files. When RLS is enabled but no matching policy exists for your operation, Supabase silently blocks the request with a 403 Forbidden error. This tutorial walks you through every common cause of 403 errors and shows you how to write the correct RLS policies to fix them.

Prerequisites

  • A Supabase project with at least one storage bucket created
  • Basic understanding of SQL and Supabase Dashboard navigation
  • The Supabase JS client installed in your project (@supabase/supabase-js v2+)
  • Familiarity with Row Level Security concepts

Step-by-step guide

1

Identify which operation is failing

Open your browser's developer tools and check the Network tab when the 403 error occurs. Look at the request URL and HTTP method. A POST to /storage/v1/object means an upload is failing. A GET to /storage/v1/object means a download or URL generation is failing. A DELETE means a file removal is blocked. Note the bucket name and file path from the URL; you will need these to write the correct policy. Also check whether the request includes an Authorization header with a valid JWT. If the header is missing, your Supabase client is not authenticated.

Expected result: You know which operation (upload, download, or delete) is being blocked, the bucket name, and whether the user is authenticated.
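If you are unsure whether the Authorization header carries a usable token, you can decode its payload locally. The helper below is a hypothetical debugging sketch (not part of supabase-js); it only base64-decodes the JWT payload and does not verify the signature:

```typescript
// Decode a JWT payload without verifying the signature, to see the
// role and expiry claims Supabase will read from the Authorization header.
function decodeJwtPayload(token: string): { role?: string; exp?: number } {
  const payloadB64 = token.split('.')[1]
  const json = Buffer.from(payloadB64, 'base64url').toString('utf8')
  return JSON.parse(json)
}

// Example token whose payload is {"role":"anon","exp":1735689600}
const sample =
  'eyJhbGciOiJIUzI1NiJ9.' +
  Buffer.from(JSON.stringify({ role: 'anon', exp: 1735689600 })).toString('base64url') +
  '.signature'

const payload = decodeJwtPayload(sample)
console.log(payload.role) // 'anon' means the request is not authenticated
```

A role of 'authenticated' with an unexpired exp means the user is signed in; a role of 'anon' means policies targeting the authenticated role will never match.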

2

Check your bucket's public or private setting

In the Supabase Dashboard, go to Storage and click on your bucket. Check whether it is set to Public or Private. Public buckets allow anyone to read files without authentication — but uploads and deletes still require RLS policies. Private buckets require RLS policies for ALL operations including reads. If you intended the bucket to be publicly readable, toggle it to Public in the bucket settings. If it should remain private, you need SELECT policies on storage.objects for downloads.

sql
-- Check bucket settings via SQL
select id, name, public from storage.buckets;

Expected result: You have confirmed whether your bucket is public or private, and you understand which operations require RLS policies.
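The rule above can be summarized in a small helper. This is an illustrative sketch; the BucketRow shape mirrors the SQL query above and is not a supabase-js type:

```typescript
// Mirrors a row from `select id, name, public from storage.buckets`.
interface BucketRow { id: string; name: string; public: boolean }

// Which operations still need RLS policies on storage.objects?
// Public buckets skip policies only for reads; writes always need them.
function requiredPolicies(bucket: BucketRow): string[] {
  return bucket.public
    ? ['insert', 'update', 'delete']
    : ['select', 'insert', 'update', 'delete']
}

console.log(requiredPolicies({ id: 'avatars', name: 'avatars', public: true }))
// [ 'insert', 'update', 'delete' ] -- reads are open, writes still need policies
```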

3

Write RLS policies for storage.objects

Open the SQL Editor in your Supabase Dashboard and create the appropriate policies. The storage.objects table has columns including bucket_id (the bucket name), name (the full file path), and owner (the user ID who uploaded the file). Use auth.uid() to scope access to the authenticated user. The storage.foldername(name) function extracts path segments, which is useful for user-scoped folder patterns where files are stored under the user's ID.

sql
-- Allow authenticated users to upload files to their own folder
create policy "Users can upload to own folder"
on storage.objects for insert
to authenticated
with check (
  bucket_id = 'my-bucket'
  and auth.uid()::text = (storage.foldername(name))[1]
);

-- Allow authenticated users to read their own files
create policy "Users can read own files"
on storage.objects for select
to authenticated
using (
  bucket_id = 'my-bucket'
  and auth.uid()::text = (storage.foldername(name))[1]
);

-- Allow authenticated users to delete their own files
create policy "Users can delete own files"
on storage.objects for delete
to authenticated
using (
  bucket_id = 'my-bucket'
  and auth.uid()::text = (storage.foldername(name))[1]
);

Expected result: RLS policies are created for INSERT, SELECT, and DELETE operations scoped to the authenticated user's folder.
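For these policies to match, the upload path in your client code must start with the user's ID. A minimal sketch of that convention (the helper name is ours, not from supabase-js):

```typescript
// Build an object path whose first segment is the uploader's
// auth.uid(), so it satisfies the user-scoped policies above.
function buildUserScopedPath(userId: string, filename: string): string {
  // Drop any directory components so a filename like '../other/file'
  // cannot escape the user's folder.
  const safeName = filename.split('/').pop() ?? filename
  return `${userId}/${safeName}`
}

console.log(buildUserScopedPath('user-123', 'avatar.png')) // 'user-123/avatar.png'
```

Pass the result as the first argument to supabase.storage.from('my-bucket').upload(...) so the policy's (storage.foldername(name))[1] check succeeds.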

4

Write a public read policy if files should be accessible to everyone

If your use case requires public read access without user authentication (for example, a public image gallery), you can create a SELECT policy that grants access to both the anon and authenticated roles. This is an alternative to making the entire bucket public and gives you finer control — you can make some paths public and others private within the same bucket.

sql
-- Allow anyone to read files in the 'public' folder
create policy "Public read access for public folder"
on storage.objects for select
to anon, authenticated
using (
  bucket_id = 'my-bucket'
  and (storage.foldername(name))[1] = 'public'
);

Expected result: Anyone can read files stored in the public/ folder of your bucket, while other folders remain protected.

5

Test the fix with your Supabase client

After creating the policies, test each operation that was failing. Make sure you are signed in as an authenticated user when testing operations that target the authenticated role. Use the Supabase JS client to upload a file, retrieve its URL, and delete it. Check the browser console and Network tab to confirm you no longer receive 403 errors.

typescript
import { createClient } from '@supabase/supabase-js'

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
)

// Test upload (user must be signed in)
const { data, error } = await supabase.storage
  .from('my-bucket')
  .upload(`${userId}/test.txt`, new Blob(['hello']), {
    contentType: 'text/plain',
  })

if (error) console.error('Upload failed:', error.message)
else console.log('Upload succeeded:', data.path)

Expected result: File upload, download, and delete operations succeed without 403 errors.

6

Verify policies in the Dashboard

Go to Authentication > Policies in the Supabase Dashboard and filter for the storage.objects table. You should see all the policies you created. Verify that each policy targets the correct operation (INSERT, SELECT, DELETE), the correct roles (authenticated, anon), and references the correct bucket_id. If you have conflicting policies or policies with typos in the bucket name, they will silently fail to match and the 403 error will persist.

sql
-- List all policies on storage.objects
select policyname, cmd, roles, qual, with_check
from pg_policies
where tablename = 'objects' and schemaname = 'storage';

Expected result: All storage policies are visible in the Dashboard and correctly configured for your bucket and operations.

Complete working example

storage-policies.sql
-- ============================================
-- Supabase Storage RLS Policies
-- Replace 'my-bucket' with your bucket name
-- ============================================

-- 1. Allow authenticated users to upload to their own folder
create policy "Users can upload to own folder"
on storage.objects for insert
to authenticated
with check (
  bucket_id = 'my-bucket'
  and auth.uid()::text = (storage.foldername(name))[1]
);

-- 2. Allow authenticated users to read their own files
create policy "Users can read own files"
on storage.objects for select
to authenticated
using (
  bucket_id = 'my-bucket'
  and auth.uid()::text = (storage.foldername(name))[1]
);

-- 3. Allow authenticated users to update their own files
create policy "Users can update own files"
on storage.objects for update
to authenticated
using (
  bucket_id = 'my-bucket'
  and auth.uid()::text = (storage.foldername(name))[1]
);

-- 4. Allow authenticated users to delete their own files
create policy "Users can delete own files"
on storage.objects for delete
to authenticated
using (
  bucket_id = 'my-bucket'
  and auth.uid()::text = (storage.foldername(name))[1]
);

-- 5. Optional: Public read access for a specific folder
create policy "Public read for public folder"
on storage.objects for select
to anon, authenticated
using (
  bucket_id = 'my-bucket'
  and (storage.foldername(name))[1] = 'public'
);

Common mistakes when handling 403 Forbidden Errors in Supabase Storage

Mistake: Writing an INSERT policy with a USING clause instead of WITH CHECK

How to avoid: INSERT policies validate new rows with WITH CHECK, not USING; USING applies to SELECT, UPDATE, and DELETE.

Mistake: Misspelling the bucket name in the policy so it never matches

How to avoid: The bucket_id in your policy must exactly match the bucket name (case-sensitive). Run SELECT id, name FROM storage.buckets to confirm the exact name.

Mistake: Forgetting that UPDATE requires a matching SELECT policy to work

How to avoid: PostgreSQL needs a SELECT policy to read the existing row before updating it. Always create SELECT and UPDATE policies together.

Mistake: Using the service role key in client-side code to bypass RLS instead of fixing the policy

How to avoid: The service role key bypasses all RLS and must never be exposed in browser code. Fix the RLS policy instead of using the service role key as a workaround.

Best practices

  • Always use the user-scoped folder pattern (user_id/filename) to isolate files per user
  • Create separate policies for each operation (INSERT, SELECT, UPDATE, DELETE) for clarity
  • Test storage policies with a real authenticated user, not just the service role key
  • Use the SQL Editor to inspect existing policies before adding new ones to avoid conflicts
  • Keep buckets private by default and only make them public when the use case truly requires it
  • Add the bucket_id condition to every storage policy to prevent cross-bucket access
  • Log storage errors in your application to quickly diagnose 403 issues in production
  • Review storage policies after schema changes or bucket renames to prevent breakage

Still stuck?

Copy one of these prompts to get a personalized, step-by-step explanation.

ChatGPT Prompt

I am getting a 403 Forbidden error when trying to upload files to Supabase Storage. My bucket is called 'avatars' and I want authenticated users to upload, read, and delete files in their own folder. Write me the SQL RLS policies for storage.objects and explain what each one does.

Supabase Prompt

Create RLS policies on storage.objects for a private bucket called 'documents' that allows authenticated users to upload files to their own folder (using their user ID as the folder name), read only their own files, and delete only their own files. Include a public read policy for a 'shared' folder.

Frequently asked questions

Why do I get a 403 error even though my bucket is set to public?

Public buckets only allow unauthenticated read access. Uploads, updates, and deletes still require RLS policies on storage.objects. You need INSERT, UPDATE, and DELETE policies even on public buckets.

How do I check which RLS policies exist on storage.objects?

Run this SQL in the SQL Editor: SELECT policyname, cmd, roles FROM pg_policies WHERE tablename = 'objects' AND schemaname = 'storage'. You can also view them in the Dashboard under Authentication > Policies.

Can I use the service role key to bypass storage RLS?

Yes, the service role key bypasses all RLS, but it must only be used in server-side code like Edge Functions. Never expose the service role key in client-side JavaScript — it gives full unrestricted access to all data.

What does the storage.foldername function do?

storage.foldername(name) extracts the folder path segments from a file's full path as an array. For a file at 'user123/avatars/photo.png', it returns {'user123', 'avatars'}. The first element [1] is the top-level folder.
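The same splitting logic can be sketched client-side, which is handy for predicting which policy a given path will match. This helper is an illustration only; the real storage.foldername runs in Postgres:

```typescript
// Client-side equivalent of storage.foldername(name): keep the
// folder segments of an object path, dropping the filename.
function foldername(name: string): string[] {
  return name.split('/').slice(0, -1)
}

const segments = foldername('user123/avatars/photo.png')
console.log(segments)    // [ 'user123', 'avatars' ]
console.log(segments[0]) // 'user123' (SQL arrays are 1-indexed, so this is [1] in policies)
```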

Why does my UPDATE policy not work even though I created it?

PostgreSQL requires a SELECT policy to exist alongside an UPDATE policy. Without a SELECT policy, the database cannot read the existing row to apply the update. Create both SELECT and UPDATE policies for the same conditions.

Can I have different access rules for different folders in the same bucket?

Yes. Use the storage.foldername function in your RLS policies to check the folder path. You can create one policy for a 'public' folder accessible to anyone and another for user-specific folders restricted to the file owner.

Can RapidDev help configure storage permissions for my Supabase project?

Yes, RapidDev can set up your Supabase Storage with proper RLS policies, user-scoped folder structures, and access controls tailored to your application's requirements.
