To upload files to Supabase Storage, create a storage bucket in the Dashboard or via the client, then use supabase.storage.from('bucket').upload() to send files. Set buckets as public or private, configure RLS policies on storage.objects to control who can upload, and use getPublicUrl() or createSignedUrl() to retrieve file URLs. Always validate file type and size before uploading.
Uploading Files to Supabase Storage with Access Control
Supabase Storage is an S3-compatible object storage system integrated with Supabase Auth and Row Level Security. This tutorial covers creating buckets, uploading files from the client, securing uploads with RLS policies, and generating URLs to access your files. You will build a complete file upload flow that works with authenticated users and respects access control rules.
Prerequisites
- A Supabase project (free tier or above)
- The @supabase/supabase-js library installed in your project
- Basic knowledge of JavaScript/TypeScript and file handling
- Supabase Auth configured with at least email/password login
Step-by-step guide
Create a storage bucket in the Dashboard
Open your Supabase Dashboard and click Storage in the left sidebar. Click New Bucket and enter a name like 'uploads'. Choose whether the bucket should be public (anyone with the URL can read files) or private (requires authentication). For most applications, start with a private bucket and use signed URLs for controlled access. You can also set allowed MIME types and a file size limit at creation time.
Expected result: A new storage bucket appears in the Dashboard Storage section.
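Buckets can also be created from code using supabase-js v2's createBucket(). The sketch below mirrors the Dashboard settings described above; the bucket name, MIME list, and size limit are example values, not requirements:

```typescript
// Bucket settings mirroring the Dashboard options (example values)
const bucketOptions = {
  public: false, // private: reads require auth or a signed URL
  allowedMimeTypes: ['image/png', 'image/jpeg', 'image/webp', 'application/pdf'],
  fileSizeLimit: 10 * 1024 * 1024, // 10 MB, expressed in bytes
}

// With an initialized supabase-js v2 client, the call would look like:
// const { data, error } = await supabase.storage.createBucket('uploads', bucketOptions)
// if (error) console.error(error.message)
```

Creating buckets from code is useful for scripted environment setup; for a one-off project, the Dashboard is usually simpler.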
Set up RLS policies for upload access
Once you create a bucket, you need RLS policies on the storage.objects table to control who can upload files. Without policies, all uploads will be denied. The most common pattern is to let authenticated users upload to a folder named after their user ID. This keeps files organized and prevents users from overwriting each other's files.
```sql
-- Allow authenticated users to upload files to their own folder
create policy "Users can upload to own folder"
on storage.objects for insert
to authenticated
with check (
  bucket_id = 'uploads'
  and (storage.foldername(name))[1] = auth.uid()::text
);

-- Allow authenticated users to read their own files
create policy "Users can read own files"
on storage.objects for select
to authenticated
using (
  bucket_id = 'uploads'
  and (storage.foldername(name))[1] = auth.uid()::text
);

-- Allow authenticated users to delete their own files
create policy "Users can delete own files"
on storage.objects for delete
to authenticated
using (
  bucket_id = 'uploads'
  and (storage.foldername(name))[1] = auth.uid()::text
);
```

Expected result: Authenticated users can upload, read, and delete files only within their own user-ID folder.
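The policies above key on the first folder segment of the object path. A small client-side helper can keep upload paths consistent with that check (a sketch; buildUserPath and firstFolder are hypothetical names, not Supabase APIs):

```typescript
// Build an object path whose first folder segment is the user's ID,
// matching the (storage.foldername(name))[1] = auth.uid()::text check.
function buildUserPath(userId: string, fileName: string): string {
  return `${userId}/${Date.now()}-${fileName}`
}

// Client-side mirror of the segment the policy inspects server-side
function firstFolder(objectPath: string): string {
  return objectPath.split('/')[0]
}

const path = buildUserPath('user-123', 'photo.png')
console.log(firstFolder(path)) // 'user-123'; this must equal auth.uid() for the insert to pass
```

Centralizing path construction in one helper makes it much harder to accidentally upload to a path the policy will reject.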
Upload a file from the client
Use the Supabase client to upload files. The upload() method takes the file path within the bucket and the File object. The path should include the user's ID as the first folder segment to match the RLS policy. Set cacheControl for browser caching and upsert to control whether existing files should be overwritten.
```typescript
import { createClient } from '@supabase/supabase-js'

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
)

async function uploadFile(file: File) {
  const { data: { user } } = await supabase.auth.getUser()
  if (!user) throw new Error('Not authenticated')

  const filePath = `${user.id}/${Date.now()}-${file.name}`

  const { data, error } = await supabase.storage
    .from('uploads')
    .upload(filePath, file, {
      cacheControl: '3600',
      upsert: false,
    })

  if (error) throw error
  return data
}
```

Expected result: The file is uploaded to the uploads bucket under the user's folder and the upload returns the file path.
Retrieve file URLs for display
After uploading, you need a URL to display or share the file. For public buckets, use getPublicUrl() which returns a permanent URL. For private buckets, use createSignedUrl() which generates a temporary URL that expires after a specified number of seconds. Signed URLs are more secure because they automatically expire.
```typescript
// For public buckets — permanent URL
const { data } = supabase.storage
  .from('public-bucket')
  .getPublicUrl('user-id/photo.jpg')
console.log(data.publicUrl)

// For private buckets — temporary signed URL (1 hour)
const { data: signedData, error } = await supabase.storage
  .from('uploads')
  .createSignedUrl('user-id/document.pdf', 3600)
console.log(signedData?.signedUrl)
```

Expected result: You get a usable URL that can be embedded in HTML img tags or shared as download links.
Handle upload errors and validate input
Always validate files before uploading: check the file size against your bucket limit, verify the MIME type is allowed, and handle errors from the upload response. Common errors include 413 (file too large), 403 (RLS policy denied), and 409 (file already exists when upsert is false). Provide clear error messages to help users fix the issue.
```typescript
const ALLOWED_TYPES = ['image/png', 'image/jpeg', 'image/webp', 'application/pdf']
const MAX_SIZE = 10 * 1024 * 1024 // 10 MB

async function safeUpload(file: File) {
  if (!ALLOWED_TYPES.includes(file.type)) {
    return { error: `File type ${file.type} is not allowed.` }
  }
  if (file.size > MAX_SIZE) {
    return { error: `File is too large. Maximum size is 10 MB.` }
  }

  const { data: { user } } = await supabase.auth.getUser()
  if (!user) return { error: 'Please sign in to upload files.' }

  const { data, error } = await supabase.storage
    .from('uploads')
    .upload(`${user.id}/${Date.now()}-${file.name}`, file, {
      cacheControl: '3600',
      upsert: false,
    })

  if (error) return { error: error.message }
  return { data }
}
```

Expected result: Invalid files are rejected before upload, and server errors are caught and reported clearly.
Complete working example
```typescript
import { createClient } from '@supabase/supabase-js'

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
)

const ALLOWED_TYPES = ['image/png', 'image/jpeg', 'image/webp', 'application/pdf']
const MAX_FILE_SIZE = 10 * 1024 * 1024 // 10 MB
const BUCKET = 'uploads'

interface UploadResult {
  success: boolean
  path?: string
  url?: string
  error?: string
}

export async function uploadFile(file: File): Promise<UploadResult> {
  // Validate file type
  if (!ALLOWED_TYPES.includes(file.type)) {
    return { success: false, error: `File type ${file.type} is not allowed.` }
  }

  // Validate file size
  if (file.size > MAX_FILE_SIZE) {
    const sizeMB = (file.size / (1024 * 1024)).toFixed(1)
    return { success: false, error: `File is ${sizeMB} MB. Max is 10 MB.` }
  }

  // Verify authentication
  const { data: { user } } = await supabase.auth.getUser()
  if (!user) {
    return { success: false, error: 'You must be signed in to upload files.' }
  }

  // Upload to user-scoped folder
  const filePath = `${user.id}/${Date.now()}-${file.name}`
  const { data, error } = await supabase.storage
    .from(BUCKET)
    .upload(filePath, file, {
      cacheControl: '3600',
      upsert: false,
    })

  if (error) {
    return { success: false, error: error.message }
  }

  // Generate a signed URL (1 hour expiry)
  const { data: urlData } = await supabase.storage
    .from(BUCKET)
    .createSignedUrl(data.path, 3600)

  return {
    success: true,
    path: data.path,
    url: urlData?.signedUrl,
  }
}
```

Common mistakes when uploading files to Supabase Storage
Mistake: Uploading files without RLS policies, resulting in silent 403 denials
How to avoid: Always create INSERT and SELECT policies on storage.objects for your bucket. Without policies, RLS blocks all operations and returns empty results or 403 errors.
Mistake: Using the same file path for every upload, overwriting previous files
How to avoid: Prepend a timestamp or UUID to each filename to ensure uniqueness: `${Date.now()}-${file.name}`. Set upsert: false to get an error if a path collision occurs.
Mistake: Exposing the service role key in client-side code for storage operations
How to avoid: Always use the anon key on the client side. The anon key respects RLS policies, which is the correct behavior. The service role key bypasses all security and should only be used server-side.
Mistake: Not validating file type and size before upload, wasting bandwidth on rejected files
How to avoid: Check file.type against an allowed list and file.size against your bucket limit before calling storage.upload().
Best practices
- Always create RLS policies on storage.objects before uploading files — without them, all operations are denied
- Use user-scoped folders (userId/filename) to keep files organized and simplify RLS policies
- Validate file type and size on the client before uploading to save bandwidth and provide instant feedback
- Use signed URLs for private files instead of making entire buckets public
- Set cacheControl on uploads to control browser caching behavior for static assets
- Prepend timestamps or UUIDs to filenames to prevent naming conflicts
- Never expose the SUPABASE_SERVICE_ROLE_KEY in client-side code — use the anon key which respects RLS
Still stuck?
Copy one of these prompts to get a personalized, step-by-step explanation.
I want to upload files to Supabase Storage from a React app. Users should only be able to upload images (PNG, JPEG, WebP) up to 10 MB, and each user should only see their own files. Show me the bucket setup, RLS policies, and upload code.
Create a private storage bucket called uploads with a 10 MB limit in Supabase. Add RLS policies so authenticated users can upload, read, and delete only files in their own user-ID folder. Write a TypeScript upload function with client-side file validation.
Frequently asked questions
What is the maximum file size I can upload to Supabase Storage?
The default limit is 50 MB on the free plan. Pro and Team plans support up to 5 GB per file. You can set a lower custom limit per bucket in the Dashboard.
Do I need RLS policies for public buckets?
Public buckets allow anyone to read files without authentication. However, you still need INSERT policies to control who can upload, and DELETE policies to control who can remove files.
Can I upload files without authenticating the user?
Yes, if you create an RLS policy that allows the anon role to insert into storage.objects. However, this is not recommended for production as it allows anyone to upload files to your bucket.
How do I upload multiple files at once?
The Supabase client uploads one file at a time. To upload multiple files, use Promise.all() to run multiple upload() calls in parallel. Each file needs its own unique path.
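A minimal sketch of that pattern, with the upload call abstracted behind an uploadOne callback (a stand-in for supabase.storage.from('uploads').upload(...), so the path logic is visible on its own):

```typescript
type Uploader = (path: string, file: { name: string }) => Promise<string>

// Upload several files in parallel; the index in the path guarantees
// uniqueness even when Date.now() returns the same millisecond twice.
async function uploadAll(
  userId: string,
  files: { name: string }[],
  uploadOne: Uploader
): Promise<string[]> {
  return Promise.all(
    files.map((file, i) =>
      uploadOne(`${userId}/${Date.now()}-${i}-${file.name}`, file)
    )
  )
}
```

In a real app, uploadOne would wrap the Supabase upload() call and surface per-file errors; Promise.allSettled() is worth considering when one failed upload should not abort the rest.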
Why does my upload return a 403 error?
A 403 error means your RLS policy is blocking the upload. Check that you have an INSERT policy on storage.objects for your bucket and that the authenticated user's ID matches the folder path in the policy.
Can I overwrite an existing file?
Set the upsert option to true in the upload() call to overwrite a file at the same path. When upsert is false (the default), uploading to an existing path returns a 409 conflict error.
Can RapidDev help set up file storage with proper access controls?
Yes, RapidDev can configure your Supabase storage buckets, write RLS policies, build upload components with validation, and set up CDN caching for optimal file delivery.
Talk to an Expert
Our team has built 600+ apps. Get personalized help with your project.
Book a free consultation