Implementing API Rate Limiting with Supabase
Rate limiting protects your application from abuse, prevents runaway costs, and ensures fair resource distribution among users. Supabase provides built-in rate limits on its API gateway, but many applications need custom rate limiting logic — per-user limits, per-endpoint limits, or tiered limits based on subscription plans. This tutorial covers both the built-in limits and how to implement custom rate limiting using Edge Functions and PostgreSQL.
Prerequisites
- A Supabase project with the API configured
- The Supabase CLI installed (for Edge Functions)
- Basic understanding of HTTP status codes (especially 429 Too Many Requests)
- Familiarity with Supabase Edge Functions (Deno runtime)
Step-by-step guide
Understand Supabase's built-in rate limits
Supabase's API gateway (Kong) applies rate limits automatically based on your plan. The free plan allows approximately 500 requests per second, and the Pro plan allows approximately 1000 requests per second. These are per-project limits, not per-user. Auth endpoints have additional limits: the default email rate limit is 2 per hour (without custom SMTP) and signup/login endpoints have their own throttling. You can view current rate limit headers in API responses: X-RateLimit-Limit, X-RateLimit-Remaining, and X-RateLimit-Reset.
```typescript
// Check rate limit headers in API responses
// (assumes `session` holds an active auth session from supabase.auth)
const response = await fetch(
  `${process.env.NEXT_PUBLIC_SUPABASE_URL}/rest/v1/your_table`,
  {
    headers: {
      'apikey': process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
      'Authorization': `Bearer ${session.access_token}`,
    },
  }
)

console.log('Rate limit:', response.headers.get('X-RateLimit-Limit'))
console.log('Remaining:', response.headers.get('X-RateLimit-Remaining'))
console.log('Reset at:', response.headers.get('X-RateLimit-Reset'))
```

Expected result: You understand the built-in limits and can read rate limit information from response headers.
Create a rate-limiting table to track requests
For custom rate limiting, create a PostgreSQL table that records request counts per identifier (user ID, IP address, or API key). Use a fixed-window approach: count requests within a discrete time window (e.g., 100 requests per minute). The table stores the identifier, endpoint, request count, and the window start time. A PostgreSQL function handles the counting and limit checking atomically to prevent race conditions.
```sql
-- Create a rate limiting table
CREATE TABLE public.rate_limits (
  id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  identifier text NOT NULL,
  endpoint text NOT NULL,
  request_count integer DEFAULT 1,
  window_start timestamptz DEFAULT now(),
  UNIQUE(identifier, endpoint, window_start)
);

-- The UNIQUE constraint already creates an index on
-- (identifier, endpoint, window_start), so lookups are fast
-- without a separate index.

-- RLS: only the service role can access this table
ALTER TABLE public.rate_limits ENABLE ROW LEVEL SECURITY;
-- No policies = no access via anon/authenticated roles
-- Only the service role (in Edge Functions) can read/write
```

Expected result: A rate_limits table is created that only Edge Functions with the service role key can access.
Create a PostgreSQL function for atomic rate checking
Create a PostgreSQL function that atomically checks and increments the request count for a given identifier. The function uses a fixed-window approach: it counts requests in the current minute (or whatever window size you choose). If the count exceeds the limit, it returns false. Otherwise, it increments the count and returns true. Using a database function ensures atomicity even under concurrent requests.
```sql
CREATE OR REPLACE FUNCTION public.check_rate_limit(
  p_identifier text,
  p_endpoint text,
  p_max_requests integer DEFAULT 100,
  p_window_seconds integer DEFAULT 60
)
RETURNS boolean
LANGUAGE plpgsql
SECURITY DEFINER
SET search_path = ''
AS $$
DECLARE
  v_window_start timestamptz;
  v_current_count integer;
BEGIN
  -- Align the window start to a fixed grid of p_window_seconds
  v_window_start := to_timestamp(
    floor(extract(epoch FROM now()) / p_window_seconds) * p_window_seconds
  );

  -- Insert or increment atomically (no gap between check and update)
  INSERT INTO public.rate_limits (identifier, endpoint, request_count, window_start)
  VALUES (p_identifier, p_endpoint, 1, v_window_start)
  ON CONFLICT (identifier, endpoint, window_start)
  DO UPDATE SET request_count = public.rate_limits.request_count + 1
  RETURNING request_count INTO v_current_count;

  -- Allowed only while the count stays within the limit
  RETURN v_current_count <= p_max_requests;
END;
$$;

-- Lock down execution: only the service role may call this
REVOKE EXECUTE ON FUNCTION public.check_rate_limit FROM PUBLIC, anon, authenticated;
GRANT EXECUTE ON FUNCTION public.check_rate_limit TO service_role;
```

Expected result: The function returns true if the request is allowed or false if the rate limit has been exceeded.
Build a rate-limiting Edge Function
Create a Supabase Edge Function that acts as a middleware layer for rate-limited endpoints. The function extracts the user's identifier (from the JWT or IP address), calls the check_rate_limit database function, and either forwards the request or returns a 429 error. This pattern lets you add custom rate limiting to any endpoint without modifying your application code.
```typescript
// supabase/functions/rate-limited-api/index.ts
import { createClient } from 'npm:@supabase/supabase-js@2'

const corsHeaders = {
  'Access-Control-Allow-Origin': '*',
  'Access-Control-Allow-Headers': 'authorization, x-client-info, apikey, content-type',
}

Deno.serve(async (req) => {
  if (req.method === 'OPTIONS') {
    return new Response('ok', { headers: corsHeaders })
  }

  // Use the service role to check rate limits
  const supabase = createClient(
    Deno.env.get('SUPABASE_URL')!,
    Deno.env.get('SUPABASE_SERVICE_ROLE_KEY')!
  )

  // Get identifier from JWT or IP
  const authHeader = req.headers.get('Authorization')
  let identifier = req.headers.get('x-forwarded-for') ?? 'unknown'

  if (authHeader) {
    const { data: { user } } = await supabase.auth.getUser(
      authHeader.replace('Bearer ', '')
    )
    if (user) identifier = user.id
  }

  // Check rate limit: 100 requests per minute.
  // If the RPC itself errors, `allowed` is null and we fail closed (429).
  const { data: allowed } = await supabase.rpc('check_rate_limit', {
    p_identifier: identifier,
    p_endpoint: '/api/data',
    p_max_requests: 100,
    p_window_seconds: 60,
  })

  if (!allowed) {
    return new Response(
      JSON.stringify({ error: 'Rate limit exceeded. Try again in 60 seconds.' }),
      {
        status: 429,
        headers: {
          ...corsHeaders,
          'Content-Type': 'application/json',
          'Retry-After': '60',
        },
      }
    )
  }

  // Process the actual request
  const { data, error } = await supabase.from('your_table').select('*')
  if (error) {
    return new Response(JSON.stringify({ error: error.message }), {
      status: 500,
      headers: { ...corsHeaders, 'Content-Type': 'application/json' },
    })
  }

  return new Response(JSON.stringify(data), {
    headers: { ...corsHeaders, 'Content-Type': 'application/json' },
  })
})
```

Expected result: The Edge Function checks the rate limit before processing the request and returns 429 if the limit is exceeded.
Implement client-side rate limit handling
Your frontend should handle 429 responses gracefully instead of showing raw error messages to users. Implement exponential backoff for automatic retries and show a user-friendly message when the limit is reached. For mutation-heavy operations like form submissions, add client-side throttling to prevent rapid-fire requests that would hit the server limit.
```typescript
import { supabase } from '@/lib/supabase'

// Wrapper with retry logic for rate-limited endpoints
async function fetchWithRetry<T>(
  fn: () => Promise<{ data: T | null; error: any }>,
  maxRetries: number = 3
): Promise<T> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const { data, error } = await fn()

    if (!error) return data as T

    // Check for rate limit error
    if (error.message?.includes('429') || error.status === 429) {
      if (attempt < maxRetries) {
        const waitTime = Math.pow(2, attempt) * 1000 // 1s, 2s, 4s
        console.log(`Rate limited. Retrying in ${waitTime}ms...`)
        await new Promise(resolve => setTimeout(resolve, waitTime))
        continue
      }
      throw new Error('Rate limit exceeded. Please wait a moment and try again.')
    }

    throw error
  }
  throw new Error('Max retries exceeded')
}

// Usage
const data = await fetchWithRetry(() =>
  supabase.from('projects').select('*')
)
```

Expected result: The client automatically retries rate-limited requests with exponential backoff and shows a user-friendly message when the limit is reached.
Clean up old rate limit records
The rate_limits table will grow over time. Set up a pg_cron job to delete records that have fallen outside the rate limit window. This keeps the table small and queries fast. A job that runs every 15 minutes and removes records older than an hour is usually sufficient, since your rate limit windows are much shorter.
```sql
-- Enable pg_cron extension (if not already enabled)
CREATE EXTENSION IF NOT EXISTS pg_cron;

-- Schedule cleanup: delete rate limit records older than 1 hour
SELECT cron.schedule(
  'cleanup-rate-limits',
  '*/15 * * * *', -- Every 15 minutes
  $$DELETE FROM public.rate_limits WHERE window_start < now() - interval '1 hour'$$
);
```

Expected result: Old rate limit records are automatically deleted every 15 minutes, keeping the table small and queries fast.
Complete working example
```sql
-- =============================================
-- Complete rate limiting setup for Supabase
-- =============================================

-- 1. Create the rate limits tracking table
CREATE TABLE public.rate_limits (
  id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  identifier text NOT NULL,
  endpoint text NOT NULL,
  request_count integer DEFAULT 1,
  window_start timestamptz DEFAULT now(),
  UNIQUE(identifier, endpoint, window_start)
);

-- The UNIQUE constraint already creates an index on
-- (identifier, endpoint, window_start) for fast lookups

-- RLS enabled with no policies = service role only
ALTER TABLE public.rate_limits ENABLE ROW LEVEL SECURITY;

-- 2. Atomic rate check function
CREATE OR REPLACE FUNCTION public.check_rate_limit(
  p_identifier text,
  p_endpoint text,
  p_max_requests integer DEFAULT 100,
  p_window_seconds integer DEFAULT 60
)
RETURNS boolean
LANGUAGE plpgsql
SECURITY DEFINER
SET search_path = ''
AS $$
DECLARE
  v_window_start timestamptz;
  v_current_count integer;
BEGIN
  -- Align the window start to a fixed grid of p_window_seconds
  v_window_start := to_timestamp(
    floor(extract(epoch FROM now()) / p_window_seconds) * p_window_seconds
  );

  INSERT INTO public.rate_limits (identifier, endpoint, request_count, window_start)
  VALUES (p_identifier, p_endpoint, 1, v_window_start)
  ON CONFLICT (identifier, endpoint, window_start)
  DO UPDATE SET request_count = public.rate_limits.request_count + 1
  RETURNING request_count INTO v_current_count;

  RETURN v_current_count <= p_max_requests;
END;
$$;

REVOKE EXECUTE ON FUNCTION public.check_rate_limit FROM PUBLIC, anon, authenticated;
GRANT EXECUTE ON FUNCTION public.check_rate_limit TO service_role;

-- 3. Scheduled cleanup (requires pg_cron extension)
CREATE EXTENSION IF NOT EXISTS pg_cron;

SELECT cron.schedule(
  'cleanup-rate-limits',
  '*/15 * * * *',
  $$DELETE FROM public.rate_limits WHERE window_start < now() - interval '1 hour'$$
);
```

Common mistakes when limiting API requests in Supabase
Why it's a problem: Rate limiting enforced only on the client side can be bypassed trivially by anyone calling the API directly.
How to avoid: Always enforce rate limits on the server side (Edge Functions or database functions). Client-side throttling is a UX improvement, not a security measure.
Why it's a problem: Adding RLS policies for the anon or authenticated roles exposes the rate_limits table through the API, letting clients read or tamper with their own counters.
How to avoid: Keep the table locked down with RLS enabled and no policies. Only the service role key (used in Edge Functions) should access it.
Why it's a problem: Unhandled 429 responses surface in the client as crashes or confusing error messages.
How to avoid: Add a retry wrapper with exponential backoff for API calls. Show user-friendly messages when rate limits are hit.
Why it's a problem: Without cleanup, old rate limit records accumulate and the table grows indefinitely, slowing lookups.
How to avoid: Set up a pg_cron job to delete records older than your rate limit window. Running it every 15 minutes keeps the table small.
Best practices
- Enforce rate limits on the server side — client-side throttling is a UX improvement, not a security measure
- Use atomic database operations (INSERT ON CONFLICT) to prevent race conditions in concurrent requests
- Include Retry-After headers in 429 responses so clients know when to retry
- Implement different rate limits per endpoint based on resource cost and abuse potential
- Use user ID for authenticated endpoints and IP address for public endpoints as the rate limit identifier
- Clean up old rate limit records with pg_cron to keep the tracking table small
- Log rate limit hits for monitoring and adjusting limits based on actual usage patterns
- Consider tiered rate limits based on user subscription plans for SaaS applications
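The last practice can be sketched with a small helper that maps a subscription plan to the per-window budget passed as `p_max_requests`. The `limitForPlan` function, plan names, and numbers below are illustrative assumptions for this tutorial, not Supabase defaults:

```typescript
// Map a subscription plan to a per-minute request budget.
// Plan names and limits here are illustrative, not Supabase defaults.
type Plan = 'free' | 'pro' | 'enterprise'

const PLAN_LIMITS: Record<Plan, number> = {
  free: 60,         // 1 request/second on average
  pro: 600,         // 10 requests/second
  enterprise: 6000, // rarely throttled in practice
}

export function limitForPlan(plan: string): number {
  // Unknown or missing plans fall back to the most restrictive tier
  return PLAN_LIMITS[plan as Plan] ?? PLAN_LIMITS.free
}

// In the Edge Function, the result would be passed through to the RPC:
// supabase.rpc('check_rate_limit', { ..., p_max_requests: limitForPlan(userPlan) })
console.log(limitForPlan('pro'))     // 600
console.log(limitForPlan('unknown')) // 60
```

Falling back to the most restrictive tier for unknown plans keeps the system fail-safe if a plan name is misspelled or missing.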
Still stuck?
Copy one of these prompts to get a personalized, step-by-step explanation.
I need to implement API rate limiting in my Supabase project. Show me how to create a rate limiting table, a PostgreSQL function for atomic rate checking, and a Supabase Edge Function that enforces per-user rate limits of 100 requests per minute.
Create a complete rate limiting system in Supabase using a rate_limits table, a check_rate_limit PostgreSQL function, and an Edge Function that returns 429 errors when the limit is exceeded. Include pg_cron cleanup for old records.
Frequently asked questions
What are the default rate limits in Supabase?
The free plan allows approximately 500 requests per second and the Pro plan allows approximately 1000 requests per second to the REST API. Auth endpoints have additional limits, and the default email rate limit is 2 per hour without custom SMTP.
Can I increase the built-in Supabase rate limits?
Upgrading to a higher plan increases the limits. For custom limits beyond what the plans offer, contact Supabase support for Enterprise-level configuration, or implement custom rate limiting with Edge Functions.
Should I rate limit by IP address or user ID?
Use user ID for authenticated endpoints because it is more accurate (IPs can be shared by multiple users). Use IP address for public endpoints where users are not authenticated. You can combine both for defense in depth.
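As a sketch of the "combine both" idea, a small helper can build the identifier string passed to check_rate_limit, preferring the stable user ID and falling back to the IP. The `rateLimitIdentifier` function is a hypothetical helper for illustration, not part of supabase-js:

```typescript
// Build the rate limit identifier: prefer the user ID for authenticated
// requests, fall back to the caller's IP otherwise. Prefixes keep the
// two namespaces from colliding in the rate_limits table.
export function rateLimitIdentifier(
  userId: string | null,
  ip: string | null
): string {
  if (userId) return `user:${userId}`
  if (ip) return `ip:${ip}`
  return 'anonymous' // last resort: one shared bucket for unidentifiable callers
}

console.log(rateLimitIdentifier('42ab', '203.0.113.7')) // user:42ab
console.log(rateLimitIdentifier(null, '203.0.113.7'))   // ip:203.0.113.7
```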
How do I handle rate limits in a React application?
Wrap API calls in a retry function with exponential backoff. When a 429 response is received, wait for the duration specified in the Retry-After header before retrying. Show a user-friendly message during the wait.
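A minimal sketch of that wait-time logic, assuming the common delay-in-seconds form of Retry-After (the function name is ours, and HTTP-date values are not handled here):

```typescript
// Decide how long to wait before retrying a 429 response.
// Prefers the Retry-After header (seconds form); falls back to
// exponential backoff based on the attempt number.
export function retryDelayMs(
  retryAfterHeader: string | null,
  attempt: number
): number {
  const seconds = Number(retryAfterHeader)
  if (retryAfterHeader && Number.isFinite(seconds) && seconds >= 0) {
    return seconds * 1000
  }
  return Math.pow(2, attempt) * 1000 // 1s, 2s, 4s, ...
}

console.log(retryDelayMs('60', 0)) // 60000 — the server asked us to wait 60s
console.log(retryDelayMs(null, 2)) // 4000  — fallback: 2^2 seconds
```

Call it with `response.headers.get('Retry-After')` after receiving a 429, then wait that many milliseconds before retrying.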
Does rate limiting work with Supabase realtime subscriptions?
The built-in rate limits apply to REST API calls, not WebSocket subscriptions. Realtime connections have their own limits based on concurrent connections per project (200 on free, 500 on Pro).
Can I set different rate limits for different API endpoints?
Yes. The custom rate limiting approach in this tutorial uses an endpoint parameter in the check_rate_limit function. Pass different max_requests values for different endpoints based on their resource cost.
Can RapidDev help me implement rate limiting for my Supabase API?
Yes. RapidDev can design and implement custom rate limiting strategies including per-user limits, tiered plans, abuse detection, and monitoring for your Supabase-based application.
Talk to an Expert
Our team has built 600+ apps. Get personalized help with your project.
Book a free consultation