To speed up queries in Supabase, start by adding indexes on columns used in WHERE clauses, JOIN conditions, and RLS policies. Use EXPLAIN ANALYZE in the SQL Editor to identify slow queries and missing indexes. Enable connection pooling via Supavisor for serverless workloads, select only the columns you need instead of using select('*'), and wrap auth.uid() calls in RLS policies with a SELECT subquery to enable per-statement caching.
Optimizing Query Performance in Supabase
Supabase is powered by PostgreSQL, which means you have access to the full range of PostgreSQL performance optimization tools. Slow queries in Supabase usually come from missing indexes, inefficient RLS policies that evaluate per-row instead of per-statement, selecting too many columns, or connection exhaustion in serverless environments. This tutorial covers the practical techniques that make the biggest impact on query speed.
Prerequisites
- A Supabase project with tables containing data
- Access to the Supabase Dashboard SQL Editor
- Basic understanding of SQL queries and indexes
- The Supabase JS client for testing query performance
Step-by-step guide
Identify slow queries with EXPLAIN ANALYZE
The first step in optimization is measurement. Use EXPLAIN ANALYZE in the SQL Editor to see how PostgreSQL executes a query, including the execution plan, time per step, and whether indexes are being used. Look for Sequential Scan on large tables — this means PostgreSQL is reading every row instead of using an index. The actual time shows milliseconds for each operation.
```sql
-- Run EXPLAIN ANALYZE on your slow query
EXPLAIN ANALYZE
SELECT * FROM orders
WHERE user_id = 'some-uuid'
  AND status = 'pending'
ORDER BY created_at DESC
LIMIT 20;

-- Look for:
-- Seq Scan (bad for large tables: needs an index)
-- Index Scan (good: using an index)
-- actual time= (milliseconds per operation)
-- rows= (how many rows were processed)
```

Expected result: EXPLAIN ANALYZE shows the execution plan. Sequential Scans on large tables indicate where indexes are needed.
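You can also fetch plans for the exact queries your app sends through the API. The JS client exposes an .explain() modifier, though note it is disabled by default on hosted projects for security and must be enabled for your roles first (see the Supabase debugging docs). A sketch, assuming the same env vars used elsewhere in this guide:

```typescript
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);

// Returns the plan for the query PostgREST actually executes,
// including RLS policy evaluation for the current role.
const { data: plan } = await supabase
  .from('orders')
  .select('id, status')
  .eq('status', 'pending')
  .explain({ analyze: true, format: 'text' });

console.log(plan);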
Add indexes on frequently filtered columns
Create B-tree indexes on columns that appear in WHERE clauses, ORDER BY, and JOIN conditions. For composite queries that filter on multiple columns, create a multi-column index. The order of columns in a composite index matters — put the most selective column first. For columns used in RLS policies (like user_id), an index is especially critical because the policy is evaluated for every query.
```sql
-- Single-column index for user_id lookups
CREATE INDEX idx_orders_user_id
ON orders USING btree (user_id);

-- Composite index for multi-column filters
CREATE INDEX idx_orders_user_status
ON orders USING btree (user_id, status);

-- Index for sorting (ORDER BY)
CREATE INDEX idx_orders_created_at
ON orders USING btree (created_at DESC);

-- Covering index (includes all selected columns to avoid a table lookup)
CREATE INDEX idx_orders_user_covering
ON orders USING btree (user_id)
INCLUDE (status, total, created_at);

-- Partial index (only indexes rows matching a condition)
CREATE INDEX idx_orders_pending
ON orders USING btree (user_id)
WHERE status = 'pending';
```

Expected result: After adding indexes, re-run EXPLAIN ANALYZE. Sequential Scans should change to Index Scans, and execution time should decrease.
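One caveat when the table is already serving traffic: a plain CREATE INDEX takes a lock that blocks writes until the build finishes. PostgreSQL's non-blocking variant is a safer choice on large production tables, with the caveat that it cannot run inside a transaction block (if the SQL Editor wraps it in one, run it via psql or as a single statement instead):

```sql
-- Build the index without blocking concurrent writes.
-- Slower to build than a plain CREATE INDEX, and must run
-- outside a transaction block.
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_orders_user_id
ON orders USING btree (user_id);
```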
Optimize RLS policies for performance
RLS policies are evaluated for every query, so poorly written policies can severely impact performance. The biggest optimization is wrapping function calls like auth.uid() in a SELECT subquery. This tells PostgreSQL to evaluate the function once per statement and cache the result, instead of re-evaluating it for every row. Also avoid per-row joins in policies; filter with an IN subquery against the related table instead.
```sql
-- SLOW: auth.uid() evaluated per row
CREATE POLICY "slow_policy" ON orders FOR SELECT
TO authenticated
USING (auth.uid() = user_id);

-- FAST: (SELECT auth.uid()) cached per statement
CREATE POLICY "fast_policy" ON orders FOR SELECT
TO authenticated
USING ((SELECT auth.uid()) = user_id);

-- SLOW: per-row EXISTS with uncached auth.uid()
CREATE POLICY "slow_team_policy" ON documents FOR SELECT
TO authenticated
USING (
  EXISTS (
    SELECT 1 FROM team_members
    WHERE team_members.user_id = auth.uid()
      AND team_members.team_id = documents.team_id
  )
);

-- FASTER: IN subquery with cached auth.uid()
CREATE POLICY "fast_team_policy" ON documents FOR SELECT
TO authenticated
USING (
  team_id IN (
    SELECT team_id FROM team_members
    WHERE user_id = (SELECT auth.uid())
  )
);
```

Expected result: Queries with RLS are noticeably faster. The policy evaluation overhead drops from O(n) to O(1) for the auth function call.
Select only the columns you need
Using select('*') in the JS client fetches every column from the table, which increases data transfer time, memory usage, and can prevent PostgreSQL from using covering indexes. Specify only the columns you need. For related data, use the nested select syntax to fetch joined data in a single query instead of making multiple round trips.
```typescript
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);

// SLOW: fetches all columns
const { data } = await supabase.from('orders').select('*');

// FAST: fetches only needed columns
const { data: optimized } = await supabase
  .from('orders')
  .select('id, status, total, created_at');

// FAST: fetch related data in one query (no extra round trip)
const { data: withItems } = await supabase
  .from('orders')
  .select('id, total, order_items(product_name, quantity)')
  .eq('user_id', userId)
  .order('created_at', { ascending: false })
  .limit(20);
```

Expected result: Queries return faster with smaller payloads. Network transfer time decreases proportionally to the data reduction.
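A related trap is counting rows. With count: 'exact', PostgreSQL scans every matching row just to produce the number; for large tables, the client's 'estimated' (or 'planned') count uses the query planner's statistics instead and is far cheaper when an approximate figure is acceptable. A sketch, reusing the client from above:

```typescript
// head: true skips fetching rows entirely and returns only the count.
// 'estimated' reads planner statistics instead of scanning the table.
const { count } = await supabase
  .from('orders')
  .select('*', { count: 'estimated', head: true });
```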
Enable connection pooling for serverless workloads
Serverless functions create new database connections on every invocation, which can quickly exhaust the connection limit. Supabase provides Supavisor, a connection pooler, available in transaction mode on port 6543 and session mode on port 5432. Use the transaction-mode connection string for serverless environments (Vercel, Netlify, Edge Functions) and session mode or a direct connection for long-running processes.
```bash
# For serverless environments (Vercel, Netlify, etc.), use the pooled
# connection string (port 6543), found in
# Dashboard → Settings → Database → Connection string → Transaction mode

# .env for serverless (transaction mode)
DATABASE_URL="postgresql://postgres.xxxx:password@aws-0-us-east-1.pooler.supabase.com:6543/postgres"

# .env for long-running processes (session mode, port 5432)
DATABASE_URL="postgresql://postgres.xxxx:password@aws-0-us-east-1.pooler.supabase.com:5432/postgres"

# For Prisma behind the transaction pooler, add pgbouncer=true:
# DATABASE_URL="postgresql://...@...pooler.supabase.com:6543/postgres?pgbouncer=true"
```

Expected result: Serverless functions share database connections through the pooler instead of each creating a new connection.
Use specialized indexes for full-text search and JSON queries
B-tree indexes work for equality and range comparisons, but other query patterns need specialized index types. Use GIN indexes for full-text search (tsvector columns), JSONB containment queries, and array operations. Use GiST indexes for geometric and range data types. These indexes can dramatically speed up queries that B-tree cannot optimize.
```sql
-- GIN index for full-text search
CREATE INDEX idx_articles_fts
ON articles USING gin (to_tsvector('english', title || ' ' || body));

-- GIN index for JSONB containment queries
CREATE INDEX idx_products_metadata
ON products USING gin (metadata);

-- Usage: find products with specific metadata
SELECT * FROM products
WHERE metadata @> '{"category": "electronics"}';

-- GIN index for array columns
CREATE INDEX idx_posts_tags
ON posts USING gin (tags);

-- Usage: find posts with a specific tag
SELECT * FROM posts
WHERE tags @> ARRAY['supabase'];
```

Expected result: Full-text search and JSONB queries use the GIN index instead of sequential scans, reducing query time from seconds to milliseconds on large tables.
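One detail worth knowing about the expression index above: it only helps queries that repeat exactly the same to_tsvector(...) expression. An alternative on PostgreSQL 12+ is a stored generated column, which keeps the vector in sync automatically and makes queries simpler. A sketch, assuming the articles table from the example above:

```sql
-- Keep a tsvector in sync automatically with a stored generated column
ALTER TABLE articles
ADD COLUMN fts tsvector
GENERATED ALWAYS AS (
  to_tsvector('english', title || ' ' || coalesce(body, ''))
) STORED;

CREATE INDEX idx_articles_fts_col ON articles USING gin (fts);

-- Query the column directly; the GIN index is used
SELECT id, title FROM articles
WHERE fts @@ websearch_to_tsquery('english', 'supabase indexes');
```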
Complete working example
```sql
-- Supabase Query Performance Optimization Script
-- Run these in the SQL Editor to optimize your database

-- 1. Find tables with heavy sequential scan activity (likely missing indexes)
SELECT
  schemaname,
  relname AS table_name,
  seq_scan,
  seq_tup_read,
  idx_scan,
  n_live_tup AS row_count
FROM pg_stat_user_tables
WHERE seq_scan > 100
  AND n_live_tup > 1000
  AND idx_scan < seq_scan
ORDER BY seq_tup_read DESC;

-- 2. Create indexes for common query patterns
CREATE INDEX IF NOT EXISTS idx_orders_user_id
ON orders USING btree (user_id);

CREATE INDEX IF NOT EXISTS idx_orders_status_created
ON orders USING btree (status, created_at DESC);

CREATE INDEX IF NOT EXISTS idx_profiles_user_id
ON profiles USING btree (id);

-- 3. Optimize RLS policies (drop the slow one, create the fast one)
-- DROP POLICY IF EXISTS "slow_policy" ON orders;
CREATE POLICY "optimized_select" ON orders FOR SELECT
TO authenticated
USING ((SELECT auth.uid()) = user_id);

-- 4. Check index usage statistics
SELECT
  indexrelname AS index_name,
  relname AS table_name,
  idx_scan AS times_used,
  pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
ORDER BY idx_scan DESC;

-- 5. Find unused indexes (candidates for removal)
SELECT
  indexrelname AS index_name,
  relname AS table_name,
  idx_scan AS times_used,
  pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE idx_scan = 0
  AND indexrelname NOT LIKE '%pkey%'
ORDER BY pg_relation_size(indexrelid) DESC;

-- 6. Analyze tables to update statistics for the query planner
ANALYZE orders;
ANALYZE profiles;
```

Common mistakes when optimizing query speed in Supabase
Mistake: Using auth.uid() directly in RLS policies instead of wrapping it in (SELECT auth.uid())
How to avoid: Always use (SELECT auth.uid()) in RLS policies. The SELECT wrapper enables per-statement caching, avoiding re-evaluation for every row.
Mistake: Creating too many indexes, which slows down INSERT, UPDATE, and DELETE operations
How to avoid: Only index columns that appear in frequent WHERE, ORDER BY, and JOIN clauses. Check pg_stat_user_indexes to find and remove unused indexes.
Mistake: Using select('*') in the JS client when only a few columns are needed
How to avoid: Specify only the columns you need: select('id, name, status'). This reduces data transfer and can enable covering index usage.
Mistake: Using direct database connections in serverless functions instead of the connection pooler
How to avoid: Use the pooled connection string (port 6543) for serverless workloads to share connections and avoid hitting the max connections limit.
Best practices
- Run EXPLAIN ANALYZE on slow queries before making optimization changes to establish a baseline
- Add btree indexes on columns used in WHERE clauses, especially user_id columns referenced in RLS policies
- Always use (SELECT auth.uid()) instead of auth.uid() in RLS policies for per-statement caching
- Select only the columns you need instead of using select('*')
- Use the pooled connection string for serverless environments and direct connections for long-running processes
- Run ANALYZE on tables after bulk data changes to update the query planner's statistics
- Monitor the Dashboard Query Performance page regularly to catch new slow queries
- Use partial indexes for queries that frequently filter on a specific value (WHERE status = 'active')
Still stuck?
Copy one of these prompts to get a personalized, step-by-step explanation.
My Supabase queries are slow on a table with 100,000+ rows. The table has RLS enabled with policies using auth.uid(). Walk me through using EXPLAIN ANALYZE, creating the right indexes, optimizing my RLS policies, and checking connection pooling configuration.
Analyze my Supabase database for performance issues. Find tables with high sequential scan counts, suggest indexes based on common query patterns, and generate optimized RLS policies that use SELECT-wrapped function calls for per-statement caching.
Frequently asked questions
How do I know which columns to index?
Index columns that appear in WHERE clauses, ORDER BY, JOIN conditions, and RLS policies. Use EXPLAIN ANALYZE to confirm a query uses Sequential Scan on a large table — that column needs an index.
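Beyond inspecting individual plans, you can let the database tell you where time is actually going. The pg_stat_statements extension (enabled by default on Supabase, and the data source behind the Dashboard's Query Performance page) aggregates timing per query shape:

```sql
-- Top 10 queries by average execution time
SELECT
  calls,
  round(mean_exec_time::numeric, 2) AS avg_ms,
  round(total_exec_time::numeric, 2) AS total_ms,
  query
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
```

Queries that rank high here are the ones worth running EXPLAIN ANALYZE on and indexing first.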
Do indexes slow down writes?
Yes, every index adds a small overhead to INSERT, UPDATE, and DELETE operations because the index must be updated too. The tradeoff is almost always worth it for read-heavy workloads, but avoid creating unnecessary indexes.
What is the difference between btree and GIN indexes?
B-tree indexes are best for equality (=), range (<, >), and sorting queries. GIN indexes are best for full-text search, JSONB containment (@>), and array operations. Use the index type that matches your query pattern.
Why are my queries fast in the SQL Editor but slow from the API?
The SQL Editor runs as the postgres role, which owns your tables and therefore bypasses RLS. API queries run as anon or authenticated, which must evaluate RLS policies. Optimize your RLS policies by wrapping function calls in SELECT subqueries and adding indexes on policy columns.
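To reproduce API-like behavior in the SQL Editor, you can impersonate the authenticated role for a single transaction. A sketch (the UUID is a placeholder; substitute a real id from auth.users):

```sql
begin;
set local role authenticated;
-- Simulate the JWT claims PostgREST would set for this user
select set_config('request.jwt.claims',
  '{"sub": "00000000-0000-0000-0000-000000000000", "role": "authenticated"}',
  true);

explain analyze
select * from orders;  -- RLS policies now apply, as they do via the API

rollback;
```

This lets you measure the real cost of a policy before and after rewriting it with (SELECT auth.uid()).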
How many connections does my Supabase plan allow?
Direct connection limits depend on your project's compute size rather than the plan tier alone: the smallest instances allow roughly 60 direct connections, and larger compute tiers allow more. Using the connection pooler (port 6543) lets you handle many more concurrent clients by sharing a smaller pool of actual database connections.
Can RapidDev help optimize my Supabase database performance?
Yes. RapidDev can audit your database schema, query patterns, and RLS policies to identify performance bottlenecks, create optimized indexes, and configure connection pooling for your specific workload.
Talk to an Expert
Our team has built 600+ apps. Get personalized help with your project.
Book a free consultation