To view slow queries in Supabase, navigate to the SQL Editor in your Dashboard and query the pg_stat_statements extension, which logs execution statistics for every query. You can also use EXPLAIN ANALYZE on individual queries to see their execution plan. The Supabase Dashboard includes a Performance Advisor that flags slow queries and suggests indexes automatically.
Finding and Analyzing Slow Queries in Supabase
Slow database queries are one of the most common performance issues in Supabase projects. This tutorial teaches you how to identify slow queries using pg_stat_statements, analyze their execution plans with EXPLAIN ANALYZE, and use the Supabase Dashboard performance tools. You will also learn how to add indexes to fix the most common performance bottlenecks.
Prerequisites
- A Supabase project with tables and data
- Access to the SQL Editor in the Supabase Dashboard
- Basic understanding of SQL queries
Step-by-step guide
Enable the pg_stat_statements extension
The pg_stat_statements extension tracks execution statistics for every SQL query that runs against your database. It records the total execution time, number of calls, rows returned, and more. On most Supabase projects this extension is already enabled. If not, you can enable it from the SQL Editor. Once enabled, it begins collecting data immediately for all new queries.
```sql
-- Enable the extension (if not already active)
create extension if not exists pg_stat_statements;

-- Verify it is enabled
select * from pg_available_extensions where name = 'pg_stat_statements';
```

Expected result: The pg_stat_statements extension is listed as installed and active.
Query pg_stat_statements for the slowest queries
Once the extension is active, query it to find your slowest queries sorted by average execution time. The mean_exec_time column shows the average time per call, while total_exec_time shows the cumulative time spent on that query pattern. Focus on queries with both a high mean_exec_time and a high call count; these offer the biggest potential performance gains. The query text is normalized, with literal values replaced by $1, $2 placeholders.
```sql
-- Find the 10 slowest queries by average execution time
select
  round(mean_exec_time::numeric, 2) as avg_time_ms,
  calls,
  round(total_exec_time::numeric, 2) as total_time_ms,
  rows,
  query
from pg_stat_statements
order by mean_exec_time desc
limit 10;
```

Expected result: A list of the 10 slowest queries with their average execution time, call count, and total time.
Use EXPLAIN ANALYZE on a specific slow query
Once you identify a slow query, use EXPLAIN ANALYZE to see its execution plan. This shows exactly how PostgreSQL processes the query: which indexes it uses (or does not use), whether it performs sequential scans on large tables, and where time is spent. The output includes the planner's estimated cost, the actual execution time, and the number of rows processed at each step. Look for a Seq Scan node on a large table: this is the most common cause of slow queries.
```sql
-- Run EXPLAIN ANALYZE on a query (uses real data, safe for SELECT)
explain analyze
select * from todos
where user_id = 'some-uuid-here'
  and is_complete = false
order by created_at desc;
```

Expected result: A detailed execution plan showing whether indexes are used, the scan type (Index Scan vs Seq Scan), and actual execution times.
Check the Supabase Dashboard Performance Advisor
The Supabase Dashboard includes a built-in Performance Advisor under the Database section. It automatically analyzes your query patterns and suggests indexes for the most common slow queries. Navigate to your project Dashboard, click on Database in the left sidebar, and look for the Performance or Advisors section. It shows unused indexes (wasting space), missing indexes (causing slow scans), and tables without RLS (security risk).
Expected result: The Dashboard shows a list of performance recommendations including suggested indexes and unused index cleanup.
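You can also surface part of what the Advisor reports directly in SQL. The query below is a sketch that uses PostgreSQL's built-in pg_stat_user_indexes view to list indexes that have never been scanned since statistics were last reset; these are candidates for removal:

```sql
-- List indexes that have never been used (candidates for cleanup)
select
  schemaname,
  relname as table_name,
  indexrelname as index_name,
  pg_size_pretty(pg_relation_size(indexrelid)) as index_size,
  idx_scan
from pg_stat_user_indexes
where idx_scan = 0
order by pg_relation_size(indexrelid) desc;
```

Note that idx_scan counts only reads since the last statistics reset, so verify an index is truly unused over a representative time window before dropping it.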
Add indexes to fix slow queries
Based on the EXPLAIN ANALYZE output and Dashboard recommendations, create indexes on columns used in WHERE clauses, JOIN conditions, and ORDER BY clauses. A btree index (the default) is suitable for equality and range comparisons. For full-text search, use a GIN index; for queries into jsonb columns, use a GIN index on the jsonb column. Always compare query performance before and after adding an index by running EXPLAIN ANALYZE again.
```sql
-- Add an index on user_id for the todos table
create index idx_todos_user_id on public.todos using btree (user_id);

-- Add a composite index for common query patterns
create index idx_todos_user_complete on public.todos
  using btree (user_id, is_complete);

-- Verify the index is used
explain analyze
select * from todos
where user_id = 'some-uuid-here'
  and is_complete = false;
```

Expected result: The EXPLAIN ANALYZE output now shows Index Scan instead of Seq Scan, and the execution time is significantly reduced.
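For the non-btree cases mentioned above, the statements below sketch GIN index creation. The description text column and metadata jsonb column on todos are hypothetical examples; substitute your own schema:

```sql
-- GIN index for full-text search on a text column (hypothetical "description")
create index if not exists idx_todos_description_fts
  on public.todos using gin (to_tsvector('english', description));

-- GIN index for containment queries on a jsonb column (hypothetical "metadata")
create index if not exists idx_todos_metadata
  on public.todos using gin (metadata);

-- Queries that can use these indexes
select * from todos
where to_tsvector('english', description) @@ to_tsquery('english', 'urgent');

select * from todos
where metadata @> '{"priority": "high"}';
```

The full-text index only helps when the query's to_tsvector expression matches the indexed expression exactly, including the text search configuration ('english' here).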
Reset statistics and monitor ongoing performance
After optimizing queries, reset the pg_stat_statements data to get a clean baseline. This lets you measure the impact of your changes and catch new slow queries as your application evolves. Schedule periodic reviews of slow queries as part of your maintenance routine, especially after adding new features that introduce new query patterns.
```sql
-- Reset all statistics (requires superuser or pg_stat_statements_reset permission)
select pg_stat_statements_reset();

-- After some traffic, check again for new slow queries
select
  round(mean_exec_time::numeric, 2) as avg_time_ms,
  calls,
  query
from pg_stat_statements
where mean_exec_time > 100 -- queries slower than 100ms on average
order by mean_exec_time desc
limit 10;
```

Expected result: Statistics are reset and you have a clean baseline to measure future performance improvements.
Complete working example
```sql
-- Enable pg_stat_statements extension
create extension if not exists pg_stat_statements;

-- Find the 10 slowest queries by average execution time
select
  round(mean_exec_time::numeric, 2) as avg_time_ms,
  calls,
  round(total_exec_time::numeric, 2) as total_time_ms,
  rows,
  left(query, 200) as query_preview
from pg_stat_statements
order by mean_exec_time desc
limit 10;

-- Find queries consuming the most total time
select
  round(total_exec_time::numeric, 2) as total_time_ms,
  calls,
  round(mean_exec_time::numeric, 2) as avg_time_ms,
  left(query, 200) as query_preview
from pg_stat_statements
order by total_exec_time desc
limit 10;

-- Analyze a specific query
explain analyze
select * from todos
where user_id = 'example-uuid'
  and is_complete = false
order by created_at desc;

-- Create indexes based on findings
create index if not exists idx_todos_user_id
  on public.todos using btree (user_id);

create index if not exists idx_todos_user_complete
  on public.todos using btree (user_id, is_complete);

-- Verify index usage
explain analyze
select * from todos
where user_id = 'example-uuid'
  and is_complete = false
order by created_at desc;

-- Reset statistics for clean baseline
select pg_stat_statements_reset();
```

Common mistakes when viewing slow queries in Supabase
Mistake: Running EXPLAIN without ANALYZE.
Why it's a problem: Plain EXPLAIN only shows the estimated plan; it does not execute the query, so you get no actual execution times.
How to avoid: Use EXPLAIN ANALYZE whenever you need real performance data.
Mistake: Adding indexes on every column without checking whether they are actually used in queries.
Why it's a problem: Each index consumes disk space and slows down INSERT and UPDATE operations, so unused indexes are pure overhead.
How to avoid: Only add indexes on columns that appear in the WHERE, JOIN, and ORDER BY clauses of your most frequent or slowest queries. Check pg_stat_user_indexes to find unused indexes.
Mistake: Calling auth.uid() directly in RLS policies instead of wrapping it in a select subquery.
Why it's a problem: A bare auth.uid() call can be evaluated once per row, which slows every query against the table.
How to avoid: Use (select auth.uid()) instead of auth.uid() in RLS policies. The select wrapper enables per-statement caching, which is much faster on large tables.
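As an illustration, here is what the two policy styles look like on the todos table used throughout this tutorial (the policy names are made up):

```sql
-- Slower: auth.uid() may be re-evaluated for every row
create policy "todos_select_slow" on public.todos
  for select using (auth.uid() = user_id);

-- Faster: the subquery is evaluated once per statement and cached
create policy "todos_select_fast" on public.todos
  for select using ((select auth.uid()) = user_id);
```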
Best practices
- Enable pg_stat_statements on every Supabase project to continuously track query performance
- Review the top 10 slowest queries weekly as your application grows
- Use EXPLAIN ANALYZE to verify that indexes are being used before and after adding them
- Create composite indexes for queries that filter on multiple columns simultaneously
- Use the Supabase Dashboard Performance Advisor for automated index recommendations
- Wrap auth.uid() in select subqueries in RLS policies for per-statement caching
- Remove unused indexes periodically — they slow down write operations without benefiting reads
- Set up monitoring alerts for queries that exceed a threshold like 500ms average execution time
Still stuck?
Copy one of these prompts to get a personalized, step-by-step explanation.
My Supabase app is getting slow as data grows. Show me how to find the slowest queries using pg_stat_statements, analyze them with EXPLAIN ANALYZE, and add the right indexes to fix them.
Help me identify slow queries in my Supabase project. Show me the SQL to query pg_stat_statements for the slowest queries, explain how to read an EXPLAIN ANALYZE output, and suggest which indexes to create.
Frequently asked questions
Is pg_stat_statements enabled by default on Supabase?
On most Supabase projects it is enabled by default. If not, you can enable it by running CREATE EXTENSION IF NOT EXISTS pg_stat_statements in the SQL Editor.
Does EXPLAIN ANALYZE modify my data?
EXPLAIN ANALYZE executes the query for real. For SELECT queries this is safe. For INSERT, UPDATE, or DELETE, wrap the EXPLAIN ANALYZE in a transaction and ROLLBACK to avoid making changes.
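For example, to measure a DELETE without actually removing any rows:

```sql
begin;

explain analyze
delete from todos
where is_complete = true;

rollback;  -- the delete is undone; only the plan and timings were observed
```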
How many indexes should I create on a table?
Only create indexes that are actually used by your query patterns. Each index adds overhead to INSERT and UPDATE operations. A table with 3-5 well-chosen indexes is typical. Check pg_stat_user_indexes to find and remove unused indexes.
Why does my query still do a sequential scan after I added an index?
PostgreSQL may choose a sequential scan if the table is small, the query returns most of the rows, or the statistics are outdated. Run ANALYZE on the table to update statistics, and check that your index matches the query's WHERE clause columns.
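A quick check, using the todos table from earlier as the example:

```sql
-- Refresh the planner's statistics for the table
analyze public.todos;

-- Re-check whether the planner now picks the index
explain analyze
select * from todos
where user_id = 'example-uuid';
```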
Do RLS policies slow down queries?
Yes, RLS policies add conditions to every query. The impact is minimal for simple policies, but complex policies with joins or function calls can significantly slow queries. Always use (select auth.uid()) instead of auth.uid() for per-statement caching.
Can RapidDev help optimize my Supabase database performance?
Yes. RapidDev can audit your Supabase database, identify slow queries, optimize RLS policies, and design an indexing strategy tailored to your application's query patterns.
Talk to an Expert
Our team has built 600+ apps. Get personalized help with your project.
Book a free consultation