Supabase provides built-in performance monitoring through the Dashboard. Use the Performance Advisor for automated index suggestions, the API metrics panel for request latency and error rates, and pg_stat_statements for detailed query-level analysis. For proactive monitoring, set up external tools like Grafana or Datadog using Supabase's log drain and metrics endpoints to get alerts before issues affect users.
Monitoring Your Supabase Project for Optimal Performance
Performance problems often creep in gradually — a missing index here, an N+1 query there. By the time users complain, the issue has been building for weeks. This tutorial shows you how to use Supabase's built-in monitoring tools to catch performance issues early, understand where time is being spent, and set up external alerting for production applications.
Prerequisites
- A Supabase project with some data and traffic (even development traffic)
- Access to the Supabase Dashboard
- Basic understanding of SQL and database indexes
- For external monitoring: a Grafana, Datadog, or similar observability account (optional)
Step-by-step guide
Check the Dashboard performance advisor
Supabase includes a built-in performance advisor that analyzes your database and suggests improvements. Go to Dashboard → Database → Performance Advisor (or Indexes). The advisor scans your tables, identifies missing indexes based on query patterns, and suggests which columns to index. It also flags unused indexes that waste storage and slow down writes. Review each suggestion and apply the ones relevant to your workload.
Expected result: You see a list of suggested indexes and performance improvements specific to your database usage patterns.
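If you want to cross-check the advisor's suggestions manually, the statistics collector exposes the same underlying signal. This sketch surfaces tables that are read by sequential scan far more often than by index scan; the numeric thresholds are illustrative, not official cutoffs:

```sql
-- Tables with many sequential scans: frequent Seq Scans on large
-- tables are often a sign of a missing index.
SELECT
  relname AS table_name,
  seq_scan,
  idx_scan,
  n_live_tup AS estimated_rows
FROM pg_stat_user_tables
WHERE seq_scan > 1000       -- illustrative threshold
  AND n_live_tup > 10000    -- only tables large enough to matter
ORDER BY seq_scan DESC
LIMIT 10;
```

A table near the top of this list with a low idx_scan count is a good candidate to investigate with EXPLAIN ANALYZE before adding an index.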
Analyze slow queries with pg_stat_statements
The pg_stat_statements extension tracks execution statistics for every SQL statement. It is pre-installed on all Supabase projects. Open the SQL Editor in Dashboard and query the view to find your slowest and most frequent queries. Look at mean_exec_time for average duration, calls for frequency, and total_exec_time for cumulative impact. Focus on queries with high total_exec_time first — they have the biggest overall impact.
```sql
-- Find the 10 slowest queries by average execution time
SELECT
  calls,
  round(mean_exec_time::numeric, 2) AS avg_ms,
  round(total_exec_time::numeric, 2) AS total_ms,
  rows,
  substr(query, 1, 100) AS query_preview
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;

-- Find queries with the most cumulative time (biggest overall impact)
SELECT
  calls,
  round(mean_exec_time::numeric, 2) AS avg_ms,
  round(total_exec_time::numeric, 2) AS total_ms,
  substr(query, 1, 100) AS query_preview
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;

-- Reset statistics (useful after adding indexes to measure improvement)
SELECT pg_stat_statements_reset();
```

Expected result: You see a ranked list of queries by execution time, helping you identify which queries to optimize first.
Use EXPLAIN ANALYZE on problem queries
Once you identify a slow query from pg_stat_statements, use EXPLAIN ANALYZE to see exactly how PostgreSQL executes it. This shows whether the query uses indexes (Index Scan) or reads every row (Seq Scan), how many rows each step processes, and where time is spent. Look for sequential scans on large tables — these are prime candidates for new indexes.
```sql
-- Run EXPLAIN ANALYZE on a slow query
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT * FROM orders
WHERE user_id = 'some-uuid'
ORDER BY created_at DESC
LIMIT 20;

-- If you see "Seq Scan" on a large table, add an index:
CREATE INDEX idx_orders_user_id ON orders USING btree (user_id);

-- For queries that sort, a composite index helps:
CREATE INDEX idx_orders_user_created ON orders (user_id, created_at DESC);
```

Expected result: The query plan shows Index Scan instead of Seq Scan after adding the appropriate index, with significantly lower execution time.
Monitor API metrics in the Dashboard
Go to Dashboard → Reports → API to see request metrics for your Supabase REST API. This panel shows total requests, response times (p50, p95, p99), error rates, and request distribution by endpoint. Pay attention to the p95 and p99 latencies — these represent the experience of your slowest 5% and 1% of requests. If p99 is much higher than p50, you likely have intermittent slow queries or connection pool exhaustion.
Expected result: You can see real-time and historical API performance data including latency percentiles and error rates.
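When p99 is far above p50, pg_stat_statements can help locate the culprits from the database side. This sketch ranks queries by how much their worst case exceeds their average, assuming the Postgres 13+ column names (mean_exec_time, max_exec_time, stddev_exec_time) that current Supabase projects expose; the calls threshold is illustrative:

```sql
-- Queries whose worst case is far slower than their average:
-- high variance often explains a p99 much larger than p50.
SELECT
  calls,
  round(mean_exec_time::numeric, 2) AS avg_ms,
  round(max_exec_time::numeric, 2) AS max_ms,
  round(stddev_exec_time::numeric, 2) AS stddev_ms,
  substr(query, 1, 100) AS query_preview
FROM pg_stat_statements
WHERE calls > 100    -- illustrative: ignore one-off queries
ORDER BY max_exec_time - mean_exec_time DESC
LIMIT 10;
```

A query with a modest average but a huge maximum often points to lock contention, a cold cache, or connection pool waits rather than a missing index.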
Set up external monitoring with log drains
For production applications, Dashboard monitoring is not enough — you need alerts. Supabase supports log drains that forward logs to external services like Datadog, Grafana Cloud, or any HTTP endpoint. Go to Dashboard → Settings → Log Drains to configure a destination. Once connected, you can create alerts for error rate spikes, slow query thresholds, and connection pool saturation.
```sql
-- Check current connection usage (important for monitoring)
SELECT
  count(*) AS total_connections,
  state,
  usename
FROM pg_stat_activity
GROUP BY state, usename
ORDER BY total_connections DESC;

-- Check database size
SELECT pg_size_pretty(pg_database_size(current_database())) AS db_size;

-- Check table sizes for growth monitoring
SELECT
  schemaname,
  relname AS table_name,
  pg_size_pretty(pg_total_relation_size(relid)) AS total_size,
  n_live_tup AS row_count
FROM pg_stat_user_tables
ORDER BY pg_total_relation_size(relid) DESC
LIMIT 10;
```

Expected result: Logs flow to your external monitoring service and you can configure alerts for performance thresholds.
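As a starting point for a connection-saturation alert, usage can be expressed as a percentage of the server limit. A sketch; note that max_connections reflects the Postgres setting on your instance, which may differ from your plan's pooler limit:

```sql
-- Connection usage as a percentage of the server's limit;
-- a reasonable basis for an "above 80%" alert.
SELECT
  count(*) AS in_use,
  current_setting('max_connections')::int AS max_allowed,
  round(
    100.0 * count(*) / current_setting('max_connections')::int, 1
  ) AS pct_used
FROM pg_stat_activity;
```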
Complete working example
```sql
-- ================================================
-- Supabase Performance Monitoring SQL Toolkit
-- Run these queries in Dashboard → SQL Editor
-- ================================================

-- 1. Top 10 slowest queries by average time
SELECT
  calls,
  round(mean_exec_time::numeric, 2) AS avg_ms,
  round(total_exec_time::numeric, 2) AS total_ms,
  rows,
  substr(query, 1, 120) AS query_preview
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;

-- 2. Most frequently called queries
SELECT
  calls,
  round(mean_exec_time::numeric, 2) AS avg_ms,
  round(total_exec_time::numeric, 2) AS total_ms,
  substr(query, 1, 120) AS query_preview
FROM pg_stat_statements
ORDER BY calls DESC
LIMIT 10;

-- 3. Current connection usage
SELECT
  count(*) AS total_connections,
  state,
  usename
FROM pg_stat_activity
GROUP BY state, usename
ORDER BY total_connections DESC;

-- 4. Table sizes and row counts
SELECT
  schemaname,
  relname AS table_name,
  pg_size_pretty(pg_total_relation_size(relid)) AS total_size,
  n_live_tup AS estimated_rows
FROM pg_stat_user_tables
ORDER BY pg_total_relation_size(relid) DESC
LIMIT 15;

-- 5. Index usage statistics (find unused indexes)
SELECT
  schemaname,
  relname AS table_name,
  indexrelname AS index_name,
  idx_scan AS times_used,
  pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
ORDER BY idx_scan ASC
LIMIT 10;

-- 6. Cache hit ratio (should be > 99%)
SELECT
  round(
    100.0 * sum(heap_blks_hit) /
    nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0),
    2
  ) AS cache_hit_ratio_pct
FROM pg_statio_user_tables;

-- 7. Database size
SELECT pg_size_pretty(pg_database_size(current_database())) AS db_size;
```

Common mistakes when monitoring performance in Supabase
Mistake: Only checking average query time instead of p95/p99 percentiles
Why it's a problem: Averages hide outliers. A query averaging 50ms might have a p99 of 2 seconds.
How to avoid: Use pg_stat_statements total_exec_time divided by calls for averages, and check the Dashboard API panel for percentile breakdowns.
Mistake: Adding indexes based on guesses instead of actual query patterns
Why it's a problem: Indexes that do not match real queries waste storage and slow down writes without speeding up reads.
How to avoid: Always check pg_stat_statements first to find which queries are actually slow, then use EXPLAIN ANALYZE to confirm whether an index would help before creating one.
Mistake: Ignoring connection pool usage until the database stops accepting connections
Why it's a problem: Once the connection limit is reached, new requests fail outright rather than merely running slowly.
How to avoid: Monitor pg_stat_activity regularly. Set up alerts when connection count exceeds 80% of your plan limit. Enable Supavisor connection pooling for applications with many concurrent users.
Mistake: Not resetting pg_stat_statements after making optimizations
Why it's a problem: Old statistics stay mixed in with new ones, so you cannot tell whether an optimization actually helped.
How to avoid: Call SELECT pg_stat_statements_reset() after adding indexes or optimizing queries. This gives you a clean baseline to measure the improvement.
Best practices
- Check the Dashboard performance advisor weekly for new index suggestions as your data and queries evolve
- Focus optimization on queries with the highest total_exec_time in pg_stat_statements — they have the biggest overall impact
- Use EXPLAIN ANALYZE to confirm that new indexes are actually being used before considering the optimization done
- Monitor cache hit ratio and keep it above 99% — if it drops, your working set exceeds available memory
- Set up external log drains for production projects so you receive alerts for performance degradation
- Track connection usage and configure Supavisor pooling before hitting connection limits
- Review API error rates alongside latency — a spike in 500 errors often indicates database-level issues
Still stuck?
Copy one of these prompts to get a personalized, step-by-step explanation.
My Supabase app is getting slower as data grows. Walk me through how to find the slowest queries using pg_stat_statements, analyze them with EXPLAIN, and add the right indexes to speed them up.
Show me the SQL queries I need to run in the Supabase SQL Editor to monitor database performance: slowest queries, most frequent queries, connection usage, table sizes, cache hit ratio, and unused indexes.
Frequently asked questions
Is pg_stat_statements enabled by default on Supabase?
Yes, pg_stat_statements is pre-installed and enabled on all Supabase projects. You can query it directly from the SQL Editor without any configuration.
How often should I check performance metrics?
For production applications, set up automated alerts via log drains. For development, review the performance advisor and slow queries weekly, and always check after deploying new features that change query patterns.
What is a good cache hit ratio for Supabase?
A cache hit ratio above 99% is healthy. If it drops below 95%, your working set of data exceeds the available RAM on your plan. Consider upgrading your compute add-on or optimizing queries to access less data.
Do RLS policies affect performance?
Yes. Poorly written RLS policies can cause significant slowdowns, especially if they contain joins or call functions per-row. Wrap auth.uid() in a SELECT subquery for per-statement caching, add indexes on columns used in policies, and avoid joins inside policy expressions.
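As a sketch of that optimization, using a hypothetical todos table with a user_id column (the policy names are illustrative; in practice you would keep only the fast variant):

```sql
-- Slower: auth.uid() may be re-evaluated for every row scanned
CREATE POLICY "own rows (slow)" ON todos
  FOR SELECT USING (auth.uid() = user_id);

-- Faster: wrapping the call in a subquery lets the planner treat
-- it as a stable value evaluated once per statement
CREATE POLICY "own rows (fast)" ON todos
  FOR SELECT USING ((SELECT auth.uid()) = user_id);

-- Index the column the policy filters on
CREATE INDEX idx_todos_user_id ON todos (user_id);
```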
Can I monitor Edge Function performance separately?
Yes. Go to Dashboard → Edge Functions → select a function → Logs. You can see invocation count, execution time, error rates, and cold start frequency. For detailed monitoring, add custom logging with console.log inside your functions.
Can RapidDev help optimize my Supabase database performance?
Yes. RapidDev can audit your database schema, query patterns, and RLS policies to identify performance bottlenecks, add proper indexes, and set up monitoring dashboards for ongoing observability.
Talk to an Expert
Our team has built 600+ apps. Get personalized help with your project.
Book a free consultation