RapidDev - Software Development Agency

How to Monitor Performance in Supabase


What you'll learn

  • How to use the Dashboard performance advisor and database health indicators
  • How to identify slow queries using pg_stat_statements
  • How to monitor API request latency and error rates
  • How to set up external monitoring with log drains
Beginner · 7 min read · 15-20 min · Supabase (all plans, some features Pro+ only) · March 2026 · RapidDev Engineering Team
TL;DR

Supabase provides built-in performance monitoring through the Dashboard. Use the Performance Advisor for automated index suggestions, the API metrics panel for request latency and error rates, and pg_stat_statements for detailed query-level analysis. For proactive monitoring, set up external tools like Grafana or Datadog using Supabase's log drain and metrics endpoints to get alerts before issues affect users.

Monitoring Your Supabase Project for Optimal Performance

Performance problems often creep in gradually — a missing index here, an N+1 query there. By the time users complain, the issue has been building for weeks. This tutorial shows you how to use Supabase's built-in monitoring tools to catch performance issues early, understand where time is being spent, and set up external alerting for production applications.

Prerequisites

  • A Supabase project with some data and traffic (even development traffic)
  • Access to the Supabase Dashboard
  • Basic understanding of SQL and database indexes
  • For external monitoring: a Grafana, Datadog, or similar observability account (optional)

Step-by-step guide

Step 1: Check the Dashboard performance advisor

Supabase includes a built-in performance advisor that analyzes your database and suggests improvements. Go to Dashboard → Database → Performance Advisor (or Indexes). The advisor scans your tables, identifies missing indexes based on query patterns, and suggests which columns to index. It also flags unused indexes that waste storage and slow down writes. Review each suggestion and apply the ones relevant to your workload.

Expected result: You see a list of suggested indexes and performance improvements specific to your database usage patterns.
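When you decide to act on a suggestion, you can apply it yourself from the SQL Editor instead of clicking through the Dashboard. A sketch, with illustrative table and index names (replace them with the ones the advisor actually reports); note that CREATE INDEX CONCURRENTLY avoids blocking writes but cannot run inside a transaction block:

sql
-- Apply a suggested index without locking the table for writes
-- (table, column, and index names here are illustrative)
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_orders_status
  ON orders (status);

-- Remove an index the advisor flags as unused
-- (unused indexes waste storage and slow down writes)
DROP INDEX CONCURRENTLY IF EXISTS idx_orders_legacy;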

Step 2: Analyze slow queries with pg_stat_statements

The pg_stat_statements extension tracks execution statistics for every SQL statement. It is pre-installed on all Supabase projects. Open the SQL Editor in Dashboard and query the view to find your slowest and most frequent queries. Look at mean_exec_time for average duration, calls for frequency, and total_exec_time for cumulative impact. Focus on queries with high total_exec_time first — they have the biggest overall impact.

sql
-- Find the 10 slowest queries by average execution time
SELECT
  calls,
  round(mean_exec_time::numeric, 2) AS avg_ms,
  round(total_exec_time::numeric, 2) AS total_ms,
  rows,
  substr(query, 1, 100) AS query_preview
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;

-- Find queries with the most cumulative time (biggest overall impact)
SELECT
  calls,
  round(mean_exec_time::numeric, 2) AS avg_ms,
  round(total_exec_time::numeric, 2) AS total_ms,
  substr(query, 1, 100) AS query_preview
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;

-- Reset statistics (useful after adding indexes to measure improvement)
SELECT pg_stat_statements_reset();

Expected result: You see a ranked list of queries by execution time, helping you identify which queries to optimize first.

Step 3: Use EXPLAIN ANALYZE on problem queries

Once you identify a slow query from pg_stat_statements, use EXPLAIN ANALYZE to see exactly how PostgreSQL executes it. This shows whether the query uses indexes (Index Scan) or reads every row (Seq Scan), how many rows each step processes, and where time is spent. Look for sequential scans on large tables — these are prime candidates for new indexes.

sql
-- Run EXPLAIN ANALYZE on a slow query
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT * FROM orders
WHERE user_id = 'some-uuid'
ORDER BY created_at DESC
LIMIT 20;

-- If you see "Seq Scan" on a large table, add an index:
CREATE INDEX idx_orders_user_id ON orders USING btree (user_id);

-- For queries that sort, a composite index helps:
CREATE INDEX idx_orders_user_created ON orders (user_id, created_at DESC);

Expected result: The query plan shows Index Scan instead of Seq Scan after adding the appropriate index, with significantly lower execution time.

Step 4: Monitor API metrics in the Dashboard

Go to Dashboard → Reports → API to see request metrics for your Supabase REST API. This panel shows total requests, response times (p50, p95, p99), error rates, and request distribution by endpoint. Pay attention to the p95 and p99 latencies — these represent the experience of your slowest 5% and 1% of requests. If p99 is much higher than p50, you likely have intermittent slow queries or connection pool exhaustion.

Expected result: You can see real-time and historical API performance data including latency percentiles and error rates.
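The same tail-latency thinking works at the query level: pg_stat_statements also records min, max, and standard deviation per statement, so a large gap between max and mean points at intermittent slowness, much like a p99 far above p50. A sketch (the calls > 100 filter is an arbitrary threshold to skip rarely-run statements):

sql
-- Queries whose worst case is far above their average
-- (a high max/mean ratio suggests intermittent slowness)
SELECT
  calls,
  round(mean_exec_time::numeric, 2)   AS avg_ms,
  round(max_exec_time::numeric, 2)    AS max_ms,
  round(stddev_exec_time::numeric, 2) AS stddev_ms,
  substr(query, 1, 100)               AS query_preview
FROM pg_stat_statements
WHERE calls > 100
ORDER BY max_exec_time / nullif(mean_exec_time, 0) DESC
LIMIT 10;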

Step 5: Set up external monitoring with log drains

For production applications, Dashboard monitoring is not enough — you need alerts. Supabase supports log drains that forward logs to external services like Datadog, Grafana Cloud, or any HTTP endpoint. Go to Dashboard → Settings → Log Drains to configure a destination. Once connected, you can create alerts for error rate spikes, slow query thresholds, and connection pool saturation.

sql
-- Check current connection usage (important for monitoring)
SELECT
  count(*) AS total_connections,
  state,
  usename
FROM pg_stat_activity
GROUP BY state, usename
ORDER BY total_connections DESC;

-- Check database size
SELECT pg_size_pretty(pg_database_size(current_database())) AS db_size;

-- Check table sizes for growth monitoring
SELECT
  schemaname,
  relname AS table_name,
  pg_size_pretty(pg_total_relation_size(relid)) AS total_size,
  n_live_tup AS row_count
FROM pg_stat_user_tables
ORDER BY pg_total_relation_size(relid) DESC
LIMIT 10;

Expected result: Logs flow to your external monitoring service and you can configure alerts for performance thresholds.

Complete working example

performance-monitoring-queries.sql
-- ================================================
-- Supabase Performance Monitoring SQL Toolkit
-- Run these queries in Dashboard SQL Editor
-- ================================================

-- 1. Top 10 slowest queries by average time
SELECT
  calls,
  round(mean_exec_time::numeric, 2) AS avg_ms,
  round(total_exec_time::numeric, 2) AS total_ms,
  rows,
  substr(query, 1, 120) AS query_preview
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;

-- 2. Most frequently called queries
SELECT
  calls,
  round(mean_exec_time::numeric, 2) AS avg_ms,
  round(total_exec_time::numeric, 2) AS total_ms,
  substr(query, 1, 120) AS query_preview
FROM pg_stat_statements
ORDER BY calls DESC
LIMIT 10;

-- 3. Current connection usage
SELECT
  count(*) AS total_connections,
  state,
  usename
FROM pg_stat_activity
GROUP BY state, usename
ORDER BY total_connections DESC;

-- 4. Table sizes and row counts
SELECT
  schemaname,
  relname AS table_name,
  pg_size_pretty(pg_total_relation_size(relid)) AS total_size,
  n_live_tup AS estimated_rows
FROM pg_stat_user_tables
ORDER BY pg_total_relation_size(relid) DESC
LIMIT 15;

-- 5. Index usage statistics (find unused indexes)
SELECT
  schemaname,
  relname AS table_name,
  indexrelname AS index_name,
  idx_scan AS times_used,
  pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
ORDER BY idx_scan ASC
LIMIT 10;

-- 6. Cache hit ratio (should be > 99%)
SELECT
  round(
    100.0 * sum(heap_blks_hit) /
    nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0),
    2
  ) AS cache_hit_ratio_pct
FROM pg_statio_user_tables;

-- 7. Database size
SELECT pg_size_pretty(pg_database_size(current_database())) AS db_size;

Common mistakes when monitoring performance in Supabase

Mistake: Only checking average query time instead of p95/p99 percentiles

How to avoid: Averages hide outliers. A query averaging 50ms might have a p99 of 2 seconds. Use mean_exec_time in pg_stat_statements for averages, and check the Dashboard API panel for percentile breakdowns.

Mistake: Adding indexes based on guesses instead of actual query patterns

How to avoid: Always check pg_stat_statements first to find which queries are actually slow, then use EXPLAIN ANALYZE to confirm whether an index would help before creating one.

Mistake: Ignoring connection pool usage until the database stops accepting connections

How to avoid: Monitor pg_stat_activity regularly. Set up alerts when connection count exceeds 80% of your plan limit. Enable Supavisor connection pooling for applications with many concurrent users.
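A quick way to check how close you are to the limit from the SQL Editor (the 80% figure above is a rule of thumb, not a hard cutoff):

sql
-- Current connections vs. the server's max_connections limit
SELECT
  count(*) AS current_connections,
  current_setting('max_connections')::int AS max_connections,
  round(100.0 * count(*) / current_setting('max_connections')::int, 1)
    AS pct_used
FROM pg_stat_activity;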

Mistake: Not resetting pg_stat_statements after making optimizations

How to avoid: Call SELECT pg_stat_statements_reset() after adding indexes or optimizing queries. This gives you a clean baseline to measure the improvement.

Best practices

  • Check the Dashboard performance advisor weekly for new index suggestions as your data and queries evolve
  • Focus optimization on queries with the highest total_exec_time in pg_stat_statements — they have the biggest overall impact
  • Use EXPLAIN ANALYZE to confirm that new indexes are actually being used before considering the optimization done
  • Monitor cache hit ratio and keep it above 99% — if it drops, your working set exceeds available memory
  • Set up external log drains for production projects so you receive alerts for performance degradation
  • Track connection usage and configure Supavisor pooling before hitting connection limits
  • Review API error rates alongside latency — a spike in 500 errors often indicates database-level issues

Still stuck?

Copy one of these prompts to get a personalized, step-by-step explanation.

ChatGPT Prompt

My Supabase app is getting slower as data grows. Walk me through how to find the slowest queries using pg_stat_statements, analyze them with EXPLAIN, and add the right indexes to speed them up.

Supabase Prompt

Show me the SQL queries I need to run in the Supabase SQL Editor to monitor database performance: slowest queries, most frequent queries, connection usage, table sizes, cache hit ratio, and unused indexes.

Frequently asked questions

Is pg_stat_statements enabled by default on Supabase?

Yes, pg_stat_statements is pre-installed and enabled on all Supabase projects. You can query it directly from the SQL Editor without any configuration.

How often should I check performance metrics?

For production applications, set up automated alerts via log drains. For development, review the performance advisor and slow queries weekly, and always check after deploying new features that change query patterns.

What is a good cache hit ratio for Supabase?

A cache hit ratio above 99% is healthy. If it drops below 95%, your working set of data exceeds the available RAM on your plan. Consider upgrading your compute add-on or optimizing queries to access less data.

Do RLS policies affect performance?

Yes. Poorly written RLS policies can cause significant slowdowns, especially if they contain joins or call functions per-row. Wrap auth.uid() in a SELECT subquery for per-statement caching, add indexes on columns used in policies, and avoid joins inside policy expressions.
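As a sketch of the caching pattern described above (the table, column, and policy names are illustrative):

sql
-- Slower: auth.uid() may be re-evaluated for every row
CREATE POLICY "own rows" ON orders
  FOR SELECT USING (user_id = auth.uid());

-- Faster: the subquery lets the planner evaluate
-- auth.uid() once per statement instead of per row
CREATE POLICY "own rows cached" ON orders
  FOR SELECT USING (user_id = (SELECT auth.uid()));

-- Index the column the policy filters on
CREATE INDEX IF NOT EXISTS idx_orders_user_id ON orders (user_id);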

Can I monitor Edge Function performance separately?

Yes. Go to Dashboard → Edge Functions → select a function → Logs. You can see invocation count, execution time, error rates, and cold start frequency. For detailed monitoring, add custom logging with console.log inside your functions.

Can RapidDev help optimize my Supabase database performance?

Yes. RapidDev can audit your database schema, query patterns, and RLS policies to identify performance bottlenecks, add proper indexes, and set up monitoring dashboards for ongoing observability.
