RapidDev - Software Development Agency

How to Import Data into Supabase

What you'll learn

  • How to import CSV files through the Supabase Dashboard
  • How to bulk-insert data using the JavaScript client
  • How to use psql COPY for large-scale imports
  • How to handle RLS policies during data imports
Level: Beginner · Read time: 7 min · Time to complete: 10-15 min · Requires: Supabase (all plans), @supabase/supabase-js v2+, psql 14+ · Updated: March 2026 · By: RapidDev Engineering Team
TL;DR

You can import data into Supabase in three ways: upload a CSV file directly through the Dashboard Table Editor, use the JavaScript client to bulk-insert JSON rows, or run psql COPY for large datasets. The Dashboard CSV import is the fastest way to get started. For production-scale imports, psql COPY handles millions of rows efficiently by streaming data directly into PostgreSQL.

Three Ways to Import Data into Your Supabase Database

Whether you are migrating from another database, loading spreadsheet data, or seeding a new project, Supabase offers multiple import methods. This tutorial covers the Dashboard CSV importer for quick uploads, the JS client insert method for programmatic imports, and the psql COPY command for high-volume data loading. Each method has different trade-offs for speed, convenience, and data size.

Prerequisites

  • A Supabase project with a target table created
  • Data in CSV or JSON format ready to import
  • For psql: the Supabase connection string from Dashboard → Settings → Database
  • For JS client: @supabase/supabase-js installed in your project

Step-by-step guide

1

Import a CSV file through the Dashboard

The simplest way to import data is through the Supabase Dashboard. Go to Table Editor, click the Import button (or create a new table from CSV). Select your CSV file and Supabase will auto-detect column types. You can map CSV headers to existing table columns or let Supabase create a new table with columns matching your CSV. This method works well for files up to a few thousand rows.

Expected result: Your CSV data appears in the Table Editor. Each row in the CSV becomes a row in the table.
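For reference, a minimal CSV the importer can auto-detect might look like this (the data is hypothetical; the headers match the users table used in later steps):

```csv
name,email,role
Alice,alice@example.com,admin
Bob,bob@example.com,user
```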

2

Bulk-insert data with the JavaScript client

For programmatic imports, use the Supabase JS client to insert an array of objects. The insert method accepts an array and sends them in a single request. For datasets larger than 1,000 rows, split the data into batches to avoid request timeouts and payload size limits. RLS policies apply to these inserts, so the authenticated user must have permission to insert.

typescript
import { createClient } from '@supabase/supabase-js'

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY! // Server-side only
)

// Single batch insert
const rows = [
  { name: 'Alice', email: 'alice@example.com', role: 'admin' },
  { name: 'Bob', email: 'bob@example.com', role: 'user' },
  { name: 'Carol', email: 'carol@example.com', role: 'user' },
]

const { data, error } = await supabase
  .from('users')
  .insert(rows)
  .select()

if (error) console.error('Insert failed:', error.message)

// Batched insert for large datasets
const BATCH_SIZE = 500
for (let i = 0; i < largeDataset.length; i += BATCH_SIZE) {
  const batch = largeDataset.slice(i, i + BATCH_SIZE)
  const { error } = await supabase.from('users').insert(batch)
  if (error) {
    console.error(`Batch ${i / BATCH_SIZE} failed:`, error.message)
    break
  }
}

Expected result: All rows are inserted into the table. The batched approach logs progress and stops on the first error.
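The batched loop above stops on the first error. If transient network failures are a concern, you may prefer retrying a batch with backoff before giving up. A minimal sketch, assuming a generic helper of our own (withRetry is not part of supabase-js):

```typescript
// Hypothetical helper: retry an async operation with exponential backoff.
// Waits baseDelayMs, then 2x, 4x, ... between attempts, rethrowing the
// last error once all attempts are exhausted.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn()
    } catch (err) {
      lastError = err
      if (attempt < attempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt))
      }
    }
  }
  throw lastError
}
```

Because supabase-js returns errors instead of throwing, wrap the insert so a failed batch throws, e.g. `await withRetry(async () => { const { error } = await supabase.from('users').insert(batch); if (error) throw error })`.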

3

Use psql COPY for large-scale imports

For importing thousands to millions of rows, psql COPY is the fastest method. It streams data directly into PostgreSQL without going through the REST API. Get your connection string from Dashboard → Settings → Database → Connection string (URI). Use the COPY command with your CSV file. This bypasses RLS entirely since you connect as the postgres role.

bash
# Get your connection string from Dashboard → Settings → Database
# Format: postgresql://postgres.[ref]:[password]@[host]:5432/postgres

# Import CSV with headers
psql "postgresql://postgres.[ref]:[password]@db.[ref].supabase.co:5432/postgres" \
  -c "\COPY public.users (name, email, role) FROM '/path/to/data.csv' WITH (FORMAT csv, HEADER true)"

# For tab-delimited files
psql "postgresql://postgres.[ref]:[password]@db.[ref].supabase.co:5432/postgres" \
  -c "\COPY public.users FROM '/path/to/data.tsv' WITH (FORMAT csv, DELIMITER E'\t', HEADER true)"

Expected result: The CSV data is loaded directly into PostgreSQL. psql reports the number of rows copied.

4

Handle duplicate data with upsert

If your import might contain rows that already exist in the table, use upsert instead of insert. Upsert inserts new rows and updates existing ones based on a unique constraint. Specify the onConflict column to tell Supabase which column determines uniqueness. This is essential for re-running imports without creating duplicate entries.

typescript
// Upsert: insert or update on conflict
const { data, error } = await supabase
  .from('users')
  .upsert(
    [
      { email: 'alice@example.com', name: 'Alice Updated', role: 'admin' },
      { email: 'dave@example.com', name: 'Dave', role: 'user' },
    ],
    { onConflict: 'email' }
  )
  .select()

// The email column must have a UNIQUE constraint:
// ALTER TABLE users ADD CONSTRAINT users_email_unique UNIQUE (email);

Expected result: Existing rows are updated with new values. New rows are inserted. No duplicate key errors occur.
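Under the hood, the upsert call maps to PostgreSQL's INSERT ... ON CONFLICT. The equivalent raw SQL is roughly:

```sql
INSERT INTO public.users (email, name, role)
VALUES ('alice@example.com', 'Alice Updated', 'admin')
ON CONFLICT (email)
DO UPDATE SET name = EXCLUDED.name, role = EXCLUDED.role;
```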

5

Verify imported data and check row counts

After importing, verify the data landed correctly. Use the SQL Editor in the Dashboard to run a count query and spot-check a few rows. Check for NULL values in required columns and verify that foreign key relationships are intact. For large imports, compare the row count in the database against the source file line count.

sql
-- Check total rows imported
SELECT count(*) FROM public.users;

-- Check for NULLs in required columns
SELECT count(*) FROM public.users WHERE name IS NULL OR email IS NULL;

-- Spot-check the 10 most recently created rows
SELECT * FROM public.users ORDER BY created_at DESC LIMIT 10;

Expected result: The row count matches your source data. No unexpected NULL values or missing rows.

Complete working example

import-data.ts
import { createClient } from '@supabase/supabase-js'
import { readFileSync } from 'fs'
import { parse } from 'csv-parse/sync'

// Use service role key for server-side imports (bypasses RLS)
const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
)

interface UserRow {
  name: string
  email: string
  role: string
}

async function importCSV(filePath: string, tableName: string) {
  // 1. Read and parse CSV
  const fileContent = readFileSync(filePath, 'utf-8')
  const records: UserRow[] = parse(fileContent, {
    columns: true,
    skip_empty_lines: true,
    trim: true,
  })

  console.log(`Parsed ${records.length} rows from ${filePath}`)

  // 2. Insert in batches
  const BATCH_SIZE = 500
  let inserted = 0

  for (let i = 0; i < records.length; i += BATCH_SIZE) {
    const batch = records.slice(i, i + BATCH_SIZE)

    const { error } = await supabase
      .from(tableName)
      .upsert(batch, { onConflict: 'email' })

    if (error) {
      console.error(`Batch ${Math.floor(i / BATCH_SIZE) + 1} failed:`, error.message)
      break
    }

    inserted += batch.length
    console.log(`Imported ${inserted} / ${records.length} rows`)
  }

  // 3. Verify
  const { count } = await supabase
    .from(tableName)
    .select('*', { count: 'exact', head: true })

  console.log(`Total rows in ${tableName}: ${count}`)
}

importCSV('./data/users.csv', 'users')

Common mistakes when importing data into Supabase

Mistake: Using the anon key for server-side import scripts, causing RLS to block inserts

How to avoid: Use SUPABASE_SERVICE_ROLE_KEY for server-side scripts that need to bypass RLS. The anon key respects RLS and requires matching policies for every insert.

Mistake: Sending all rows in a single insert request, causing timeouts on large datasets

How to avoid: Split imports into batches of 500-1,000 rows. This avoids request payload limits and keeps each request within timeout bounds.

Mistake: Using the pooler connection string (port 6543) for psql COPY operations

How to avoid: Use the direct connection string on port 5432. Connection pooling via Supavisor does not support the COPY protocol.
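The batching advice above (500-1,000 rows per request) can be captured in a small helper. A sketch; the name chunk is our own:

```typescript
// Split an array into consecutive batches of at most `size` rows,
// so each batch can be sent as its own insert request.
function chunk<T>(rows: T[], size: number): T[][] {
  const batches: T[][] = []
  for (let i = 0; i < rows.length; i += size) {
    batches.push(rows.slice(i, i + size))
  }
  return batches
}

// e.g. 1,250 rows with size 500 -> batches of 500, 500, and 250 rows
```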

Best practices

  • Use the Dashboard CSV import for quick one-time uploads of small datasets (under 5,000 rows)
  • Use the service role key for server-side import scripts and never expose it in client code
  • Split large imports into batches of 500-1,000 rows to avoid timeouts and payload limits
  • Use upsert with onConflict to make imports idempotent and safe to re-run
  • Always verify row counts and check for NULL values after completing an import
  • Use psql COPY for imports exceeding 10,000 rows — it is orders of magnitude faster than the REST API
  • Before massive imports, drop secondary indexes and disable user-defined triggers, then recreate and re-enable them afterward (PostgreSQL cannot disable an index in place, only drop and rebuild it)
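The last tip might look like this in SQL (a sketch; idx_users_email is a hypothetical index name, and DISABLE TRIGGER USER affects only user-defined triggers, since the postgres role on Supabase is not a superuser):

```sql
-- Before the bulk load: drop secondary indexes and pause user-defined triggers
DROP INDEX IF EXISTS idx_users_email;
ALTER TABLE public.users DISABLE TRIGGER USER;

-- ... run the import ...

-- Afterward: re-enable triggers and rebuild the index
ALTER TABLE public.users ENABLE TRIGGER USER;
CREATE INDEX idx_users_email ON public.users (email);
```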

Still stuck?

Copy one of these prompts to get a personalized, step-by-step explanation.

ChatGPT Prompt

I have a CSV file with 50,000 rows of user data (name, email, role columns). Walk me through the fastest way to import this into a Supabase table, handling duplicates by email and verifying the import was successful.

Supabase Prompt

Write a Node.js script that reads a CSV file, parses it, and imports the data into a Supabase table in batches of 500 rows using upsert. Use the service role key and log progress after each batch.

Frequently asked questions

What is the maximum CSV file size I can import through the Dashboard?

The Dashboard CSV importer handles files up to approximately 100MB. For larger files, use psql COPY which has no practical file size limit since it streams data directly to PostgreSQL.

Does the CSV import respect RLS policies?

The Dashboard CSV import runs as the postgres role, which bypasses RLS. The JS client insert respects RLS unless you use the service role key. psql COPY also bypasses RLS since it connects as the postgres user.

How do I import data with foreign key relationships?

Import the parent table first, then import the child table. Make sure the foreign key values in the child data match existing primary keys in the parent table. If they do not match, PostgreSQL will reject the rows with a foreign key violation error.

Can I import JSON data instead of CSV?

The Dashboard only supports CSV. For JSON data, use the JS client to parse the JSON array and pass it directly to the insert or upsert method. Each object in the array becomes a row.

How do I handle date formats in CSV imports?

PostgreSQL accepts ISO 8601 format (2024-01-15T10:30:00Z) natively. If your CSV uses a different format like MM/DD/YYYY, convert it to ISO 8601 before importing or use a staging table with text columns and transform with SQL.
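The staging-table approach from the answer above might be sketched like this (the signed_up column and the MM/DD/YYYY format are illustrative assumptions):

```sql
-- 1. Load the raw CSV into an all-text staging table
CREATE TABLE staging_users (name text, email text, role text, signed_up text);
-- then: \COPY staging_users FROM '/path/to/data.csv' WITH (FORMAT csv, HEADER true)

-- 2. Transform and copy into the real table, parsing the dates
INSERT INTO public.users (name, email, role, created_at)
SELECT name, email, role, to_timestamp(signed_up, 'MM/DD/YYYY')
FROM staging_users;

-- 3. Clean up
DROP TABLE staging_users;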

Can RapidDev help with complex data migrations to Supabase?

Yes. RapidDev can plan and execute data migrations from any source into Supabase, including schema mapping, data transformation, handling foreign key relationships, and verifying data integrity.
