RapidDev - Software Development Agency

How to Build a Data Visualization Tool with Replit

Build a Tableau-style interactive chart builder in Replit in 1-2 hours. Upload CSV files, define datasets, create line, bar, pie, scatter, and radar charts with drag-and-drop axis configuration, and arrange charts into shareable public dashboards. Uses Express, PostgreSQL with Drizzle ORM, and Recharts for rendering.

What you'll build

  • CSV upload endpoint that auto-detects column types (string, number, date) and stores rows in PostgreSQL
  • Dynamic query engine that translates chart config (axis, aggregate, filters) into safe SQL using Drizzle
  • REST API for datasets, charts, and dashboards with user isolation via Replit Auth
  • Chart builder UI with dataset selector, chart type picker, and axis dropdowns populated from column definitions
  • Grid-layout dashboard builder where saved charts can be positioned and resized
  • Public dashboard sharing via unique share codes with no-auth access
Intermediate · 14 min read · 1-2 hours · Replit Free · April 2026 · RapidDev Engineering Team

What you're building

Most teams need to visualize data but can't afford Tableau or don't want to share spreadsheets with BI vendors. This project is a self-hosted chart builder where users upload CSV files and create interactive charts without writing SQL. The result is a Metabase-lite that lives entirely in your Replit account.

Replit Agent generates the entire Express backend in one prompt — file upload with Multer, CSV parsing with csv-parse, column type detection, and a dynamic query engine that translates chart configurations into parameterized SQL through Drizzle. No raw string concatenation, no SQL injection risk.

The architecture is straightforward: datasets live in two tables (metadata + data_rows), charts store their config as JSONB, and dashboards store chart positions as a JSONB grid layout. The query engine runs server-side aggregations so large datasets never overwhelm the browser. Deploy on Autoscale — analytics dashboards are used intermittently and scale-to-zero keeps costs at zero when idle.
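To make the two JSONB payloads concrete, here is a sketch of what a saved chart's config and a dashboard's grid layout might look like (field names follow the schema described in the build steps; the values are illustrative):

```javascript
// Illustrative chart config, stored in charts.config (JSONB)
const chartConfig = {
  xAxis: 'month',          // column name from the dataset's column_definitions
  yAxis: 'revenue',        // must be a numeric column
  groupBy: 'region',       // optional grouping column
  aggregate: 'sum',        // one of: sum, count, avg, min, max
  filters: [{ column: 'region', operator: 'eq', value: 'EMEA' }],
  colorScheme: '#2563eb'
};

// Illustrative dashboard layout, stored in dashboards.charts (JSONB)
const dashboardLayout = {
  charts: [
    { chartId: 1, x: 0, y: 0, w: 6, h: 4 },  // positions in grid units, not pixels
    { chartId: 2, x: 6, y: 0, w: 6, h: 4 }
  ]
};
```

Because only configs and positions are stored, re-rendering a chart always reruns the query against the current dataset rows.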

Final result

A working chart builder where users upload CSVs, configure line/bar/pie/scatter/radar charts, and arrange them into shareable dashboards — all running on Replit's built-in PostgreSQL.

Tech stack

  • Replit: IDE & hosting
  • Express: backend framework
  • PostgreSQL: database
  • Drizzle ORM: database ORM
  • Replit Auth: authentication
  • Multer: file upload middleware
  • csv-parse: CSV parsing

Prerequisites

  • A Replit account (free tier is sufficient)
  • Basic understanding of what CSV files and charts are (no coding experience needed)
  • Sample CSV data to test with (any spreadsheet export works)
  • No external API keys required for the core build

Build steps

Step 1: Set up the Express project with Replit Agent

Open a new Replit and use the Agent to scaffold the full backend. This one prompt generates the Express server, Drizzle schema, and all API routes so you can focus on customization rather than boilerplate.

prompt.txt
// Prompt to type into Replit Agent:
// Build a data visualization tool with Express and PostgreSQL using Drizzle ORM.
// Create these tables in shared/schema.ts:
// - datasets: id serial pk, user_id text, name text, source_type text default 'csv',
//   row_count integer, column_definitions jsonb, data_preview jsonb, created_at timestamp
// - data_rows: id serial pk, dataset_id integer references datasets, row_data jsonb not null
// - charts: id serial pk, user_id text, dataset_id integer references datasets,
//   name text, chart_type text (line/bar/area/pie/scatter/radar),
//   config jsonb (x_axis, y_axis, group_by, aggregate, filters, color_scheme), created_at timestamp
// - dashboards: id serial pk, user_id text, name text, charts jsonb,
//   is_public boolean default false, share_code text unique, created_at timestamp
// Set up Replit Auth middleware. Install multer and csv-parse for file handling.
// Create server/index.js with all routes and start the server on 0.0.0.0.

Pro tip: Replit Agent works best with a single detailed prompt. Include the full schema and all route names in one message rather than multiple follow-up prompts — it produces more consistent code.

Expected result: Agent creates the full project structure: shared/schema.ts with all tables, server/index.js with route stubs, and installs multer and csv-parse automatically.

Step 2: Build the CSV upload and column detection endpoint

The upload endpoint is the foundation of the app. It parses the CSV, detects column types by sampling values, and stores everything in PostgreSQL. This is where Replit's built-in file handling shines.

server/routes/datasets.js
const express = require('express');
const multer = require('multer');
const { parse } = require('csv-parse/sync');
const { db } = require('../db');
const { datasets, dataRows } = require('../../shared/schema');

const router = express.Router();
const upload = multer({ storage: multer.memoryStorage(), limits: { fileSize: 10 * 1024 * 1024 } });

function detectType(values) {
  const sample = values.filter(Boolean).slice(0, 20);
  if (sample.length === 0) return 'string'; // all-empty column: default to string
  if (sample.every(v => !isNaN(Number(v)))) return 'number';
  if (sample.every(v => !isNaN(Date.parse(v)))) return 'date';
  return 'string';
}

router.post('/api/datasets/upload', upload.single('file'), async (req, res) => {
  try {
    const rows = parse(req.file.buffer.toString(), { columns: true, skip_empty_lines: true });
    if (rows.length === 0) return res.status(400).json({ error: 'CSV is empty' });

    const columnNames = Object.keys(rows[0]);
    const columnDefinitions = columnNames.map(name => ({
      name,
      type: detectType(rows.map(r => r[name]))
    }));

    const [dataset] = await db.insert(datasets).values({
      userId: req.user.id,
      name: req.file.originalname.replace('.csv', ''),
      sourceType: 'csv',
      rowCount: rows.length,
      columnDefinitions,
      dataPreview: rows.slice(0, 10)
    }).returning();

    // Batch insert in groups of 500 to avoid oversized statements
    for (let i = 0; i < rows.length; i += 500) {
      const batch = rows.slice(i, i + 500).map(row => ({ datasetId: dataset.id, rowData: row }));
      await db.insert(dataRows).values(batch);
    }

    res.json({ id: dataset.id, name: dataset.name, rowCount: dataset.rowCount, columnDefinitions });
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

module.exports = router;

Pro tip: The 500-row batch limit prevents PostgreSQL from timing out on large files. For very large CSVs (50K+ rows), enforce a row count limit on upload and tell users to split their data.

Expected result: Uploading a CSV returns a dataset object with detected column types. Check Drizzle Studio (the database icon in the Replit sidebar) to see rows inserted into data_rows.
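The type-detection heuristic is worth understanding on its own: it samples up to 20 non-empty values per column and falls through number, then date, then string. Restated as a standalone function (same logic as the route above, plus the empty-column guard) so you can experiment with it:

```javascript
// Standalone copy of the route's column type heuristic.
function detectType(values) {
  const sample = values.filter(Boolean).slice(0, 20); // ignore empty cells
  if (sample.length === 0) return 'string';           // all-empty column defaults to string
  if (sample.every(v => !isNaN(Number(v)))) return 'number';
  if (sample.every(v => !isNaN(Date.parse(v)))) return 'date';
  return 'string';
}

detectType(['10', '20', '30']);            // 'number'
detectType(['2024-01-05', '2024-02-01']);  // 'date'
detectType(['north', 'south']);            // 'string'
```

Note that numeric strings are checked before dates, so a column of plain integers is classified as 'number' even though some engines would also parse them as dates.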

Step 3: Build the dynamic query engine

The query engine translates a chart's JSON config (axis selection, aggregate, filters) into a safe SQL query using Drizzle's query builder. This is the core of the app — it must handle GROUP BY aggregations without SQL injection.

server/lib/queryEngine.js
const { db } = require('../db');
const { sql, eq, and } = require('drizzle-orm');
const { dataRows } = require('../../shared/schema');

const AGGREGATES = { sum: 'SUM', count: 'COUNT', avg: 'AVG', min: 'MIN', max: 'MAX' };

async function executeChartQuery(datasetId, config) {
  const { xAxis, yAxis, groupBy, aggregate, filters = [] } = config;
  const agg = AGGREGATES[aggregate] || 'SUM';

  // Build a safe parameterized query via Drizzle's sql template
  const conditions = [eq(dataRows.datasetId, datasetId)];
  for (const f of filters) {
    if (f.operator === 'eq') conditions.push(sql`row_data->>${f.column} = ${f.value}`);
    if (f.operator === 'gte') conditions.push(sql`(row_data->>${f.column})::numeric >= ${Number(f.value)}`);
    if (f.operator === 'lte') conditions.push(sql`(row_data->>${f.column})::numeric <= ${Number(f.value)}`);
    if (f.operator === 'like') conditions.push(sql`row_data->>${f.column} ILIKE ${'%' + f.value + '%'}`);
  }

  const groupCol = groupBy || xAxis;
  const result = await db.execute(
    sql`SELECT
          row_data->>${sql.raw(`'${groupCol}'`)} AS label,
          ${sql.raw(agg)}((row_data->>${sql.raw(`'${yAxis}'`)})::numeric) AS value
        FROM data_rows
        WHERE ${and(...conditions)}
        GROUP BY row_data->>${sql.raw(`'${groupCol}'`)}
        ORDER BY value DESC
        LIMIT 500`
  );

  return result.rows;
}

module.exports = { executeChartQuery };

Pro tip: Never build SQL strings with user-provided column names using plain string concatenation. The sql.raw() calls here are safe because column names come from the stored column_definitions (validated on upload), not from raw user input at query time.

Expected result: The query engine returns formatted {label, value} pairs ready for Recharts to consume. Test it in Drizzle Studio by checking that GROUP BY results match your expectations.
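The safety of the sql.raw() calls depends entirely on column names being validated before they reach the query engine. A guard like the following (a hypothetical validateColumns helper, not part of the generated code, assuming columnDefinitions as stored on the dataset record) makes that precondition explicit:

```javascript
// Hypothetical guard: reject any chart config that references a column
// not present in the dataset's stored column_definitions.
// Call this before passing names into sql.raw().
function validateColumns(config, columnDefinitions) {
  const known = new Set(columnDefinitions.map(c => c.name));
  const referenced = [config.xAxis, config.yAxis, config.groupBy]
    .concat((config.filters || []).map(f => f.column))
    .filter(Boolean); // groupBy and filters are optional
  const unknown = referenced.filter(name => !known.has(name));
  if (unknown.length > 0) {
    throw new Error(`Unknown column(s): ${unknown.join(', ')}`);
  }
  return true;
}
```

Calling this at the top of the chart data route turns a malicious column name into a 4xx error before any SQL is built.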

Step 4: Create chart and dashboard CRUD routes

Charts save their configuration — not the data. Dashboards save chart positions as a grid layout. This keeps storage minimal and allows reconfiguring charts without re-uploading data.

prompt.txt
// Prompt to type into Replit Agent:
// Add these routes to server/routes/charts.js:
// POST /api/charts — save chart config: {name, datasetId, chartType, config}
//   Config object: {xAxis, yAxis, groupBy, aggregate, filters, colorScheme}
//   Return the saved chart with id
// GET /api/charts — list user's charts with dataset name joined
// GET /api/charts/:id/data — execute the chart query using queryEngine.js,
//   return {chartType, config, data: [{label, value}]}
// DELETE /api/charts/:id — delete chart and remove from any dashboards
//
// Add these routes to server/routes/dashboards.js:
// POST /api/dashboards — create dashboard: {name, charts: [{chartId, x, y, w, h}]}
//   Generate share_code as 8-char alphanumeric using crypto.randomBytes(4).toString('hex')
// GET /api/dashboards — list user's dashboards
// PUT /api/dashboards/:id — update layout (charts array with positions)
// PATCH /api/dashboards/:id/publish — toggle is_public boolean
// GET /api/public/d/:shareCode — public route, no auth required,
//   return dashboard with all chart data pre-fetched via executeChartQuery

Expected result: Charts can be saved, listed, and their data fetched independently. Dashboards store the grid layout. The public share route works without authentication.

Step 5: Build the React chart builder frontend

The frontend is the key UX: dataset selector, chart type icons, and axis dropdowns populated from the dataset's column_definitions. Recharts renders the preview in real time as users configure their chart.

prompt.txt
// Prompt to type into Replit Agent:
// Build a React chart builder at client/src/pages/ChartBuilder.jsx:
// 1. Dataset selector dropdown — fetches GET /api/datasets on mount
// 2. When a dataset is selected, fetch its column_definitions and populate axis dropdowns
// 3. Chart type picker — icon buttons for: line, bar, area, pie, scatter, radar
//    Use simple SVG icons or emoji labels (e.g. 📈 Line, 📊 Bar)
// 4. Axis configuration panel:
//    - X Axis: dropdown of all column names
//    - Y Axis: dropdown of numeric columns only (filter by type === 'number')
//    - Aggregate: select (sum, count, avg, min, max)
//    - Group By: optional dropdown
//    - Color Scheme: input for a color hex string
// 5. Preview button — calls GET /api/charts/:id/data or POST /api/charts/preview
//    Render the result with recharts:
//    - line/area → LineChart/AreaChart with XAxis, YAxis, Tooltip, Line/Area
//    - bar → BarChart with Bar
//    - pie → PieChart with Pie and Cell per segment
//    - scatter → ScatterChart with Scatter
//    - radar → RadarChart with Radar and PolarGrid
// 6. Save button — POST /api/charts with name input, redirect to dashboard builder

Pro tip: Install recharts via the Replit package manager: click the Packages icon in the sidebar and search for 'recharts'. Replit installs it without touching the terminal.

Expected result: The chart builder shows a live preview when axes are configured. Changing chart type re-renders the same data in the new format.
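One detail in the prompt above that trips people up is the Y-axis dropdown: it should offer only numeric columns, since aggregates like SUM and AVG make no sense over strings. A minimal sketch of the filter (a hypothetical helper; the Agent-generated component may inline this):

```javascript
// Return just the names of numeric columns from a dataset's
// column_definitions, for populating the Y-axis dropdown.
function numericColumns(columnDefinitions) {
  return columnDefinitions
    .filter(c => c.type === 'number')
    .map(c => c.name);
}

numericColumns([
  { name: 'region', type: 'string' },
  { name: 'revenue', type: 'number' },
  { name: 'order_date', type: 'date' }
]); // ['revenue']
```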

Step 6: Deploy on Autoscale and test the full flow

Before deploying, add all environment variables to the Secrets panel. Then deploy using Autoscale — the app scales to zero when idle, which is ideal for intermittent analytics usage.

prompt.txt
// Prompt to type into Replit Agent:
// Add a PostgreSQL retry wrapper to server/db.js:
//   const { drizzle } = require('drizzle-orm/node-postgres');
//   const { Pool } = require('pg');
//   const pool = new Pool({
//     connectionString: process.env.DATABASE_URL,
//     max: 5,
//     idleTimeoutMillis: 30000,
//     connectionTimeoutMillis: 5000
//   });
//   pool.on('error', (err) => console.error('DB pool error', err));
//   module.exports.db = drizzle(pool);
//
// Also ensure server/index.js binds to 0.0.0.0 not localhost:
//   app.listen(process.env.PORT || 3000, '0.0.0.0', ...)
//
// Open the Secrets panel (lock icon in sidebar) and add:
//   DATABASE_URL — already set by Replit's built-in PostgreSQL
//   SESSION_SECRET — any random 32-character string
// Then click Deploy → Autoscale in the top-right menu.

Pro tip: Replit's built-in PostgreSQL goes to sleep after 5 minutes of idle. The Pool config above with connectionTimeoutMillis ensures the first query after sleep retries automatically rather than crashing.

Expected result: The app is live at your Replit deployment URL. Upload a CSV, create a chart, build a dashboard, and copy the public share link to verify it works without login.
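If you want explicit retries on top of the pool's connection timeout, a small wrapper like this can absorb the cold-start failure after a database sleep. This is a hypothetical withRetry helper, not part of the Agent-generated code:

```javascript
// Hypothetical helper: retry an async DB call a few times with a short
// delay, to absorb the first-query failure after the built-in
// PostgreSQL wakes from sleep.
async function withRetry(fn, { retries = 2, delayMs = 500 } = {}) {
  let lastErr;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
  throw lastErr; // all attempts failed
}
```

Usage would look like `const result = await withRetry(() => db.execute(query));` inside the query engine or individual routes.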

Complete code

server/lib/queryEngine.js
const { db } = require('../db');
const { sql, eq, and } = require('drizzle-orm');
const { dataRows } = require('../../shared/schema');

const AGGREGATES = { sum: 'SUM', count: 'COUNT', avg: 'AVG', min: 'MIN', max: 'MAX' };

async function executeChartQuery(datasetId, config) {
  const { xAxis, yAxis, groupBy, aggregate = 'sum', filters = [], limit = 500 } = config;
  const agg = AGGREGATES[aggregate] || 'SUM';
  const groupCol = groupBy || xAxis;

  const conditions = [eq(dataRows.datasetId, datasetId)];

  for (const f of filters) {
    switch (f.operator) {
      case 'eq':
        conditions.push(sql`row_data->>${f.column} = ${f.value}`);
        break;
      case 'gte':
        conditions.push(sql`(row_data->>${f.column})::numeric >= ${Number(f.value)}`);
        break;
      case 'lte':
        conditions.push(sql`(row_data->>${f.column})::numeric <= ${Number(f.value)}`);
        break;
      case 'like':
        conditions.push(sql`row_data->>${f.column} ILIKE ${'%' + f.value + '%'}`);
        break;
    }
  }

  const result = await db.execute(
    sql`SELECT
          row_data->>${sql.raw(`'${groupCol}'`)} AS label,
          ${sql.raw(agg)}((row_data->>${sql.raw(`'${yAxis}'`)})::numeric) AS value
        FROM data_rows
        WHERE ${and(...conditions)}
        GROUP BY row_data->>${sql.raw(`'${groupCol}'`)}
        ORDER BY value DESC
        LIMIT ${limit}`
  );

  return result.rows.map(r => ({ label: r.label, value: Number(r.value) }));
}

module.exports = { executeChartQuery };

Customization ideas

JSON and API data sources

Add a source_type='api' option where users paste a URL. The Express backend fetches the URL, detects if it returns a JSON array, and maps it to the same column_definitions format — no CSV upload required.

Scheduled dashboard snapshots

Add a Scheduled Deployment that runs nightly, queries each dashboard's charts, and stores a snapshot JSONB in a dashboard_snapshots table. Users can browse historical versions to see how metrics changed over time.

Chart annotations

Add an annotations table (chart_id, x_value, label, color). In the chart renderer, render vertical line markers at annotated x positions — useful for flagging events like product launches or outages on a time-series chart.

Export to CSV and PNG

Add an export button: CSV export streams the query results through the Express response with Content-Disposition header. PNG export uses html2canvas on the frontend to capture the Recharts SVG and trigger a browser download.
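For the CSV half of that export, the server only needs to serialize the already-aggregated {label, value} pairs. A minimal sketch (a hypothetical toCsv helper with simplified quoting, assuming labels contain no embedded newlines):

```javascript
// Minimal CSV serializer for aggregated chart rows.
// Quotes a field only when it contains a comma or a double quote.
function toCsv(rows) {
  const escape = v => {
    const s = String(v);
    return /[",]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
  };
  const header = 'label,value';
  const lines = rows.map(r => `${escape(r.label)},${escape(r.value)}`);
  return [header, ...lines].join('\n');
}
```

The Express route would then set `Content-Type: text/csv` and `Content-Disposition: attachment; filename="chart.csv"` before sending the string.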

Common pitfalls

Pitfall: Sending all data_rows to the frontend and aggregating in JavaScript

How to avoid: Always aggregate on the PostgreSQL side in the query engine. The frontend only receives the final {label, value} pairs — typically under 500 rows.

Pitfall: Building SQL queries with string template literals using user column names

How to avoid: Column names used in sql.raw() must come from the stored column_definitions array (validated and sanitized on upload), never from raw query parameters.

Pitfall: Uploading the entire CSV in a single INSERT statement

How to avoid: Batch inserts in groups of 500 rows as shown in the upload route. This also prevents request timeouts on large files.

Pitfall: Forgetting to handle the PostgreSQL sleep reconnection on first query

How to avoid: Configure the Pool with connectionTimeoutMillis: 5000 and idleTimeoutMillis: 30000. The pool automatically reconnects on the next query after a sleep.

Best practices

  • Store chart configurations as JSONB — not data snapshots. Recalculate chart data on every view request so charts stay current when datasets are updated.
  • Cap dataset size at 100,000 rows on upload and enforce it with a row count check before inserting into data_rows. Replit's free PostgreSQL has a 10GB limit.
  • Use Replit Auth for user isolation — every dataset, chart, and dashboard query must include a WHERE user_id = req.user.id condition to prevent cross-user data access.
  • Add an index on data_rows (dataset_id) so GROUP BY queries scan only the relevant dataset's rows, not the entire table.
  • Use Drizzle Studio (database icon in Replit sidebar) to inspect data_rows content and debug query issues without writing SQL manually.
  • For the public dashboard share, cache the full chart data response in the dashboards table (as a last_rendered_data JSONB field) and refresh it on a 1-hour timer to avoid recalculating on every public view.
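The 1-hour refresh in that last practice reduces to a single staleness check on the cached field. A sketch (hypothetical isStale helper, assuming the dashboard record also stores a last_rendered_at timestamp alongside last_rendered_data):

```javascript
// Hypothetical staleness check: refresh the cached public-dashboard
// payload only when the cache is older than maxAgeMs (default 1 hour).
function isStale(lastRenderedAt, now = Date.now(), maxAgeMs = 60 * 60 * 1000) {
  if (!lastRenderedAt) return true; // never rendered yet
  return now - new Date(lastRenderedAt).getTime() > maxAgeMs;
}
```

The public route would check `isStale(dashboard.lastRenderedAt)` and only call executeChartQuery for each chart when it returns true, writing the fresh payload and timestamp back to the row.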

AI prompts to try

Copy these prompts to build this project faster.

ChatGPT Prompt

I'm building a data visualization tool with Express and PostgreSQL on Replit. I store CSV data in a data_rows table where each row has a row_data JSONB column. I need a query engine that accepts a chart config object {xAxis, yAxis, aggregate, groupBy, filters} and returns aggregated {label, value} pairs. The column names in xAxis/yAxis come from a validated column_definitions array. Help me write a safe query using Drizzle ORM's sql template tag that does GROUP BY and aggregation on JSONB fields without SQL injection risk.

Build Prompt

Extend the data visualization app with a filter builder UI. Add a filter panel to the chart builder where users can add up to 5 filter rows, each with a column selector (from column_definitions), an operator dropdown (equals, greater than, less than, contains), and a value input. Store filters in the chart config JSONB. The query engine already handles the filters array — just build the UI that generates it.

Frequently asked questions

What size CSV files can I upload?

The upload endpoint is configured with a 10MB file size limit, which typically covers around 50,000–100,000 rows depending on column count. For larger files, split your CSV before uploading. Replit's free PostgreSQL has a 10GB storage limit total.

Can I connect to an external database instead of uploading CSVs?

Yes. Add a source_type='postgres' option to the datasets table. The Express route accepts a table name from your external DB, connects using a connection string stored in Replit Secrets, and copies the rows into data_rows. Note that Replit has dynamic outbound IPs — your external DB must whitelist 0.0.0.0/0 with strong authentication.

Why does the query use sql.raw() for column names — isn't that unsafe?

The column names passed to sql.raw() come exclusively from the column_definitions stored in the datasets table, which were validated and sanitized during the CSV upload. They never come from raw user input at query time. User-supplied filter values use parameterized Drizzle expressions (not sql.raw) so they are fully safe.

Do I need a paid Replit plan for this?

No. The free Replit plan includes built-in PostgreSQL and Autoscale deployment. The only limitation is that the PostgreSQL database sleeps after 5 minutes of inactivity, which causes a brief cold-start delay on the first query. The retry wrapper in db.js handles this automatically.

Can dashboards be embedded in other websites?

Yes. The public dashboard share endpoint (GET /api/public/d/:shareCode) returns JSON data that can be consumed by any frontend. For iframe embedding, add a dedicated /embed/:shareCode HTML route that serves a minimal page with just the charts and no navigation.

How do I update a dataset without losing all my charts?

Upload the new CSV as a new dataset, then edit each chart and switch its dataset_id to the new one. If the column names are the same, the chart configs (xAxis, yAxis, etc.) will work without changes. Adding a 'latest version' pointer on the dataset record would automate this — a useful extension to build.

Can RapidDev help build a custom data visualization tool for my business?

Yes. RapidDev has built 600+ apps and can help you extend this into a production-grade analytics platform with custom data connectors, scheduled reports, and team collaboration features. Book a free consultation at rapidevelopers.com.

Should I use Autoscale or Reserved VM for deployment?

Autoscale is the right choice for a data visualization tool. Usage is intermittent — users log in, build charts, then leave. Autoscale scales to zero between sessions, which means you pay nothing when nobody is using the app. Use Reserved VM only if you add real-time features like live data feeds or WebSocket connections.
