RapidDev - Software Development Agency

How to Build a Messaging Platform with Replit

Build a channel-based messaging platform in Replit in 2-4 hours. Use Replit Agent to generate an Express + PostgreSQL app with workspace channels, real-time delivery via Server-Sent Events and PostgreSQL LISTEN/NOTIFY, thread replies, emoji reactions, and unread badge counts. Deploy on Reserved VM for always-on connections.

What you'll build

  • Workspace and channel system with public/private channels and direct messages
  • Real-time message delivery using Server-Sent Events and PostgreSQL LISTEN/NOTIFY triggers
  • Thread reply system with parent message tracking and collapsible sub-message display
  • Emoji reaction system with per-user per-message per-emoji uniqueness
  • Unread badge counts calculated from last_read_at timestamps per channel member
  • Workspace invite links with join flow for new members
  • Three-panel React layout: workspace sidebar, channel list with unread counts, message view
Advanced · 14 min read · 2-4 hours · Replit Core or higher · April 2026 · RapidDev Engineering Team

What you're building

A channel-based messaging platform is essentially Slack or Discord with your own branding and rules. Teams, communities, and customer-facing products all benefit from structured channels over a single chaotic group chat. The core difference from a simple chat widget is structure: workspaces contain channels, channels have members and history, and messages can carry threads and reactions.

Replit Agent generates the full Express backend with the workspace/channel data model. The real-time delivery system uses Server-Sent Events (SSE) instead of WebSockets. SSE is simpler to implement and works reliably with Express, but it requires a persistent connection. That's why a messaging platform must run on Reserved VM — Autoscale's scale-to-zero would drop all active SSE connections.

The real-time engine uses PostgreSQL's LISTEN/NOTIFY feature: a trigger fires pg_notify when a message is inserted, and the SSE handler forwards that notification to all connected clients subscribed to that channel. Unread counts are calculated from a last_read_at timestamp on the channel_members table — when a user opens a channel, you update that timestamp and their unread count resets to zero.

Final result

A fully functional channel-based messaging platform with workspaces, public/private channels, real-time SSE delivery, thread replies, emoji reactions, and unread badge counts — deployed on Replit Reserved VM.

Tech stack

Replit - IDE & Hosting
Express - Backend Framework
PostgreSQL - Database
Drizzle ORM - Database ORM
Server-Sent Events - Real-time Delivery
Replit Auth - Auth

Prerequisites

  • A Replit Core account or higher (Reserved VM is required for SSE connections)
  • Basic understanding of what a WebSocket or persistent HTTP connection is (no coding required)
  • Optional: a list of channel names for your initial workspace
  • Optional: design reference for the three-panel chat layout (Slack's layout is a good reference)

Build steps

1

Scaffold the project with Replit Agent

Create a new Repl and use the Agent prompt below to generate the full messaging platform structure. This is a complex build — the Agent prompt is detailed to ensure the real-time architecture is correct from the start.

prompt.txt
// Type this into Replit Agent:
// Build a channel-based messaging platform with Express and PostgreSQL using Drizzle ORM.
// Tables:
// - workspaces: id serial pk, name text not null, owner_id text not null, created_at timestamp
// - workspace_members: id serial pk, workspace_id integer FK workspaces, user_id text not null,
//   role text default 'member' (enum: owner/admin/member), display_name text,
//   avatar_url text, joined_at timestamp, unique(workspace_id, user_id)
// - channels: id serial pk, workspace_id integer FK workspaces, name text not null,
//   type text default 'public' (enum: public/private/direct), description text,
//   created_by text not null, created_at timestamp
// - channel_members: id serial pk, channel_id integer FK channels, user_id text not null,
//   last_read_at timestamp default now(), unique(channel_id, user_id)
// - messages: id serial pk, channel_id integer FK channels, sender_id text not null,
//   content text not null, type text default 'text' (enum: text/image/file),
//   file_url text, parent_id integer FK messages (null for top-level),
//   edited_at timestamp, created_at timestamp default now()
// - reactions: id serial pk, message_id integer FK messages, user_id text not null,
//   emoji text not null, unique(message_id, user_id, emoji)
// Create a PostgreSQL trigger: after INSERT on messages, fire
// pg_notify('new_message', row_to_json(NEW)::text).
// Routes: POST /api/workspaces, POST /api/workspaces/:id/invite,
// POST /api/workspaces/:id/join, GET /api/workspaces/:id/channels,
// POST /api/channels, GET /api/channels/:id/messages,
// POST /api/channels/:id/messages, PATCH /api/messages/:id,
// DELETE /api/messages/:id, POST /api/messages/:id/reactions,
// PATCH /api/channels/:id/read, GET /api/channels/:id/stream (SSE).
// React frontend with 3-panel layout: workspace sidebar, channel list with unread badges,
// message list with reactions. Use Replit Auth. Bind server to 0.0.0.0.

Pro tip: After Agent creates the schema, verify the LISTEN/NOTIFY trigger exists by running SELECT tgname FROM pg_trigger in the Replit SQL Editor. You should see the trigger on the messages table.

Expected result: A running Express app with all six tables, the NOTIFY trigger, and a three-panel React frontend. Opening two browser tabs shows the interface.
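If the trigger is missing, you can create it yourself. A sketch of the DDL, kept as a string you could pass to db.execute or client.query — the function and trigger names here are illustrative assumptions, so match them to whatever Agent actually generated:

```javascript
// Illustrative DDL for the NOTIFY trigger the Agent prompt asks for.
// Function/trigger names are assumptions -- align with your generated schema.
const NOTIFY_TRIGGER_SQL = `
CREATE OR REPLACE FUNCTION notify_new_message() RETURNS trigger AS $$
BEGIN
  PERFORM pg_notify('new_message', row_to_json(NEW)::text);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

DROP TRIGGER IF EXISTS messages_notify ON messages;
CREATE TRIGGER messages_notify
  AFTER INSERT ON messages
  FOR EACH ROW EXECUTE FUNCTION notify_new_message();
`;

module.exports = { NOTIFY_TRIGGER_SQL };
```

The CREATE OR REPLACE and DROP IF EXISTS forms make the statement safe to re-run during development.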

2

Build the SSE real-time message stream

The SSE endpoint holds the HTTP connection open and forwards PostgreSQL NOTIFY events to the client. Each client subscribes to a specific channel. When a message is inserted anywhere in that channel, the trigger fires pg_notify and all connected clients receive it instantly.

server/routes/stream.js
const express = require('express');
const { Pool } = require('pg');

const router = express.Router();

// Dedicated connection for LISTEN/NOTIFY (cannot use the regular pool)
const listenClient = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 1,
});

// Map of channel subscriptions: channelId -> Set of SSE response objects
const subscriptions = new Map();

// Initialize LISTEN once on startup
async function initNotifyListener() {
  const client = await listenClient.connect();
  await client.query('LISTEN new_message');

  client.on('notification', (msg) => {
    try {
      const message = JSON.parse(msg.payload);
      const channelId = message.channel_id;
      const subscribers = subscriptions.get(channelId);
      if (subscribers) {
        const data = `data: ${JSON.stringify(message)}\n\n`;
        subscribers.forEach(res => {
          try { res.write(data); } catch (e) { /* client disconnected */ }
        });
      }
    } catch (err) {
      console.error('NOTIFY parse error:', err.message);
    }
  });

  client.on('error', (err) => {
    console.error('LISTEN client error:', err.message);
    client.release(); // discard the broken connection
    setTimeout(initNotifyListener, 5000); // Reconnect after 5 seconds
  });
}

initNotifyListener().catch(console.error);

// GET /api/channels/:id/stream — SSE endpoint
router.get('/:id/stream', async (req, res) => {
  const channelId = parseInt(req.params.id);

  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive',
    'X-Accel-Buffering': 'no',
  });
  res.write('data: {"type":"connected"}\n\n');

  // Add to subscriptions
  if (!subscriptions.has(channelId)) subscriptions.set(channelId, new Set());
  subscriptions.get(channelId).add(res);

  // Cleanup on disconnect
  req.on('close', () => {
    subscriptions.get(channelId)?.delete(res);
    if (subscriptions.get(channelId)?.size === 0) subscriptions.delete(channelId);
  });
});

module.exports = router;

Pro tip: LISTEN/NOTIFY requires a dedicated PostgreSQL connection that stays open permanently, so you cannot run it through the shared query pool, where connections are checked out and reclaimed. The separate listenClient pool with max: 1 guarantees a single, dedicated listening connection.

Expected result: Opening GET /api/channels/1/stream in a browser tab keeps the connection open. Inserting a message via POST /api/channels/1/messages immediately sends a data: event to all connected clients subscribed to channel 1.
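Under the hood the stream is plain text. A minimal sketch of the framing logic on both ends — helper names here are my own, and in the browser EventSource handles the parsing for you:

```javascript
// Serialize a message row into an SSE frame: "data: <json>\n\n".
function formatSSEFrame(payload) {
  return `data: ${JSON.stringify(payload)}\n\n`;
}

// Parse a raw SSE chunk into JSON events, ignoring comment lines (": ping").
function parseSSEChunk(chunk) {
  return chunk
    .split('\n')
    .filter(line => line.startsWith('data: '))
    .map(line => JSON.parse(line.slice('data: '.length)));
}
```

For example, `parseSSEChunk(': ping\n\n' + formatSSEFrame({ id: 1 }))` yields `[{ id: 1 }]` — the comment line is dropped, matching how browsers treat SSE comments.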

3

Build message CRUD and the thread reply system

Messages support edit and delete for the sender. Thread replies use the parent_id column to link replies to their parent message. The message list route returns only top-level messages; thread replies are fetched separately.

server/routes/messages.js
const express = require('express');
const { eq, isNull, and, desc, sql } = require('drizzle-orm');
const { messages, reactions } = require('../../shared/schema');
const { db } = require('../db'); // adjust to wherever your Drizzle instance lives

const router = express.Router();

// GET /api/channels/:id/messages — cursor-based pagination
router.get('/:id/messages', async (req, res) => {
  const channelId = parseInt(req.params.id);
  const { before, limit = 50 } = req.query;

  const conditions = [
    eq(messages.channelId, channelId),
    isNull(messages.parentId), // Top-level only — threads fetched separately
  ];
  if (before) conditions.push(sql`${messages.id} < ${parseInt(before)}`);

  const rows = await db.select().from(messages)
    .where(and(...conditions))
    .orderBy(desc(messages.createdAt))
    .limit(parseInt(limit));

  // Get reaction counts for all messages
  const messageIds = rows.map(m => m.id);
  const reactionCounts = messageIds.length > 0 ? await db.execute(
    sql`SELECT message_id, emoji, COUNT(*) as count FROM reactions
        WHERE message_id = ANY(${messageIds}) GROUP BY message_id, emoji`
  ) : { rows: [] };

  // Attach reactions to messages
  const withReactions = rows.map(msg => ({
    ...msg,
    reactions: reactionCounts.rows.filter(r => r.message_id === msg.id),
  }));

  res.json(withReactions.reverse()); // Chronological order for the UI
});

// PATCH /api/messages/:id — edit own message
router.patch('/:id', async (req, res) => {
  const senderId = req.user?.id;
  const [updated] = await db.update(messages)
    .set({ content: req.body.content, editedAt: new Date() })
    .where(and(eq(messages.id, parseInt(req.params.id)), eq(messages.senderId, senderId)))
    .returning();
  if (!updated) return res.status(403).json({ error: 'Cannot edit this message' });
  res.json(updated);
});

// POST /api/messages/:id/reactions — add or remove emoji
router.post('/:id/reactions', async (req, res) => {
  const userId = req.user?.id;
  const messageId = parseInt(req.params.id);
  const { emoji } = req.body;

  const existing = await db.select().from(reactions)
    .where(and(
      eq(reactions.messageId, messageId),
      eq(reactions.userId, userId),
      eq(reactions.emoji, emoji)
    ));

  if (existing.length > 0) {
    await db.delete(reactions).where(eq(reactions.id, existing[0].id));
    return res.json({ added: false, emoji });
  }

  await db.insert(reactions).values({ messageId, userId, emoji });
  res.json({ added: true, emoji });
});

module.exports = router;

Expected result: GET /api/channels/1/messages returns the 50 most recent top-level messages with reaction arrays. Cursor-based pagination works by passing before=<lowest message id> from the current page.
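The client-side half of that pagination is small. A sketch (helper name is illustrative) that derives the next `before` value from the current page:

```javascript
// Given the current page of messages (each with a numeric id), compute the
// `before` cursor for fetching the next (older) page.
function nextCursor(page) {
  if (page.length === 0) return null; // no older history to fetch
  return Math.min(...page.map(m => m.id));
}
```

For example, `nextCursor([{ id: 40 }, { id: 41 }, { id: 42 }])` returns 40, so the next request would be GET /api/channels/1/messages?before=40.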

4

Add unread counts and channel read tracking

Unread counts are calculated per channel member based on last_read_at. When a user opens a channel, update their last_read_at. The channel list response includes unread_count for each channel to power the badge UI.

server/routes/channels.js
1// PATCH /api/channels/:id/read — mark channel as read
2router.patch('/:id/read', async (req, res) => {
3 const userId = req.user?.id;
4 const channelId = parseInt(req.params.id);
5
6 await db.update(channelMembers)
7 .set({ lastReadAt: new Date() })
8 .where(and(
9 eq(channelMembers.channelId, channelId),
10 eq(channelMembers.userId, userId)
11 ));
12
13 res.json({ ok: true });
14});
15
16// GET /api/workspaces/:id/channels — channel list with unread counts
17router.get('/:workspaceId/channels', async (req, res) => {
18 const userId = req.user?.id;
19 const workspaceId = parseInt(req.params.workspaceId);
20
21 const channelsWithUnread = await db.execute(sql`
22 SELECT
23 c.id, c.name, c.type, c.description,
24 COALESCE(unread.count, 0) AS unread_count
25 FROM channels c
26 INNER JOIN channel_members cm ON cm.channel_id = c.id AND cm.user_id = ${userId}
27 LEFT JOIN LATERAL (
28 SELECT COUNT(*) as count
29 FROM messages m
30 WHERE m.channel_id = c.id
31 AND m.created_at > cm.last_read_at
32 AND m.sender_id != ${userId}
33 AND m.parent_id IS NULL
34 ) unread ON true
35 WHERE c.workspace_id = ${workspaceId}
36 ORDER BY c.type, c.name ASC
37 `);
38
39 res.json(channelsWithUnread.rows);
40});

Pro tip: Update last_read_at whenever the user sends a message to a channel as well; a sender has obviously 'read' their own message. This prevents the unread count from incrementing on the sender's own messages.

Expected result: GET /api/workspaces/1/channels returns channels with unread_count badges. Opening a channel and calling PATCH /api/channels/:id/read resets that channel's unread_count to 0.
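The SQL above is the source of truth, but the same rule is easy to mirror client-side, for example to bump a badge optimistically when SSE messages arrive. A sketch under the same conditions (helper name is illustrative):

```javascript
// Client-side mirror of the unread SQL: count top-level messages newer than
// last_read_at that the current user did not send themselves.
function unreadCount(messages, lastReadAt, userId) {
  const cutoff = new Date(lastReadAt);
  return messages.filter(m =>
    m.parent_id === null &&       // top-level only, matching the SQL
    m.sender_id !== userId &&     // your own messages are never unread
    new Date(m.created_at) > cutoff
  ).length;
}
```

With this, a channel whose cached messages contain one post-cutoff message from someone else shows a badge of 1, just as the LATERAL subquery would report.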

5

Deploy on Reserved VM

Messaging requires Reserved VM. SSE connections are long-lived — they must stay open while users are active. Autoscale drops connections when instances scale to zero. Reserved VM ($10-20/month) keeps the process always running.

// server/index.js — complete setup for messaging platform
const express = require('express');
const path = require('path');
const { requireAuth } = require('@replit/repl-auth');

const channelsRouter = require('./routes/channels');
const messagesRouter = require('./routes/messages');
const workspacesRouter = require('./routes/workspaces');
const streamRouter = require('./routes/stream');

const app = express();
app.use(express.json());
app.use(requireAuth);

// Mount the SSE stream router first so GET /:id/stream matches before the REST routes
app.use('/api/channels', streamRouter); // GET /:id/stream
app.use('/api/channels', messagesRouter);
app.use('/api/workspaces', workspacesRouter);
app.use('/api/workspaces', channelsRouter);

// Serve React frontend
app.use(express.static(path.join(__dirname, '../client/dist')));
app.get('*', (req, res) => {
  res.sendFile(path.join(__dirname, '../client/dist/index.html'));
});

// IMPORTANT: bind to 0.0.0.0 for Replit
const server = app.listen(5000, '0.0.0.0', () => {
  console.log('Messaging platform running on port 5000');
});

// Keep SSE connections alive with longer timeouts
server.keepAliveTimeout = 120000;
server.headersTimeout = 121000;

Pro tip: To deploy on Reserved VM: click Deploy → Reserved VM → select the smallest VM size ($10/month). SSE connections need a persistent process — Reserved VM provides always-on Node.js without the cold starts of Autoscale.

Expected result: The app runs on Reserved VM. Multiple browser tabs can open SSE connections to /api/channels/:id/stream simultaneously. Messages sent in one tab appear in the other tab within milliseconds.

Complete code

server/routes/stream.js
const express = require('express');
const { Pool } = require('pg');

const router = express.Router();

// Dedicated listening connection — cannot be pooled
const listenPool = new Pool({ connectionString: process.env.DATABASE_URL, max: 1 });
const subscriptions = new Map(); // channelId -> Set<res>

async function initListener() {
  const client = await listenPool.connect();
  await client.query('LISTEN new_message');
  console.log('PostgreSQL LISTEN active for new_message');

  client.on('notification', (msg) => {
    try {
      const message = JSON.parse(msg.payload);
      const subs = subscriptions.get(message.channel_id);
      if (subs && subs.size > 0) {
        const data = `data: ${JSON.stringify(message)}\n\n`;
        subs.forEach(res => { try { res.write(data); } catch (e) {} });
      }
    } catch (e) { console.error('NOTIFY parse error:', e.message); }
  });

  client.on('error', (err) => {
    console.error('Listen client error:', err.message);
    client.release();
    setTimeout(initListener, 5000); // Reconnect after 5s
  });
}

initListener().catch(console.error);

// GET /api/channels/:id/stream
router.get('/:id/stream', (req, res) => {
  const channelId = parseInt(req.params.id);

  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive',
    'X-Accel-Buffering': 'no',
  });

  // Send initial heartbeat
  res.write(`data: {"type":"connected","channelId":${channelId}}\n\n`);

  // Keep-alive ping every 30s to prevent connection timeout
  const pingInterval = setInterval(() => {
    try { res.write(': ping\n\n'); } catch (e) { clearInterval(pingInterval); }
  }, 30000);

  if (!subscriptions.has(channelId)) subscriptions.set(channelId, new Set());
  subscriptions.get(channelId).add(res);

  req.on('close', () => {
    clearInterval(pingInterval);
    subscriptions.get(channelId)?.delete(res);
    if (subscriptions.get(channelId)?.size === 0) subscriptions.delete(channelId);
  });
});

module.exports = router;

Customization ideas

File and image sharing

Add a POST /api/channels/:id/upload route using Replit's object storage or an S3-compatible service. Store the file URL in messages.file_url and set messages.type to 'image' or 'file'. The React frontend renders images inline and files as download links.

Direct messages

Create a channel with type = 'direct' when two users start a DM. Use the channel_members table with exactly two user_ids. The workspace sidebar shows DM conversations separately from public channels with the recipient's display name.
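One detail worth handling explicitly: the same two users should always resolve to the same DM channel, regardless of who initiated. A sketch of a canonical lookup key (the helper name is an assumption; you might store it in a unique column or check it before INSERT):

```javascript
// Canonical key for a DM pair -- sorting the ids makes the key order-independent,
// so dmKey(a, b) and dmKey(b, a) always identify the same conversation.
function dmKey(userA, userB) {
  return [userA, userB].sort().join(':');
}
```

Looking up an existing channel by this key before creating a new one prevents duplicate DM channels between the same pair.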

Message search

Add a GET /api/workspaces/:id/search?q= route with a PostgreSQL full-text search on messages.content using tsvector. Results show the message snippet, channel name, and sender avatar so users can jump to the message context.

Workspace notifications

Add a @mentions parser to the message send route. When a message contains @username, insert a notification row for that user. Add a GET /api/notifications/mentions route that returns unread mentions with the channel and message context.
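The parser itself can be a few lines. A sketch with an assumed username charset of letters, digits, and underscores; adjust the pattern to your actual username rules:

```javascript
// Extract the unique usernames mentioned as @username in a message body.
function parseMentions(content) {
  const matches = content.match(/@([a-zA-Z0-9_]+)/g) || [];
  return [...new Set(matches.map(m => m.slice(1)))]; // strip '@', dedupe
}
```

For example, `parseMentions('hey @alice and @bob, @alice again')` returns `['alice', 'bob']`, so you would insert one notification row per distinct mentioned user.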

Common pitfalls

Pitfall: Deploying on Autoscale instead of Reserved VM

How to avoid: Deploy on Reserved VM. The persistent Node.js process keeps all SSE connections active and the LISTEN/NOTIFY client connected to PostgreSQL continuously.

Pitfall: Using a pooled connection for PostgreSQL LISTEN

How to avoid: Create a dedicated Pool with max: 1 for the LISTEN connection. Acquire it once with listenPool.connect() and never release it back to the pool.

Pitfall: Loading all messages on channel open without pagination

How to avoid: Use cursor-based pagination: load the 50 most recent messages on open. When the user scrolls to the top, fetch the next 50 messages with before=<lowest message id>.

Pitfall: Not sending keep-alive pings on SSE connections

How to avoid: Use setInterval to send an SSE comment (: ping) every 30 seconds on each active SSE connection. SSE comments are ignored by the client but keep the connection alive.

Best practices

  • Deploy on Reserved VM ($10-20/month) — SSE connections and LISTEN/NOTIFY both require a persistent, always-on Node.js process.
  • Use a dedicated PostgreSQL connection for LISTEN/NOTIFY — never share it with the regular connection pool.
  • Send keep-alive SSE pings every 30 seconds to prevent connection timeouts from browsers and proxies.
  • Use cursor-based pagination for message history rather than page numbers — cursor pagination handles new messages arriving while the user reads without skipping or duplicating.
  • Update last_read_at when users both open a channel and send a message — this prevents the unread badge from counting the sender's own messages.
  • Store Replit Auth's user ID as sender_id in messages — this ensures edit and delete routes can verify ownership with a simple WHERE sender_id = req.user.id.
  • Use the LATERAL subquery for unread counts in the channel list endpoint — it calculates all channel unread counts in a single database round-trip.

AI prompts to try

Copy these prompts to build this project faster.

ChatGPT Prompt

I'm building a messaging platform with Express and PostgreSQL. I need to implement real-time message delivery without WebSockets. Help me write: (1) a PostgreSQL trigger function that fires pg_notify('new_message', row_to_json(NEW)::text) after every INSERT on the messages table, (2) a Node.js function that creates a dedicated PostgreSQL LISTEN connection using the pg package, subscribes to 'new_message', and on notification parses the JSON payload and sends it as a Server-Sent Events (SSE) data event to all clients subscribed to that channel_id.

Build Prompt

Add a thread reply view to the messaging platform. When a user clicks Reply on a message, show an expanded thread panel on the right side of the message view. Add GET /api/messages/:id/thread to fetch all replies (WHERE parent_id = :id ORDER BY created_at ASC). The SSE stream should also deliver thread replies — the trigger fires for all messages regardless of parent_id, so the React client filters incoming SSE messages: if message.parent_id === currentThreadId, add to thread view; otherwise add to main channel view.
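The client-side filter in that prompt fits in one small function. This sketch refines the rule slightly so that a reply to some other, closed thread does not leak into the main channel list (names and return values are illustrative):

```javascript
// Decide where an incoming SSE message belongs in the UI.
// currentThreadId is the open thread's parent message id, or null if none.
function routeIncoming(message, currentThreadId) {
  if (currentThreadId !== null && message.parent_id === currentThreadId) return 'thread';
  if (message.parent_id === null) return 'channel'; // top-level message
  return 'other-thread'; // e.g. bump that parent's reply-count badge instead
}
```

So a reply arriving for the open thread goes to the thread panel, top-level messages go to the main view, and replies to other threads can update a reply count without appearing in either list.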

Frequently asked questions

Why use Server-Sent Events instead of WebSockets?

SSE is simpler to implement with Express and works natively over HTTP without upgrading the connection. It's one-directional (server to client), which is exactly what messaging needs for delivery. Clients send messages via regular POST requests; only receiving requires a persistent connection. WebSockets add complexity without benefit for this use case.

What Replit plan do I need?

A paid plan (Core or higher) is required for Reserved VM deployment. Reserved VM ($10-20/month) is non-negotiable for a messaging platform — SSE connections and PostgreSQL LISTEN both require a persistent process that Autoscale cannot provide.

How does the PostgreSQL LISTEN/NOTIFY real-time delivery work?

A PostgreSQL trigger fires pg_notify('new_message', row_to_json(NEW)::text) after every message INSERT. A dedicated Express connection listens on that channel with LISTEN new_message. When the notification arrives, the Node.js pg client fires an event, and the code forwards the message JSON to all SSE clients subscribed to that channel_id.

How are unread message counts calculated?

The channel_members table stores a last_read_at timestamp per user per channel. The unread count query counts messages where created_at > last_read_at AND sender_id != current user. Opening a channel calls PATCH /api/channels/:id/read which updates last_read_at to now(), resetting the count to zero.

Can I add file sharing to the messages?

Yes. Add a POST /api/channels/:id/upload route using Replit's built-in object storage. After upload, insert a message with type = 'file' or 'image' and the file URL in the file_url column. The React frontend renders images inline and file messages as download cards.

What happens if the LISTEN connection drops?

The pg client fires an error event. The error handler in initListener() releases the connection and calls setTimeout(initListener, 5000) to reconnect after 5 seconds. During the 5-second gap, any messages inserted will still be stored in the database — clients can refresh or poll to catch up.
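The catch-up itself can be a plain refetch of recent messages merged into local state. A sketch that dedupes by id and restores chronological order (helper name is illustrative):

```javascript
// Merge a catch-up fetch into the in-memory message list: later entries win
// per id (picking up edits), duplicates collapse, and order is restored by id.
function mergeMessages(existing, fetched) {
  const byId = new Map();
  for (const m of [...existing, ...fetched]) byId.set(m.id, m);
  return [...byId.values()].sort((a, b) => a.id - b.id);
}
```

Messages that arrived over SSE before the drop and again in the refetch appear only once, so the UI never shows duplicates after a reconnect.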

Can RapidDev help build a custom messaging platform?

Yes. RapidDev has built 600+ apps including real-time communication tools. They can add file sharing, advanced notification systems, video call integrations, and custom workspace management for your specific use case. Book a free consultation at rapidevelopers.com.

How do I support multiple workspaces for a multi-tenant SaaS?

The schema already supports multiple workspaces. Each workspace has its own channels and members. Add a workspace selector in the sidebar. The workspace_members table controls access — users only see channels for workspaces they've joined. Invite links are workspace-scoped.
