
How to Build a Recommendations Engine with V0

Build a product recommendations engine with V0 using Next.js, Supabase with pgvector, and OpenAI embeddings for collaborative and content-based filtering. It features fast cosine similarity search, cold-start fallback strategies, cached recommendations, and an admin performance dashboard, all buildable in about 2-4 hours.

What you'll build

  • Vector embedding pipeline that generates 1536-dimension embeddings for items via OpenAI API
  • Cosine similarity recommendations using pgvector's <=> operator for sub-second search
  • Collaborative filtering based on user interaction patterns (views, clicks, purchases, ratings)
  • Cold-start fallback using content-based filtering for new users with no interaction history
  • Recommendations cache with TTL expiration to avoid recomputing on every request
  • Admin dashboard with Recharts comparing algorithm performance across variants
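The pgvector `<=>` operator returns cosine distance, which is 1 minus cosine similarity. A minimal TypeScript sketch of the math, purely for intuition (pgvector computes this in C over indexed vectors):

```typescript
// Cosine similarity: dot product of two vectors divided by the product of
// their magnitudes. Identical directions → 1, orthogonal → 0.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0
  let normA = 0
  let normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

// What pgvector's <=> operator computes: cosine distance = 1 - similarity,
// so ORDER BY embedding <=> query ASC returns the most similar items first.
function cosineDistance(a: number[], b: number[]): number {
  return 1 - cosineSimilarity(a, b)
}
```

This is why the SQL in step 5 orders ascending by `<=>` but reports `1 - (embedding <=> ...)` as the similarity score.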
Advanced · 10 min read · 2-4 hours · V0 Premium or higher · April 2026 · RapidDev Engineering Team

What you're building

Personalized recommendations drive engagement and revenue; a widely cited estimate attributes roughly 35% of Amazon's revenue to its recommendation engine. Whether you are building an e-commerce store, a content platform, or a learning app, showing users relevant items based on their behavior dramatically increases conversions.

V0 accelerates this build by generating the API routes for embedding generation and recommendation serving, the tracking system for user interactions, and the admin dashboard for monitoring algorithm performance. Supabase with pgvector handles the heavy lifting of vector similarity search.

The architecture uses Next.js App Router with an Edge-optimized recommendation endpoint, API routes for embedding generation and interaction tracking, pgvector for cosine similarity search, a recommendations cache for performance, and Recharts for algorithm comparison dashboards.

Final result

A hybrid recommendations engine combining collaborative and content-based filtering, with vector embeddings for item similarity, cached recommendation serving, and an admin dashboard for monitoring algorithm performance.

Tech stack

V0: AI Code Generator
Next.js: Full-Stack Framework
Tailwind CSS: Styling
shadcn/ui: Component Library
Supabase: Database
pgvector: Vector Similarity Search
OpenAI: Embeddings API

Prerequisites

  • A V0 account (Premium recommended for prompt queuing)
  • A Supabase project with pgvector extension enabled (free tier works)
  • An OpenAI API key for generating embeddings (pay-as-you-go pricing)
  • A catalog of items to recommend (products, content, or any entity)

Build steps

Step 1: Set up the project and vector database schema

Open V0 and create a new project. Use the Connect panel to add Supabase. Enable pgvector extension and create the schema for items with embeddings, user interactions, preferences, and recommendation cache.

prompt.txt
// Paste this prompt into V0's AI chat:
// Build a recommendations engine. Enable pgvector extension in Supabase and create schema:
// 1. items: id (uuid PK), name (text), description (text), category (text), metadata (jsonb), embedding (vector(1536)), created_at (timestamptz)
// 2. user_interactions: id (uuid PK), user_id (uuid FK), item_id (uuid FK), interaction_type (text check view/click/purchase/rating), rating (integer check 1-5 nullable), created_at (timestamptz) with index on (user_id, interaction_type)
// 3. user_preferences: id (uuid PK), user_id (uuid FK unique), preferred_categories (text[]), interaction_weights (jsonb), updated_at (timestamptz)
// 4. recommendations_cache: id (uuid PK), user_id (uuid FK), item_ids (uuid[]), algorithm (text), score (numeric[]), generated_at (timestamptz), expires_at (timestamptz)
// Create IVFFlat index: CREATE INDEX ON items USING ivfflat (embedding vector_cosine_ops) WITH (lists = 100);
// RLS: users can read all items, read/write their own interactions and preferences.
// Generate SQL migration and TypeScript types.

Expected result: Supabase is connected with pgvector enabled. All four tables are created with an IVFFlat index on the embedding column for fast similarity search.
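The generated TypeScript types might look roughly like this. These names are an assumption mirroring the schema in the prompt; in practice you would use the types Supabase's generator emits:

```typescript
// Hypothetical row types for the four tables above (for illustration only;
// the real types come from Supabase's type generation).
type InteractionType = 'view' | 'click' | 'purchase' | 'rating'

interface Item {
  id: string
  name: string
  description: string
  category: string
  metadata: Record<string, unknown>
  embedding: number[] | null // vector(1536); null until the pipeline runs
  created_at: string
}

interface UserInteraction {
  id: string
  user_id: string
  item_id: string
  interaction_type: InteractionType
  rating: number | null // 1-5, only meaningful for 'rating' interactions
  created_at: string
}

interface RecommendationsCacheRow {
  id: string
  user_id: string
  item_ids: string[]
  algorithm: string
  score: number[]
  generated_at: string
  expires_at: string
}
```

Note that `embedding` is nullable: items exist before step 2's pipeline fills in their vectors, which is why the SQL in step 5 filters on `embedding IS NOT NULL`.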

Step 2: Build the embedding generation pipeline

Create an API route that generates vector embeddings for item descriptions using the OpenAI Embeddings API. This converts text descriptions into 1536-dimension vectors stored in the items table for similarity search.

app/api/embeddings/generate/route.ts
import { NextRequest, NextResponse } from 'next/server'
import { createClient } from '@supabase/supabase-js'

// Service-role client: this route runs server-side only.
const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
)

export async function POST(req: NextRequest) {
  const { item_id } = await req.json()

  const { data: item } = await supabase
    .from('items')
    .select('name, description, category, metadata')
    .eq('id', item_id)
    .single()

  if (!item) {
    return NextResponse.json({ error: 'Item not found' }, { status: 404 })
  }

  // Concatenate the text fields the embedding should capture
  const textToEmbed = `${item.name}. ${item.description}. Category: ${item.category}.`

  const embeddingRes = await fetch('https://api.openai.com/v1/embeddings', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'text-embedding-3-small',
      input: textToEmbed,
    }),
  })

  if (!embeddingRes.ok) {
    return NextResponse.json({ error: 'Embedding generation failed' }, { status: 502 })
  }

  const embeddingData = await embeddingRes.json()
  const embedding = embeddingData.data[0].embedding

  await supabase
    .from('items')
    .update({ embedding })
    .eq('id', item_id)

  return NextResponse.json({ success: true, dimensions: embedding.length })
}

Pro tip: Store OPENAI_API_KEY in the Vars tab without NEXT_PUBLIC_ prefix — it is a secret key that must only be used server-side in API routes.

Expected result: POST /api/embeddings/generate with an item_id generates a 1536-dimension embedding via OpenAI and stores it in the items table for similarity search.
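The embedding input is a plain concatenation of the item's text fields. Factored out as a pure helper (a sketch, not part of the generated route), it is easy to unit-test and to reuse in a batch backfill script for an existing catalog:

```typescript
// Sketch of the text-assembly step from the route above, extracted as a
// pure function so it can be tested without hitting OpenAI or Supabase.
interface EmbeddableItem {
  name: string
  description: string
  category: string
}

function buildEmbeddingInput(item: EmbeddableItem): string {
  return `${item.name}. ${item.description}. Category: ${item.category}.`
}
```

Keeping name, description, and category in one string means a single embedding captures all three signals, so "similar items" reflects both wording and category.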

Step 3: Create the recommendation generation endpoint

Build the core recommendation API that uses pgvector cosine similarity for content-based filtering and user interaction patterns for collaborative filtering. Includes cold-start fallback for new users.

app/api/recommendations/generate/route.ts
import { NextRequest, NextResponse } from 'next/server'
import { createClient } from '@supabase/supabase-js'

export const runtime = 'edge'

export async function GET(req: NextRequest) {
  const supabase = createClient(
    process.env.SUPABASE_URL!,
    process.env.SUPABASE_SERVICE_ROLE_KEY!
  )

  const userId = req.nextUrl.searchParams.get('user_id')
  const limit = parseInt(req.nextUrl.searchParams.get('limit') ?? '10', 10)

  if (!userId) {
    return NextResponse.json({ error: 'user_id required' }, { status: 400 })
  }

  // Check cache first (maybeSingle avoids an error when no cache row exists)
  const { data: cached } = await supabase
    .from('recommendations_cache')
    .select('item_ids, algorithm')
    .eq('user_id', userId)
    .gt('expires_at', new Date().toISOString())
    .order('generated_at', { ascending: false })
    .limit(1)
    .maybeSingle()

  if (cached) {
    const { data: items } = await supabase
      .from('items')
      .select('id, name, description, category, metadata')
      .in('id', cached.item_ids.slice(0, limit))

    return NextResponse.json({ items, algorithm: cached.algorithm, cached: true })
  }

  // Get the user's most recent interaction history
  const { data: interactions } = await supabase
    .from('user_interactions')
    .select('item_id, interaction_type, rating')
    .eq('user_id', userId)
    .order('created_at', { ascending: false })
    .limit(50)

  const interactedIds = interactions?.map((i) => i.item_id) ?? []

  let algorithm = 'content-based'
  let recommendedIds: string[] = []

  if (interactedIds.length >= 5) {
    // Collaborative: find similar users and recommend their items
    algorithm = 'collaborative'
    const { data: recs } = await supabase.rpc('collaborative_recommendations', {
      target_user_id: userId,
      exclude_ids: interactedIds,
      rec_limit: limit,
    })
    recommendedIds = recs?.map((r: { item_id: string }) => r.item_id) ?? []
  }

  if (recommendedIds.length < limit) {
    // Content-based fallback: find items similar to the user's most recent item
    const topItemId = interactedIds[0] ?? null
    if (topItemId) {
      const { data: similar } = await supabase.rpc('similar_items', {
        target_item_id: topItemId,
        exclude_ids: [...interactedIds, ...recommendedIds],
        match_count: limit - recommendedIds.length,
      })
      recommendedIds.push(...(similar?.map((s: { id: string }) => s.id) ?? []))
      algorithm = interactedIds.length >= 5 ? 'hybrid' : 'content-based'
    }
  }

  // Cache the result with a one-hour TTL
  if (recommendedIds.length > 0) {
    await supabase.from('recommendations_cache').insert({
      user_id: userId,
      item_ids: recommendedIds,
      algorithm,
      expires_at: new Date(Date.now() + 3600000).toISOString(),
    })
  }

  const { data: items } = await supabase
    .from('items')
    .select('id, name, description, category, metadata')
    .in('id', recommendedIds)

  return NextResponse.json({ items, algorithm, cached: false })
}

Expected result: GET /api/recommendations/generate?user_id=xxx returns personalized recommendations using collaborative filtering for active users and content-based fallback for new users, with caching.
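The core of this endpoint is a fill-and-dedupe step: collaborative results first, content-based results to fill the remainder, never repeating an item or recommending something the user already touched. A pure sketch of that logic, separated from the Supabase calls:

```typescript
// Merge collaborative and content-based candidates into a single ranked
// list: collaborative first, dedupe, exclude interacted items, cap at limit.
function mergeRecommendations(
  collaborative: string[],
  contentBased: string[],
  interacted: string[],
  limit: number
): string[] {
  const seen = new Set(interacted)
  const out: string[] = []
  for (const id of [...collaborative, ...contentBased]) {
    if (out.length >= limit) break
    if (!seen.has(id)) {
      seen.add(id)
      out.push(id)
    }
  }
  return out
}
```

Putting collaborative results ahead of content-based ones encodes the same priority as the route: behavioral signals win when available, embeddings fill the gaps.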

Step 4: Build the interaction tracking Server Action

Create a Server Action that records user interactions (views, clicks, purchases, ratings) on items. This data feeds both the collaborative filtering algorithm and the content-based similarity recommendations.

app/actions/recommendations.ts
'use server'

import { createClient } from '@/lib/supabase/server'
import { revalidatePath } from 'next/cache'

export async function trackInteraction(input: {
  user_id: string
  item_id: string
  interaction_type: 'view' | 'click' | 'purchase' | 'rating'
  rating?: number
}) {
  const supabase = await createClient()

  await supabase.from('user_interactions').insert({
    user_id: input.user_id,
    item_id: input.item_id,
    interaction_type: input.interaction_type,
    rating: input.rating ?? null,
  })

  // Invalidate the recommendations cache for this user
  await supabase
    .from('recommendations_cache')
    .delete()
    .eq('user_id', input.user_id)

  revalidatePath('/')
}

Expected result: Every user interaction is logged and the recommendation cache for that user is invalidated, ensuring fresh recommendations on next request.
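The schema's `interaction_weights` jsonb column implies that different interaction types carry different strength. One plausible weighting scheme (the specific numbers are an assumption; tune them against real engagement data):

```typescript
// Hypothetical default weights: stronger signals (purchases, high ratings)
// count more toward a user's preference profile than passive views.
const DEFAULT_WEIGHTS: Record<string, number> = {
  view: 1,
  click: 2,
  purchase: 5,
  rating: 3,
}

// Aggregate a preference score from a list of interactions. Rating
// interactions are scaled by the rating itself (1-5 maps to 0.2x-1x).
function interactionScore(
  interactions: { interaction_type: string; rating: number | null }[],
  weights: Record<string, number> = DEFAULT_WEIGHTS
): number {
  return interactions.reduce((sum, i) => {
    const base = weights[i.interaction_type] ?? 0
    const factor = i.interaction_type === 'rating' && i.rating ? i.rating / 5 : 1
    return sum + base * factor
  }, 0)
}
```

A score like this can rank a user's interacted items so the content-based fallback seeds from their strongest signal rather than simply their most recent one.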

Step 5: Create the Supabase RPC functions for similarity search

Build the PostgreSQL functions that power the recommendation algorithms. The similar_items function uses pgvector cosine distance, and collaborative_recommendations finds items liked by similar users.

prompt.txt
// Paste this prompt into V0's AI chat:
// Create Supabase SQL migration with these functions:
// 1. similar_items(target_item_id uuid, exclude_ids uuid[], match_count int):
// SELECT id, name, description, category, 1 - (embedding <=> (SELECT embedding FROM items WHERE id = target_item_id)) as similarity
// FROM items
// WHERE id != target_item_id AND id != ALL(exclude_ids) AND embedding IS NOT NULL
// ORDER BY embedding <=> (SELECT embedding FROM items WHERE id = target_item_id)
// LIMIT match_count;
// 2. collaborative_recommendations(target_user_id uuid, exclude_ids uuid[], rec_limit int):
// Find users who interacted with same items as target user,
// then return items those similar users interacted with but target user has not,
// ranked by number of similar-user interactions.
// 3. Ensure the IVFFlat index exists on items.embedding column for fast vector search.
// Also create the API route at app/api/recommendations/user/[userId]/route.ts that returns cached or fresh recommendations.

Pro tip: The IVFFlat index with 100 lists gives sub-second cosine similarity search for up to 1 million items. For larger catalogs, increase the lists parameter.

Expected result: Two Supabase RPC functions are created: similar_items for content-based filtering and collaborative_recommendations for collaborative filtering. Both are optimized with the IVFFlat index.
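To make the SQL's intent concrete, here is an in-memory TypeScript sketch of what collaborative_recommendations computes (a teaching aid, not a replacement for the RPC, which does this in one query over indexed tables):

```typescript
// Neighbors = users who share at least one item with the target user.
// Candidates = items those neighbors touched that the target has not,
// ranked by how many neighbors touched them.
function collaborativeRecommend(
  interactions: { user_id: string; item_id: string }[],
  targetUserId: string,
  limit: number
): string[] {
  const targetItems = new Set(
    interactions.filter((i) => i.user_id === targetUserId).map((i) => i.item_id)
  )
  const neighbors = new Set(
    interactions
      .filter((i) => i.user_id !== targetUserId && targetItems.has(i.item_id))
      .map((i) => i.user_id)
  )
  const counts = new Map<string, number>()
  for (const i of interactions) {
    if (neighbors.has(i.user_id) && !targetItems.has(i.item_id)) {
      counts.set(i.item_id, (counts.get(i.item_id) ?? 0) + 1)
    }
  }
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, limit)
    .map(([itemId]) => itemId)
}
```

The SQL version expresses the same three stages as a self-join on user_interactions with a GROUP BY and ORDER BY count.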

Complete code

app/api/recommendations/generate/route.ts
import { NextRequest, NextResponse } from 'next/server'
import { createClient } from '@supabase/supabase-js'

export const runtime = 'edge'

export async function GET(req: NextRequest) {
  const supabase = createClient(
    process.env.SUPABASE_URL!,
    process.env.SUPABASE_SERVICE_ROLE_KEY!
  )

  const userId = req.nextUrl.searchParams.get('user_id')
  const limit = parseInt(req.nextUrl.searchParams.get('limit') ?? '10', 10)

  if (!userId) {
    return NextResponse.json({ error: 'user_id required' }, { status: 400 })
  }

  // Serve from cache when a fresh entry exists
  const { data: cached } = await supabase
    .from('recommendations_cache')
    .select('item_ids, algorithm')
    .eq('user_id', userId)
    .gt('expires_at', new Date().toISOString())
    .limit(1)
    .maybeSingle()

  if (cached) {
    const { data: items } = await supabase
      .from('items')
      .select('id, name, description, category')
      .in('id', cached.item_ids.slice(0, limit))
    return NextResponse.json({ items, algorithm: cached.algorithm })
  }

  const { data: interactions } = await supabase
    .from('user_interactions')
    .select('item_id')
    .eq('user_id', userId)
    .limit(50)

  const interactedIds = interactions?.map((i) => i.item_id) ?? []
  const topItem = interactedIds[0]

  // Cold start: no history at all, fall back to popular items
  if (!topItem) {
    const { data: popular } = await supabase
      .from('items')
      .select('id, name, description, category')
      .limit(limit)
    return NextResponse.json({ items: popular, algorithm: 'popular' })
  }

  const { data: similar } = await supabase.rpc('similar_items', {
    target_item_id: topItem,
    exclude_ids: interactedIds,
    match_count: limit,
  })

  return NextResponse.json({
    items: similar,
    algorithm: 'content-based',
  })
}

Customization ideas

Add real-time recommendation updates

Use Supabase Realtime to listen for new interactions and refresh the recommendation widget without page reload when users browse.

Build a recommendation explanation feature

Show users why each item was recommended ("Because you viewed X", "Users like you also liked Y") using the interaction and similarity data.

Add diversity controls

Implement category-based diversification so recommendations don't cluster around a single category, ensuring users discover new types of items.
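One simple diversification strategy is a category round-robin: bucket the ranked candidates by category, then take one item per category per round. A sketch, assuming the input list is already sorted by score:

```typescript
// Interleave recommendations across categories so the final list doesn't
// cluster around one dominant category. Preserves within-category order.
function diversify<T extends { id: string; category: string }>(
  items: T[],
  limit: number
): T[] {
  // Bucket by category, keeping each category's original (score) order
  const buckets = new Map<string, T[]>()
  for (const item of items) {
    const bucket = buckets.get(item.category) ?? []
    bucket.push(item)
    buckets.set(item.category, bucket)
  }
  // Take one item per category per round until the limit is reached
  const out: T[] = []
  while (out.length < limit) {
    let took = false
    for (const bucket of buckets.values()) {
      const next = bucket.shift()
      if (next) {
        out.push(next)
        took = true
        if (out.length >= limit) break
      }
    }
    if (!took) break // all buckets drained
  }
  return out
}
```

This trades a little relevance for discovery; a tunable variant might only interleave after the top two or three highest-scoring items.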

Integrate A/B testing for algorithms

Split users between collaborative and content-based algorithms, track click-through and conversion rates, and automatically favor the winning approach.

Common pitfalls

Pitfall: Exposing OPENAI_API_KEY with a NEXT_PUBLIC_ prefix

How to avoid: Store OPENAI_API_KEY in the Vars tab without any prefix. Only call the OpenAI API from API routes and Server Actions, never from client components.

Pitfall: Not creating the IVFFlat index on the embedding column

How to avoid: Create the index: CREATE INDEX ON items USING ivfflat (embedding vector_cosine_ops) WITH (lists = 100). This enables sub-second search for up to 1 million items.

Pitfall: Computing recommendations on every page load

How to avoid: Cache recommendations in the recommendations_cache table with a TTL (e.g., 1 hour). Check cache first and only recompute when expired or invalidated by new interactions.
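The TTL logic amounts to two small helpers, shown here as a sketch matching the one-hour expiry used in the route: write `expires_at` as now plus the TTL, and treat a row as fresh only while that timestamp is still in the future.

```typescript
const ONE_HOUR_MS = 3600_000

// A cached row is usable only if its expires_at is still in the future.
function isCacheFresh(expiresAt: string, now: Date = new Date()): boolean {
  return new Date(expiresAt).getTime() > now.getTime()
}

// Compute the expires_at value to store alongside new recommendations.
function cacheExpiry(now: Date = new Date(), ttlMs: number = ONE_HOUR_MS): string {
  return new Date(now.getTime() + ttlMs).toISOString()
}
```

The database-side equivalent is the `.gt('expires_at', new Date().toISOString())` filter in the route, which skips stale rows without needing a cleanup job.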

Best practices

  • Use Edge Runtime for the recommendation endpoint to minimize latency for users worldwide
  • Cache recommendations with a 1-hour TTL and invalidate when new interactions are tracked
  • Store OPENAI_API_KEY and SUPABASE_SERVICE_ROLE_KEY in Vars tab without NEXT_PUBLIC_ prefix — server-only
  • Use the IVFFlat index with vector_cosine_ops for sub-second similarity search at scale
  • Implement a hybrid approach: collaborative filtering for active users, content-based for new users, popular items for anonymous visitors
  • Use V0's prompt queuing to generate the embedding pipeline, recommendation API, and admin dashboard in sequence
  • Track all interaction types (view/click/purchase/rating) to build a rich user preference profile

AI prompts to try

Copy these prompts to build this project faster.

ChatGPT Prompt

I'm building a recommendations engine with Next.js and Supabase (pgvector). Write two PostgreSQL functions: 1) similar_items(target_item_id uuid, exclude_ids uuid[], match_count int) that uses cosine distance (<=>) to find the most similar items by embedding. 2) collaborative_recommendations(target_user_id uuid, exclude_ids uuid[], rec_limit int) that finds users with similar interaction patterns and recommends items those users liked but the target user hasn't seen. Include the IVFFlat index creation.

Build Prompt

Create a 'You might also like' recommendation widget component. Accept a current item ID and user ID. Fetch recommendations from /api/recommendations/generate. Display as a horizontal Carousel of shadcn/ui Cards showing item name, description preview, and category Badge. Include a Skeleton loading state. Track 'view' interactions when items come into viewport using IntersectionObserver in useEffect.

Frequently asked questions

What V0 plan do I need for a recommendations engine?

V0 Free works for the basic build, but Premium ($20/month) is recommended because the engine has multiple complex components (embedding pipeline, recommendation API, admin dashboard) that benefit from prompt queuing.

How much does the OpenAI Embeddings API cost?

text-embedding-3-small costs $0.02 per 1 million tokens, which works out to about $0.00002 for an item description of around 1,000 tokens. Embeddings only need to be generated once per item, making the cost negligible even for large catalogs.

What is the cold-start problem and how is it solved?

New users have no interaction history, so collaborative filtering cannot work. The solution is a hybrid approach: fall back to content-based filtering (pgvector similarity on item embeddings) until the user has at least 5 interactions, then switch to collaborative filtering.

How many items can pgvector handle?

With the IVFFlat index (lists=100), pgvector handles up to 1 million items with sub-second similarity search on Supabase free tier. For larger catalogs, increase the lists parameter and consider Supabase Pro for more memory.

How do I deploy the recommendations engine?

Click Share then Publish to Production in V0. Set OPENAI_API_KEY and SUPABASE_SERVICE_ROLE_KEY in the Vars tab without NEXT_PUBLIC_ prefix. The recommendation endpoint uses Edge Runtime for global low-latency serving.

Can RapidDev help build a custom recommendations engine?

Yes. RapidDev has built 600+ apps including ML-powered recommendation systems for e-commerce, content platforms, and SaaS products. Book a free consultation to discuss your specific recommendation needs.
