Build a product recommendations engine with V0 using Next.js, Supabase with pgvector, and OpenAI embeddings for collaborative and content-based filtering. Features fast cosine similarity search, cold-start fallback strategies, cached recommendations, and an admin performance dashboard — all in about 2-4 hours.
What you're building
Personalized recommendations drive engagement and revenue. Amazon attributes 35% of revenue to its recommendation engine. Whether you are building an e-commerce store, content platform, or learning app, showing users relevant items based on their behavior dramatically increases conversions.
V0 accelerates this build by generating the API routes for embedding generation and recommendation serving, the tracking system for user interactions, and the admin dashboard for monitoring algorithm performance. Supabase with pgvector handles the heavy lifting of vector similarity search.
The architecture uses Next.js App Router with an Edge-optimized recommendation endpoint, API routes for embedding generation and interaction tracking, pgvector for cosine similarity search, a recommendations cache for performance, and Recharts for algorithm comparison dashboards.
Final result
A hybrid recommendations engine combining collaborative and content-based filtering, with vector embeddings for item similarity, cached recommendation serving, and an admin dashboard for monitoring algorithm performance.
Tech stack
Prerequisites
- A V0 account (Premium recommended for prompt queuing)
- A Supabase project with pgvector extension enabled (free tier works)
- An OpenAI API key for generating embeddings (pay-as-you-go pricing)
- A catalog of items to recommend (products, content, or any entity)
Build steps
Set up the project and vector database schema
Open V0 and create a new project. Use the Connect panel to add Supabase. Enable pgvector extension and create the schema for items with embeddings, user interactions, preferences, and recommendation cache.
```
// Paste this prompt into V0's AI chat:
// Build a recommendations engine. Enable pgvector extension in Supabase and create schema:
// 1. items: id (uuid PK), name (text), description (text), category (text), metadata (jsonb), embedding (vector(1536)), created_at (timestamptz)
// 2. user_interactions: id (uuid PK), user_id (uuid FK), item_id (uuid FK), interaction_type (text check view/click/purchase/rating), rating (integer check 1-5 nullable), created_at (timestamptz) with index on (user_id, interaction_type)
// 3. user_preferences: id (uuid PK), user_id (uuid FK unique), preferred_categories (text[]), interaction_weights (jsonb), updated_at (timestamptz)
// 4. recommendations_cache: id (uuid PK), user_id (uuid FK), item_ids (uuid[]), algorithm (text), score (numeric[]), generated_at (timestamptz), expires_at (timestamptz)
// Create IVFFlat index: CREATE INDEX ON items USING ivfflat (embedding vector_cosine_ops) WITH (lists = 100);
// RLS: users can read all items, read/write their own interactions and preferences.
// Generate SQL migration and TypeScript types.
```
Expected result: Supabase is connected with pgvector enabled. All four tables are created with an IVFFlat index on the embedding column for fast similarity search.
Build the embedding generation pipeline
Create an API route that generates vector embeddings for item descriptions using the OpenAI Embeddings API. This converts text descriptions into 1536-dimension vectors stored in the items table for similarity search.
```ts
import { NextRequest, NextResponse } from 'next/server'
import { createClient } from '@supabase/supabase-js'

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
)

export async function POST(req: NextRequest) {
  const { item_id } = await req.json()

  const { data: item } = await supabase
    .from('items')
    .select('name, description, category, metadata')
    .eq('id', item_id)
    .single()

  if (!item) {
    return NextResponse.json({ error: 'Item not found' }, { status: 404 })
  }

  const textToEmbed = `${item.name}. ${item.description}. Category: ${item.category}.`

  const embeddingRes = await fetch('https://api.openai.com/v1/embeddings', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'text-embedding-3-small',
      input: textToEmbed,
    }),
  })

  const embeddingData = await embeddingRes.json()
  const embedding = embeddingData.data[0].embedding

  await supabase
    .from('items')
    .update({ embedding })
    .eq('id', item_id)

  return NextResponse.json({ success: true, dimensions: embedding.length })
}
```
Pro tip: Store OPENAI_API_KEY in the Vars tab without the NEXT_PUBLIC_ prefix — it is a secret key that must only be used server-side in API routes.
Expected result: POST /api/embeddings/generate with an item_id generates a 1536-dimension embedding via OpenAI and stores it in the items table for similarity search.
Create the recommendation generation endpoint
Build the core recommendation API that uses pgvector cosine similarity for content-based filtering and user interaction patterns for collaborative filtering, with a cold-start fallback for new users.
```ts
import { NextRequest, NextResponse } from 'next/server'
import { createClient } from '@supabase/supabase-js'

export const runtime = 'edge'

export async function GET(req: NextRequest) {
  const supabase = createClient(
    process.env.SUPABASE_URL!,
    process.env.SUPABASE_SERVICE_ROLE_KEY!
  )

  const userId = req.nextUrl.searchParams.get('user_id')
  const limit = parseInt(req.nextUrl.searchParams.get('limit') ?? '10', 10)

  if (!userId) {
    return NextResponse.json({ error: 'user_id required' }, { status: 400 })
  }

  // Check cache first
  const { data: cached } = await supabase
    .from('recommendations_cache')
    .select('item_ids, algorithm')
    .eq('user_id', userId)
    .gt('expires_at', new Date().toISOString())
    .order('generated_at', { ascending: false })
    .limit(1)
    .single()

  if (cached) {
    const { data: items } = await supabase
      .from('items')
      .select('id, name, description, category, metadata')
      .in('id', cached.item_ids.slice(0, limit))

    return NextResponse.json({ items, algorithm: cached.algorithm, cached: true })
  }

  // Get user's interaction history
  const { data: interactions } = await supabase
    .from('user_interactions')
    .select('item_id, interaction_type, rating')
    .eq('user_id', userId)
    .order('created_at', { ascending: false })
    .limit(50)

  const interactedIds = interactions?.map((i) => i.item_id) ?? []

  let algorithm = 'content-based'
  let recommendedIds: string[] = []

  if (interactedIds.length >= 5) {
    // Collaborative: find similar users and recommend their items
    algorithm = 'collaborative'
    const { data: recs } = await supabase.rpc('collaborative_recommendations', {
      target_user_id: userId,
      exclude_ids: interactedIds,
      rec_limit: limit,
    })
    recommendedIds = recs?.map((r: { item_id: string }) => r.item_id) ?? []
  }

  if (recommendedIds.length < limit) {
    // Content-based fallback: find similar items to user's favorites
    const topItemId = interactedIds[0] ?? null
    if (topItemId) {
      const { data: similar } = await supabase.rpc('similar_items', {
        target_item_id: topItemId,
        exclude_ids: [...interactedIds, ...recommendedIds],
        match_count: limit - recommendedIds.length,
      })
      recommendedIds.push(...(similar?.map((s: { id: string }) => s.id) ?? []))
      algorithm = interactedIds.length >= 5 ? 'hybrid' : 'content-based'
    }
  }

  // Cache results
  if (recommendedIds.length > 0) {
    await supabase.from('recommendations_cache').insert({
      user_id: userId,
      item_ids: recommendedIds,
      algorithm,
      expires_at: new Date(Date.now() + 3600000).toISOString(),
    })
  }

  const { data: items } = await supabase
    .from('items')
    .select('id, name, description, category, metadata')
    .in('id', recommendedIds)

  return NextResponse.json({ items, algorithm, cached: false })
}
```
Expected result: GET /api/recommendations/generate?user_id=xxx returns personalized recommendations using collaborative filtering for active users and content-based fallback for new users, with caching.
Build the interaction tracking Server Action
Create a Server Action that records user interactions (views, clicks, purchases, ratings) on items. This data feeds both the collaborative filtering algorithm and the content-based similarity recommendations.
```ts
'use server'

import { createClient } from '@/lib/supabase/server'
import { revalidatePath } from 'next/cache'

export async function trackInteraction(input: {
  user_id: string
  item_id: string
  interaction_type: 'view' | 'click' | 'purchase' | 'rating'
  rating?: number
}) {
  const supabase = await createClient()

  await supabase.from('user_interactions').insert({
    user_id: input.user_id,
    item_id: input.item_id,
    interaction_type: input.interaction_type,
    rating: input.rating ?? null,
  })

  // Invalidate recommendations cache for this user
  await supabase
    .from('recommendations_cache')
    .delete()
    .eq('user_id', input.user_id)

  revalidatePath('/')
}
```
Expected result: Every user interaction is logged and the recommendation cache for that user is invalidated, ensuring fresh recommendations on the next request.
Create the Supabase RPC functions for similarity search
Build the PostgreSQL functions that power the recommendation algorithms. The similar_items function uses pgvector cosine distance, and collaborative_recommendations finds items liked by similar users.
```
// Paste this prompt into V0's AI chat:
// Create Supabase SQL migration with these functions:
// 1. similar_items(target_item_id uuid, exclude_ids uuid[], match_count int):
//    SELECT id, name, description, category, 1 - (embedding <=> (SELECT embedding FROM items WHERE id = target_item_id)) as similarity
//    FROM items
//    WHERE id != target_item_id AND id != ALL(exclude_ids) AND embedding IS NOT NULL
//    ORDER BY embedding <=> (SELECT embedding FROM items WHERE id = target_item_id)
//    LIMIT match_count;
// 2. collaborative_recommendations(target_user_id uuid, exclude_ids uuid[], rec_limit int):
//    Find users who interacted with same items as target user,
//    then return items those similar users interacted with but target user has not,
//    ranked by number of similar-user interactions.
// 3. Ensure the IVFFlat index exists on items.embedding column for fast vector search.
// Also create the API route at app/api/recommendations/user/[userId]/route.ts that returns cached or fresh recommendations.
```
Pro tip: The IVFFlat index with 100 lists gives sub-second cosine similarity search for up to 1 million items. For larger catalogs, increase the lists parameter.
Expected result: Two Supabase RPC functions are created: similar_items for content-based filtering and collaborative_recommendations for collaborative filtering. Both are optimized with the IVFFlat index.
Complete code
```ts
import { NextRequest, NextResponse } from 'next/server'
import { createClient } from '@supabase/supabase-js'

export const runtime = 'edge'

export async function GET(req: NextRequest) {
  const supabase = createClient(
    process.env.SUPABASE_URL!,
    process.env.SUPABASE_SERVICE_ROLE_KEY!
  )

  const userId = req.nextUrl.searchParams.get('user_id')
  const limit = parseInt(req.nextUrl.searchParams.get('limit') ?? '10', 10)

  if (!userId) {
    return NextResponse.json({ error: 'user_id required' }, { status: 400 })
  }

  const { data: cached } = await supabase
    .from('recommendations_cache')
    .select('item_ids, algorithm')
    .eq('user_id', userId)
    .gt('expires_at', new Date().toISOString())
    .limit(1)
    .single()

  if (cached) {
    const { data: items } = await supabase
      .from('items')
      .select('id, name, description, category')
      .in('id', cached.item_ids.slice(0, limit))
    return NextResponse.json({ items, algorithm: cached.algorithm })
  }

  const { data: interactions } = await supabase
    .from('user_interactions')
    .select('item_id')
    .eq('user_id', userId)
    .limit(50)

  const interactedIds = interactions?.map((i) => i.item_id) ?? []
  const topItem = interactedIds[0]

  if (!topItem) {
    const { data: popular } = await supabase
      .from('items')
      .select('id, name, description, category')
      .limit(limit)
    return NextResponse.json({ items: popular, algorithm: 'popular' })
  }

  const { data: similar } = await supabase.rpc('similar_items', {
    target_item_id: topItem,
    exclude_ids: interactedIds,
    match_count: limit,
  })

  return NextResponse.json({
    items: similar,
    algorithm: 'content-based',
  })
}
```
Customization ideas
Add real-time recommendation updates
Use Supabase Realtime to listen for new interactions and refresh the recommendation widget without page reload when users browse.
Build a recommendation explanation feature
Show users why each item was recommended — because you viewed X, because users like you also liked Y — using the interaction and similarity data.
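One way to sketch this: map the algorithm recorded in recommendations_cache to a user-facing reason string. The type and wording below are illustrative, not part of the generated code.

```typescript
type Explanation =
  | { algorithm: 'content-based'; seedItemName: string }
  | { algorithm: 'collaborative' }
  | { algorithm: 'popular' }

// Translate the algorithm that produced a recommendation into a
// user-facing reason. The copy here is a placeholder to adapt.
export function explainRecommendation(e: Explanation): string {
  switch (e.algorithm) {
    case 'content-based':
      return `Because you viewed ${e.seedItemName}`
    case 'collaborative':
      return 'Users with similar taste also liked this'
    case 'popular':
      return 'Popular right now'
  }
}
```

Render the returned string as a caption under each card in the recommendation widget.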
Add diversity controls
Implement category-based diversification so recommendations don't cluster around a single category, ensuring users discover new types of items.
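A minimal sketch of that diversification step, assuming the recommendation list arrives pre-sorted by relevance: round-robin across categories so no single category dominates the top slots. The `Rec` type and function name are illustrative.

```typescript
type Rec = { id: string; category: string }

// Re-order a relevance-sorted list by taking one item per category in turn,
// so the top of the list spans multiple categories.
export function diversifyByCategory(recs: Rec[], limit: number): Rec[] {
  const byCategory = new Map<string, Rec[]>()
  for (const rec of recs) {
    const bucket = byCategory.get(rec.category) ?? []
    bucket.push(rec)
    byCategory.set(rec.category, bucket)
  }

  const result: Rec[] = []
  const buckets = [...byCategory.values()]
  let i = 0
  while (result.length < limit && buckets.some((b) => b.length > 0)) {
    const next = buckets[i % buckets.length].shift()
    if (next) result.push(next)
    i++
  }
  return result
}
```

Apply it server-side just before caching, so the cached item_ids are already diversified.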
Integrate A/B testing for algorithms
Split users between collaborative and content-based algorithms, track click-through and conversion rates, and automatically favor the winning approach.
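The split itself can be done with deterministic bucketing, so a given user always lands in the same variant across requests without storing an assignment. This sketch hashes the user ID with FNV-1a (dependency-free); the variant names and 50/50 split are assumptions to adapt.

```typescript
// Deterministically map a user ID to an algorithm variant. Same input
// always yields the same variant, so no assignment table is needed.
export function assignVariant(
  userId: string,
  variants: string[] = ['collaborative', 'content-based']
): string {
  // FNV-1a 32-bit hash of the user ID
  let hash = 0x811c9dc5
  for (let i = 0; i < userId.length; i++) {
    hash ^= userId.charCodeAt(i)
    hash = Math.imul(hash, 0x01000193) >>> 0
  }
  return variants[hash % variants.length]
}
```

Log the assigned variant alongside each tracked interaction so click-through and conversion rates can be compared per algorithm.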
Common pitfalls
Pitfall: Exposing OPENAI_API_KEY with a NEXT_PUBLIC_ prefix
How to avoid: Store OPENAI_API_KEY in the Vars tab without any prefix. Only call the OpenAI API from API routes and Server Actions, never from client components.
Pitfall: Not creating the IVFFlat index on the embedding column
How to avoid: Create the index: CREATE INDEX ON items USING ivfflat (embedding vector_cosine_ops) WITH (lists = 100). This enables sub-second search for up to 1 million items.
Pitfall: Computing recommendations on every page load
How to avoid: Cache recommendations in the recommendations_cache table with a TTL (e.g., 1 hour). Check cache first and only recompute when expired or invalidated by new interactions.
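The freshness check reduces to a single timestamp comparison against the expires_at column. A small helper like this (names are illustrative) keeps the rule in one place instead of scattering date math through the endpoint:

```typescript
// A cached recommendations_cache row is usable only while its
// expires_at timestamp is still in the future.
export function isCacheFresh(expiresAt: string, now: Date = new Date()): boolean {
  return new Date(expiresAt).getTime() > now.getTime()
}

// When inserting a fresh row, compute expires_at from a 1-hour TTL.
export function cacheExpiry(ttlMs = 3600000, now: Date = new Date()): string {
  return new Date(now.getTime() + ttlMs).toISOString()
}
```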
Best practices
- Use Edge Runtime for the recommendation endpoint to minimize latency for users worldwide
- Cache recommendations with a 1-hour TTL and invalidate when new interactions are tracked
- Store OPENAI_API_KEY and SUPABASE_SERVICE_ROLE_KEY in Vars tab without NEXT_PUBLIC_ prefix — server-only
- Use the IVFFlat index with vector_cosine_ops for sub-second similarity search at scale
- Implement a hybrid approach: collaborative filtering for active users, content-based for new users, popular items for anonymous visitors
- Use V0's prompt queuing to generate the embedding pipeline, recommendation API, and admin dashboard in sequence
- Track all interaction types (view/click/purchase/rating) to build a rich user preference profile
AI prompts to try
Copy these prompts to build this project faster.
I'm building a recommendations engine with Next.js and Supabase (pgvector). Write two PostgreSQL functions: 1) similar_items(target_item_id uuid, exclude_ids uuid[], match_count int) that uses cosine distance (<=>) to find the most similar items by embedding. 2) collaborative_recommendations(target_user_id uuid, exclude_ids uuid[], rec_limit int) that finds users with similar interaction patterns and recommends items those users liked but the target user hasn't seen. Include the IVFFlat index creation.
Create a 'You might also like' recommendation widget component. Accept a current item ID and user ID. Fetch recommendations from /api/recommendations/generate. Display as a horizontal Carousel of shadcn/ui Cards showing item name, description preview, and category Badge. Include a Skeleton loading state. Track 'view' interactions when items come into viewport using IntersectionObserver in useEffect.
Frequently asked questions
What V0 plan do I need for a recommendations engine?
V0 Free works for the basic build, but Premium ($20/month) is recommended because the engine has multiple complex components (embedding pipeline, recommendation API, admin dashboard) that benefit from prompt queuing.
How much does the OpenAI Embeddings API cost?
text-embedding-3-small costs $0.02 per 1 million tokens (about $0.00002 per item description). Embeddings only need to be generated once per item, making the cost negligible even for large catalogs.
What is the cold-start problem and how is it solved?
New users have no interaction history, so collaborative filtering cannot work. The solution is a hybrid approach: fall back to content-based filtering (pgvector similarity on item embeddings) until the user has at least 5 interactions, then switch to collaborative filtering.
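That tiered fallback can be captured in one small function mirroring the thresholds used by the recommendation endpoint (5+ interactions → collaborative, 1-4 → content-based, none → popular items). The function name is illustrative:

```typescript
// Choose the recommendation tier from the user's interaction count,
// matching the cold-start strategy described above.
export function chooseAlgorithm(
  interactionCount: number,
  isAnonymous = false
): 'popular' | 'content-based' | 'collaborative' {
  if (isAnonymous || interactionCount === 0) return 'popular'
  return interactionCount >= 5 ? 'collaborative' : 'content-based'
}
```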
How many items can pgvector handle?
With the IVFFlat index (lists=100), pgvector handles up to 1 million items with sub-second similarity search on Supabase free tier. For larger catalogs, increase the lists parameter and consider Supabase Pro for more memory.
How do I deploy the recommendations engine?
Click Share then Publish to Production in V0. Set OPENAI_API_KEY and SUPABASE_SERVICE_ROLE_KEY in the Vars tab without NEXT_PUBLIC_ prefix. The recommendation endpoint uses Edge Runtime for global low-latency serving.
Can RapidDev help build a custom recommendations engine?
Yes. RapidDev has built 600+ apps including ML-powered recommendation systems for e-commerce, content platforms, and SaaS products. Book a free consultation to discuss your specific recommendation needs.
Talk to an Expert
Our team has built 600+ apps. Get personalized help with your project.
Book a free consultation