Connect Redis to Lovable using Upstash's HTTP REST API — the only Redis client compatible with Deno Edge Functions. Store your Upstash REST URL and token in Cloud Secrets, then create an Edge Function that handles get, set, increment, list, and expire operations. Use Redis for caching, rate limiting, session storage, and leaderboards where sub-millisecond speed matters more than persistence.
Add In-Memory Caching and Rate Limiting to Lovable with Redis and Upstash
Standard Redis clients use TCP socket connections, which Deno's Edge Function runtime does not support — only HTTPS fetch calls work. Upstash solves this by providing serverless Redis with a full HTTP REST API: every Redis command becomes a standard HTTPS request, making it perfectly compatible with Deno and Lovable's Edge Function environment.
Upstash also fits Lovable's deployment model well. It offers a generous free tier (10,000 commands per day), per-request pricing with no minimum fee, and global replication so your Redis instance is close to whatever edge region your Lovable Cloud project uses. You create a Redis database, copy the REST URL and REST token from the Upstash dashboard, and you're ready to build.
The most impactful use cases for Redis in a Lovable app are caching expensive database queries (so a complex Supabase join runs once and the result is served from memory for the next minute), rate limiting API endpoints (preventing abuse by tracking request counts per user per time window), and leaderboards or counters (where atomic increment operations ensure accuracy under concurrent writes). These are exactly the scenarios where Redis's speed advantage over a PostgreSQL database query is most noticeable.
Integration method
Redis connects to Lovable through an Edge Function that calls Upstash's HTTP REST API. Upstash provides serverless Redis with a REST endpoint that works in Deno's fetch-only environment where TCP socket connections are not available. Your Upstash URL and token are stored in Cloud Secrets.
Prerequisites
- A Lovable account with a project open and Lovable Cloud enabled
- An Upstash account at upstash.com (free tier supports 10,000 commands/day and 1 database)
- An Upstash Redis database created in the region closest to your Lovable Cloud project region
- Your Upstash REST URL and REST token (from the database's REST API tab in the Upstash Console)
- A clear use case for Redis — caching, rate limiting, sessions, or leaderboards
Step-by-step guide
Create an Upstash Redis Database
Go to upstash.com and sign up or log in. Click 'Create Database' in the Redis section. Give your database a name and select a region: choose the region closest to your Lovable Cloud project's hosting region (Americas, Europe, or Asia Pacific) to minimize latency. On the free plan, select the 'Regional' type (one region, lower cost) rather than 'Global' (multi-region replication) unless your users span multiple continents. For most Lovable apps, regional Redis is sufficient.

After creation, open the database and click the 'REST API' tab. You'll see two critical values:
- UPSTASH_REDIS_REST_URL: the HTTPS endpoint for your Redis database (format: https://clever-salamander-12345.upstash.io)
- UPSTASH_REDIS_REST_TOKEN: a bearer token for authenticating REST API calls

Copy both values; you'll add them to Lovable's Cloud Secrets in the next step. Upstash's REST API uses a simple format: POST to {REST_URL}/{COMMAND}/{KEY}/{VALUE} with the token in the Authorization header. For example, setting a key is POST https://your-db.upstash.io/set/mykey/myvalue with Authorization: Bearer {token}.
Pro tip: Upstash also has an official @upstash/redis npm package that provides a clean TypeScript client over the REST API. This package is Deno-compatible and simplifies Redis command syntax significantly compared to raw HTTP calls. Ask Lovable to use npm:@upstash/redis for cleaner code.
Expected result: An Upstash Redis database is created and running. You have the REST URL and REST token from the REST API tab ready to paste into Lovable's secrets.
Add Upstash Credentials to Cloud Secrets
In Lovable, click the '+' button next to the Preview panel to open the Cloud tab, then navigate to Secrets. Add these two secrets:
- UPSTASH_REDIS_REST_URL: the full HTTPS REST URL from the Upstash REST API tab
- UPSTASH_REDIS_REST_TOKEN: the authentication token from the same tab

These two values are all you need to make Redis calls from an Edge Function. The Upstash REST API is intentionally simple: every Redis command maps to a URL path, making it easy to understand what's happening. For example, GET key becomes GET {REST_URL}/get/key, and SET key value with a 60-second TTL becomes POST {REST_URL}/set/key/value/ex/60.

Unlike traditional Redis clients that maintain a persistent TCP connection, Upstash's REST API is stateless: each command is a separate HTTPS request. This is slightly slower for sequential operations but works perfectly in Deno's stateless Edge Function environment. For most caching and rate limiting use cases, a few milliseconds of HTTP overhead is acceptable.
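The command-to-URL mapping described above can be sketched as a tiny helper. The function name `upstashPath` and the fetch usage shown in comments are illustrative, not part of Upstash's SDK; real projects can use @upstash/redis instead of building paths by hand.

```typescript
// Illustrative helper: builds the REST path for an Upstash Redis command.
// Each command and argument becomes one URL-encoded path segment.
function upstashPath(command: string, ...args: (string | number)[]): string {
  return "/" + [command, ...args].map((a) => encodeURIComponent(String(a))).join("/");
}

// GET mykey               -> {REST_URL}/get/mykey
// SET mykey myvalue EX 60 -> {REST_URL}/set/mykey/myvalue/ex/60
//
// A call would then look like (restUrl and restToken come from Cloud Secrets):
// await fetch(`${restUrl}${upstashPath("set", "mykey", "myvalue", "ex", 60)}`, {
//   method: "POST",
//   headers: { Authorization: `Bearer ${restToken}` },
// });
```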
Pro tip: If you need to separate cache data (which you're fine losing) from session data (which you cannot afford to lose), create two separate Upstash databases with their own credentials. Note that the free tier includes a single database, so a second one may require a paid plan.
Expected result: UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN are saved in Cloud Secrets. Edge Functions can now authenticate Redis calls via Deno.env.get().
Create the Redis Edge Function
Create an Edge Function that handles Redis operations using the @upstash/redis package. This package provides a typed Redis client that works over the REST API, giving you Redis command syntax (client.get(), client.set(), client.incr()) without writing raw HTTP calls.

The Edge Function should support the Redis operations your use cases need:
- Caching: GET and SET with EX (expire in seconds)
- Rate limiting: INCR, EXPIRE, and TTL
- Leaderboards: ZADD, ZREVRANGE, ZREVRANK, ZSCORE
- Session storage: HSET, HGET, HGETALL
- Queues and recent activity: LPUSH, LRANGE, LTRIM

A flexible approach is to create a generic Redis command router that accepts any Redis command and its arguments, then executes it. This is more powerful than hardcoding specific operations but requires more careful input validation to prevent users from executing arbitrary Redis commands. For production apps, implement an allowlist of permitted commands.
Create an Edge Function called redis-proxy at supabase/functions/redis-proxy/index.ts. Import Redis from npm:@upstash/redis and initialize it with UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN from Deno environment variables. Accept POST requests with a JSON body containing: command (string, one of: get, set, setex, del, incr, expire, ttl, exists, lpush, lrange, zadd, zrevrange, zrevrank, zscore), key (string), value (optional, string or number), ttl (optional number of seconds for setex), args (optional array for commands needing multiple arguments). Execute the Redis command and return the result. Include CORS headers and validation that rejects unsupported commands.
Paste this in Lovable chat
// supabase/functions/redis-proxy/index.ts
import { serve } from "https://deno.land/std@0.168.0/http/server.ts";
import { Redis } from "npm:@upstash/redis";

const corsHeaders = {
  "Access-Control-Allow-Origin": "*",
  "Access-Control-Allow-Headers": "authorization, x-client-info, apikey, content-type",
  "Access-Control-Allow-Methods": "POST, OPTIONS",
};

const ALLOWED_COMMANDS = new Set([
  "get", "set", "setex", "del", "incr", "expire", "ttl", "exists",
  "lpush", "lrange", "zadd", "zrevrange", "zrevrank", "zscore", "zincrby",
]);

serve(async (req: Request) => {
  if (req.method === "OPTIONS") {
    return new Response("ok", { headers: corsHeaders });
  }

  try {
    const redis = new Redis({
      url: Deno.env.get("UPSTASH_REDIS_REST_URL") || "",
      token: Deno.env.get("UPSTASH_REDIS_REST_TOKEN") || "",
    });

    const { command, key, value, ttl, args = [] } = await req.json();

    if (!ALLOWED_COMMANDS.has(command)) {
      return new Response(
        JSON.stringify({ error: `Command '${command}' is not allowed` }),
        { status: 400, headers: { ...corsHeaders, "Content-Type": "application/json" } }
      );
    }

    let result;
    if (command === "get") result = await redis.get(key);
    else if (command === "set") result = await redis.set(key, value);
    else if (command === "setex") result = await redis.setex(key, ttl, value);
    else if (command === "del") result = await redis.del(key);
    else if (command === "incr") result = await redis.incr(key);
    else if (command === "expire") result = await redis.expire(key, ttl);
    else if (command === "ttl") result = await redis.ttl(key);
    else if (command === "exists") result = await redis.exists(key);
    else if (command === "lpush") result = await redis.lpush(key, ...args);
    else if (command === "lrange") result = await redis.lrange(key, args[0], args[1]);
    else if (command === "zadd") result = await redis.zadd(key, { score: args[0], member: args[1] });
    else if (command === "zincrby") result = await redis.zincrby(key, args[0], args[1]);
    else if (command === "zrevrange") result = await redis.zrevrange(key, args[0], args[1], { withScores: args[2] === "WITHSCORES" });
    else if (command === "zrevrank") result = await redis.zrevrank(key, value);
    else if (command === "zscore") result = await redis.zscore(key, value);

    return new Response(
      JSON.stringify({ result }),
      { status: 200, headers: { ...corsHeaders, "Content-Type": "application/json" } }
    );
  } catch (error) {
    return new Response(
      JSON.stringify({ error: error instanceof Error ? error.message : String(error) }),
      { status: 500, headers: { ...corsHeaders, "Content-Type": "application/json" } }
    );
  }
});

Pro tip: For rate limiting, use a Redis pipeline to execute INCR and EXPIRE atomically in a single network round trip. This prevents a race condition where two concurrent requests could both bypass the limit because the EXPIRE had not been set yet when the second INCR ran.
Expected result: The redis-proxy Edge Function is deployed. Cloud → Edge Functions shows it as active. You can test it by calling the function with a simple set/get pair to confirm Redis connectivity.
Implement Caching in an Existing Edge Function
The most immediate value from Redis is caching expensive Supabase queries. Rather than calling Redis through the generic redis-proxy, add caching logic directly to your existing data-fetching Edge Functions, either by calling the redis-proxy internally or by initializing the Upstash Redis client directly in those functions.

The caching pattern is: check the cache first (redis GET), return cached data if it exists (cache hit), or execute the expensive query and store the result in Redis (redis SETEX with a TTL) before returning it (cache miss). Choose your TTL based on how stale the data can be: 30 seconds for near-real-time data, 5 minutes for slowly changing data, 1 hour for mostly static content.

For Lovable apps, the most impactful queries to cache are aggregate counts (total users, total revenue), leaderboards and rankings, and any data that joins multiple tables with many rows. Single-row lookups by primary key (like fetching a user's profile) usually don't benefit from caching because Supabase is already fast for those.
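The cache-aside flow can be sketched independently of any network calls. The `CacheStore` interface and `cached()` helper below are illustrative names, not an official API; in a real Edge Function the store would be backed by @upstash/redis get/setex calls.

```typescript
// Minimal abstraction over the two Redis commands the pattern needs.
interface CacheStore {
  get(key: string): Promise<string | null>;
  setex(key: string, ttlSeconds: number, value: string): Promise<void>;
}

// Cache-aside: try the cache first, run the expensive computation only on a miss,
// then store the serialized result with a TTL for subsequent requests.
async function cached<T>(
  store: CacheStore,
  key: string,
  ttlSeconds: number,
  compute: () => Promise<T>,
): Promise<{ data: T; cached: boolean }> {
  const hit = await store.get(key);
  if (hit !== null) {
    return { data: JSON.parse(hit) as T, cached: true }; // cache hit
  }
  const data = await compute(); // expensive Supabase query runs only here
  await store.setex(key, ttlSeconds, JSON.stringify(data));
  return { data, cached: false }; // cache miss, now populated
}
```

The `cached` flag mirrors the response field the next prompt asks for, so you can verify hits and misses from the frontend.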
Update my dashboard-stats Edge Function to cache results in Redis using Upstash. Import Redis from npm:@upstash/redis at the top of the function. Before running the Supabase queries, check if 'dashboard:stats' exists in Redis and return the cached JSON if found. If not cached, run the Supabase queries, serialize the result to JSON, store it in Redis with SETEX using a 120-second TTL, then return the result. Add a 'cached' boolean field to the response so I can see whether the data came from cache or fresh from the database.
Paste this in Lovable chat
Pro tip: Add a cache-busting mechanism for admin operations that need to invalidate the cache immediately. A simple approach is a secret admin endpoint that calls redis DEL on specific cache keys, forcing the next request to rebuild the cache from fresh data.
Expected result: The first request to the dashboard-stats endpoint takes the normal query time. Subsequent requests within 120 seconds return instantly with 'cached: true' in the response. Cloud → Logs shows 'cache hit' messages for subsequent requests.
Build a Rate Limiter Middleware Pattern
Rate limiting is one of Redis's most reliable use cases in Lovable. The pattern uses Redis's INCR command (which atomically increments a counter, creating it if it doesn't exist) combined with EXPIRE to reset the counter after a time window.

To apply rate limiting to your Edge Functions, create a shared rate limiting helper that you call at the start of each protected function. The helper reads the user's identifier (their Supabase JWT user ID, IP address, or API key), generates a Redis key like rate:{userId}:{windowHour}, increments the counter, sets the expiry on the first request of each window, and returns whether the limit has been exceeded.

Returning helpful rate limit information to the frontend improves the user experience: include the current count, the maximum allowed, the limit window (per hour, per minute), and when the current window resets. This information can be displayed in your UI as a 'You've used X of Y requests this hour' counter.
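Assuming a fixed one-hour window, the key and reset-time arithmetic the helper needs might look like the sketch below. The names `windowKey`, `windowResetAt`, and `checkLimit` are hypothetical; the Redis INCR/EXPIRE calls themselves are omitted.

```typescript
// Fixed-window limiter arithmetic (one window per clock hour).
function windowKey(userId: string, now: Date = new Date()): string {
  const hour = Math.floor(now.getTime() / 3_600_000); // hours since the Unix epoch
  return `rate:${userId}:${hour}`;
}

function windowResetAt(now: Date = new Date()): string {
  const hour = Math.floor(now.getTime() / 3_600_000);
  return new Date((hour + 1) * 3_600_000).toISOString(); // start of the next hour
}

function checkLimit(count: number, maxRequests: number) {
  return { allowed: count <= maxRequests, count, limit: maxRequests };
}
```

The full helper would INCR the key from `windowKey()`, EXPIRE it for 3600 seconds on the first request of the window, and return `checkLimit(...)` together with `windowResetAt()` so the frontend can show when the limit resets.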
Create a reusable rate limiting helper function in my supabase/functions/_shared/rateLimiter.ts file. It should: import Redis from npm:@upstash/redis, accept userId and maxRequests (per hour) parameters, generate a Redis key 'rate:{userId}:{currentHour}', use INCR to count requests and EXPIRE to set a 3600-second TTL on the first request, and return an object with allowed (boolean), count (number), limit (number), and resetAt (ISO date string of when the current hour ends). Import and use this helper in my AI-feature Edge Function to limit users to 10 requests per hour with a 429 response that includes the reset time.
Paste this in Lovable chat
Expected result: The AI feature Edge Function rejects the 11th request per hour with a 429 response including a clear message and reset time. The first 10 requests per hour proceed normally. Redis counters are visible in the Upstash Console's data browser.
Common use cases
Cache expensive Supabase queries for faster page loads
Your Lovable app has a dashboard that runs several complex Supabase queries joining multiple tables. These queries take 500ms-2s to execute. By caching the results in Redis for 60 seconds, most users get sub-50ms responses from the cache while the data stays fresh enough for the use case.
I want to cache the results of my expensive dashboard queries in Redis via Upstash. I've stored UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN in Cloud Secrets. Create an Edge Function called cached-query that: accepts a cacheKey string and a query function description, checks Upstash Redis for a cached result, returns the cached result if it exists and is less than 60 seconds old, or executes the Supabase query and stores the result in Redis with a 60-second TTL if no cache exists. Use this for my /api/dashboard-stats endpoint.
Copy this prompt to try it in Lovable
Implement per-user API rate limiting to prevent abuse
Your Lovable app has an AI feature that calls an expensive external API. You want to limit each user to 10 requests per hour. Redis tracks the request count per user using atomic INCR operations, and an Edge Function rejects requests that exceed the limit.
I need to rate limit my AI feature to 10 requests per hour per user. Using Upstash Redis with credentials from UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN in Cloud Secrets, create an Edge Function called rate-limiter that: takes a userId parameter, increments a Redis counter for key 'rate_limit:{userId}:{hour}', sets the key to expire in 3600 seconds if it's the first request, returns the current count and whether the request is allowed (count <= 10). Wrap my AI Edge Function with this rate limiter and return a 429 Too Many Requests error with a message showing when the limit resets.
Copy this prompt to try it in Lovable
Build a real-time leaderboard with atomic counters
Your app has a points system where users earn points for completing actions. Redis sorted sets track scores efficiently, enabling fast leaderboard queries that would require complex PostgreSQL queries with sorting and ranking.
I need a leaderboard for my app's points system. Using Upstash Redis with credentials in Cloud Secrets, create Edge Functions for: adding points to a user (ZADD with INCR to a 'leaderboard' sorted set), getting the top 10 users with scores (ZREVRANGE with WITHSCORES), and getting a specific user's rank and score (ZREVRANK and ZSCORE). Then build a leaderboard page showing the top 10 users with medals for 1st/2nd/3rd place and a 'Your rank' section showing the current user's position.
Copy this prompt to try it in Lovable
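To make the sorted-set semantics concrete before wiring up Upstash, here is a minimal in-memory model. `MiniSortedSet` is purely illustrative: it mirrors simplified ZADD, ZINCRBY, ZSCORE, ZREVRANGE, and ZREVRANK behavior (ignoring Redis's tie-breaking rules) to show why the leaderboard uses ZINCRBY for accumulating points.

```typescript
// Illustrative in-memory model of the sorted-set commands a leaderboard uses.
class MiniSortedSet {
  private scores = new Map<string, number>();

  // ZADD semantics: sets (replaces) the member's score
  zadd(score: number, member: string): void {
    this.scores.set(member, score);
  }

  // ZINCRBY semantics: adds delta to the existing score (0 if absent)
  zincrby(delta: number, member: string): number {
    const next = (this.scores.get(member) ?? 0) + delta;
    this.scores.set(member, next);
    return next;
  }

  // ZSCORE semantics: the member's score, or null if absent
  zscore(member: string): number | null {
    return this.scores.get(member) ?? null;
  }

  // ZREVRANGE semantics: highest scores first, inclusive stop index
  zrevrange(start: number, stop: number): [string, number][] {
    return [...this.scores.entries()]
      .sort((a, b) => b[1] - a[1])
      .slice(start, stop + 1);
  }

  // ZREVRANK semantics: 0-based rank from the top, null if absent
  zrevrank(member: string): number | null {
    const idx = this.zrevrange(0, Infinity).findIndex(([m]) => m === member);
    return idx === -1 ? null : idx;
  }
}
```

In the real Edge Functions these operations become calls like redis.zincrby("leaderboard", points, userId) via @upstash/redis; the model above only exists to show that zadd replaces a score while zincrby accumulates it.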
Troubleshooting
Edge Function returns 'Unauthorized' or 401 when calling Upstash REST API
Cause: The UPSTASH_REDIS_REST_TOKEN secret is missing, incorrect, or not matching the token for the specific Upstash database being targeted.
Solution: Open the Upstash Console, navigate to your Redis database, and click the REST API tab. Verify that the token you stored in Cloud Secrets exactly matches the 'UPSTASH_REDIS_REST_TOKEN' value shown. The token is long (100+ characters) and must be copied completely without line breaks. Delete the existing secret in Lovable and re-add it by copying the token fresh from Upstash.
Redis GET returns null for a key that was just SET in a previous request
Cause: Multiple Upstash databases may be configured, and the SET and GET calls are reaching different databases. Alternatively, the key expired between SET and GET due to a very short TTL.
Solution: Confirm that UPSTASH_REDIS_REST_URL points to the same database in both set and get operations. In the Upstash Console, use the Data Browser to verify the key exists and see its TTL value. If the key shows up with an unexpectedly short TTL, check whether SETEX is being called with the correct TTL parameter — the parameter order for setex is key, ttl (seconds), value.
Rate limiter allows more requests than the configured limit due to race conditions
Cause: Concurrent requests arrive simultaneously before the EXPIRE is set on the first INCR, allowing multiple requests to bypass the limit.
Solution: Use Redis Pipeline to execute INCR and EXPIRE as an atomic batch. This ensures EXPIRE is always set even under concurrent load. Alternatively, use a Redis Lua script for atomic check-and-increment, but this requires @upstash/redis's eval() support.
// Use a pipeline for atomic INCR + EXPIRE
const redis = new Redis({ url: ..., token: ... });
const key = `rate:${userId}:${windowHour}`;

const pipeline = redis.pipeline();
pipeline.incr(key);
pipeline.expire(key, 3600);
const [count] = await pipeline.exec();

return { allowed: count <= maxRequests, count, limit: maxRequests };

Redis ZADD for leaderboards silently overwrites existing scores instead of adding to them
Cause: Standard ZADD replaces the member's existing score. To increment a score, you need ZINCRBY or ZADD with the INCR option.
Solution: For leaderboards where you want to add points to a user's existing score, use ZINCRBY instead of ZADD. With the @upstash/redis package, call redis.zincrby(leaderboardKey, pointsToAdd, userId). This atomically adds to the existing score and creates the member with the given score if they don't exist yet.
// Wrong: replaces the existing score
await redis.zadd("leaderboard", { score: 10, member: userId });

// Correct: adds to the existing score
await redis.zincrby("leaderboard", 10, userId);

Best practices
- Use Upstash Redis over direct Redis connections — Upstash's HTTP REST API is the only Redis client compatible with Deno's Edge Function environment
- Choose cache TTLs based on acceptable data staleness — 30 seconds for operational data, 5 minutes for aggregates, 1 hour for mostly static content
- Always include EXPIRE or TTL on rate limit keys — keys without expiry accumulate forever and eventually fill your Redis memory quota
- Use ZINCRBY instead of ZADD for leaderboard point accumulation — ZADD replaces the score while ZINCRBY adds to it
- Implement an allowlist of permitted Redis commands in your Edge Function to prevent execution of destructive commands like FLUSHDB or KEYS *
- Store cache keys with consistent naming conventions like entity:id:operation to make them easy to inspect in the Upstash Console data browser
- Use Redis Pipeline for rate limiting to execute INCR and EXPIRE atomically and prevent race condition exploits under concurrent load
- Monitor your Upstash daily command usage in the Upstash Console — the free tier limit of 10,000 commands/day can be exceeded by aggressive caching of high-traffic endpoints
Alternatives
Supabase PostgreSQL handles persistent data storage with complex queries — use it for your main data and add Redis only for caching, rate limiting, and temporary data that can be lost.
Lovable's native Supabase includes built-in database and auth — use it as your primary data store and add Redis only when you specifically need sub-millisecond caching performance.
MongoDB Atlas provides persistent document storage — choose Redis alongside MongoDB for caching Atlas query results, not as a replacement for persistent data.
Frequently asked questions
Why do I need Upstash instead of connecting directly to a Redis server?
Standard Redis clients use TCP socket connections on port 6379, and Deno's Edge Function runtime only supports outbound HTTPS requests via fetch; TCP sockets are not available. Upstash solves this by wrapping Redis in a standard HTTPS REST API where every Redis command becomes an HTTP request. If you have an existing Redis server, you would need to run your own HTTP proxy in front of it to reach it from an Edge Function; for most Lovable use cases, Upstash's managed serverless Redis is simpler and cheaper.
Is Redis data persistent? Will I lose my cache if Upstash restarts?
Upstash Redis persists data by default using AOF (Append Only File) logging, so data survives restarts. However, you should still treat Redis as a cache layer — design your app to gracefully handle cache misses by fetching from the primary database. Cache keys with TTL expire automatically, which is the intended behavior for most caching use cases. Do not store data in Redis that you cannot afford to lose or recreate.
How does Redis caching compare to Supabase's built-in query performance?
Supabase PostgreSQL is already fast for most queries — indexed lookups return in 10-50ms. Redis caching makes sense when your query takes 200ms+ (complex joins or aggregates), when the same data is requested by many users frequently, or when you need sub-10ms response times for high-traffic endpoints. For typical Lovable app queries on tables with proper indexes, adding Redis complexity is often not worth the benefit. Start with Supabase and add Redis only when you observe actual performance problems.
Can I use Redis for user session storage instead of Supabase Auth?
You can store session data (user preferences, shopping cart contents, temporary state) in Redis, but do not replace Supabase Auth's JWT authentication with Redis-stored sessions. Supabase Auth handles token signing, refresh token rotation, and session security correctly. Use Redis as a supplement for session data that does not need to be authenticated — like temporary form progress, recently viewed items, or UI state that resets on logout.
What happens when I hit Upstash's free tier limit of 10,000 commands per day?
Upstash throttles or rejects commands once you exceed the daily limit on the free tier. Your Edge Functions will receive errors from the Upstash API, and cached data will not be served. To avoid this, monitor your command usage in the Upstash Console and either upgrade to a paid plan or reduce the number of Redis operations by caching more aggressively or batching commands. The pay-as-you-go plan charges $0.20 per 100,000 commands with no monthly minimum.