RapidDev - Software Development Agency

How to Integrate Lovable with UserTesting

Integrate UserTesting with a Lovable app by creating a Supabase Edge Function that proxies UserTesting API calls using an API key stored in Cloud → Secrets. Fetch test results, video highlight metadata, and participant metrics via the API. Display research insights in a dashboard built in Lovable — bridging the gap between qualitative user research and your app's development workflow.

What you'll learn

  • How to authenticate with the UserTesting API using an API key stored in Lovable Cloud Secrets
  • How to create a Supabase Edge Function that fetches test results, sessions, and highlight clips from the UserTesting API
  • How to build a research insights dashboard in Lovable that displays participant metrics and test summaries
  • How to display video highlight metadata and link to full session recordings in UserTesting
  • How to cache UserTesting research data in Supabase for fast dashboard rendering without API rate limits
Intermediate · 19 min read · 35 minutes · Analytics · March 2026 · RapidDev Engineering Team

Integrating UserTesting Research Data into Your Lovable App

UserTesting is the leading platform for moderated and unmoderated user research, connecting product teams with real participants who complete tasks in their apps while recording their screen and voice. Unlike behavioral analytics tools that show what users do, UserTesting shows you why they do it — the verbalized frustrations, the moments of confusion, and the 'aha' moments that quantitative data cannot capture. For Lovable app builders, integrating UserTesting data into your development workflow means research insights are visible alongside the product being built, rather than buried in a separate research platform.

The UserTesting API provides programmatic access to test results, participant sessions, highlight reels, and aggregate metrics. The primary use case for a Lovable integration is building a research insights dashboard that surfaces key findings from recent tests — participant satisfaction scores, task success rates, and curated highlight clips — inside the same interface where your team manages features and tracks metrics. This eliminates the 'research silo' problem where valuable user feedback exists in UserTesting but never influences product decisions because it is disconnected from the team's daily workflow.

The integration architecture uses Supabase Edge Functions as a secure proxy between your Lovable frontend and the UserTesting API. The UserTesting API key is a high-privilege credential that provides access to all research data, participant information, and test configurations in your account. It must live in Cloud → Secrets and be accessed only via Deno.env.get() in Edge Functions. Lovable's security infrastructure blocks approximately 1,200 hardcoded API keys daily and holds SOC 2 Type II certification — the UserTesting API key must never appear in frontend React code. Response data from UserTesting (which may contain participant PII like email addresses and video recordings) should be handled with appropriate privacy considerations, storing only the minimum necessary data in your Supabase database.

Integration method

Edge Function Integration

UserTesting integrates with Lovable through Supabase Edge Functions that proxy UserTesting API calls using an API key stored in Cloud → Secrets. The integration enables fetching test results, participant sessions, highlight reels, and metrics from UserTesting's REST API and displaying them in a research insights dashboard built in Lovable. This brings qualitative user research data into the same interface where your team manages the product being researched.

Prerequisites

  • A Lovable account with an existing project that has Supabase configured
  • A UserTesting account with API access — API access requires a UserTesting Enterprise or Plus plan
  • A UserTesting API key generated from your account settings (Settings → Integrations → API Keys or contact UserTesting support to enable API access)
  • At least two or three completed UserTesting tests in your account to verify the dashboard is displaying real data
  • Basic understanding of UserTesting's data model — Tests contain Sessions (individual participant recordings), and Sessions contain Tasks with metrics
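The Tests → Sessions → Tasks hierarchy can be sketched as TypeScript types. The field names below are illustrative assumptions, not UserTesting's documented schema — adjust them to match the actual API responses your account returns:

```typescript
// Hypothetical shapes for UserTesting's data model. Field names are
// assumptions for illustration, not the documented API schema.
interface UTTask {
  id: number;
  description: string;
  completed: boolean;          // did this participant finish the task?
  duration_seconds: number;
  satisfaction?: number;       // 1-5 rating, when collected
}

interface UTSession {
  id: number;
  participant_device: string;
  total_duration_seconds: number;
  tasks: UTTask[];             // Sessions contain Tasks with metrics
}

interface UTTest {
  id: number;
  name: string;
  status: 'draft' | 'launched' | 'complete';
  sessions: UTSession[];       // Tests contain Sessions
}

// Example traversal: completion rate for a test's first task across sessions.
function firstTaskCompletionRate(test: UTTest): number {
  const attempts = test.sessions.map((s) => s.tasks[0]).filter(Boolean);
  if (attempts.length === 0) return 0;
  return attempts.filter((t) => t.completed).length / attempts.length;
}
```

Having these types in one place makes the dashboard components in the later steps easier to keep consistent as you learn the real response shapes.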

Step-by-step guide

1

Get your UserTesting API credentials and store them in Lovable Secrets

UserTesting's API access requires an API key from your account. The process for obtaining this key varies by plan — on Enterprise accounts, API keys are typically available in Account Settings → Integrations or Developer Settings; on Plus accounts, you may need to contact UserTesting support to request API access. Log in to your UserTesting account and look for 'Integrations', 'Developer', or 'API' sections in the settings menu.

UserTesting's API uses token-based authentication, but the exact scheme depends on the API version your account has access to: a Bearer token, a client ID + client secret combination for OAuth2, or an API key passed as a header. The most common setup for UserTesting's v1 REST API sends the key in the Authorization header formatted as 'Bearer [api_key]' or in a custom 'X-API-Token' header — consult UserTesting's API documentation at developers.usertesting.com for the exact header format your account uses.

If your plan requires OAuth2, you will need a client_id and client_secret from your developer settings. The OAuth2 flow issues access tokens that expire — for a backend integration, use the client credentials grant type to generate tokens that can be stored in Cloud Secrets and refreshed when they expire.

Once you have your credentials, go to your Lovable project, open the Cloud tab by clicking the + icon next to Preview, click Secrets, and add USERTESTING_API_KEY with your API key value. If using OAuth2, also add USERTESTING_CLIENT_ID and USERTESTING_CLIENT_SECRET. Store the API base URL as USERTESTING_BASE_URL — typically https://app.usertesting.com/api/v1, though it may vary by account region. Save all secrets.
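Because the authorization header format varies by account, it helps to isolate header construction in one place before writing the proxy function. A minimal sketch covering the two header styles mentioned above (Bearer token vs. `X-API-Token` — which one your account uses is something to confirm against UserTesting's docs):

```typescript
// Sketch of auth-header construction for the two common UserTesting schemes.
// Which scheme applies to your account is an assumption to verify against
// developers.usertesting.com before deploying.
type AuthScheme = 'bearer' | 'x-api-token';

function buildAuthHeaders(scheme: AuthScheme, apiKey: string): Record<string, string> {
  const common = { 'Content-Type': 'application/json', Accept: 'application/json' };
  if (scheme === 'bearer') {
    return { ...common, Authorization: `Bearer ${apiKey}` };
  }
  // Alternative custom header some account tiers reportedly use.
  return { ...common, 'X-API-Token': apiKey };
}
```

If authentication fails with one scheme during your curl test, switching the scheme argument is then a one-line change in the Edge Function.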

Pro tip: UserTesting's API documentation and access requirements change frequently as the platform evolves. If you cannot find API credentials in your account settings, contact UserTesting support and ask specifically about 'REST API access' and 'API key generation' for your account tier. Enterprise plans typically have dedicated support channels for integration questions.

Expected result: USERTESTING_API_KEY, USERTESTING_BASE_URL, and optionally USERTESTING_CLIENT_ID and USERTESTING_CLIENT_SECRET are stored in Lovable Cloud Secrets. You have verified the API key works by testing a simple API call like GET /tests using a tool like curl or Postman.

2

Create the UserTesting API proxy Edge Function

Create a Supabase Edge Function that acts as a secure proxy between your Lovable React components and the UserTesting API. This function reads the API key from Deno.env.get('USERTESTING_API_KEY'), constructs authenticated requests to UserTesting's API, and returns the response data to the frontend.

The most useful UserTesting API endpoints for a research dashboard are: GET /tests (list all tests with status, dates, and participant counts), GET /tests/{test_id} (test details including all tasks, metrics configuration, and result summary), GET /tests/{test_id}/sessions (individual participant session records with task completion data and scores), and GET /tests/{test_id}/highlights (video highlight clips curated from the test).

Design the proxy function to accept a 'resource' parameter identifying which UserTesting endpoint to call, plus optional 'test_id' and pagination parameters. The function validates the resource type, constructs the appropriate UserTesting API URL, makes the authenticated request, and returns the response. Include error handling for common UserTesting API errors: 401 (invalid or expired API key), 403 (insufficient plan access for API), 404 (test not found), and 429 (rate limit exceeded).

For the authorization header format, UserTesting's API documentation specifies the exact format for your plan type. A common format is 'Bearer [token]' — test this against the API before deploying to ensure authentication succeeds. UserTesting may also require specific Accept headers or API version headers on requests.

To avoid hitting UserTesting's API rate limits on dashboard page loads, implement a caching layer: store fetched test results in a Supabase 'research_cache' table with a fetched_at timestamp, and return cached data if it is less than 30 minutes old. Only make fresh API calls when the cache is stale. This is especially important for test lists, which change infrequently.

Lovable Prompt

Create a Supabase Edge Function called 'usertesting-proxy' that proxies UserTesting API requests. Accept POST requests with resource ('tests', 'test-detail', 'sessions', 'highlights'), optional test_id, page, and per_page parameters. Authenticate with USERTESTING_API_KEY as a Bearer token and use USERTESTING_BASE_URL for the API base. Return the API response as JSON. Cache test list results in a Supabase 'research_cache' table with a 30-minute TTL.

Paste this in Lovable chat

supabase/functions/usertesting-proxy/index.ts
// supabase/functions/usertesting-proxy/index.ts
import { serve } from 'https://deno.land/std@0.168.0/http/server.ts';
import { createClient } from 'https://esm.sh/@supabase/supabase-js@2';

const corsHeaders = {
  'Access-Control-Allow-Origin': '*',
  'Access-Control-Allow-Headers': 'authorization, x-client-info, apikey, content-type',
};

serve(async (req) => {
  if (req.method === 'OPTIONS') return new Response('ok', { headers: corsHeaders });

  try {
    const apiKey = Deno.env.get('USERTESTING_API_KEY');
    const baseUrl = Deno.env.get('USERTESTING_BASE_URL') ?? 'https://app.usertesting.com/api/v1';

    if (!apiKey) {
      return new Response(JSON.stringify({ error: 'UserTesting API key not configured' }), {
        status: 500, headers: { ...corsHeaders, 'Content-Type': 'application/json' },
      });
    }

    const supabase = createClient(
      Deno.env.get('SUPABASE_URL') ?? '',
      Deno.env.get('SUPABASE_SERVICE_ROLE_KEY') ?? ''
    );

    const { resource, test_id, page = 1, per_page = 20 } = await req.json();

    // Check cache for test list
    if (resource === 'tests') {
      const { data: cached } = await supabase
        .from('research_cache')
        .select('data, fetched_at')
        .eq('cache_key', 'tests_list')
        .single();
      const cacheAge = cached?.fetched_at
        ? (Date.now() - new Date(cached.fetched_at).getTime()) / 60000
        : Infinity;
      if (cached && cacheAge < 30) {
        return new Response(JSON.stringify(cached.data), {
          headers: { ...corsHeaders, 'Content-Type': 'application/json', 'X-From-Cache': 'true' },
        });
      }
    }

    const utHeaders = {
      'Authorization': `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
      'Accept': 'application/json',
    };

    let apiUrl = '';
    switch (resource) {
      case 'tests': apiUrl = `${baseUrl}/tests?page=${page}&per_page=${per_page}`; break;
      case 'test-detail': apiUrl = `${baseUrl}/tests/${test_id}`; break;
      case 'sessions': apiUrl = `${baseUrl}/tests/${test_id}/sessions?per_page=${per_page}`; break;
      case 'highlights': apiUrl = `${baseUrl}/tests/${test_id}/highlights`; break;
      default:
        return new Response(JSON.stringify({ error: 'Invalid resource' }), {
          status: 400, headers: { ...corsHeaders, 'Content-Type': 'application/json' },
        });
    }

    const response = await fetch(apiUrl, { headers: utHeaders });
    const data = await response.json();

    // Update cache for test list
    if (resource === 'tests' && response.ok) {
      await supabase.from('research_cache').upsert({
        cache_key: 'tests_list',
        data,
        fetched_at: new Date().toISOString(),
      }, { onConflict: 'cache_key' });
    }

    return new Response(JSON.stringify(data), {
      status: response.status,
      headers: { ...corsHeaders, 'Content-Type': 'application/json' },
    });
  } catch (error) {
    return new Response(JSON.stringify({ error: String(error) }), {
      status: 500, headers: { ...corsHeaders, 'Content-Type': 'application/json' },
    });
  }
});

Pro tip: UserTesting's API responses may include participant PII (email addresses, names) in session records. When caching this data in Supabase, apply appropriate access controls — use Row Level Security policies to restrict who can query the research_cache table, and consider whether storing participant data in your database is necessary or whether linking directly to UserTesting's UI is sufficient.

Expected result: The usertesting-proxy Edge Function is deployed. Calling it with { resource: 'tests' } returns a list of tests from the UserTesting account. The Supabase research_cache table is populated with the test list. Subsequent calls within 30 minutes return cached data.

3

Build the research insights dashboard component

Create a React component that calls the usertesting-proxy Edge Function and displays UserTesting research data in a clear, actionable format for the product team. The dashboard should show the most important research metrics at a glance without requiring team members to log into UserTesting: fetch the test list on mount, display a summary row for each test (test name, completion date, participant count, and average satisfaction score), and provide links to open the full test in UserTesting for detailed analysis. The satisfaction score is typically a numeric rating from 1-5 that UserTesting aggregates across all participants — display it as a star rating or colored score indicator.

For the test detail view, create a separate component or modal that fetches and displays session-level metrics when a test is selected. Show task completion rates (the percentage of participants who successfully completed each task), average time to complete each task, and any highlight clips available via the API. Highlight clips are short video segments that testers or researchers have marked as significant — displaying their titles and linking to UserTesting for playback surfaces the most important moments from each test.

Add a loading state for the initial data fetch and an error state for when the API is unavailable or the API key is invalid. Because of the research cache, the dashboard loads quickly on repeat visits — show cached data with a 'Last updated X minutes ago' indicator and a manual refresh button for when fresh data is needed.

The caching approach in the Supabase Edge Function requires a research_cache table in your Supabase database. Ask Lovable to create this table with columns: cache_key (text, primary key), data (jsonb), and fetched_at (timestamptz). Enable Row Level Security with a policy that allows only authenticated users to read from this table.

Lovable Prompt

Create a ResearchDashboard page component that fetches UserTesting data from the usertesting-proxy Edge Function. Show a loading state while fetching. Display each test in a card with: test name, completion date, participant count badge, average satisfaction score as stars (1-5), and an 'Open in UserTesting' link. Add a summary bar at the top showing total tests this month and overall satisfaction score. Include a manual refresh button. Handle the error state by showing a friendly message with a retry button.

Paste this in Lovable chat

src/pages/ResearchDashboard.tsx
// src/pages/ResearchDashboard.tsx
import { useEffect, useState, useCallback } from 'react';
import { supabase } from '@/integrations/supabase/client';
import { Card, CardHeader, CardTitle, CardContent } from '@/components/ui/card';
import { Badge } from '@/components/ui/badge';
import { Button } from '@/components/ui/button';
import { RefreshCw, ExternalLink } from 'lucide-react';

interface UTTest {
  id: number;
  name: string;
  created_at: string;
  status: string;
  sessions_count: number;
  avg_satisfaction?: number;
  url?: string;
}

export function ResearchDashboard() {
  const [tests, setTests] = useState<UTTest[]>([]);
  const [loading, setLoading] = useState(true);
  const [error, setError] = useState<string | null>(null);
  const [lastUpdated, setLastUpdated] = useState<Date | null>(null);

  const fetchTests = useCallback(async () => {
    setLoading(true);
    setError(null);
    try {
      const { data, error: fnError } = await supabase.functions.invoke('usertesting-proxy', {
        body: { resource: 'tests', per_page: 20 },
      });
      if (fnError) throw fnError;
      setTests(Array.isArray(data?.tests) ? data.tests : []);
      setLastUpdated(new Date());
    } catch (err) {
      setError('Failed to load research data. Check the UserTesting API key in Cloud Secrets.');
    } finally {
      setLoading(false);
    }
  }, []);

  useEffect(() => { fetchTests(); }, [fetchTests]);

  const avgScore = tests.length > 0
    ? (tests.reduce((sum, t) => sum + (t.avg_satisfaction ?? 0), 0) / tests.length).toFixed(1)
    : 'N/A';

  if (loading) return <div className="p-8 animate-pulse"><div className="h-8 bg-gray-200 rounded w-1/4 mb-6" /></div>;

  return (
    <div className="p-6 space-y-6">
      <div className="flex items-center justify-between">
        <h1 className="text-2xl font-bold">Research Dashboard</h1>
        <Button variant="outline" size="sm" onClick={fetchTests}>
          <RefreshCw className="h-4 w-4 mr-2" /> Refresh
        </Button>
      </div>
      {lastUpdated && <p className="text-sm text-gray-400">Last updated {lastUpdated.toLocaleTimeString()}</p>}
      {error && <div className="p-4 bg-red-50 text-red-700 rounded-lg">{error}</div>}
      <div className="grid grid-cols-2 gap-4">
        <Card><CardContent className="pt-6"><p className="text-3xl font-bold">{tests.length}</p><p className="text-sm text-gray-500">Total Tests</p></CardContent></Card>
        <Card><CardContent className="pt-6"><p className="text-3xl font-bold">{avgScore}</p><p className="text-sm text-gray-500">Avg Satisfaction</p></CardContent></Card>
      </div>
      <div className="space-y-3">
        {tests.map((test) => (
          <Card key={test.id}>
            <CardHeader>
              <div className="flex items-start justify-between">
                <div>
                  <CardTitle className="text-base">{test.name}</CardTitle>
                  <p className="text-sm text-gray-500 mt-1">{new Date(test.created_at).toLocaleDateString()}</p>
                </div>
                <div className="flex items-center gap-2">
                  <Badge variant="secondary">{test.sessions_count} participants</Badge>
                  {test.url && <a href={test.url} target="_blank" rel="noopener noreferrer"><Button variant="ghost" size="sm"><ExternalLink className="h-4 w-4" /></Button></a>}
                </div>
              </div>
            </CardHeader>
          </Card>
        ))}
      </div>
    </div>
  );
}

Pro tip: For complex UserTesting integrations — such as building automated research synthesis tools that extract themes from participant notes or creating real-time research request workflows — RapidDev's team can help architect the data pipeline between UserTesting, your Supabase database, and your Lovable dashboard.

Expected result: The ResearchDashboard page loads and displays UserTesting test cards with participant counts and satisfaction scores. The summary bar shows aggregate metrics. The 'Open in UserTesting' links navigate to the correct test pages. The loading and error states display correctly.
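The star-rating display mentioned in this step can be sketched as a small helper that rounds a 1-5 satisfaction score to a star string. This is a hypothetical utility, not part of the generated component — a Star-icon row from lucide-react would work equally well:

```typescript
// Hypothetical helper: convert a 1-5 satisfaction score to a star string.
// Rounds to the nearest whole star and clamps to the 0-5 range.
function satisfactionStars(score: number | undefined): string {
  if (score == null) return 'N/A';
  const filled = Math.min(5, Math.max(0, Math.round(score)));
  return '★'.repeat(filled) + '☆'.repeat(5 - filled);
}
```

Keeping the rounding in one helper means the test cards and the summary bar can't drift apart in how they render the same score.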

4

Display session metrics and highlight clips for individual tests

Extend the research dashboard with a test detail view that shows session-level metrics and highlight clips for individual tests. When a team member clicks a test in the dashboard, fetch and display the participant session data and any highlight clips available via the UserTesting API.

UserTesting sessions represent individual participant recordings. Each session record from the API typically includes: the participant's completion status for each task, their task ratings, the total session duration, and metadata about the participant's device and location. Display task completion rates as percentages — if 8 of 10 participants completed a task, show 80% with a color indicator — and display average task durations to identify which tasks take longer than expected.

Highlight clips are curated short video segments that researchers mark as significant during test review. The highlights API endpoint returns clip metadata: the title given to the highlight, the timestamp range within the full session, and potentially a thumbnail URL. Displaying highlight titles and linking to them in UserTesting surfaces the most important moments from each test without requiring team members to watch full sessions.

For the session data, a simple table view works well: one row per task, with columns for task description, completion rate, average time, and average satisfaction score. Add color coding — green for high completion rates (above 80%), yellow for medium (60-80%), red for low (below 60%) — to give an immediate visual signal of which tasks need design attention.

Finally, create a Supabase table to store processed test metrics so the dashboard can render historical data without API calls. After fetching a test's session data, aggregate the metrics and store them in a 'test_metrics' table with columns: test_id, task_id, task_name, completion_rate, avg_duration_seconds, avg_satisfaction, fetched_at. Query this table for dashboard rendering and refresh on demand.
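The aggregation and color-banding rules above can be sketched in plain TypeScript. The per-task session-record shape here is an assumption — map it to whatever fields the sessions endpoint actually returns:

```typescript
// Sketch of per-task aggregation for the test_metrics table. The TaskResult
// shape is an assumed flattening of session records, not the documented API.
interface TaskResult {
  task_id: number;
  task_name: string;
  completed: boolean;
  duration_seconds: number;
}

// Color bands as described in the text: green > 80%, yellow 60-80%, red < 60%.
function completionBand(rate: number): 'green' | 'yellow' | 'red' {
  if (rate > 0.8) return 'green';
  if (rate >= 0.6) return 'yellow';
  return 'red';
}

function aggregateTask(results: TaskResult[]) {
  const completed = results.filter((r) => r.completed).length;
  const rate = results.length ? completed / results.length : 0;
  const avgDuration = results.length
    ? results.reduce((sum, r) => sum + r.duration_seconds, 0) / results.length
    : 0;
  return {
    completion_rate: rate,
    avg_duration_seconds: avgDuration,
    band: completionBand(rate),
  };
}
```

The returned object maps directly onto the completion_rate and avg_duration_seconds columns of the test_metrics table described above.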

Lovable Prompt

Create a TestDetail component that accepts a test_id prop and fetches session data from the usertesting-proxy Edge Function with resource 'sessions'. Calculate and display: task completion rates as colored progress bars (green > 80%, yellow 60-80%, red < 60%), average time to complete each task, and overall participant satisfaction. Also fetch highlights with resource 'highlights' and display a list of highlight titles with external links to UserTesting.

Paste this in Lovable chat

Pro tip: UserTesting's API may paginate session results — large tests with 50+ participants return sessions in pages. Use the pagination parameters in your Edge Function calls and implement a 'Load more' button in the UI rather than trying to fetch all sessions at once, which can cause Edge Function timeouts.
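A page-at-a-time fetch for the 'Load more' pattern might look like this sketch, where the `fetchPage` callback stands in for an invocation of the usertesting-proxy function with `{ resource: 'sessions', test_id, page, per_page }` (the "full page means more results" heuristic is an assumption — prefer the API's own pagination metadata if it returns any):

```typescript
// Sketch of one-page-at-a-time fetching to avoid Edge Function timeouts.
// fetchPage abstracts the actual proxy call so the paging logic is testable.
async function fetchSessionsPage<T>(
  fetchPage: (page: number) => Promise<T[]>,
  page: number,
  perPage: number
): Promise<{ items: T[]; hasMore: boolean }> {
  const items = await fetchPage(page);
  // Heuristic: a full page suggests more results exist; a short page means done.
  return { items, hasMore: items.length === perPage };
}
```

In the UI, `hasMore` drives whether the 'Load more' button renders, and each click fetches `page + 1` and appends to the existing session list.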

Expected result: Clicking a test in the dashboard loads the session detail view. Task completion rates appear as colored progress bars showing the percentage of participants who completed each task. Highlight clip titles are displayed with links to open them in UserTesting.

Common use cases

Build a research insights dashboard for the product team

Create a dedicated Research page in your Lovable app that fetches recent UserTesting test results and displays key metrics — participant count, average task success rate, satisfaction scores, and the most-watched highlight clips. This gives the product team a one-stop view of qualitative research findings without leaving the application they are building, making research more likely to influence product decisions.

Lovable Prompt

Create a ResearchDashboard page that fetches the 5 most recent UserTesting test results from a Supabase Edge Function called 'usertesting-proxy'. For each test, display: test name, date, participant count, average satisfaction score (1-5 stars), and a 'View Highlights' button that links to the test's UserTesting URL. Add a summary section at the top showing total tests completed this month and average satisfaction score across all tests. Store fetched results in Supabase for caching.

Copy this prompt to try it in Lovable

Surface specific participant quotes and task metrics for feature decisions

When making a feature decision (should we add this feature? is this UX pattern working?), pull the relevant UserTesting session metrics and representative participant quotes directly into your Lovable app's feature planning view. A Supabase Edge Function fetches session-level task completion data and highlight clip metadata, enabling data-driven feature decisions informed by real user feedback.

Lovable Prompt

Create a FeatureResearch component that accepts a test_id prop and fetches session details from the UserTesting API via the 'usertesting-proxy' Edge Function. Display: task completion rates for each task (percentage who completed successfully), a list of participant quotes (from notes or transcripts if available via API), and the average time to complete each task. Show a loading skeleton while data loads and an error state if the test ID is invalid.

Copy this prompt to try it in Lovable

Track research velocity and coverage across the product roadmap

Build a research tracking dashboard that shows which features and screens have been user-tested, when, and with how many participants. Fetch test list data from UserTesting and map tests to features using tags or test names. Display coverage gaps — features in the roadmap that have not been tested recently — alongside the quantitative metrics from Amplitude or GA4 showing those same features' usage rates.

Lovable Prompt

Create a ResearchCoverage component that fetches all UserTesting tests from the last 90 days via the usertesting-proxy Edge Function. Group tests by feature area using the test name prefix (e.g., tests starting with 'Checkout:' map to the checkout feature). Display a coverage table showing each feature area, the date of the last test, number of participants, and average satisfaction score. Highlight rows where the last test was more than 60 days ago.

Copy this prompt to try it in Lovable
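The prefix-based grouping and staleness rules in this prompt can be sketched as plain functions. The `'Prefix:'` naming convention is the assumption here — it only works if your team names tests consistently:

```typescript
// Sketch: group tests into feature areas by a 'Prefix:' naming convention
// (e.g. 'Checkout: Guest flow' -> 'Checkout'). Tests without a prefix fall
// into 'Uncategorized'.
interface CoverageTest {
  name: string;
  created_at: string;
}

function groupByFeatureArea(tests: CoverageTest[]): Map<string, CoverageTest[]> {
  const groups = new Map<string, CoverageTest[]>();
  for (const test of tests) {
    const match = test.name.match(/^([^:]+):/);
    const area = match ? match[1].trim() : 'Uncategorized';
    const bucket = groups.get(area) ?? [];
    bucket.push(test);
    groups.set(area, bucket);
  }
  return groups;
}

// Staleness rule for the coverage table's highlight: last test > N days ago.
function isStale(lastTestDate: string, days = 60): boolean {
  const ageMs = Date.now() - new Date(lastTestDate).getTime();
  return ageMs > days * 24 * 60 * 60 * 1000;
}
```

For each feature area, the coverage row then shows the newest test's date, and `isStale` decides whether the row gets highlighted.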

Troubleshooting

UserTesting API calls return 401 Unauthorized from the Edge Function

Cause: The API key format or authentication header does not match what UserTesting expects for your account type. UserTesting uses different authentication schemes (Bearer token, API key header, or OAuth2) depending on the account tier and API version.

Solution: Check UserTesting's API documentation at developers.usertesting.com for the exact authentication format for your account. Test your API key directly using curl: curl -H 'Authorization: Bearer YOUR_KEY' https://app.usertesting.com/api/v1/tests. If this returns 401, the key format may need a different header name — try 'X-API-Token' or contact UserTesting support to confirm the authentication method for your plan.

The research_cache table does not exist, causing Edge Function database insert errors

Cause: The research_cache table was not created in Supabase before the Edge Function was deployed.

Solution: Ask Lovable to create the research_cache table: open the chat and request 'Create a Supabase table called research_cache with columns: cache_key (text, primary key), data (jsonb), and fetched_at (timestamptz). Enable RLS with a policy allowing authenticated users to read and service role to write.' Alternatively, create it in Supabase's Table Editor in the Cloud tab.

sql
-- Run in Supabase SQL Editor:
CREATE TABLE IF NOT EXISTS research_cache (
  cache_key TEXT PRIMARY KEY,
  data JSONB,
  fetched_at TIMESTAMPTZ DEFAULT NOW()
);
ALTER TABLE research_cache ENABLE ROW LEVEL SECURITY;
CREATE POLICY "Authenticated users can read cache" ON research_cache FOR SELECT TO authenticated USING (true);

Dashboard shows test list but session data returns empty arrays

Cause: The UserTesting plan does not include API access to session-level data, the tests are in 'Launched' but not 'Complete' status, or the test ID is not being passed correctly to the Edge Function.

Solution: Verify the test status in UserTesting — session data is only available for completed tests. Check the test_id being sent in the Edge Function request body and compare it against the test IDs returned by the tests list endpoint. Contact UserTesting support to confirm whether session API access is included in your plan tier.

Edge Function times out when fetching tests with many participants

Cause: Fetching all sessions for a large test (100+ participants) in a single API call takes too long for the Edge Function's default timeout, especially when combined with database caching operations.

Solution: Implement pagination — fetch only the first page of sessions (20-25 per page) and add a 'Load more' button in the UI. Calculate aggregate metrics from the first page as an estimate, or implement a background job pattern where the Edge Function initiates the fetch and stores results incrementally in Supabase, while the UI polls for completion.

Best practices

  • Cache UserTesting API responses in your Supabase database with a 30-minute TTL — UserTesting test data changes infrequently and caching prevents unnecessary API calls that could trigger rate limits or degrade dashboard performance.
  • Store only the minimum necessary participant data in your Supabase cache — avoid caching raw session records that contain participant PII. Cache only aggregated metrics (completion rates, satisfaction scores) and display individual participant data with direct links to UserTesting.
  • Display 'Last updated X minutes ago' and a manual refresh button in the dashboard — this communicates data freshness to users and gives them control over when to fetch fresh data without excessive automatic polling.
  • Link directly to UserTesting for video playback rather than attempting to embed or download recordings — the UserTesting platform handles video access controls, transcription, and highlights in a way that is difficult to replicate and may violate participant consent agreements.
  • Use the research_cache table to record when each test was last fetched — this enables building a 'stale research' indicator that flags features in your roadmap that have not been user-tested in the past 60 days.
  • Protect the ResearchDashboard page behind authentication with role-based access — user research data may be commercially sensitive and should not be publicly accessible in your deployed Lovable app.


Frequently asked questions

Does UserTesting require a specific plan tier for API access?

Yes. UserTesting's REST API is available on Enterprise and Plus plans — it is not included in the basic Essentials plan. The specific API endpoints available depend on your plan tier. Contact UserTesting's sales or support team to confirm API access for your account, as the platform's plan structure and API availability change over time.

Can I access UserTesting video recordings through the API?

UserTesting's API provides metadata about sessions and highlight clips — including titles, timestamps, and links — but actual video streaming typically requires authentication through the UserTesting platform rather than direct API download. Use the API to surface highlight metadata and link to UserTesting for playback. Attempting to cache or redistribute session videos may also violate participant consent agreements and UserTesting's terms of service.

How often does UserTesting test data change after a test is complete?

Completed test data is relatively static — participant sessions do not change after completion, though researchers may add highlights and notes. Caching test results for 30-60 minutes is safe for completed tests. For recently launched tests where new responses are still coming in, use a shorter cache TTL or implement a webhook-based refresh if UserTesting supports webhooks in your plan.

What is the difference between UserTesting and FullStory for a Lovable app?

UserTesting recruits specific participants to complete structured tasks in your app on their own devices, then records their screen and voice. You control who participates and what tasks they perform. FullStory records every user session automatically without recruitment, capturing unstructured natural usage behavior. UserTesting gives you directed, structured research with verbalized user thinking. FullStory gives you comprehensive coverage of real user behavior. The best setup for product teams is both: FullStory for continuous monitoring and UserTesting for targeted research on specific questions.

Can I display UserTesting satisfaction scores alongside Amplitude retention data in my Lovable app?

Yes — this is one of the most valuable uses of the integration. Fetch both UserTesting test metrics (via the usertesting-proxy Edge Function) and Amplitude retention data (via the amplitude-track Edge Function) and display them side by side in a unified product health dashboard. Cache both datasets in Supabase for fast rendering. Correlating qualitative satisfaction scores with quantitative retention rates gives product teams a complete picture of user experience quality.
