To integrate Replit with UserTesting, store your UserTesting API key in Replit Secrets, then call the UserTesting API from your server-side code to retrieve test results, contributor responses, and study metrics. The API uses simple API key authentication, making it straightforward to pull UX research data into dashboards, reports, or automated analysis pipelines built in Replit.
Automate UX Research Reporting with UserTesting and Replit
UserTesting generates rich qualitative and quantitative data about how real users interact with your product. The platform provides video recordings, task completion rates, error rates, and written feedback for every test session. However, making sense of this data across multiple studies, tracking patterns over time, or integrating it into broader product analytics workflows typically requires manual export and analysis work.
The UserTesting API gives developers programmatic access to this research data. From Replit, you can build scripts and services that automatically pull completed test sessions, calculate aggregate metrics (average task completion time, error rates by task), filter sessions by specific criteria, and push insights to wherever your team works: a Slack channel, a Notion database, or a Google Sheets dashboard.
Replit is a natural fit for UserTesting integrations because these workflows are typically lightweight data pipelines rather than high-traffic services. A Replit script that runs on a schedule (or is triggered manually) can pull the latest test results, process them, and deliver a formatted report to your product team, saving hours of manual data wrangling. This tutorial covers authentication, fetching test and session data, building a simple reporting script, and setting up a scheduled report delivery.
Integration method
The UserTesting API allows authenticated access to test results, sessions, and contributor data using an API key. Your Replit backend stores the API key in Secrets, calls the API endpoints to retrieve study data, and can aggregate, filter, and export UX research results for analysis, reporting, or integration with other tools like Slack or your internal dashboard.
Prerequisites
- A UserTesting account with API access (typically available on Enterprise plans). Contact UserTesting support to confirm API access is enabled for your account.
- Your UserTesting API key (from account settings or provided by your UserTesting account manager)
- A Replit account (free tier works for scripts; Replit Core recommended for deployed services)
- Basic Python (requests) or Node.js (fetch) knowledge for calling REST APIs
- At least one completed UserTesting study with sessions to retrieve
Step-by-step guide
Obtain Your UserTesting API Key
The UserTesting API requires an API key for authentication. API access is generally available on UserTesting Enterprise plans; if you are on a lower tier, contact UserTesting support or your account manager to request it. To find your API key, log in to your UserTesting account and navigate to Account Settings (your profile icon in the top right, then Settings). Look for an API or Developer section within your account settings. If API access is enabled, your API key is displayed there. Copy it carefully.

The UserTesting API uses this key as a Bearer token in the Authorization header of every request. Unlike some other platforms, UserTesting does not use OAuth flows for API access; the static API key is all you need. This makes integration simpler, but it also means you should treat the API key as a sensitive secret.

UserTesting's API documentation is available at their developer portal. Before building your integration, review the available endpoints to confirm the data you need (test results, session videos, contributor info) is accessible via API. Note that video content itself (the screen recording files) is typically not directly downloadable via the API; you get metadata and transcript data, with links to view videos in the UserTesting platform.
Pro tip: If you cannot find an API section in your UserTesting account settings, your plan may not include API access. Contact your UserTesting account manager; Enterprise plan customers typically get API access included.
Expected result: You have your UserTesting API key copied and ready to store in Replit Secrets.
Store the API Key in Replit Secrets
Open your Replit project and click the lock icon in the left sidebar to open the Secrets panel. Click + New Secret and add USERTESTING_API_KEY with your API key value. In Python, access it with os.environ['USERTESTING_API_KEY']; in Node.js, use process.env.USERTESTING_API_KEY. Always read credentials from environment variables and never hardcode an API key directly in your source code, as this creates a security risk if your Repl is made public.

Restart your Repl after adding secrets to ensure the new environment variable is loaded. You can verify it is accessible by printing its length (not the value itself): print(f'API key loaded: {len(os.environ["USERTESTING_API_KEY"])} chars'). This confirms the secret is available without exposing the key value in your console output.

Test connectivity by making a simple GET request to the UserTesting API with your key. The accounts endpoint or a listing of your tests is a good first call to verify authentication is working. A 200 response confirms the key is valid; a 401 means the key is wrong or API access is not enabled on your account.
import os
import requests

API_KEY = os.environ['USERTESTING_API_KEY']
BASE_URL = 'https://api.usertesting.com/v1'

headers = {
    'Authorization': f'Bearer {API_KEY}',
    'Accept': 'application/json',
    'Content-Type': 'application/json'
}

# Test: list your UserTesting studies
response = requests.get(f'{BASE_URL}/tests', headers=headers, params={'per_page': 5})

if response.status_code == 200:
    data = response.json()
    tests = data.get('tests', [])
    print(f'API key valid. Found {len(tests)} recent studies:')
    for test in tests:
        print(f'  - [{test.get("id")}] {test.get("name")} | Status: {test.get("status")}')
elif response.status_code == 401:
    print('Error: API key invalid or API access not enabled on your account.')
else:
    print(f'Error: {response.status_code} - {response.text}')

Expected result: The test script prints 'API key valid' and lists your recent UserTesting studies with their IDs, names, and statuses, confirming the API connection is working.
Fetch Test Sessions and Results
Once authentication is working, you can retrieve detailed session data for your studies. A typical workflow fetches the list of studies, then for each study fetches the individual sessions, and for each session retrieves the tasks and metrics. The UserTesting API follows a hierarchical structure: tests contain sessions (one per participant), and each session contains tasks (the individual tasks participants completed). Metrics like completion rate and time-on-task live at the task level. The script below retrieves a specific test by ID, fetches all completed sessions, and calculates aggregate metrics. Set USERTESTING_TEST_ID in Replit Secrets (or replace the fallback value) with your actual study ID from the previous step's output. The metrics calculation gives you the kind of summary data you might include in a stakeholder report: average completion rate per task, average time per task, and overall test sentiment if available.
import os
import requests
import statistics

API_KEY = os.environ['USERTESTING_API_KEY']
BASE_URL = 'https://api.usertesting.com/v1'
TEST_ID = os.environ.get('USERTESTING_TEST_ID', 'YOUR_TEST_ID_HERE')

headers = {
    'Authorization': f'Bearer {API_KEY}',
    'Accept': 'application/json'
}

def get_test_sessions(test_id):
    """Fetch all sessions for a test."""
    sessions = []
    page = 1
    while True:
        resp = requests.get(
            f'{BASE_URL}/tests/{test_id}/sessions',
            headers=headers,
            params={'page': page, 'per_page': 50, 'status': 'complete'}
        )
        resp.raise_for_status()
        data = resp.json()
        page_sessions = data.get('sessions', [])
        sessions.extend(page_sessions)
        if len(page_sessions) < 50:
            break  # no more pages
        page += 1
    return sessions

def get_session_tasks(test_id, session_id):
    """Fetch task results for a session."""
    resp = requests.get(
        f'{BASE_URL}/tests/{test_id}/sessions/{session_id}/tasks',
        headers=headers
    )
    resp.raise_for_status()
    return resp.json().get('tasks', [])

def summarize_test(test_id):
    print(f'Fetching sessions for test {test_id}...')
    sessions = get_test_sessions(test_id)
    print(f'Found {len(sessions)} completed sessions')

    task_completions = {}
    task_durations = {}

    for session in sessions[:10]:  # sample first 10 for demo
        tasks = get_session_tasks(test_id, session['id'])
        for task in tasks:
            task_title = task.get('title', 'Unknown Task')
            completed = task.get('completed', False)
            duration = task.get('duration_seconds', 0)

            if task_title not in task_completions:
                task_completions[task_title] = []
                task_durations[task_title] = []

            task_completions[task_title].append(1 if completed else 0)
            if duration > 0:
                task_durations[task_title].append(duration)

    print('\nTask Summary:')
    print('-' * 60)
    for task_title in task_completions:
        completions = task_completions[task_title]
        durations = task_durations[task_title]
        completion_rate = sum(completions) / len(completions) * 100
        avg_duration = statistics.mean(durations) if durations else 0
        print(f'Task: {task_title[:40]}')
        print(f'  Completion rate: {completion_rate:.0f}%')
        print(f'  Avg duration: {avg_duration:.0f}s')
        print()

summarize_test(TEST_ID)

Pro tip: UserTesting API responses are paginated. Always implement pagination in your session fetching code; studies with many participants will have results spread across multiple pages (50 sessions per page by default).
Expected result: The script fetches sessions for your specified study and prints a task summary table showing completion rates and average durations for each task in the test.
Build an Automated Report Delivery Service
With data fetching working, build a service that automatically delivers UX research summaries on a schedule or when triggered. The example below is a Node.js Express server with two endpoints: GET /report generates a plain-text report on demand (useful for team members who want a quick status check without logging in to UserTesting), and POST /send-report builds the report and posts it to Slack when triggered (for example, by a cron job or a Zapier automation).

The Slack notification uses an incoming webhook URL, which you configure in your Slack workspace. Store the Slack webhook URL in Replit Secrets as SLACK_WEBHOOK_URL. The report builder produces a structured message with Slack markdown formatting (bold study names, session counts, and a timestamp) that renders nicely in a channel.

For a daily scheduled report, Replit does not have a built-in cron scheduler, but you can use a free external service like cron-job.org to send a POST request to your /send-report endpoint at 9am every weekday. This triggers your Replit server to fetch fresh UserTesting data and post the report to Slack.
// Node.js Express report delivery service
const express = require('express');
const app = express();
app.use(express.json());

const API_KEY = process.env.USERTESTING_API_KEY;
const SLACK_WEBHOOK_URL = process.env.SLACK_WEBHOOK_URL;
const BASE_URL = 'https://api.usertesting.com/v1';

const headers = {
  'Authorization': `Bearer ${API_KEY}`,
  'Accept': 'application/json'
};

async function fetchTests() {
  const res = await fetch(`${BASE_URL}/tests?status=active&per_page=10`, { headers });
  const data = await res.json();
  return data.tests || [];
}

async function fetchSessionCount(testId) {
  const res = await fetch(`${BASE_URL}/tests/${testId}/sessions?status=complete&per_page=1`, { headers });
  const data = await res.json();
  return data.total_count || 0;
}

async function buildReport() {
  const tests = await fetchTests();
  const lines = ['*UserTesting Weekly Summary*', ''];

  for (const test of tests) {
    const completed = await fetchSessionCount(test.id);
    lines.push(`*${test.name}*`);
    lines.push(`  Sessions: ${completed} | Status: ${test.status}`);
    lines.push('');
  }
  lines.push(`_Report generated at ${new Date().toLocaleString()}_`);
  return lines.join('\n');
}

app.get('/report', async (req, res) => {
  try {
    const report = await buildReport();
    res.type('text/plain').send(report);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

app.post('/send-report', async (req, res) => {
  try {
    const report = await buildReport();
    if (SLACK_WEBHOOK_URL) {
      await fetch(SLACK_WEBHOOK_URL, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ text: report })
      });
    }
    res.json({ sent: true, lines: report.split('\n').length });
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

app.listen(3000, '0.0.0.0', () => console.log('UserTesting report server running'));

Pro tip: Use a free external cron service like cron-job.org to POST to your Replit /send-report endpoint on a schedule. This gives your Replit server a triggered schedule without needing any cron configuration inside Replit.
Expected result: The server starts and the /report endpoint returns a plain-text summary of active UserTesting studies. Calling /send-report delivers a formatted message to your Slack channel.
Common use cases
Weekly UX Research Summary Report
A scheduled Replit script runs every Monday morning, fetches all UserTesting sessions completed in the previous week, calculates aggregate metrics (average task completion rate, top-mentioned issues from notes), and posts a formatted summary to a Slack channel. The product team gets a concise weekly UX briefing without anyone manually compiling data.
Build a Python script that calls the UserTesting API to fetch all test sessions from the past 7 days, calculates the average task completion rate and common themes from session notes, and posts a formatted weekly summary to a Slack webhook.
Copy this prompt to try it in Replit
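If you want a starting point for this prompt, here is a minimal Python sketch. It reuses the endpoints assumed in the earlier steps; the completed_at field on a session is an assumption, so adjust the date filter to whatever timestamp field your UserTesting API responses actually return.

import os
from datetime import datetime, timedelta, timezone
import requests

API_KEY = os.environ['USERTESTING_API_KEY']
SLACK_WEBHOOK_URL = os.environ['SLACK_WEBHOOK_URL']
BASE_URL = 'https://api.usertesting.com/v1'
headers = {'Authorization': f'Bearer {API_KEY}', 'Accept': 'application/json'}

week_ago = datetime.now(timezone.utc) - timedelta(days=7)

tests = requests.get(f'{BASE_URL}/tests', headers=headers,
                     params={'per_page': 50}).json().get('tests', [])

lines = ['*Weekly UX Research Summary*']
for test in tests:
    sessions = requests.get(f'{BASE_URL}/tests/{test["id"]}/sessions', headers=headers,
                            params={'status': 'complete', 'per_page': 50}).json().get('sessions', [])
    # Keep sessions completed in the last 7 days ('completed_at' field name is an assumption)
    recent = [s for s in sessions if s.get('completed_at') and
              datetime.fromisoformat(s['completed_at'].replace('Z', '+00:00')) > week_ago]
    if recent:
        lines.append(f'- {test.get("name")}: {len(recent)} sessions completed this week')

# Post the summary to Slack via an incoming webhook
requests.post(SLACK_WEBHOOK_URL, json={'text': '\n'.join(lines)})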
Test Completion Notifier
When a UserTesting study receives new completed sessions, your Replit server polls the API periodically and sends a Slack notification to the research team when the session count crosses a threshold (e.g., 'Your study now has 8/10 sessions completed, 2 more to go'). This keeps the team informed without having to log in to UserTesting to check progress.
Create a Node.js script that polls the UserTesting API every hour for session count changes on active studies, and sends a Slack notification when a study crosses 50%, 80%, and 100% completion milestones.
Copy this prompt to try it in Replit
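Below is a hedged sketch of the milestone logic, written in Python rather than Node.js to stay consistent with the other examples in this guide. The target_session_count field is a hypothetical name for the study's planned participant count, and last_notified.json is just a local state file used to avoid duplicate notifications; an external cron trigger (e.g., hourly from cron-job.org) would run this script.

import json
import os
import requests

API_KEY = os.environ['USERTESTING_API_KEY']
SLACK_WEBHOOK_URL = os.environ['SLACK_WEBHOOK_URL']
BASE_URL = 'https://api.usertesting.com/v1'
headers = {'Authorization': f'Bearer {API_KEY}', 'Accept': 'application/json'}

MILESTONES = [50, 80, 100]         # percent-complete thresholds to notify on
STATE_FILE = 'last_notified.json'  # highest milestone already announced, per study

state = {}
if os.path.exists(STATE_FILE):
    with open(STATE_FILE) as f:
        state = json.load(f)

tests = requests.get(f'{BASE_URL}/tests', headers=headers,
                     params={'status': 'active', 'per_page': 50}).json().get('tests', [])

for test in tests:
    sessions = requests.get(f'{BASE_URL}/tests/{test["id"]}/sessions', headers=headers,
                            params={'status': 'complete', 'per_page': 50}).json().get('sessions', [])
    # 'target_session_count' is a hypothetical field for the planned participant count
    target = test.get('target_session_count') or 10
    pct = len(sessions) / target * 100
    already = state.get(str(test['id']), 0)
    for milestone in MILESTONES:
        if pct >= milestone > already:
            requests.post(SLACK_WEBHOOK_URL, json={
                'text': f'{test.get("name")}: {len(sessions)}/{target} sessions complete '
                        f'({milestone}% milestone reached)'
            })
            state[str(test['id'])] = milestone

with open(STATE_FILE, 'w') as f:
    json.dump(state, f)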
Multi-Study Metrics Dashboard
Your product team runs multiple UserTesting studies simultaneously for different features. Your Replit server fetches data from all active studies, calculates key metrics for each (sessions completed, average ratings, task success rates), and renders a simple HTML dashboard showing all studies at a glance. This replaces navigating through multiple UserTesting study pages.
Build a Flask web app that calls the UserTesting API to fetch all active studies and their session metrics, then renders an HTML dashboard with a table showing study name, sessions completed, average rating, and a link to each study.
Copy this prompt to try it in Replit
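Here is a minimal Flask sketch of this dashboard, under the same endpoint and field assumptions as the earlier examples. The average rating column is omitted because the exact rating field varies by test type; add it once you have inspected your own API responses.

import os
import requests
from flask import Flask

app = Flask(__name__)

API_KEY = os.environ['USERTESTING_API_KEY']
BASE_URL = 'https://api.usertesting.com/v1'
headers = {'Authorization': f'Bearer {API_KEY}', 'Accept': 'application/json'}

@app.route('/')
def dashboard():
    # Fetch active studies, then count completed sessions for each one
    tests = requests.get(f'{BASE_URL}/tests', headers=headers,
                         params={'status': 'active', 'per_page': 25}).json().get('tests', [])
    rows = []
    for test in tests:
        sessions = requests.get(f'{BASE_URL}/tests/{test["id"]}/sessions', headers=headers,
                                params={'status': 'complete', 'per_page': 50}).json().get('sessions', [])
        rows.append(f'<tr><td>{test.get("name")}</td>'
                    f'<td>{len(sessions)}</td><td>{test.get("status")}</td></tr>')
    return ('<h1>UserTesting Studies</h1>'
            '<table border="1"><tr><th>Study</th><th>Completed sessions</th><th>Status</th></tr>'
            + ''.join(rows) + '</table>')

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=3000)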
Troubleshooting
401 Unauthorized error on every API request
Cause: The API key is invalid, the Authorization header format is incorrect, or API access is not enabled on your UserTesting account plan.
Solution: Verify the API key value in Replit Secrets matches exactly what UserTesting shows in your account settings. Ensure the Authorization header is formatted as 'Bearer YOUR_API_KEY' with no extra spaces or line breaks. Contact UserTesting support if you cannot find an API key in your account settings; API access may need to be enabled separately.
# Debug: print the exact authorization header being sent (never print the actual key in production)
headers = {'Authorization': f'Bearer {API_KEY}'}
print('Header key present:', 'Authorization' in headers)
print('Header length:', len(headers['Authorization']))

API returns empty sessions list even though completed sessions exist in UserTesting
Cause: The sessions endpoint filters by status by default. If you do not specify status=complete, you may be retrieving pending sessions, or the pagination is cutting off results.
Solution: Add status=complete as a query parameter to your sessions request. Also check that you are using the correct test ID: verify it matches the study ID visible in the UserTesting dashboard URL. Implement pagination to retrieve all sessions if the study has more than 50 participants.
# Always filter for completed sessions
params = {
    'status': 'complete',
    'page': 1,
    'per_page': 50
}
resp = requests.get(f'{BASE_URL}/tests/{test_id}/sessions', headers=headers, params=params)

Rate limit error or 429 Too Many Requests response
Cause: The UserTesting API has rate limits per API key. Scripts that loop through many tests and sessions rapidly, or that poll frequently, can hit these limits.
Solution: Add a small delay between API requests using time.sleep(0.5) in Python or a setTimeout delay in Node.js. Cache responses where possible: if you are building a dashboard, cache the test list for 5-10 minutes instead of fetching it on every page load. Check UserTesting's API documentation for the specific rate limit (typically 60-120 requests per minute).
import time

for session in sessions:
    tasks = get_session_tasks(test_id, session['id'])
    process_tasks(tasks)
    time.sleep(0.3)  # 300ms delay between requests to stay within rate limits

KeyError or missing data fields when accessing session or task attributes
Cause: The UserTesting API response structure can vary depending on the test type (moderated vs unmoderated, task-based vs free-roam). Some fields are only present for certain test configurations.
Solution: Always use .get() with a default value when accessing dictionary fields from API responses. Log the full session or task object to see what fields are actually present for your test type before writing parsing code that assumes specific fields exist.
# Safe field access with defaults
task_title = task.get('title', 'Unnamed Task')
completed = task.get('completed', False)
duration = task.get('duration_seconds') or task.get('duration') or 0
rating = task.get('rating') or task.get('score') or None

Best practices
- Store your UserTesting API key in Replit Secrets (lock icon in the sidebar) and never hardcode it in source files or commit it to version control.
- Implement pagination in all list endpoint calls; UserTesting studies with many participants will have sessions spread across multiple pages at 50 sessions per page.
- Add rate limiting delays (0.3-0.5 seconds) between API calls when fetching data for multiple studies to avoid hitting UserTesting's request rate limits.
- Cache API responses for dashboard use cases; test metadata and aggregate metrics do not need to be re-fetched on every page load, and a 5-10 minute cache is sufficient (see the caching sketch after this list).
- Use the status=complete filter when fetching sessions to only process sessions where participants have finished all tasks, avoiding incomplete or abandoned sessions.
- Log all API requests and responses during development to debug data structure issues, then remove verbose logging for production deployments.
- Add USERTESTING_TEST_ID to Replit Secrets when your scripts focus on a specific study, making it easy to switch between studies without code changes.
- Deploy report delivery services on Replit Autoscale and trigger them via an external cron job (cron-job.org) rather than building scheduling logic inside the Repl.
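For the caching best practice above, a minimal in-memory, time-based cache is sketched below. The cached_get helper is a hypothetical utility, not part of the UserTesting API; wrap your dashboard's API calls in it so repeated page loads within the TTL reuse the last response.

import time
import requests

_cache = {}      # (url, params) -> (timestamp, parsed JSON)
CACHE_TTL = 600  # seconds; a 5-10 minute TTL is plenty for dashboard views

def cached_get(url, headers, params=None, ttl=CACHE_TTL):
    """Return a cached response if it is fresher than ttl, otherwise re-fetch."""
    key = (url, tuple(sorted((params or {}).items())))
    now = time.time()
    if key in _cache and now - _cache[key][0] < ttl:
        return _cache[key][1]
    resp = requests.get(url, headers=headers, params=params)
    resp.raise_for_status()
    data = resp.json()
    _cache[key] = (now, data)
    return data

# Example usage: tests = cached_get(f'{BASE_URL}/tests', headers, {'status': 'active', 'per_page': 25})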
Alternatives
SurveyMonkey is better for structured questionnaire-based research with a more accessible API, while UserTesting focuses on recorded task-based sessions that are harder to analyze programmatically.
Typeform offers conversational surveys with a developer-friendly API and webhook support, making it easier to integrate into automated data pipelines compared to UserTesting's session-focused data model.
Miro is better for collaborative UX research synthesis (affinity diagrams, journey maps) rather than conducting and collecting user test sessions.
Frequently asked questions
How do I connect Replit to UserTesting?
Add your UserTesting API key to Replit Secrets (lock icon in the sidebar) as USERTESTING_API_KEY, then use it as a Bearer token in the Authorization header when calling the UserTesting API. The API base URL is https://api.usertesting.com/v1. API access is typically available on UserTesting Enterprise plans; contact support to enable it.
Does Replit work with UserTesting?
Yes. Replit can call the UserTesting REST API using standard HTTP requests from Python (requests library) or Node.js (fetch). Your Replit server fetches test results, session data, and task metrics, then processes them for dashboards, reports, or automated notifications. Deploy on Replit Autoscale for a permanently available reporting service.
Can I download UserTesting video recordings via the API?
Video recordings (the actual screen recording files) are typically accessible only through UserTesting's platform interface and are not directly downloadable via the API. The API provides metadata, transcripts, task completion data, and session notes. Check the UserTesting API documentation for your specific plan's video access capabilities.
How do I schedule automated UserTesting reports from Replit?
Replit does not have a built-in cron scheduler, but you can use a free external service like cron-job.org to send a POST request to your deployed Replit server endpoint at your desired schedule (e.g., 9am Monday). Your Replit server receives the trigger, fetches fresh UserTesting data, and sends the report to Slack or email.
What UserTesting plan do I need for API access?
UserTesting API access is generally available on Enterprise plans. Starter and Professional plans may not include API access. If you need API access, contact your UserTesting account manager or sales team. For smaller teams, the UserTesting platform's built-in export features (CSV, highlight reels) may be sufficient without needing API integration.