
How to Integrate Lovable with Screaming Frog SEO Spider

What you'll learn

  • How to export Screaming Frog crawl data in a format suitable for Lovable ingestion
  • How to build a Supabase Edge Function that accepts and parses CSV file uploads
  • How to store and categorize SEO issues in a Supabase database table
  • How to build a filterable issue-tracking dashboard in Lovable with fix status tracking
  • How to compare crawl results across multiple upload sessions to track progress over time
Intermediate · 13 min read · 50 minutes · Marketing · March 2026 · RapidDev Engineering Team
TL;DR

Screaming Frog does not have a REST API, so integration with Lovable uses a CSV export workflow: crawl your site in Screaming Frog, export the results as CSV or JSON, upload the file to a Supabase Edge Function endpoint, which parses and stores the data in a Supabase table. Your Lovable frontend then displays the audit issues as a searchable, filterable dashboard with fix-tracking capabilities.

Turn Screaming Frog Crawl Data into a Tracked SEO Issue Dashboard in Lovable

Screaming Frog SEO Spider is the most widely used desktop tool for technical SEO audits. In a single crawl it identifies broken links (4xx errors), redirect chains, missing or duplicate title tags and meta descriptions, pages blocked by robots.txt, slow-loading pages, hreflang errors, and hundreds of other technical issues. The problem for growing teams is that the results live inside the Screaming Frog desktop application — sharing them requires exporting CSVs, emailing them around, and manually tracking which issues have been fixed.

Building a Lovable dashboard that ingests Screaming Frog exports solves this problem elegantly. The workflow is: run a crawl in Screaming Frog, export the data from the Bulk Export menu, upload the CSV to your Lovable app's upload endpoint, and instantly have a shareable, searchable issue tracker accessible to anyone on your team without needing Screaming Frog installed.

Since Screaming Frog has no API, the integration uses a file upload pattern. A Supabase Edge Function acts as the upload receiver: it accepts a multipart form data POST request, reads the CSV content, parses each row, classifies issues by category (broken links, missing tags, redirects, etc.), and upserts them into a Supabase table. Multiple crawl uploads can be tracked over time so you can see whether your technical SEO health is improving. The Lovable frontend provides filtering by issue type, severity, and fix status, plus the ability to assign issues to team members.

Integration method

Edge Function Integration

Because Screaming Frog is a desktop application with no REST API, the integration pattern uses file uploads instead of direct API calls. You export your crawl data from Screaming Frog as a CSV or JSON file, then upload it to a Supabase Edge Function that parses the file, categorizes issues by type, and stores them in a Supabase database. Your Lovable frontend provides a dashboard to view, filter, and track the resolution of SEO issues over time.

Prerequisites

  • A Lovable project with Lovable Cloud enabled
  • Screaming Frog SEO Spider installed on your computer (free version crawls up to 500 URLs; paid license required for larger sites)
  • A website crawl completed in Screaming Frog with exported CSV data ready to upload
  • Basic understanding of Screaming Frog's export options (Bulk Export menu in the application)

Step-by-step guide

1

Export your Screaming Frog crawl data

Run a crawl of your website in Screaming Frog SEO Spider. Once the crawl is complete, export the data you need: the Internal tab (filtered to HTML) exported via the Export button gives you page-level data, while the Bulk Export menu at the top of the application provides link-level and issue-specific reports. For a comprehensive technical audit, the most useful exports are: Internal HTML (all page-level metadata including titles, descriptions, H1s, and status codes), Response Codes (specifically 4xx and 5xx errors for broken links), Redirects (for redirect chain analysis), and Images (for missing alt text). Export each as a CSV file. Exports are UTF-8 encoded CSV files with commas (or semicolons, depending on your regional settings) as separators. The column headers in each export are consistent across crawls, which makes automated parsing reliable. If you want to keep ingestion to a single file upload, the Internal HTML export alone covers most page-level checks in one file. Save your exports to a known location on your computer; you will upload them to Lovable in the following steps.

Pro tip: Screaming Frog's 'Internal HTML' CSV export is the most valuable single export — it contains URL, page title, meta description, H1, word count, status code, and canonical tag for every crawled page in one file.

Expected result: You have one or more CSV files from Screaming Frog ready to upload, with consistent column headers across exports.
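
For reference, these are the column names the upload function in Step 3 will look for. They reflect a typical Internal HTML export; header names can shift slightly between Screaming Frog versions, so verify them against the first row of your own file:

typescript
// Columns the Step 3 parser expects (an assumption based on a typical
// Internal HTML export; confirm against the header row of your own CSV)
const EXPECTED_COLUMNS = [
  'Address',             // page URL
  'Status Code',         // HTTP status code
  'Title 1',             // page title
  'Meta Description 1',  // meta description
  'H1-1',                // first H1 on the page
] as const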

2

Create the Supabase table for SEO issues

Before building the upload endpoint, create the database table that will store the parsed issue data. In Lovable's chat, describe the table structure you need: a table called seo_issues with columns for crawl_id (to group issues by upload session), url (the page URL with the issue), issue_type (a category like 'broken_link', 'missing_title', 'duplicate_content'), status_code (HTTP status code if applicable), page_title, meta_description, h1, is_fixed (boolean, default false), assigned_to (text), notes (text), and created_at. Create a second table called crawl_sessions to store metadata about each upload: id, site_domain, crawled_at, total_pages, issue_count, and health_score. Ask Lovable to create these tables and set up appropriate RLS policies so only authenticated users can insert or update records.

Lovable Prompt

Create two Supabase tables: seo_issues (id uuid primary key, crawl_id uuid, url text, issue_type text, status_code integer, page_title text, meta_description text, h1 text, is_fixed boolean default false, assigned_to text, notes text, created_at timestamptz default now()) and crawl_sessions (id uuid primary key, site_domain text, crawled_at timestamptz, total_pages integer, issue_count integer, health_score numeric, created_at timestamptz default now()). Add RLS policies allowing authenticated users to read, insert, and update both tables.

Paste this in Lovable chat

Pro tip: Add a unique constraint on (crawl_id, url, issue_type) in the seo_issues table to prevent duplicate issue rows when the same URL appears multiple times in a single crawl export.

Expected result: The seo_issues and crawl_sessions tables appear in your Supabase database schema. You can verify them in the Cloud tab → Database section.
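
If it helps to see the schema as types, the rows described in the prompt above map roughly to the TypeScript shapes below. This is a sketch for orientation only; the interface names are illustrative, and Lovable generates the actual tables and types:

typescript
// Sketch of the row shapes the two tables described above will hold
interface SeoIssue {
  id: string                      // uuid
  crawl_id: string                // uuid of the crawl_sessions row this issue belongs to
  url: string
  issue_type: string              // 'broken_link' | 'missing_title' | 'missing_meta' | ...
  status_code: number | null
  page_title: string | null
  meta_description: string | null
  h1: string | null
  is_fixed: boolean               // defaults to false
  assigned_to: string | null
  notes: string | null
  created_at: string              // timestamptz
}

interface CrawlSession {
  id: string                      // uuid
  site_domain: string
  crawled_at: string              // timestamptz
  total_pages: number
  issue_count: number | null
  health_score: number | null
  created_at: string              // timestamptz
}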

3

Create the CSV upload Edge Function

Create a Supabase Edge Function that accepts multipart form data file uploads, parses the Screaming Frog CSV, categorizes issues, and inserts them into the seo_issues table. The function reads the uploaded file as text, parses it into rows while respecting quoted fields (titles and descriptions often contain commas), uses the header row to map column names, and processes each data row. Issue categorization logic: rows where Status Code is 400 or higher become issue_type 'broken_link'; rows with an empty Title 1 become 'missing_title'; rows with duplicate Title 1 values become 'duplicate_title'; rows where Meta Description 1 is empty become 'missing_meta'; rows with Meta Description 1 longer than 160 characters become 'long_meta'; rows where H1-1 is empty become 'missing_h1'. The function creates a crawl_sessions record first, then inserts the issues in a batch. It uses the Supabase service role key to bypass RLS for the insert operations.

Lovable Prompt

Create a Supabase Edge Function at supabase/functions/screaming-frog-upload/index.ts that accepts multipart form data POST requests with a CSV file. Parse the Screaming Frog Internal HTML CSV and insert rows into seo_issues table, categorizing issues as: missing_title (empty Title 1), missing_meta (empty Meta Description 1), broken_link (Status Code 4xx), missing_h1 (empty H1-1), duplicate_title (duplicate Title 1 values in the file). Create a crawl_sessions record first and use its ID as crawl_id for all inserted issues. Return a summary of how many issues were found by category.

Paste this in Lovable chat

supabase/functions/screaming-frog-upload/index.ts
import { serve } from 'https://deno.land/std@0.168.0/http/server.ts'
import { createClient } from 'https://esm.sh/@supabase/supabase-js@2'

const corsHeaders = {
  'Access-Control-Allow-Origin': '*',
  'Access-Control-Allow-Headers': 'authorization, x-client-info, apikey, content-type',
}

// Quote-aware CSV parser. Screaming Frog titles and meta descriptions often
// contain commas, so a plain split(',') would misalign columns; this walks the
// file character by character and respects quoted fields (including "" escapes).
function parseCsv(text: string): Record<string, string>[] {
  const src = text.replace(/\r/g, '')
  const rows: string[][] = []
  let row: string[] = []
  let field = ''
  let inQuotes = false
  for (let i = 0; i < src.length; i++) {
    const ch = src[i]
    if (inQuotes) {
      if (ch === '"' && src[i + 1] === '"') { field += '"'; i++ }
      else if (ch === '"') inQuotes = false
      else field += ch
    } else if (ch === '"') inQuotes = true
    else if (ch === ',') { row.push(field); field = '' }
    else if (ch === '\n') { row.push(field); rows.push(row); row = []; field = '' }
    else field += ch
  }
  if (field !== '' || row.length > 0) { row.push(field); rows.push(row) }

  const nonEmpty = rows.filter(r => r.some(c => c.trim() !== ''))
  if (nonEmpty.length < 2) return []
  const headers = nonEmpty[0].map(h => h.trim())
  return nonEmpty.slice(1).map(r =>
    Object.fromEntries(headers.map((h, i) => [h, (r[i] ?? '').trim()]))
  )
}

serve(async (req) => {
  if (req.method === 'OPTIONS') return new Response('ok', { headers: corsHeaders })

  try {
    // Service role client: bypasses RLS for the insert operations
    const supabase = createClient(
      Deno.env.get('SUPABASE_URL')!,
      Deno.env.get('SUPABASE_SERVICE_ROLE_KEY')!,
    )

    const formData = await req.formData()
    const file = formData.get('file') as File
    const siteDomain = (formData.get('site_domain') as string) || 'unknown'

    if (!file) return new Response(JSON.stringify({ error: 'No file uploaded' }),
      { status: 400, headers: { ...corsHeaders, 'Content-Type': 'application/json' } })

    const text = await file.text()
    const rows = parseCsv(text)

    // Create the crawl session first so its id can be used as crawl_id on every issue
    const { data: session, error: sessionError } = await supabase
      .from('crawl_sessions')
      .insert({ site_domain: siteDomain, crawled_at: new Date().toISOString(), total_pages: rows.length })
      .select().single()

    if (sessionError) throw new Error(sessionError.message)

    // Count how often each title appears so duplicates can be flagged
    const seenTitles = new Map<string, number>()
    rows.forEach(r => {
      const t = r['Title 1'] || r['title_1'] || ''
      if (t) seenTitles.set(t, (seenTitles.get(t) || 0) + 1)
    })

    // Categorize each crawled page into zero or more issue rows
    const issues = rows.flatMap(row => {
      const url = row['Address'] || row['address'] || row['URL'] || ''
      const title = row['Title 1'] || row['title_1'] || ''
      const metaDesc = row['Meta Description 1'] || row['meta_description_1'] || ''
      const h1 = row['H1-1'] || row['h1_1'] || ''
      const statusCode = parseInt(row['Status Code'] || row['status_code'] || '200')
      const found: any[] = []
      const base = { crawl_id: session.id, url, page_title: title, meta_description: metaDesc, h1 }

      if (statusCode >= 400) found.push({ ...base, issue_type: 'broken_link', status_code: statusCode })
      if (!title && statusCode < 400) found.push({ ...base, issue_type: 'missing_title', status_code: statusCode })
      if (!metaDesc && statusCode < 400) found.push({ ...base, issue_type: 'missing_meta', status_code: statusCode })
      if (metaDesc.length > 160 && statusCode < 400) found.push({ ...base, issue_type: 'long_meta', status_code: statusCode })
      if (!h1 && statusCode < 400) found.push({ ...base, issue_type: 'missing_h1', status_code: statusCode })
      if (title && (seenTitles.get(title) || 0) > 1 && statusCode < 400) found.push({ ...base, issue_type: 'duplicate_title', status_code: statusCode })
      return found
    })

    if (issues.length > 0) {
      const { error: insertError } = await supabase.from('seo_issues').insert(issues)
      if (insertError) throw new Error(insertError.message)
    }

    // Health score: percentage of crawled pages with no issues at all
    const pagesWithIssues = new Set(issues.map(i => i.url)).size
    await supabase.from('crawl_sessions').update({
      issue_count: issues.length,
      health_score: Math.round(((rows.length - pagesWithIssues) / Math.max(rows.length, 1)) * 100),
    }).eq('id', session.id)

    const summary = issues.reduce((acc, i) => { acc[i.issue_type] = (acc[i.issue_type] || 0) + 1; return acc }, {} as Record<string, number>)

    return new Response(JSON.stringify({ success: true, crawl_id: session.id, total_issues: issues.length, summary }),
      { headers: { ...corsHeaders, 'Content-Type': 'application/json' } })
  } catch (error) {
    const message = error instanceof Error ? error.message : String(error)
    return new Response(JSON.stringify({ error: message }),
      { status: 500, headers: { ...corsHeaders, 'Content-Type': 'application/json' } })
  }
})

Pro tip: Screaming Frog CSV headers vary slightly by export type and version. Log the first row of the parsed CSV during development to verify your column name mappings match the actual export headers.

Expected result: The screaming-frog-upload Edge Function accepts CSV file uploads and returns a JSON summary of issues found by category, with all issues stored in the seo_issues Supabase table.

4

Build the issue tracking dashboard

Build the main dashboard UI in Lovable. The dashboard should have three main sections: an upload section at the top where users can drag-and-drop or click to upload a Screaming Frog CSV, a summary section showing issue counts by type with visual indicators (use shadcn/ui Card components with icons for each issue type), and a data table showing all issues with filtering by issue_type and is_fixed status. The table should include columns for URL, issue type (displayed as a colored badge), the relevant content (title, meta description, or H1 depending on issue type), and action buttons to mark as fixed or add notes. Include a crawl session selector so users can view historical crawls and compare issue counts over time. The frontend should call the upload Edge Function with a multipart POST and handle the file upload progress.

Lovable Prompt

Create a technical SEO audit dashboard with three sections: (1) a CSV file upload area that POSTs to /functions/v1/screaming-frog-upload and shows an upload progress bar, (2) summary cards for each issue type (missing_title, missing_meta, missing_h1, broken_link, duplicate_title) with counts from seo_issues table, (3) a filterable table of all issues from seo_issues with columns for URL, issue type badge, page title, and a 'Mark Fixed' toggle button that updates is_fixed in Supabase. Add a crawl session dropdown to filter by crawl.

Paste this in Lovable chat
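
For orientation, the upload call from the frontend will look roughly like the sketch below. The function URL and anon key are placeholders for your own project's values, and showing a real progress bar would require XMLHttpRequest (or supabase.functions.invoke) rather than plain fetch:

typescript
// Sketch: upload a Screaming Frog CSV to the Edge Function from the browser
const SUPABASE_URL = 'https://YOUR-PROJECT.supabase.co' // placeholder
const SUPABASE_ANON_KEY = 'YOUR-ANON-KEY'               // placeholder

async function uploadCrawlCsv(file: File, siteDomain: string) {
  const formData = new FormData()
  formData.append('file', file)
  formData.append('site_domain', siteDomain)

  const res = await fetch(`${SUPABASE_URL}/functions/v1/screaming-frog-upload`, {
    method: 'POST',
    headers: { Authorization: `Bearer ${SUPABASE_ANON_KEY}` },
    body: formData, // the browser sets the multipart boundary automatically
  })
  if (!res.ok) throw new Error(`Upload failed: ${res.status}`)
  return res.json() // { success, crawl_id, total_issues, summary }
}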

Pro tip: For large sites with thousands of issues, add server-side pagination to the issues table query (use Supabase's .range() method) rather than fetching all rows at once.
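
A minimal sketch of that paginated query, assuming your app's Supabase client and component state holding pageIndex and selectedCrawlId:

typescript
// Sketch: fetch one 50-row page of issues for the selected crawl
const pageSize = 50
const from = pageIndex * pageSize
const { data, count, error } = await supabase
  .from('seo_issues')
  .select('*', { count: 'exact' })   // count lets the UI compute total pages
  .eq('crawl_id', selectedCrawlId)
  .order('url')
  .range(from, from + pageSize - 1)  // range() bounds are inclusive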

Expected result: A complete SEO audit dashboard where team members can upload Screaming Frog exports, view categorized issues, mark them as fixed, and compare results across crawl sessions.

Common use cases

Weekly technical SEO audit tracker

Run a weekly Screaming Frog crawl, export the results, and upload them to your Lovable dashboard. Track which issues are new this week versus already known, which have been marked as fixed, and overall issue count trends over time.

Lovable Prompt

Create a technical SEO audit dashboard with an upload button for Screaming Frog CSV exports. When uploaded, show a summary of issues by category (broken links, missing titles, duplicate content, redirect chains, slow pages). Display a data table with URL, issue type, HTTP status code, and a 'Mark as Fixed' button for each row. Show a trend chart of total issues over time across multiple uploads.

Copy this prompt to try it in Lovable

Broken link report for content teams

Filter the Screaming Frog data to show only 404 errors and broken internal links, then let content editors mark each one as fixed or assign it to a team member. This replaces the manual process of distributing broken link reports via email or spreadsheet.

Lovable Prompt

Build a broken links tracker page that reads from our Supabase seo_issues table and filters to status_code = 404. Show a table with the broken URL, the page that links to it, and buttons to mark as fixed or assign to a team member. Add a count badge showing how many broken links remain unfixed.

Copy this prompt to try it in Lovable

SEO health score dashboard

Calculate a simple SEO health score based on the ratio of pages with issues to total pages crawled. Display a trend line of health scores over time as new crawls are uploaded, giving stakeholders a single number that summarizes technical SEO quality.

Lovable Prompt

Add a health score component to our SEO dashboard that calculates (pages without critical issues / total pages) * 100 from our seo_issues Supabase table. Show the current score as a large number with a color indicator (green above 90, yellow 70-90, red below 70) and a line chart showing score trends across the last 10 crawl uploads.

Copy this prompt to try it in Lovable
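
As a rough sketch of the calculation behind that prompt, assuming your app's Supabase client and a chosen crawlId (the Edge Function from Step 3 already stores the same score in crawl_sessions):

typescript
// Sketch: recompute the health score for one crawl on the client
const { data: session } = await supabase
  .from('crawl_sessions')
  .select('total_pages')
  .eq('id', crawlId)
  .single()
const { data: issueRows } = await supabase
  .from('seo_issues')
  .select('url')
  .eq('crawl_id', crawlId)

const totalPages = session?.total_pages ?? 0
const pagesWithIssues = new Set((issueRows ?? []).map(r => r.url)).size
const healthScore = Math.round(((totalPages - pagesWithIssues) / Math.max(totalPages, 1)) * 100)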

Troubleshooting

The Edge Function returns an empty issues array even though the CSV has data

Cause: The CSV column name mapping does not match the actual headers in the exported file. Screaming Frog uses slightly different column names depending on the export type (Internal HTML vs Response Codes vs bulk exports).

Solution: Add a debug log in the Edge Function to print the parsed header row: console.log('CSV headers:', rows[0] ? Object.keys(rows[0]) : []). Check Cloud → Logs to see the actual column names. Update the column name references in the issue categorization logic to match the actual export headers.

typescript
// Debug: log first row keys
console.log('CSV headers:', rows[0] ? Object.keys(rows[0]).join(', ') : 'no rows')

File upload fails with a 413 error or timeout

Cause: Supabase Edge Functions have a default request body limit. Large Screaming Frog exports from sites with thousands of pages may exceed this limit.

Solution: For very large sites, split the Screaming Frog export into multiple smaller CSV files by filtering in Screaming Frog before exporting (for example, export only 4xx responses separately from HTML pages). Alternatively, compress the CSV before uploading (the Edge Function would then need to decompress it before parsing). Edge Function timeouts can also occur on very large files; insert the parsed issues in batches of 500 rows rather than in one large insert, as in the snippet below.

typescript
// Batch insert in chunks of 500
const chunkSize = 500
for (let i = 0; i < issues.length; i += chunkSize) {
  const chunk = issues.slice(i, i + chunkSize)
  await supabase.from('seo_issues').insert(chunk)
}

Duplicate issues appear in the database when the same CSV is uploaded multiple times

Cause: Each upload creates a new crawl_session and inserts new issue rows, so re-uploading the same file produces duplicate issues under different crawl IDs.

Solution: This is expected behavior — each upload represents a distinct crawl session for historical tracking. To avoid accidental duplicates, add a confirmation dialog before upload asking 'This will create a new crawl session. Continue?' Add the crawl_id to the issues table display so users can identify which session each issue belongs to.

Best practices

  • Export the 'Internal HTML' report from Screaming Frog for the most comprehensive single-file audit — it contains all page-level metadata in one export
  • Add a site_domain field to each upload so the dashboard can support multi-site audits with a domain filter
  • Store crawl session timestamps and page counts in the crawl_sessions table to track improvements over time
  • Set RLS policies on the seo_issues and crawl_sessions tables so only authenticated workspace members can view and edit issues
  • Implement pagination on the issues table for sites with thousands of pages — loading all rows at once will cause slow dashboard performance
  • Use Screaming Frog's Scheduling feature (paid version) to export crawls automatically and consider setting up a folder-watch script to auto-upload new exports
  • Create a unique constraint on (crawl_id, url, issue_type) to prevent duplicate rows from edge cases in CSV parsing
  • Add a 'notes' field to issue records so developers can document the fix applied — this creates a valuable audit trail for recurring technical SEO issues

Frequently asked questions

Why doesn't Screaming Frog have a REST API?

Screaming Frog is primarily a desktop application rather than a cloud service. Its architecture is optimized for running powerful crawls directly on your local machine or server, not for serving API requests. Screaming Frog does offer a command-line mode and Google Sheets integration, but there is no public REST API for fetching crawl results programmatically.

Can I automate the Screaming Frog export and upload process?

Yes, with some additional setup. Screaming Frog's paid version supports scheduled crawls and automatic CSV exports via its Scheduling feature. You could then use a simple script (running on your machine or a server) to detect new export files and POST them to your Lovable Edge Function endpoint automatically. This creates a fully automated pipeline without needing anyone to manually upload files.
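
A minimal sketch of such a watcher, written for Deno and assuming an exports folder plus your project's function URL and anon key (both placeholders here):

typescript
// watch-exports.ts - run locally with: deno run --allow-read --allow-net watch-exports.ts
const FUNCTION_URL = 'https://YOUR-PROJECT.supabase.co/functions/v1/screaming-frog-upload' // placeholder
const ANON_KEY = 'YOUR-ANON-KEY' // placeholder

for await (const event of Deno.watchFs('./exports')) {
  if (event.kind !== 'create') continue
  for (const path of event.paths.filter(p => p.endsWith('.csv'))) {
    // Note: a real script should wait until Screaming Frog has finished writing the file
    const formData = new FormData()
    formData.append('file', new Blob([await Deno.readFile(path)], { type: 'text/csv' }), 'export.csv')
    formData.append('site_domain', 'example.com')

    const res = await fetch(FUNCTION_URL, {
      method: 'POST',
      headers: { Authorization: `Bearer ${ANON_KEY}` },
      body: formData,
    })
    console.log(`Uploaded ${path}:`, await res.json())
  }
}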

How many pages can the free version of Screaming Frog crawl?

The free version of Screaming Frog is limited to crawling 500 URLs. For larger sites, you need the paid license at £199/year (approximately $250/year). The paid version has no crawl limit and adds features like scheduled crawls, Google Analytics integration, and Search Console data overlay.

Can I track which team member fixed which SEO issue?

Yes. Add an assigned_to field and a fixed_by field to the seo_issues table. When a team member clicks 'Mark as Fixed', update is_fixed to true and set fixed_by to the current user's name or email from Supabase Auth. This creates an audit trail of who resolved each issue and when it was fixed.
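
A small sketch of that update, assuming the fixed_by column has been added, your app's Supabase client is available, and issueId comes from the clicked table row:

typescript
// Sketch: mark an issue fixed and record who fixed it
const { data: { user } } = await supabase.auth.getUser()
const { error } = await supabase
  .from('seo_issues')
  .update({ is_fixed: true, fixed_by: user?.email ?? 'unknown' })
  .eq('id', issueId)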

What is the best Screaming Frog export format to use with this integration?

The Internal HTML export (the Internal tab filtered to HTML, exported via the Export button) is the best starting point because it contains all page-level metadata in one file with consistent column headers. If you also need broken external links, export the 4xx Response Codes report separately. Screaming Frog can also export to XML sitemap format and Google Sheets, but CSV is the most reliable format for programmatic parsing.
