To use Firecrawl Skills in OpenClaw, run `clawhub install firecrawl-skills`, set your FIRECRAWL_API_KEY in OpenClaw's config, and start crawling entire websites through chat. Unlike single-search tools, Firecrawl Skills gives you a full web crawling and mapping toolkit — scrape multiple pages, extract structured data, and map site architecture all from one installed skill.
Why Use Firecrawl Skills Instead of a Single-Query Scraper?
Most web scraping tools are built around single-page or single-query operations — you give them a URL, they return content from that page. Firecrawl Skills takes a fundamentally different approach: it treats a website as a structured document collection that can be systematically crawled, mapped, and extracted in bulk. When you need data from a product catalog with 200 pages, a documentation site with hundreds of articles, or a news site's full archive, single-query tools require 200 separate calls. Firecrawl Skills handles it in one prompt.
The distinction between firecrawl-skills and firecrawl-search is important to understand before choosing which to install. The firecrawl-search skill is optimized for search-style queries — you describe what you're looking for and it returns relevant results from across the web. Firecrawl Skills, by contrast, is a targeted crawling toolkit for sites you already know you want to extract data from. Its three core capabilities are: scraping (extracting clean content from a given URL), crawling (following links recursively across an entire site), and mapping (returning the full URL structure of a domain without fetching page content). These three modes make it the most versatile data extraction skill available in ClawHub for comprehensive web data pipelines.
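To make the three modes concrete, the sketch below shows roughly the kind of Firecrawl API requests the skill issues on your behalf. You never call these yourself when using the skill in chat, and the endpoint paths and request fields are assumptions based on Firecrawl's public v1 API rather than anything the skill exposes directly.

```
# Illustrative only: the skill makes calls like these for you.
# Endpoint paths and request fields are assumptions based on Firecrawl's v1 API.

# Scrape: clean content from one known URL
curl -s -X POST https://api.firecrawl.dev/v1/scrape \
  -H "Authorization: Bearer $FIRECRAWL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/pricing"}'

# Crawl: follow links recursively from a starting URL (capped here at 50 pages)
curl -s -X POST https://api.firecrawl.dev/v1/crawl \
  -H "Authorization: Bearer $FIRECRAWL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com", "limit": 50}'

# Map: list a domain's URL structure without fetching page content
curl -s -X POST https://api.firecrawl.dev/v1/map \
  -H "Authorization: Bearer $FIRECRAWL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com"}'
```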
Practical use cases include competitive research (crawl a competitor's pricing pages to track changes), documentation processing (extract all articles from a docs site for embedding in a knowledge base), content auditing (map all URLs on your own site to find orphaned pages), and data pipeline building (bulk-extract structured product, article, or listing data). Because firecrawl-skills runs inside OpenClaw, your crawl jobs are expressed as natural language prompts and results are returned directly in chat — no separate scraping infrastructure to maintain.
Integration method
Firecrawl Skills integrates with OpenClaw through ClawHub — OpenClaw's built-in skill registry. Once installed, the skill connects to the Firecrawl API using your FIRECRAWL_API_KEY and exposes crawling, scraping, and mapping capabilities directly inside OpenClaw chat. You do not need to write code or set up any server — the skill handles all API communication internally. This differs from firecrawl-search, which is optimized for single search queries; firecrawl-skills covers multi-page crawls, full-site mapping, and batch extraction workflows.
Prerequisites
- An OpenClaw account with ClawHub access
- A Firecrawl account — sign up at firecrawl.dev to get your API key
- ClawHub CLI installed (comes with OpenClaw — run `clawhub --version` to verify)
- Basic familiarity with OpenClaw chat prompts
- Understanding of which websites you have permission to crawl (respect robots.txt)
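If you want to confirm the tooling prerequisites before going further, the two commands below (both used again later in this guide) verify that the ClawHub CLI is on your PATH and show which skills are already installed:

```
# Confirm the ClawHub CLI is available (it ships with OpenClaw)
clawhub --version

# List currently installed skills (firecrawl-search may already be present)
clawhub list
```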
Step-by-step guide
Install the firecrawl-skills ClawHub Skill
Open your terminal and run the ClawHub install command to add Firecrawl Skills to your OpenClaw installation. ClawHub is OpenClaw's built-in skill registry — think of it as an app store for OpenClaw capabilities. Installing a skill downloads it and registers it with your OpenClaw instance so it becomes available in chat. The install command is straightforward:

```
clawhub install firecrawl-skills
```

This fetches the latest stable version of the firecrawl-skills package and adds it to your OpenClaw skills directory. The installation typically completes in under 30 seconds. After installation, verify it appears in your skills list by running:

```
clawhub list
```

You should see `firecrawl-skills` listed with its version number. If you already have `firecrawl-search` installed, note that these are two distinct skills — firecrawl-skills provides the crawl, scrape, and map toolkit, while firecrawl-search handles search-query style interactions. Both use the same API key, so you only need to configure credentials once if you install both. Do not confuse the two: firecrawl-skills is the tool of choice when you need to extract data from known URLs across multiple pages, while firecrawl-search is better for discovery queries where you do not know the specific URLs ahead of time.
```
# Install the Firecrawl Skills toolkit
clawhub install firecrawl-skills

# Confirm installation
clawhub list | grep firecrawl
```
Pro tip: If you see a version conflict error during install, run `clawhub update` first to refresh the registry index, then retry the install command.
Expected result: The firecrawl-skills skill appears in your `clawhub list` output with a version number and 'active' status.
Get Your Firecrawl API Key
Firecrawl Skills requires a valid Firecrawl API key to make requests to the Firecrawl service. The API key authenticates your OpenClaw instance with Firecrawl's servers and determines which plan limits apply to your crawls. To get your API key:

1. Go to firecrawl.dev and sign up for an account (a free tier is available).
2. After signing in, navigate to your dashboard at app.firecrawl.dev.
3. Find the 'API Keys' section — it's typically in the sidebar under your account settings.
4. Click 'Create API Key' or 'Generate New Key'. Give it a descriptive name like 'OpenClaw Integration'.
5. Copy the key immediately — it starts with `fc-` followed by a long alphanumeric string.

Firecrawl's free tier allows a limited number of pages per month, which is sufficient for testing and small projects. For larger crawls (hundreds or thousands of pages), check Firecrawl's paid plans on their pricing page. Keep your API key secure — it controls your Firecrawl account usage and billing. Do not share it or commit it to version control. You will store it in OpenClaw's config in the next step, not in any code file.
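If you want to sanity-check the new key before wiring it into OpenClaw, a direct request to Firecrawl should return page content rather than an authentication error. This is a minimal sketch; the endpoint path and request body are assumptions based on Firecrawl's v1 API, and the key shown is a placeholder:

```
# Replace fc-your-key-here with the key you just created.
# A 401 response means the key is invalid or has been revoked.
curl -s -X POST https://api.firecrawl.dev/v1/scrape \
  -H "Authorization: Bearer fc-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com"}'
```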
Pro tip: Firecrawl's free tier is generous enough for exploring the skill and running test crawls. Set up billing alerts on your Firecrawl dashboard before running large multi-page crawls to avoid surprise charges.
Expected result: You have a Firecrawl API key starting with `fc-` ready to configure in OpenClaw.
Configure Your FIRECRAWL_API_KEY in OpenClaw
With your API key in hand, you need to add it to OpenClaw's configuration so the firecrawl-skills skill can authenticate with Firecrawl's API. OpenClaw stores skill credentials in its config file or a .env file in your OpenClaw directory — not in any application code. There are two ways to configure the API key:

**Option 1: Using the OpenClaw config command (recommended)**

Run the following in your terminal:

```
openclaw config set skills.firecrawl-skills.apikey YOUR_API_KEY_HERE
```

This writes the key to OpenClaw's encrypted config store, which is the safest approach.

**Option 2: Using a .env file in your OpenClaw directory**

If you prefer environment variable style configuration, open (or create) the `.env` file in your OpenClaw data directory and add:

```
FIRECRAWL_API_KEY=fc-your-key-here
```

After saving, restart OpenClaw or run `openclaw reload` for the change to take effect.

To verify the key was loaded correctly, run a quick config check:

```
openclaw config get skills.firecrawl-skills.apikey
```

This should return the masked version of your key (e.g., `fc-****...****`). If you are setting up OpenClaw for a team, the config command approach is preferable — keys stored via `openclaw config set` are encrypted at rest and are not exposed in file listings. The .env approach works well for local development but should not be committed to version control.
```
# Option 1: Store via OpenClaw config (recommended)
openclaw config set skills.firecrawl-skills.apikey YOUR_API_KEY_HERE

# Option 2: Store in .env file
echo 'FIRECRAWL_API_KEY=fc-your-key-here' >> ~/.openclaw/.env

# Verify key is loaded
openclaw config get skills.firecrawl-skills.apikey
```
Pro tip: If you have both firecrawl-skills and firecrawl-search installed, they share the same API key. Setting it once via `openclaw config set skills.firecrawl-skills.apikey` covers both skills — you do not need to configure it separately for each.
Expected result: Running `openclaw config get skills.firecrawl-skills.apikey` shows a masked version of your key, confirming it is stored correctly.
Test Firecrawl Skills with Your First Crawl Prompt
With the skill installed and configured, open OpenClaw chat and try your first crawl. The firecrawl-skills skill responds to natural language instructions — you describe what you want to crawl and what data to extract, and it handles the Firecrawl API calls behind the scenes. Start with a single-page scrape to confirm the skill is working, using the example prompt below.

After confirming single-page scraping works, try a multi-page crawl. Note that crawls take longer than single-page scrapes — a 50-page crawl might take 30-60 seconds depending on page load times. OpenClaw will stream progress updates as pages are crawled. You can also use the mapping mode to understand a site's structure without downloading all content; the site map returns a list of all discovered URLs, which you can then use to decide which specific pages to scrape. Example prompts for both follow the single-page prompt below.

If the skill responds with an authentication error, go back to Step 3 and verify your API key configuration. If it times out on large crawls, add a page limit to your prompt (e.g., 'limit to 30 pages').
Scrape the content from https://news.ycombinator.com and return the titles and URLs of all stories on the front page.
Paste this in OpenClaw chat
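The multi-page crawl and mapping tests mentioned above work the same way. Prompts along these lines should do it; the URLs and page limits are placeholders, so swap in a site you have permission to crawl:

Crawl https://example.com/blog and return the title and publication date of every post. Limit to 30 pages.

Map all URLs under https://example.com without scraping page content, and group them by top-level path.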
Pro tip: For your first test, use a site you know well so you can easily verify whether the extracted content is correct and complete.
Expected result: OpenClaw returns a structured list of extracted content from the target URL, confirming the firecrawl-skills skill is active and authenticating successfully with Firecrawl's API.
Advanced Usage: Bulk Extraction and Structured Output
Once basic crawling is working, Firecrawl Skills supports more sophisticated extraction patterns that are particularly useful for data pipelines and research workflows.

**Structured data extraction**: Ask for data in a specific format by describing the fields you want. Firecrawl's AI-powered extraction can identify and pull specific data points even from unstructured pages.

**Depth-controlled crawling**: By default, crawls follow links to a configurable depth. Specify depth in your prompt to control how far from the starting URL the crawl goes — depth 1 means the start page plus all directly linked pages, depth 2 adds pages linked from those, and so on.

**Domain restriction**: Tell the skill to stay within a specific subdomain or path prefix to avoid crawling the entire internet from a single starting link.

**Combining map + scrape**: A powerful workflow is to first map a site to get all URLs, review the list, and then selectively scrape only the relevant pages. This avoids wasting API credits on irrelevant pages.

For power users managing multiple crawl jobs, the RapidDev team provides OpenClaw configuration templates and batch prompt libraries for common crawl-and-extract pipelines — contact support at rapiddev.ai if you need help setting up high-volume extraction workflows beyond what the skill's defaults support. Finally, always check the robots.txt of any site before running large crawls (`https://example.com/robots.txt`). Firecrawl respects robots.txt by default, but understanding a site's crawl policies is good practice.
Map all URLs under https://docs.example.com/api, then scrape each page and extract the endpoint name, HTTP method, parameters, and description. Return as a structured table.
Paste this in OpenClaw chat
```
# Example: Check robots.txt before crawling
curl https://target-site.com/robots.txt

# Example: OpenClaw config for crawl depth limit
openclaw config set skills.firecrawl-skills.default_depth 2
openclaw config set skills.firecrawl-skills.max_pages 100
```
Pro tip: Use the map mode before a full crawl on unfamiliar sites — mapping is much faster and cheaper (fewer API credits) than crawling, and it lets you see the site structure and page count before committing to a full extraction.
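As an illustration of the depth and domain restriction controls described above, a prompt along these lines keeps a crawl tightly scoped; the URL, depth, and page cap are placeholders:

Crawl https://example.com/blog to a depth of 2, stay within the /blog path, limit to 40 pages, and return each post's title, author, and publication date as a structured list.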
Expected result: You are able to extract structured data from multi-page crawls and adjust crawl parameters through OpenClaw config settings.
Common use cases
Competitor Product Catalog Extraction
Crawl a competitor's product pages to extract names, prices, and descriptions across their entire catalog. Useful for market research, pricing strategy, and understanding product positioning without manually visiting hundreds of pages.
Crawl https://example-competitor.com/products and extract all product names, prices, and short descriptions. Limit to 100 pages and return the results as a structured list.
Copy this prompt to try it in OpenClaw
Documentation Site Mapping and Ingestion
Map the full URL structure of a documentation site and then extract clean text content from each article. This is the first step in building a custom knowledge base or RAG pipeline where you need all documentation in a usable format.
First map all URLs under https://docs.example.com, then scrape the content from each article page. Filter out navigation and sidebar content. Return clean article text with the source URL for each.
Copy this prompt to try it in OpenClaw
Site Architecture Audit
Generate a complete sitemap of any website by crawling its link graph. Useful for SEO audits, finding broken internal links, identifying orphaned pages, and understanding how a large site is structured before redesigning it.
Map the complete URL structure of https://www.example.com including all subpages. Group URLs by top-level section (e.g., /blog, /products, /about) and show how many pages exist in each section.
Copy this prompt to try it in OpenClaw
Troubleshooting
Error: 'Invalid API key' or '401 Unauthorized' when starting a crawl
Cause: The FIRECRAWL_API_KEY is not set correctly in OpenClaw's config, or the key has been revoked in the Firecrawl dashboard.
Solution: Run `openclaw config get skills.firecrawl-skills.apikey` to check if the key is stored. If it returns empty, re-run `openclaw config set skills.firecrawl-skills.apikey YOUR_KEY`. Verify the key is still valid by logging into app.firecrawl.dev and checking the API Keys section. Regenerate the key if needed.
```
# Re-set the API key
openclaw config set skills.firecrawl-skills.apikey fc-your-new-key-here

# Reload OpenClaw to pick up changes
openclaw reload
```
ClawHub install fails with 'package not found' or '429 rate limit' error
Cause: The ClawHub registry index is stale (package not found) or too many install requests were made in a short period (429 rate limit).
Solution: For 'package not found': run `clawhub update` to refresh the registry index, then retry `clawhub install firecrawl-skills`. For 429 errors: wait 60 seconds and retry — ClawHub rate limits are per-IP and reset quickly.
```
# Refresh registry and retry
clawhub update
clawhub install firecrawl-skills
```
Crawl returns incomplete results or stops early without explanation
Cause: The crawl hit Firecrawl's rate limit for your plan tier, exceeded the default page limit, or the target site blocked automated requests after detecting crawl behavior.
Solution: Check your Firecrawl dashboard for rate limit or quota warnings. Add an explicit page limit to your prompt (e.g., 'limit to 50 pages'). If the site is blocking crawls, try adding a crawl delay in OpenClaw config: `openclaw config set skills.firecrawl-skills.crawl_delay_ms 1000`. Some sites require respecting longer delays between requests.
```
# Set a crawl delay to avoid being blocked
openclaw config set skills.firecrawl-skills.crawl_delay_ms 2000
openclaw config set skills.firecrawl-skills.max_pages 50
```
Skill is installed but not responding to crawl prompts in OpenClaw chat
Cause: The skill was installed but OpenClaw has not reloaded its skill registry, or there is a skill version conflict with another installed skill.
Solution: Run `openclaw reload` to refresh the skill registry. If the issue persists, check for conflicts with `clawhub list --conflicts`. Try uninstalling and reinstalling: `clawhub uninstall firecrawl-skills && clawhub install firecrawl-skills`.
```
# Reload skill registry
openclaw reload

# Check for conflicts
clawhub list --conflicts

# Reinstall if needed
clawhub uninstall firecrawl-skills
clawhub install firecrawl-skills
```
Best practices
- Always map a site before crawling — use the map mode first to see the URL structure and page count, then decide which sections to actually scrape. This saves API credits and prevents unexpectedly large crawls.
- Set explicit page limits in your prompts for any crawl on an unfamiliar site — large sites can have thousands of pages and an uncapped crawl will exhaust your Firecrawl quota quickly.
- Store your FIRECRAWL_API_KEY via `openclaw config set` rather than in plain text .env files — the config store encrypts credentials at rest.
- Check the target site's robots.txt before running large crawls — Firecrawl respects robots.txt by default, but reviewing it first tells you which sections are explicitly off-limits and avoids wasting credits on disallowed paths.
- Use firecrawl-skills for known-site bulk extraction and firecrawl-search for discovery queries — choosing the right tool for each task keeps your Firecrawl usage efficient and responses faster.
- Configure a crawl delay for sites with aggressive rate limiting — `openclaw config set skills.firecrawl-skills.crawl_delay_ms 1500` adds a 1.5 second pause between page requests, which dramatically reduces the chance of being blocked.
- Test extraction logic on a single page before launching a multi-page crawl — scrape one representative page first to confirm the output format matches expectations, then scale up to the full site.
- Monitor your Firecrawl dashboard for usage and quota warnings — set up email alerts for when you approach plan limits, especially before running large batch crawls.
Alternatives
Firecrawl Search is better when you want to search for pages matching a query rather than crawling a known site — use it for discovery, use firecrawl-skills for systematic extraction from URLs you already know.
Deep Scraper focuses on recursive link-following without requiring an external API key, making it a good choice for simpler deep crawls when you do not need Firecrawl's AI-powered structured extraction.
Tavily Web Search is better for broad web searches across many domains rather than targeted extraction from specific sites — choose Tavily when you do not know which site has the data you need.
Web Search Plus aggregates results from multiple search engines and is better for research and discovery rather than structured data extraction from known websites.
Frequently asked questions
How do I install Firecrawl Skills in OpenClaw?
Run `clawhub install firecrawl-skills` in your terminal. After installation, configure your Firecrawl API key with `openclaw config set skills.firecrawl-skills.apikey YOUR_KEY`. Restart OpenClaw and the skill will be available in chat immediately.
What is the difference between firecrawl-skills and firecrawl-search in OpenClaw?
Firecrawl-skills provides the full crawling and mapping toolkit — it crawls multiple pages of a site, follows links recursively, maps site structure, and extracts bulk data. Firecrawl-search is optimized for single search queries where you describe what you are looking for and it returns relevant pages from across the web. Both use the same FIRECRAWL_API_KEY, but they are different skills installed separately.
OpenClaw Firecrawl Skills API key configuration — where does the key go?
Store your Firecrawl API key using the OpenClaw config command: `openclaw config set skills.firecrawl-skills.apikey YOUR_KEY`. Alternatively, add `FIRECRAWL_API_KEY=your-key` to the `.env` file in your OpenClaw data directory. Never put the key directly in a prompt or code file.
ClawHub install firecrawl-skills not working — how do I fix it?
First run `clawhub update` to refresh the registry index, then retry the install. If you get a 429 error, wait 60 seconds before retrying — ClawHub has per-IP rate limits that reset quickly. If the issue persists, check that your ClawHub CLI is up to date with `clawhub --version` and update if needed.
Does firecrawl-skills respect robots.txt?
Yes — Firecrawl respects robots.txt by default, so the skill will skip pages that the target site has marked as off-limits for crawlers. You can review a site's robots.txt manually at https://example.com/robots.txt before running large crawls to understand which sections are accessible.
Can I use Firecrawl Skills without a paid Firecrawl plan?
Yes — Firecrawl offers a free tier with a limited monthly page allowance, which is sufficient for testing and small projects. For bulk crawls across hundreds of pages, you will likely need a paid Firecrawl plan. Check your usage in the Firecrawl dashboard at app.firecrawl.dev.
How do I get help if my firecrawl-skills crawls are not returning expected data?
Start by testing a single-page scrape on a known page to verify the skill is working at all. If results look incomplete, try being more specific in your prompt about which data fields to extract. For complex multi-page extraction pipelines, the RapidDev team offers OpenClaw configuration consulting — reach out at rapiddev.ai for help with large-scale crawl setups.
Talk to an Expert
Our team has built 600+ apps. Get personalized help with your project.
Book a free consultation