8 changes: 8 additions & 0 deletions docs.json
@@ -188,6 +188,14 @@
}
]
},
{
"group": "Partnerships",
"pages": [
"partnerships/n8n",
"partnerships/lovable",
"partnerships/openrouter"
]
},
{
"group": "Dashboard",
"pages": [
73 changes: 73 additions & 0 deletions partnerships/lovable.mdx
@@ -0,0 +1,73 @@
---
title: "Firecrawl + Lovable"
description: "Build web data apps without writing code. Firecrawl is integrated into Lovable so you can create apps that scrape, search, and interact with live web data by describing what you want."
og:title: "Firecrawl + Lovable Partnership | Firecrawl"
og:description: "Build web data apps without writing code using Firecrawl and Lovable."
---

Firecrawl turns any website into clean, LLM-ready data. [Lovable](https://lovable.dev) is an AI-powered app builder that turns ideas into working full-stack apps without writing code. Together, anyone can build apps that scrape, search, and interact with live web data — just by describing what they want in plain English.

Firecrawl is integrated directly into Lovable's connector system. Connect once and start prompting — Lovable generates the frontend, backend (on Supabase Edge Functions), and Firecrawl integration automatically.

## What you can build

Prompt Lovable to create apps that use Firecrawl to:

- **Scrape websites** — Extract content from any webpage in clean, structured formats
- **Search the web** — Perform searches and pull relevant content from results
- **Map entire sites** — Automatically discover all URLs across a domain
- **Crawl websites** — Recursively gather data across entire websites

Example app ideas:

- **Brand analyzers** that pull branding, content, and SEO insights from any URL
- **Trend trackers** that research topics across the web and return structured insights
- **Real-time AI assistants** that search the web and answer queries with live information
- **Job aggregators** that scrape postings from job boards based on custom keywords
- **Competitive intelligence dashboards** that monitor competitor websites for changes
- **Lead generation apps** that extract contact information from company websites

## How to connect

1. Turn on [Lovable Cloud](https://lovable.dev/cloud)
2. Connect Firecrawl — free tokens are included, with no separate sign-up or API key management needed
3. Start prompting — Lovable runs backend calls on Supabase Edge Functions and connects to Firecrawl directly

When you prompt Lovable to create an app that needs web data, it automatically generates the interface, integrates Firecrawl API calls, structures the data, and creates the full working application.

## Tips for building

### Prompt structure

For best results, structure your prompts with:

1. **Context** — What is this thing?
2. **User flow** — What will users see and do?
3. **Technical hints** — Which Firecrawl endpoint to use, what data format you need
4. **Design guidance** — Colors, style, feel
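
As a sketch, a prompt following this four-part structure might read like the following (the brand-analyzer app and its details are invented for illustration):

```text
Build a brand analyzer app.

Users paste a company URL, click Analyze, and see a report with the
company's tagline, color palette, and top SEO keywords.

Use Firecrawl's scrape endpoint with markdown output and extract the
SEO fields into JSON.

Clean dashboard layout, dark theme, with the accent color picked up
from the analyzed brand.
```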

### Choosing the right endpoint

- **Scrape endpoint** — Fast. Grabs page content in whatever format you want. Best for known URLs and quick page grabs.
- **Agent endpoint** — Slower but smarter. Does autonomous research across the web, makes judgments, cross-references sources. Best when the data could be anywhere.

### Handling agent timeouts

Agent requests can take a few minutes. Two approaches:

- **Webhooks** (production) — Firecrawl notifies your app when research is done
- **User-triggered polling** (prototypes) — Store the agent job ID and let users click a button to fetch results when ready
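
The user-triggered approach can be sketched as a single status check wired to a button click. This is a minimal illustration, not the documented Firecrawl API: the status URL and response fields (`status`, `data`) are assumptions, the API key header is omitted, and in a Lovable app this logic would live in a Supabase Edge Function.

```python
import json
import urllib.request

# Assumed status endpoint shape for illustration; check the Firecrawl
# API reference for the real agent job URL and auth headers.
AGENT_STATUS_URL = "https://api.firecrawl.dev/v2/agent/{job_id}"

def check_agent_job(job_id: str, fetch=None) -> dict:
    """Fetch an agent job's status once; call this when the user clicks
    "Fetch results". `fetch` is injectable for testing and defaults to a
    plain HTTP GET."""
    if fetch is None:
        def fetch(url):
            with urllib.request.urlopen(url) as resp:
                return json.load(resp)
    payload = fetch(AGENT_STATUS_URL.format(job_id=job_id))
    if payload.get("status") == "completed":
        return {"ready": True, "data": payload.get("data")}
    return {"ready": False, "data": None}
```

Storing only the job ID between clicks keeps the prototype stateless; the webhook approach replaces the button with a callback Firecrawl sends when the job finishes.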

### Iterate over perfecting

Don't overthink your first prompt. Build something basic, test it, see what's missing, and refine. The iteration loop is where the real value emerges.

## Billing and access

Create your Firecrawl account and API key directly inside Lovable — no context switching required. Billing is managed through Firecrawl. Learn more about pricing at [firecrawl.dev/pricing](https://firecrawl.dev/pricing).

## Learn more

- [Building AI-Powered Apps with Firecrawl and Lovable](https://www.firecrawl.dev/blog/firecrawl-lovable-tutorial) — Step-by-step tutorial building a brand analyzer and trend tracker
- [Firecrawl + Lovable Announcement](https://www.firecrawl.dev/blog/firecrawl-lovable-integration) — Original partnership announcement
- [Lovable](https://lovable.dev) — Sign up and start building
100 changes: 100 additions & 0 deletions partnerships/n8n.mdx
@@ -0,0 +1,100 @@
---
title: "Firecrawl + n8n"
description: "Firecrawl is natively integrated into n8n Cloud. Connect in one step, no API keys needed, and get a free Hobby plan plus 100,000 one-time promotional credits."
og:title: "Firecrawl + n8n Partnership | Firecrawl"
og:description: "Firecrawl is natively integrated into n8n Cloud. Connect in one step, no API keys needed, and get a free Hobby plan plus 100,000 one-time promotional credits."
---

Firecrawl turns any website into clean, LLM-ready data. n8n is the most popular AI workflow builder. Together, you get a complete pipeline: Firecrawl handles the data extraction, n8n handles the orchestration, logic, and connections to everything else in your stack.

Firecrawl is a **native integration on n8n Cloud**. Install the Firecrawl node, click Connect, and you're ready to build — no separate sign-up, no API keys to track down.

## What you can build

With Firecrawl connected, your n8n workflows can:

- **Scrape websites** — Extract content from any webpage in clean, structured formats
- **Search the web** — Perform searches and pull relevant content from results
- **Interact with pages** — Automate browser actions after scraping
- **Map entire sites** — Automatically discover all URLs across a domain
- **Crawl websites** — Recursively gather data across entire websites

The Firecrawl node exposes the full API surface. You can also use Firecrawl as a tool for n8n's AI Agent nodes, letting agents decide what to fetch and how to interpret it.

## Launch offer

When you connect Firecrawl through n8n Cloud, you get:

- A **free Hobby plan** with 3,000 credits per month
- **5 concurrent browsers**
- **100,000 one-time promotional credits** to get started

Credit costs:

- Scraping: 1 credit per page
- Search: 2 credits per 10 results
- Browser interaction: 2 credits per minute
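
These rates are easy to sanity-check with a small calculator. The sketch below uses the rates listed above; the example workflow and the assumption that partial batches of 10 search results are rounded up are illustrative, not documented billing behavior.

```python
import math

# Credit rates from the list above.
SCRAPE_PER_PAGE = 1      # 1 credit per scraped page
SEARCH_PER_10 = 2        # 2 credits per 10 search results
BROWSER_PER_MINUTE = 2   # 2 credits per minute of browser interaction

def workflow_credits(pages: int, search_results: int, browser_minutes: int) -> int:
    """Estimate credits for one workflow run.
    Assumes partial search batches of 10 are rounded up."""
    search_batches = math.ceil(search_results / 10)
    return (pages * SCRAPE_PER_PAGE
            + search_batches * SEARCH_PER_10
            + browser_minutes * BROWSER_PER_MINUTE)

# Example: 50 pages scraped, 30 search results, 5 browser minutes
# per run costs 50*1 + 3*2 + 5*2 = 66 credits.
```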

The 100,000 promotional credits are a one-time allotment and do not reset monthly. You can upgrade anytime from the [Firecrawl pricing page](https://firecrawl.dev/pricing) — n8n is not involved in billing. Upgrades take effect immediately with prorated billing.

## How to connect

### On n8n Cloud

1. In your workflow, open the Nodes Panel and search for **Firecrawl**
2. Add the node and click **Connect to Firecrawl**
3. Enter your email — n8n creates your Firecrawl account automatically
4. Done. Your free Hobby plan and promotional credits are active

### On self-hosted n8n

An admin needs to enable community nodes in the Admin Panel and install the Firecrawl node (`n8n-nodes-firecrawl`). Once installed, anyone on the instance can connect by adding the node to a workflow and entering their Firecrawl API key from [firecrawl.dev/app](https://www.firecrawl.dev/app).

<Note>
If you already have a Firecrawl account, a new team linked to n8n will be created and the promotional credits will be applied there.
</Note>

## Official starter templates

<CardGroup cols={2}>
<Card title="Pinecone RAG Pipeline" icon="tree" href="https://n8n.io/workflows/13964-scrape-and-ingest-web-pages-into-a-pinecone-rag-stack-with-firecrawl-and-openai/">
Scrape pages into vector embeddings and store in Pinecone for RAG-powered knowledge bases.
</Card>

<Card title="Supabase pgvector Pipeline" icon="database" href="https://n8n.io/workflows/13911-scrape-and-ingest-web-content-into-supabase-pgvector-with-firecrawl/">
Same ingestion pattern with Supabase — open-source, self-hostable, with built-in deduplication.
</Card>

<Card title="Lead Enrichment" icon="building" href="https://n8n.io/workflows/13910-enrich-company-leads-with-firecrawl-openrouter-ai-and-supabase/">
Point at any company website and get structured business signals: industry, pricing model, funding stage, tech stack, and hiring status.
</Card>

<Card title="Browse All Templates" icon="magnifying-glass" href="https://n8n.io/workflows?search=firecrawl">
Explore community-created Firecrawl workflows on n8n
</Card>
</CardGroup>

## FAQ

<AccordionGroup>
<Accordion title="Do I need a Firecrawl account before connecting through n8n?">
No. When you install the Firecrawl node on n8n Cloud and click Connect, n8n creates your Firecrawl account automatically using your email address. If you already have a Firecrawl account, a new team linked to n8n will be created and the promotional credits will be applied there.
</Accordion>

<Accordion title="How long do the 100,000 promotional credits last?">
The 100,000 credits are a one-time allotment — they don't reset monthly. Once used, you can subscribe to the Hobby plan ($19/mo) for 3,000 credits every month, or upgrade to a higher tier for more volume.
</Accordion>

<Accordion title="Does the Firecrawl node support /agent?">
The native Firecrawl node covers scrape, crawl, search, map, and interact operations. For the /agent endpoint, use an HTTP Request node pointed at the Firecrawl API directly — this works in both Cloud and self-hosted environments.
</Accordion>

<Accordion title="Where can I track my Firecrawl usage?">
    Head to [firecrawl.dev/app](https://www.firecrawl.dev/app) to see credit usage and manage API keys. You can also check usage programmatically via the Credit Usage API.
</Accordion>
</AccordionGroup>

## Learn more

- [Step-by-step n8n integration guide](/developer-guides/workflow-automation/n8n) — Full tutorial for building Firecrawl workflows in n8n
- [8 n8n Web Scraping Templates](https://www.firecrawl.dev/blog/n8n-web-scraping-workflow-templates) — Production-ready workflow patterns for market intelligence, lead generation, and more
- [n8n Documentation](https://docs.n8n.io/) — Official n8n platform docs
68 changes: 68 additions & 0 deletions partnerships/openrouter.mdx
@@ -0,0 +1,68 @@
---
title: "Firecrawl + OpenRouter"
description: "Combine Firecrawl's web scraping with OpenRouter's unified AI model access for intelligent data enrichment workflows."
og:title: "Firecrawl + OpenRouter | Firecrawl"
og:description: "Combine Firecrawl's web scraping with OpenRouter's unified AI model access for intelligent data enrichment workflows."
---

[OpenRouter](https://openrouter.ai/) provides a single API that gives you access to hundreds of AI models — from OpenAI and Anthropic to open-source models like Llama and Mistral. Combined with Firecrawl, you can build workflows that scrape web data and process it with any AI model through one integration point.

## Lead enrichment with Firecrawl, OpenRouter, and Supabase

The primary use case for this combination is the **lead enrichment workflow**, available as an official Firecrawl + n8n starter template. It demonstrates how Firecrawl, OpenRouter, and Supabase work together to turn any company URL into structured business intelligence.

### How it works

1. **Firecrawl scrapes** the target company website, returning clean, structured content
2. **OpenRouter AI processes** the scraped data using your choice of model, extracting structured business signals
3. **Supabase stores** the enriched data for querying and analysis
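
In code-first terms, steps 1 and 2 map to two HTTP request payloads and step 3 to a database write. The sketch below builds the payloads only (no network calls): the endpoint paths in the comments reflect the public Firecrawl and OpenRouter REST APIs but may vary by version, and the model slug, prompt, and schema fields are illustrative assumptions.

```python
def build_scrape_request(company_url: str) -> dict:
    """Step 1: Firecrawl scrape payload.
    Sent to POST https://api.firecrawl.dev/v2/scrape (path may vary by API version)."""
    return {"url": company_url, "formats": ["markdown"]}

def build_enrichment_request(scraped_markdown: str,
                             model: str = "openai/gpt-4o-mini") -> dict:
    """Step 2: OpenRouter chat-completion payload.
    Sent to POST https://openrouter.ai/api/v1/chat/completions.
    The model slug and extraction prompt are illustrative."""
    prompt = (
        "From the company page below, return JSON with keys: "
        "industry, pricing_model, funding_stage, tech_stack, hiring_status.\n\n"
        + scraped_markdown
    )
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}
```

Because OpenRouter's API is OpenAI-compatible, swapping models is just a different `model` string in the same payload.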

### What you get back

From a single company URL, the workflow extracts:

- **Industry classification** and business category
- **Pricing model** and tier structure
- **Funding stage** and investment signals
- **Tech stack** indicators
- **Hiring status** and growth signals
- **Product positioning** and competitive landscape

### Why OpenRouter

OpenRouter lets you swap AI models without changing your workflow: start with a cost-effective model for bulk processing, switch to a more capable model for complex analysis, or A/B test different models to find the best fit, all through the same OpenAI-compatible API.

## Get started

<CardGroup cols={2}>
<Card title="n8n Template: Lead Enrichment" icon="building" href="https://n8n.io/workflows/13910-enrich-company-leads-with-firecrawl-openrouter-ai-and-supabase/">
Import the ready-to-use lead enrichment workflow into n8n
</Card>

<Card title="OpenRouter" icon="robot" href="https://openrouter.ai/">
Sign up for OpenRouter and get API access to hundreds of AI models
</Card>
</CardGroup>

## Building your own workflow

You can use the Firecrawl + OpenRouter pattern beyond lead enrichment:

- **Content analysis** — Scrape articles or blog posts with Firecrawl, then use OpenRouter to summarize, categorize, or extract sentiment
- **Market research** — Crawl competitor websites and use AI to identify trends, pricing strategies, or feature gaps
- **Data normalization** — Scrape product listings from multiple sites and use AI to standardize the data into a consistent schema
- **Knowledge base enrichment** — Scrape documentation or support pages and use AI to generate Q&A pairs, summaries, or categorizations

### Integration pattern

The general pattern works with any orchestration tool (n8n, Zapier, Make, or custom code):

1. Use Firecrawl's [scrape](/features/scrape), [crawl](/features/crawl), or [agent](/features/agent) endpoints to get web data
2. Pass the structured output to OpenRouter's API with your processing prompt
3. Store or route the enriched results to your destination
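
Whatever the orchestration tool, these three steps compose into one small function. The sketch below injects each step as a callable so the same shape works whether the steps are n8n nodes or custom code; all names are illustrative.

```python
from typing import Any, Callable

def enrich(url: str,
           scrape: Callable[[str], str],
           analyze: Callable[[str], dict],
           store: Callable[[dict], Any]) -> dict:
    """Scrape a URL, process the content with an LLM, store the result."""
    content = scrape(url)       # step 1: Firecrawl scrape/crawl/agent call
    record = analyze(content)   # step 2: OpenRouter processing prompt
    record["source_url"] = url  # keep provenance alongside the enrichment
    store(record)               # step 3: write to your destination
    return record
```

Keeping the steps as plain callables also makes the pipeline testable with stubs before wiring in live API calls.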

## Learn more

- [Firecrawl + n8n Partnership](/partnerships/n8n) — Native integration for building automation workflows
- [n8n Web Scraping Templates](https://www.firecrawl.dev/blog/n8n-web-scraping-workflow-templates) — Eight production-ready workflow patterns including the OpenRouter lead enrichment template
- [OpenRouter Documentation](https://openrouter.ai/docs) — API reference and model catalog