We've created a service wrapper for the Firecrawl API that provides two main functionalities: web scraping and web search. Here's a breakdown of how it works:
- **Service Factory**
  - Creates a service instance from an API key
  - Returns an object with two methods: `getScrapeData` and `getSearchData`
- **Web Scraping (`getScrapeData`)**
  - Fetches and extracts content from a single webpage
  - Returns structured data including page content and metadata
  - Endpoint: `https://api.firecrawl.dev/v1/scrape`
- **Web Search (`getSearchData`)**
  - Searches the web for data based on the conversation
  - Endpoint: `https://api.firecrawl.dev/v1/search`
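A minimal sketch of what such a service factory could look like, assuming the plugin POSTs JSON to the two endpoints above with a bearer token. The request body shapes (`{ url }`, `{ query }`) and the `createFirecrawlService` name are illustrative assumptions, not the plugin's exact source:

```typescript
// Sketch of the service factory: both methods share one JSON POST helper.
// Request body shapes ({ url }, { query }) are assumptions for illustration.
const BASE_URL = "https://api.firecrawl.dev/v1";

interface FirecrawlService {
  getScrapeData(url: string): Promise<unknown>;
  getSearchData(query: string): Promise<unknown>;
}

function createFirecrawlService(apiKey: string): FirecrawlService {
  if (!apiKey) {
    throw new Error("FIRECRAWL_API_KEY is required");
  }

  // Shared helper: POST JSON to the API with the bearer token over HTTPS.
  async function post(path: string, body: object): Promise<unknown> {
    const response = await fetch(`${BASE_URL}${path}`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify(body),
    });
    if (!response.ok) {
      throw new Error(`Firecrawl request failed with HTTP ${response.status}`);
    }
    return response.json();
  }

  return {
    // Scrape a single page and return its content and metadata.
    getScrapeData: (url: string) => post("/scrape", { url }),
    // Search the web for data relevant to the conversation.
    getSearchData: (query: string) => post("/search", { query }),
  };
}
```

A caller would then create the service once at startup, e.g. `const service = createFirecrawlService(process.env.FIRECRAWL_API_KEY ?? "")`.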
The plugin requires minimal configuration. In your character file, simply add:
```json
{
  "FIRECRAWL_API_KEY": "your-api-key-here"
}
```
The plugin recognizes various ways users might request web scraping:
```
// Single URL request
"Can you scrape the content from https://example.com?"
"Get the data from www.example.com/page"

// Two-step interaction
User: "I need to scrape some website data."
Agent: "I can help you scrape website data. Please share the URL you'd like me to process."
User: "example.com/products"
```
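One way such messages could be turned into a scrapeable URL is to extract the first URL-like token, default it to HTTPS, and validate it before calling the API. The helper below is a sketch under those assumptions; its name and regex are illustrative, not taken from the plugin source:

```typescript
// Hypothetical helper: pull a URL out of a free-form user message and
// normalize/validate it before it is handed to the scrape endpoint.
function extractUrl(message: string): string | null {
  // Match http(s) URLs, www-prefixed hosts, or bare domains with a path.
  const match = message.match(
    /(https?:\/\/\S+|www\.\S+|[a-z0-9-]+\.[a-z]{2,}\/\S*)/i,
  );
  if (!match) return null;

  // Trim trailing punctuation such as the "?" in "...example.com?".
  let candidate = match[0].replace(/[?.,!]+$/, "");

  // Default to HTTPS when the user omits the scheme.
  if (!/^https?:\/\//i.test(candidate)) {
    candidate = `https://${candidate}`;
  }

  // Validate with the URL constructor before processing.
  try {
    return new URL(candidate).toString();
  } catch {
    return null;
  }
}
```

With this approach, a message with no URL-like token (such as "I need to scrape some website data.") yields `null`, which is what would trigger the agent's follow-up question asking for a URL.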
The plugin handles different search request patterns:

```
// Direct search
"Find the latest news about SpaceX launches"
"Can you find details about the iPhone 16 release?"
```
The plugin automatically:
- Validates URLs before processing
- Handles both direct and conversational requests
- Provides appropriate feedback during the scraping/search process
- Returns structured data from the target website
The plugin includes built-in error handling for common scenarios:
- Invalid or missing URLs
- API authentication issues
- Network failures
- Malformed responses
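A sketch of how those error scenarios might be surfaced around a scrape call. The result shape, the `safeScrape`/`describeFailure` names, and the message strings are illustrative assumptions, not the plugin's actual error handling:

```typescript
// Illustrative result type: either structured data or a user-facing error.
type ScrapeResult =
  | { success: true; data: unknown }
  | { success: false; error: string };

// Map the failure modes listed above to user-facing messages.
// A null status stands for a network failure (the request never completed).
function describeFailure(status: number | null): string {
  if (status === null) return "Network failure: could not reach the Firecrawl API.";
  if (status === 401 || status === 403) return "API authentication failed: check FIRECRAWL_API_KEY.";
  if (status === 400) return "Invalid request: the URL may be missing or malformed.";
  return `Unexpected Firecrawl response (HTTP ${status}).`;
}

// Wrap a scrape call so that missing URLs and thrown errors become
// structured failures instead of crashing the agent.
async function safeScrape(
  service: { getScrapeData(url: string): Promise<unknown> },
  url: string | null,
): Promise<ScrapeResult> {
  if (!url) {
    return { success: false, error: "No valid URL was provided." };
  }
  try {
    const data = await service.getScrapeData(url);
    return { success: true, data };
  } catch (err) {
    // fetch rejects with a TypeError on network failure; HTTP errors are
    // assumed to be thrown by the service wrapper itself.
    return { success: false, error: String(err) };
  }
}
```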
The plugin provides two main actions:
- `FIRECRAWL_GET_SCRAPED_DATA`: single-page content extraction
- `WEB_SEARCH`: web search for any data
- API keys should be kept secure and never shared
- All requests are made over HTTPS
- Input validation is performed on all URLs before processing