Performs high-speed, read-only extraction of markdown content from documentation, blogs, and static websites.
The Content Crawler skill enables Claude to perform fast, read-only markdown extraction from web sources. It is optimized for rapid retrieval of static pages such as technical documentation and blogs, bypassing the overhead of full browser automation. By invoking the @just-every/crawl utility via Bash, it can fetch single pages or spider through sitemaps to produce structured JSON output, making it a useful tool for gathering context and reference material during development.
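The workflow can be sketched as follows. Note that the exact CLI flags and JSON schema of @just-every/crawl are assumptions for illustration here: the `--pages` option is hypothetical, and the output is assumed to be a list of `{url, markdown}` records.

```python
import json
import shlex

def build_crawl_command(url: str, max_pages: int = 20) -> list[str]:
    """Build a Bash invocation for the crawler.

    The `--pages` flag is a hypothetical option used for illustration;
    check `npx @just-every/crawl --help` for the real interface.
    """
    return ["npx", "@just-every/crawl", url, "--pages", str(max_pages)]

# Assumed output shape: one JSON record per crawled page.
sample_output = json.dumps([
    {"url": "https://example.com/docs/intro", "markdown": "# Intro\n..."},
    {"url": "https://example.com/docs/api", "markdown": "# API\n..."},
])

def extract_markdown(raw_json: str) -> dict[str, str]:
    """Map each crawled URL to its extracted markdown."""
    return {page["url"]: page["markdown"] for page in json.loads(raw_json)}

cmd = build_crawl_command("https://example.com/docs")
print(shlex.join(cmd))
docs = extract_markdown(sample_output)
print(len(docs))  # 2
```

In practice the skill would run the printed command in Bash and feed its stdout to a parser like `extract_markdown`.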
Key Features
- Structured JSON output for site-wide crawls
- High-speed, read-only web extraction
- Low-latency markdown conversion
- Automated sitemap spidering (up to 20 pages)
- Automatic failure detection for dynamic content
Use Cases
- Fetching technical documentation for API integration context
- Extracting blog post content for research and summarization
- Generating a site map of static pages for content analysis