Discover Agent Skills for web scraping & data collection. Browse 17 skills for Claude, ChatGPT & Codex.
Fetches and ranks WeChat articles based on research interests with seamless Obsidian integration.
Standardizes research by creating traceable YAML evidence objects with semantic IDs and confidence scores to ensure data-driven decision making.
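The evidence-object idea above can be sketched as a small record serialized to YAML. The field names (`id`, `claim`, `source`, `confidence`) and the hand-rolled serializer are assumptions for illustration, not the skill's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    """One traceable research finding. Field names are illustrative."""
    id: str            # semantic ID, e.g. "ev-churn-q3" (assumed convention)
    claim: str         # the finding itself
    source: str        # where the claim came from
    confidence: float  # score in [0.0, 1.0]

    def to_yaml(self) -> str:
        # Minimal hand-rolled YAML so the sketch stays stdlib-only;
        # a real implementation would likely use a YAML library.
        return (
            f"id: {self.id}\n"
            f"claim: {self.claim}\n"
            f"source: {self.source}\n"
            f"confidence: {self.confidence}\n"
        )

ev = Evidence(
    id="ev-churn-q3",
    claim="Churn rose 4% quarter over quarter",
    source="https://example.com/report",
    confidence=0.8,
)
print(ev.to_yaml())
```

A semantic ID like `ev-churn-q3` lets later documents cite the evidence object directly, which is what makes the chain of reasoning traceable.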
Conducts exhaustive, multi-perspective research using parallel agent workers and iterative gap detection.
Conducts systematic web investigations by deploying parallel subagents to gather, verify, and synthesize information from multiple sources.
Conducts multi-threaded, deep-dive web investigations by orchestrating parallel subagents to gather and synthesize complex information.
Empowers Claude with semantic, neural search capabilities and specialized web filtering using the Exa API.
Performs structured web searches via DuckDuckGo to retrieve real-time documentation, library information, and technical solutions.
Downloads and converts YouTube videos into high-quality audio files using yt-dlp and ffmpeg.
Conducts structured, multi-threaded web research by delegating sub-tasks to specialized agents and synthesizing findings into comprehensive reports.
Downloads YouTube videos and audio with customizable quality, format, and output settings using yt-dlp.

Scrapes and analyzes competitor advertisements from major ad libraries to uncover successful messaging, creative patterns, and market positioning.
Downloads YouTube videos and audio with customizable quality and format settings directly within Claude Code.
Extracts clean markdown and structured data from any website, including JavaScript-heavy single-page applications.
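For static pages, the extraction step can be sketched with the standard library alone; note that JavaScript-heavy single-page applications need a headless browser to render first, which this minimal sketch does not cover:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping <script> and <style> bodies."""
    def __init__(self):
        super().__init__()
        self._skip = 0      # depth inside script/style tags
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

def html_to_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.chunks)

sample = "<html><script>var x=1;</script><h1>Title</h1><p>Body text.</p></html>"
print(html_to_text(sample))  # → Title\nBody text.
```

Production scrapers layer markdown conversion and boilerplate removal on top of this kind of pass; the sketch shows only the core text-vs-markup separation.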
Extracts and cleans transcripts, subtitles, and captions from YouTube videos using yt-dlp and OpenAI Whisper for AI-powered transcription.
Downloads and processes YouTube video transcripts in bulk from CSV files or individual URLs into structured Markdown format.
Searches the Wikidata knowledge base to retrieve structured entity details and universal external identifiers.
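Wikidata exposes entity search through the MediaWiki API's `wbsearchentities` action. A minimal sketch of building such a query follows; the helper name is ours, and the caller is assumed to perform the HTTP GET and parse the JSON response:

```python
from urllib.parse import urlencode

WIKIDATA_API = "https://www.wikidata.org/w/api.php"

def build_entity_search_url(term: str, language: str = "en") -> str:
    """Build a wbsearchentities query URL for the given search term."""
    params = {
        "action": "wbsearchentities",
        "search": term,
        "language": language,
        "format": "json",
    }
    return f"{WIKIDATA_API}?{urlencode(params)}"

print(build_entity_search_url("Douglas Adams"))
```

The JSON result lists candidate entities with their Q-identifiers, from which external identifiers (VIAF, ISNI, and so on) can be retrieved via follow-up entity lookups.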
Performs real-time web searches using Tavily's LLM-optimized engine to retrieve filtered snippets, scores, and metadata.
Extracts clean markdown or text content from specific URLs using the Tavily API without requiring custom scraping scripts.
Extracts clean, plain text from EPUB, MOBI, and PDF files for analysis and data processing.
Generates and validates robust regex-based HTML parsing rules to extract article titles, links, and metadata from webpages.
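One such parsing rule can be sketched as a named-group regex; the pattern below is an illustrative example, not a rule the skill actually generates, and regex HTML parsing is brittle enough that the validation step the entry mentions is essential:

```python
import re

# Illustrative rule: capture the href and inner text of simple <a> tags.
ARTICLE_LINK = re.compile(
    r'<a\s+[^>]*href="(?P<url>[^"]+)"[^>]*>(?P<title>[^<]+)</a>',
    re.IGNORECASE,
)

def extract_articles(html: str):
    """Return (title, url) pairs for every matching link in the page."""
    return [(m["title"].strip(), m["url"]) for m in ARTICLE_LINK.finditer(html)]

sample = '<li><a href="/posts/1" class="hed">First Post</a></li>'
print(extract_articles(sample))  # → [('First Post', '/posts/1')]
```

Validating a candidate rule typically means running it against several saved pages and checking that the extracted titles and links match a hand-labeled sample.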
Researches Reddit communities, threads, and wikis to gather community insights and generate structured research reports.