About
This skill provides a comprehensive framework for building Retrieval-Augmented Generation (RAG) pipelines within Claude Code, enabling developers to ground LLM responses in domain-specific data. It covers the full technical stack: document chunking strategies, embedding generation, vector store management (with Pandas, ChromaDB, or FAISS), and the construction of conversational RAG chains using LangChain. It is useful for building accurate Q&A systems, reducing hallucinations, and letting AI models draw on up-to-date information not present in their training data.
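As a rough illustration of those stages, here is a minimal sketch of an ingest-and-query loop using ChromaDB's in-memory client with a naive fixed-size chunker. The collection name, chunk sizes, and sample documents are arbitrary placeholders, and the skill's actual pipelines (e.g. LangChain conversational chains) go well beyond this.

```python
# Minimal RAG sketch (assumes chromadb is installed; names and sizes are illustrative).
import chromadb


def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Naive fixed-size character chunking with overlap between consecutive chunks."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks


# In-memory vector store; ChromaDB embeds documents with its default embedding function.
client = chromadb.Client()
collection = client.create_collection(name="knowledge_base")  # hypothetical collection name

# Ingest: chunk each source document and store the chunks with unique ids.
documents = {
    "release_notes.md": "Version 2.1 adds offline mode and fixes the sync bug...",
    "faq.md": "To reset your password, open Settings and choose Account...",
}
for doc_id, text in documents.items():
    for i, chunk in enumerate(chunk_text(text)):
        collection.add(ids=[f"{doc_id}-{i}"], documents=[chunk], metadatas=[{"source": doc_id}])

# Retrieve: embed the query and pull the closest chunks to ground the LLM's answer.
results = collection.query(query_texts=["How do I reset my password?"], n_results=2)
for chunk, meta in zip(results["documents"][0], results["metadatas"][0]):
    print(meta["source"], "->", chunk[:80])
```

In a fuller pipeline, the retrieved chunks would be passed as context to the LLM (for example through a LangChain retriever and conversational chain), and the in-memory collection could be swapped for a persistent ChromaDB or FAISS index.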