About
This skill provides comprehensive guidance for building production-ready RAG architectures, enabling LLMs to access proprietary data and real-time information. It covers the entire pipeline from document chunking and embedding generation to advanced retrieval strategies like hybrid search, reranking, and contextual compression. Whether you're building a documentation assistant, a research tool, or a domain-specific Q&A system, this skill helps ensure your AI applications produce accurate, cited responses grounded in retrieved sources, minimizing hallucinations by leveraging industry-standard tools and implementation patterns.
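To make the pipeline concrete, below is a minimal retrieval sketch covering chunking, embedding, dense retrieval, and cross-encoder reranking. It assumes the sentence-transformers library; the model names, chunk sizes, and top-k values are illustrative assumptions, not part of the skill's prescribed configuration.

```python
# Minimal RAG retrieval sketch: chunk -> embed -> retrieve -> rerank.
# Model names, chunk sizes, and top-k values are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer, CrossEncoder

def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with overlap."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

# 1. Chunk and embed the corpus (bi-encoder for fast dense retrieval).
embedder = SentenceTransformer("all-MiniLM-L6-v2")
docs = ["...your proprietary documents..."]  # placeholder corpus
chunks = [c for d in docs for c in chunk(d)]
chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)

# 2. Dense retrieval: cosine similarity via dot product on normalized vectors.
query = "How do I rotate an API key?"  # example query
query_vec = embedder.encode([query], normalize_embeddings=True)[0]
scores = chunk_vecs @ query_vec
top_k = np.argsort(scores)[::-1][:10]

# 3. Rerank the candidates with a cross-encoder for higher precision.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
rerank_scores = reranker.predict([(query, chunks[i]) for i in top_k])
best = [chunks[i] for i in top_k[np.argsort(rerank_scores)[::-1][:3]]]

# 4. Pass the top passages to the LLM as grounded, citable context.
context = "\n\n".join(best)
```

In production this in-memory cosine search would typically be replaced by a vector database, and the dense scores combined with a keyword signal such as BM25 for hybrid search; the two-stage retrieve-then-rerank structure stays the same.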