About
This skill provides a comprehensive framework for implementing Retrieval-Augmented Generation (RAG) architectures in LLM applications. It guides developers through the entire pipeline: document ingestion, chunking strategies, embedding generation, and integration with vector databases such as Pinecone and Chroma. Retrieval patterns such as hybrid search, reranking, and contextual compression help the resulting systems produce accurate, source-cited responses and reduce hallucinations in domain-specific tasks.
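The pipeline stages named above can be sketched end to end. The following is a minimal, dependency-free illustration, not the skill's actual implementation: it uses a toy bag-of-words counter as a stand-in for a real embedding model, a plain keyword-overlap score as a stand-in for BM25, and reciprocal rank fusion to combine the two rankings into a hybrid search. All function names here are hypothetical.

```python
import math
import re
from collections import Counter


def chunk_text(text: str, chunk_size: int = 50, overlap: int = 10) -> list[str]:
    """Split text into word-based chunks with a sliding-window overlap."""
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks


def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' -- a stand-in for a real embedding model."""
    return Counter(re.findall(r"[a-z]+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def keyword_score(query: str, chunk: str) -> float:
    """Fraction of query terms present in the chunk -- a stand-in for BM25."""
    q = set(re.findall(r"[a-z]+", query.lower()))
    c = set(re.findall(r"[a-z]+", chunk.lower()))
    return len(q & c) / len(q) if q else 0.0


def hybrid_retrieve(query: str, chunks: list[str], k: int = 2, rrf_k: int = 60) -> list[str]:
    """Hybrid search: fuse dense and keyword rankings via reciprocal rank fusion."""
    q_vec = embed(query)
    dense = sorted(range(len(chunks)),
                   key=lambda i: cosine(q_vec, embed(chunks[i])), reverse=True)
    sparse = sorted(range(len(chunks)),
                    key=lambda i: keyword_score(query, chunks[i]), reverse=True)
    scores: dict[int, float] = {}
    for ranking in (dense, sparse):
        for rank, i in enumerate(ranking):
            scores[i] = scores.get(i, 0.0) + 1.0 / (rrf_k + rank + 1)
    top = sorted(scores, key=lambda i: scores[i], reverse=True)[:k]
    return [chunks[i] for i in top]
```

In a real deployment, `embed` would call an embedding model, the chunk vectors would live in a store like Pinecone or Chroma, and a cross-encoder reranker would typically rescore the fused top-k before generation; the fusion logic itself stays the same.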