Ensures skill and Model Context Protocol (MCP) implementations align with their manifests by performing Codex-powered semantic comparisons of code against descriptions, preconditions, and effects.
Skill Validator provides a rigorous automated framework for keeping what an AI capability promises in parity with what its code actually delivers. Using Codex for deep semantic analysis, it audits implementations to detect feature drift, verifies that preconditions are explicitly checked in source files, and confirms that all claimed side effects are realized. It helps developers and quality assurance teams maintain high-quality, trustworthy AI tool directories by surfacing over-promised capabilities, missing functionality, and undocumented features before release.
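The manifests being audited can be pictured as small structured documents declaring a description, preconditions, and effects. The sketch below is illustrative only: the field names, the inline source, and the `precondition_checked` heuristic are assumptions, not the tool's actual schema or analysis (a real validator would use semantic comparison rather than keyword matching).

```python
import re

# Hypothetical manifest; field names are illustrative, not the tool's schema.
manifest = {
    "name": "resize_image",
    "description": "Resizes an image file on disk.",
    "preconditions": ["path must exist", "width > 0"],
    "effects": ["writes resized image to output path"],
}

# Source of the implementation being audited (inlined for the example).
source = '''
import os

def resize_image(path, width):
    if not os.path.exists(path):   # guards "path must exist"
        raise FileNotFoundError(path)
    if width <= 0:                 # guards "width > 0"
        raise ValueError("width must be positive")
'''

def precondition_checked(precond: str, src: str) -> bool:
    """Crude textual heuristic: does any significant word of the
    precondition appear in the source? A real validator would do
    semantic analysis instead of keyword matching."""
    keywords = [w for w in re.findall(r"\w+", precond) if len(w) > 2]
    return any(k in src for k in keywords)

unchecked = [p for p in manifest["preconditions"]
             if not precondition_checked(p, source)]
print("unchecked preconditions:", unchecked)  # → unchecked preconditions: []
```

Here both declared preconditions have corresponding guards in the source, so nothing is flagged; removing either `if` statement would surface it as unchecked.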
Key Features
- Automated coverage scoring and detailed drift detection reporting
- Effect verification to confirm the implementation produces all claimed results
- Precondition validation to ensure requirements are explicitly enforced in code
- Semantic comparison using Codex to align code logic with manifest descriptions
- API surface analysis to validate exported functions against definitions
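API surface analysis of the kind listed above can be sketched with Python's `ast` module: collect the public functions a module actually defines and diff them against the names the manifest claims. The manifest shape and function names here are hypothetical, and this is a minimal structural check, not the tool's implementation.

```python
import ast

# Hypothetical set of function names a manifest claims to export.
claimed = {"validate", "score_coverage"}

source = '''
def validate(manifest, code):
    pass

def score_coverage(report):
    pass

def _helper(x):
    pass
'''

tree = ast.parse(source)
# Top-level function definitions, excluding private (underscore) names.
exported = {
    node.name
    for node in tree.body
    if isinstance(node, ast.FunctionDef) and not node.name.startswith("_")
}

missing = claimed - exported        # promised but not implemented
undocumented = exported - claimed   # implemented but not in the manifest
print("missing:", missing, "undocumented:", undocumented)
```

With the sample source, both sets come back empty; deleting `score_coverage` from the source would report it as missing, while adding an undeclared public function would report it as undocumented.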
Use Cases
- Auditing skill implementations after updates to prevent description-implementation drift
- Verifying the accuracy of Model Context Protocol (MCP) resources before release
- Integrating automated quality gates into CI/CD pipelines for AI capability development
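A CI/CD quality gate over a coverage score typically reduces to a threshold check that turns into a pass/fail exit code. The report shape and threshold below are assumptions for illustration, not the tool's real output format.

```python
def gate(report: dict, threshold: float = 0.9) -> bool:
    """Pass only if the fraction of claimed behaviors verified
    in the code meets the threshold."""
    claimed = report["claimed"]
    verified = report["verified"]
    score = len(verified) / len(claimed) if claimed else 1.0
    return score >= threshold

# Example drift report (illustrative data, not real tool output):
# three claimed behaviors, only two verified in the implementation.
report = {
    "claimed": ["resize", "rotate", "crop"],
    "verified": ["resize", "rotate"],
}

ok = gate(report)
print("coverage gate:", "pass" if ok else "fail")  # → coverage gate: fail
# In a pipeline, the result becomes the process exit code,
# e.g. sys.exit(0 if ok else 1), so the build fails on drift.
```

A coverage of 2/3 falls below the 0.9 bar, so the gate fails and the pipeline would block the release until the missing `crop` behavior is implemented or removed from the manifest.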