Verifies that AI process documentation and specialized skills are effective and resistant to rationalization by applying a test-driven development cycle to their creation.
This skill provides a rigorous framework for developing and validating AI behavior guidelines by treating process documentation like software code. By applying the Red-Green-Refactor cycle—running baseline tests without the skill, writing instructions to address failures, and iteratively plugging loopholes—it ensures that Claude adheres to discipline-enforcing rules even under high-pressure scenarios involving time constraints or sunk costs. It is particularly valuable for creating 'bulletproof' skills that prevent common AI rationalizations and ensure long-term compliance with complex workflows.
Key Features
01Red-Green-Refactor cycle for process documentation
02Rationalization capture and explicit negation strategies
03Bulletproofing checklists for verifying skill reliability
040 GitHub stars
05Pressure scenario design with multiple conflicting constraints
06Meta-testing framework for identifying documentation ambiguity
Use Cases
01Validating new coding standards to ensure they aren't ignored during tight deadlines
02Hardening internal process guidelines against common AI excuses and 'spirit vs. letter' arguments
03Creating discipline-enforcing skills like TDD where the AI might be tempted to cut corners