This skill provides a comprehensive framework for designing and refining prompts to maximize the performance and reliability of Claude and other LLMs. It offers standardized patterns for few-shot examples, multi-step reasoning traces, and reusable templates, while incorporating Anthropic's best practices for agentic behavior and context window management. Whether you are building sub-agents, automation hooks, or complex Git workflows, this skill ensures high-quality outputs by managing instruction hierarchy, error recovery, and token efficiency.
Key Features
1. Context window and token efficiency management
2. Few-shot and Chain-of-Thought reasoning implementations
3. Production-grade prompt template and variable systems
4. Persuasion-based agent communication principles
5. Systematic optimization and A/B testing frameworks
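To illustrate how the few-shot, Chain-of-Thought, and template features above might fit together, here is a minimal sketch of a reusable prompt template with variable substitution and a worked-reasoning example. All names here (`build_prompt`, the example shot, the system text) are illustrative assumptions, not this skill's actual API.

```python
# Hypothetical sketch: a prompt template combining few-shot examples,
# Chain-of-Thought reasoning traces, and variable substitution.
from string import Template

# One few-shot example with an explicit reasoning trace (CoT).
FEW_SHOT_COT = [
    {
        "question": "A train travels 60 km in 1.5 hours. What is its speed?",
        "reasoning": "Speed = distance / time = 60 / 1.5 = 40.",
        "answer": "40 km/h",
    },
]

# Template variables ($examples, $question) keep the prompt reusable.
PROMPT = Template(
    "You are a careful assistant. Think step by step before answering.\n\n"
    "$examples\n"
    "Question: $question\nReasoning:"
)

def build_prompt(question: str) -> str:
    """Render the template with the few-shot CoT examples filled in."""
    shots = "\n".join(
        f"Question: {s['question']}\n"
        f"Reasoning: {s['reasoning']}\n"
        f"Answer: {s['answer']}\n"
        for s in FEW_SHOT_COT
    )
    return PROMPT.substitute(examples=shots, question=question)

print(build_prompt("A car travels 150 km in 2 hours. What is its speed?"))
```

Keeping examples and instructions in data structures rather than hard-coded strings is what makes systematic A/B testing of prompt variants practical: swapping a different example list or system preamble produces a new variant without touching the rendering code.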