TDD-style testing methodology for skills using fresh subagent instances to prevent priming bias and validate skill effectiveness.

Triggers: test skill, validate skill, skill testing, subagent testing, fresh instance testing, TDD for skills, skill validation

Use when: validating skill improvements, testing skill effectiveness, preventing priming bias, measuring skill impact on behavior

DO NOT use when: implementing skills (use skill-authoring instead), creating hooks (use hook-authoring instead)
Rating: 6.6 · Installs: 0 · Category: Testing & Quality
Excellent skill addressing a sophisticated testing challenge. The description clearly articulates when and why to use fresh subagent instances to prevent priming bias in skill validation. The TDD-style three-phase methodology (baseline, with-skill, rationalization) is well structured and practical. SKILL.md provides a clear overview with appropriate references to detailed modules. The skill tackles a genuinely novel problem (testing AI skills in isolation) that would be difficult and token-intensive for a CLI agent to reason through independently. Strong success criteria with quantifiable metrics (≥50% improvement, ≥80% rationalization defense). Minor room for improvement: SKILL.md could include more concrete examples directly, though the referenced modules presumably contain these details.
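To make the quantifiable success criteria concrete, here is a minimal sketch of how the two thresholds mentioned above (≥50% improvement over baseline, ≥80% rationalization defense) might be computed. The function names and data shapes are illustrative assumptions, not part of the skill itself:

```python
def improvement(baseline_pass: int, with_skill_pass: int, trials: int) -> float:
    """Relative improvement of the with-skill pass rate over the baseline rate."""
    base_rate = baseline_pass / trials
    skill_rate = with_skill_pass / trials
    if base_rate == 0:
        # Any improvement over a zero baseline is treated as unbounded.
        return float("inf") if skill_rate > 0 else 0.0
    return (skill_rate - base_rate) / base_rate

def meets_criteria(baseline_pass: int, with_skill_pass: int,
                   defended: int, challenges: int, trials: int) -> bool:
    """Apply the stated thresholds: >=50% relative improvement and
    >=80% of rationalization challenges successfully defended."""
    return (improvement(baseline_pass, with_skill_pass, trials) >= 0.5
            and defended / challenges >= 0.8)

# Example: 4/10 baseline passes vs 8/10 with the skill (100% improvement),
# and 9/10 rationalization challenges defended (90%).
print(meets_criteria(baseline_pass=4, with_skill_pass=8,
                     defended=9, challenges=10, trials=10))  # → True
```

Under these assumptions, a skill that doubles the pass rate and holds up against most rationalization attempts would clear both gates.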
