
skill-judge

by davila7

116 favorites · 413 upvotes · 0 downvotes

Evaluate Agent Skill design quality against official specifications and best practices. Use when reviewing, auditing, or improving SKILL.md files and skill packages. Provides multi-dimensional scoring and actionable improvement suggestions.
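To make "multi-dimensional scoring and actionable improvement suggestions" concrete, here is a minimal Python sketch of what such an evaluation report could look like. The `DimensionScore` and `Evaluation` types are hypothetical illustrations, not skill-judge's actual schema; the example dimension names and the suggestion text are taken from this listing.

```python
from dataclasses import dataclass, field

@dataclass
class DimensionScore:
    """One scored dimension of a skill evaluation (hypothetical schema)."""
    name: str             # e.g. "Description coverage"
    score: int            # points awarded
    max_score: int        # points available for this dimension
    suggestion: str = ""  # actionable improvement, empty if none

@dataclass
class Evaluation:
    """A multi-dimensional report for one SKILL.md (hypothetical schema)."""
    skill: str
    dimensions: list[DimensionScore] = field(default_factory=list)

    def total(self) -> tuple[int, int]:
        """Sum awarded and available points across all dimensions."""
        return (sum(d.score for d in self.dimensions),
                sum(d.max_score for d in self.dimensions))

# Two of the dimensions this listing actually reports for skill-judge:
report = Evaluation(skill="skill-judge", dimensions=[
    DimensionScore("Description coverage", 9, 10,
                   "Add trigger keywords like 'score', 'audit', 'quality check'."),
    DimensionScore("Novelty", 10, 10),
])
print(report.total())  # -> (19, 20)
```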

Tag: evaluation
Rating: 8.7
Installs: 0
Category: AI & LLM

Quick Review

Exceptional meta-skill that encodes expert knowledge for evaluating AI skills. It provides a sophisticated 8-dimensional evaluation framework (120 points) with deep insights into knowledge delta, mindset patterns, and anti-patterns that Claude would not generate independently. The description is comprehensive, with a clear WHAT (multi-dimensional scoring), WHEN (reviewing/auditing SKILL.md files), and keywords (skill evaluation, SKILL.md). Structure is excellent, with a clear protocol, decision frameworks, and a quick-reference checklist. The novelty is outstanding: it codifies nuanced judgments about token economics, knowledge externalization, and the subtle distinction between expert, activation, and redundant knowledge that would otherwise cost a CLI agent significant reasoning tokens to derive. Minor improvement: the description could add more trigger keywords such as 'score', 'audit', and 'quality check'. This is production-ready, expert-level content that demonstrates the very principles it teaches.
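The review's one concrete suggestion, missing trigger keywords, is easy to check mechanically. Below is a hedged sketch of such a lint pass, assuming a plain case-insensitive substring test; the keyword list is illustrative, and skill-judge's actual matching heuristics are not documented on this page.

```python
# Hypothetical lint pass for trigger-keyword coverage in a skill description.
# The keyword list and the substring-match rule are assumptions for
# illustration; they are not skill-judge's documented heuristics.

TRIGGER_KEYWORDS = ["evaluate", "review", "audit", "score", "quality check"]

def missing_triggers(description: str) -> list[str]:
    """Return trigger keywords that do not appear in the description."""
    text = description.lower()
    return [kw for kw in TRIGGER_KEYWORDS if kw not in text]

desc = ("Evaluate Agent Skill design quality against official specifications "
        "and best practices. Use when reviewing, auditing, or improving "
        "SKILL.md files and skill packages.")

# 'evaluate', 'review', and 'audit' match as substrings of the description;
# the two keywords the review asks for are flagged:
print(missing_triggers(desc))  # -> ['score', 'quality check']
```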

LLM Signals

Description coverage: 9
Task knowledge: 10
Structure: 9
Novelty: 10

GitHub Signals

18,239
1,655
133
73
Last commit: today

Publisher

davila7 (Skill Author)



Related Skills

rag-architect (Jeffallan, 7.0)
prompt-engineer (Jeffallan, 7.0)
fine-tuning-expert (Jeffallan, 6.4)
mcp-developer (Jeffallan, 6.4)