model-evaluation-metrics

4.0

by jeremylongshore

187 Favorites
101 Upvotes
0 Downvotes

Model Evaluation Metrics: an auto-activating skill for ML Training. Triggers on "model evaluation metrics". Part of the ML Training skill category.

evaluation

Rating: 4.0
Installs: 0
Category: Machine Learning

Quick Review

The skill has a clear structure and addresses a useful ML domain (model evaluation metrics), but it severely lacks specific implementation details. The description is too generic: it doesn't specify which metrics (accuracy, precision, recall, F1, ROC-AUC, etc.) or give concrete guidance on calculating, comparing, or interpreting them. There is no actual task knowledge (code, formulas, or decision trees for metric selection), so a CLI agent would not know how to help beyond generic responses. The novelty score reflects that proper metric evaluation can be complex (choosing appropriate metrics per problem type, handling imbalanced data, cross-validation strategies), but without implementation content the skill provides no actual value over a standard LLM query.
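
For illustration only, a minimal sketch of the kind of concrete content the review finds missing: computing and comparing the standard classification metrics named above on an imbalanced problem. It assumes scikit-learn; the synthetic dataset, logistic-regression model, and 0.5 decision threshold are placeholder choices for the example, not anything the skill itself provides.

```python
# Illustrative sketch (not part of the skill): concrete metric computation
# the review says is missing. Dataset, model, and threshold are assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import (
    accuracy_score, precision_score, recall_score, f1_score, roc_auc_score
)

# Imbalanced binary problem, where accuracy alone is misleading.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
proba = model.predict_proba(X_te)[:, 1]
pred = (proba >= 0.5).astype(int)   # placeholder threshold

print("accuracy ", accuracy_score(y_te, pred))
print("precision", precision_score(y_te, pred))
print("recall   ", recall_score(y_te, pred))
print("f1       ", f1_score(y_te, pred))
print("roc_auc  ", roc_auc_score(y_te, proba))  # uses scores, not labels
```

Even a short worked example like this, plus notes on when to prefer ROC-AUC or F1 over accuracy on imbalanced data, would give an agent something concrete to apply.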

LLM Signals

Description coverage: 2
Task knowledge: 1
Structure: 5
Novelty: 4

GitHub Signals

1,046
135
8
0
Last commit 0 days ago

Publisher

jeremylongshore

Skill Author

Related Skills

ml-pipeline by Jeffallan (6.4)
sparse-autoencoder-training by zechenzhangAGI (7.6)
huggingface-accelerate by zechenzhangAGI (7.6)
moe-training by zechenzhangAGI (7.6)