Model Evaluation Metrics - Auto-activating skill for ML Training. Triggers on: model evaluation metrics. Part of the ML Training skill category.
Rating: 4.0
Installs: 0
Category: Machine Learning
The skill has a clear structure and addresses a useful ML domain (model evaluation metrics), but it severely lacks specific implementation details. The description is too generic: it doesn't specify which metrics are covered (accuracy, precision, recall, F1, ROC-AUC, etc.) or provide concrete guidance on calculating, comparing, or interpreting them. There is no actual task knowledge (code, formulas, or decision trees for metric selection), so a CLI agent would not know how to help beyond generic responses. The novelty score reflects that proper metric evaluation can be complex (choosing appropriate metrics per problem type, handling imbalanced data, cross-validation strategies), but without implementation content the skill provides no actual value over a standard LLM query.
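For illustration, the kind of concrete, task-ready content the review finds missing might look like the following sketch. It assumes scikit-learn; the synthetic dataset and logistic-regression model are placeholder choices, and the deliberate class imbalance is there to show why accuracy alone can mislead.

```python
# Hypothetical sketch of concrete metric guidance (not part of the skill).
# Assumes scikit-learn; the dataset and model below are placeholders.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import (
    accuracy_score,
    precision_score,
    recall_score,
    f1_score,
    roc_auc_score,
)

# Imbalanced binary problem: ~90% negatives, ~10% positives.
X, y = make_classification(
    n_samples=2000, n_features=20, weights=[0.9, 0.1], random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = model.predict(X_test)               # hard labels for threshold metrics
y_prob = model.predict_proba(X_test)[:, 1]   # scores for ranking metrics

# On imbalanced data, accuracy can look high while minority-class recall
# stays poor, which is why precision/recall/F1 and ROC-AUC also matter.
print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall   :", recall_score(y_test, y_pred))
print("f1       :", f1_score(y_test, y_pred))
print("roc_auc  :", roc_auc_score(y_test, y_prob))
```

A skill with this level of detail, plus guidance on when to prefer, say, PR-AUC over ROC-AUC or stratified cross-validation for small imbalanced datasets, would give a CLI agent something actionable to work from.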