NVIDIA's runtime safety framework for LLM applications. It provides jailbreak detection, input/output validation, fact-checking, hallucination detection, PII filtering, and toxicity detection, and uses the Colang 2.0 DSL for programmable rails. Production-ready; runs on a T4 GPU.
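To illustrate one of the capabilities listed above, here is a minimal, self-contained sketch of what an input-side PII rail does conceptually: scan user text for sensitive patterns and redact them before the LLM sees the input. The patterns, function name, and placeholders are illustrative only, not NeMo Guardrails APIs; the real framework configures such rails declaratively via Colang and YAML.

```python
import re

# Hypothetical regex-based PII redaction, sketching the idea behind an
# input PII rail. Patterns are simplified examples, not production-grade.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace each detected PII span with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}_REDACTED]", text)
    return text

print(redact_pii("Contact john.doe@example.com or 555-867-5309."))
# → Contact [EMAIL_REDACTED] or [PHONE_REDACTED].
```

In the actual framework, a rail like this would run as a pre-processing step on every user turn, with the redacted text forwarded to the model.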
Rating: 7.0 | Installs: 0 | Category: AI & LLM
Excellent skill documentation for NeMo Guardrails. The description clearly covers the runtime safety capabilities (jailbreak detection, PII filtering, hallucination detection, etc.), making it straightforward for a CLI agent to invoke. Task knowledge is comprehensive, with 5 detailed workflows covering common use cases from basic input validation to LlamaGuard integration, complete with working code examples. Structure is clean, with logical sections (Quick start, Common workflows, When to use, Common issues, Advanced topics). The skill demonstrates strong novelty: implementing production-grade LLM safety with the programmable Colang 2.0 DSL would be token-intensive and complex for a CLI agent working alone. Minor improvements could include more explicit parameter documentation and edge-case handling. References to external files (colang-guide.md, integrations.md, performance.md) are appropriately reserved for advanced topics without cluttering the main document.