hqq-quantization

by davila7

80 Favorites · 285 Upvotes · 0 Downvotes

Half-Quadratic Quantization (HQQ) for LLMs without calibration data. Use it to quantize models to 4-, 3-, or 2-bit precision without a calibration dataset, for fast quantization workflows, or when deploying with vLLM or HuggingFace Transformers.
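For illustration, here is a minimal sketch of what the calibration-free workflow looks like through the HuggingFace Transformers integration. This is not code from the skill itself; the model id and settings are placeholders.

```python
# Minimal HQQ-via-Transformers sketch; requires the hqq package
# (pip install hqq) alongside a recent transformers release.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, HqqConfig

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder: any causal LM

# 4-bit weights with group size 64; HQQ needs no calibration dataset.
quant_config = HqqConfig(nbits=4, group_size=64)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="cuda",
    quantization_config=quant_config,  # weights are quantized on load
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```

Because no calibration pass is needed, quantization happens directly while the checkpoint loads, which is what makes the workflow fast compared with calibration-based methods such as GPTQ or AWQ.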

Tags: quantization

Rating: 8.6
Installs: 0
Category: AI & LLM

Quick Review

Excellent skill for HQQ quantization with comprehensive coverage of capabilities, workflows, and integrations. The description clearly conveys when to use HQQ versus alternatives. Task knowledge is outstanding, with complete code examples for quantization, HuggingFace/vLLM integration, PEFT fine-tuning, and multiple backends. Structure is clean, with a logical progression from basics to advanced workflows, though the main file is slightly dense. Novelty is strong: HQQ's calibration-free approach and multi-backend support would require significant token usage and research for a CLI agent to replicate, making this skill meaningfully cost-effective. A minor improvement would be moving some advanced backend details to referenced files.
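As a rough sketch of the multi-backend workflow the review alludes to (illustrative only, not code from the skill; exact signatures can vary across hqq releases), layer-level quantization and backend selection in the hqq library look roughly like this:

```python
# Hedged sketch of the lower-level hqq API: quantize a single linear
# layer and pick an inference backend. Signatures assumed from the
# hqq project and may differ between versions.
import torch
from hqq.core.quantize import BaseQuantizeConfig, HQQLinear, HQQBackend

# Aggressive 2-bit config with a small group size.
quant_config = BaseQuantizeConfig(nbits=2, group_size=16)

linear = torch.nn.Linear(4096, 4096, bias=False)
hqq_linear = HQQLinear(
    linear,                       # the float layer to replace
    quant_config=quant_config,
    compute_dtype=torch.float16,
    device="cuda",
)

# Backend selection: pure PyTorch is the portable default; other
# backends trade portability for speed.
HQQLinear.set_backend(HQQBackend.PYTORCH)
```

The backend is set globally on HQQLinear, so switching it is a one-line change rather than a re-quantization, which is part of what the review credits as hard for an agent to rediscover unaided.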

LLM Signals

Description coverage: 9
Task knowledge: 10
Structure: 9
Novelty: 8

GitHub Signals

18,069 · 1,635 · 132 · 71
Last commit: today

Publisher

davila7 (Skill Author)

Related Skills

rag-architect by Jeffallan (7.0)
prompt-engineer by Jeffallan (7.0)
fine-tuning-expert by Jeffallan (6.4)
mcp-developer by Jeffallan (6.4)