Parameter-efficient fine-tuning for LLMs using LoRA, QLoRA, and 25+ methods. Use when fine-tuning large models (7B-70B) with limited GPU memory, when you need to train <1% of parameters with minimal accuracy loss, or for multi-adapter serving. Hugging Face's official library, integrated with the transformers ecosystem.
Rating: 7.6
Installs: 0
Category: AI & LLM
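The description above pairs LoRA with 4-bit quantization (QLoRA) to fit 7B-70B models into limited GPU memory while training well under 1% of parameters. Below is a minimal sketch of that combination using PEFT and transformers; the base model name, the target modules, and the hyperparameters (r, lora_alpha, dropout) are illustrative assumptions, not values prescribed by this skill.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, TaskType, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization (QLoRA): base weights are frozen and quantized;
# only the small LoRA adapter matrices are trained. Requires bitsandbytes.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # assumed base model; any causal LM works
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,                                 # rank of the low-rank update matrices
    lora_alpha=32,                        # scaling factor (alpha / r scales the update)
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections are a common choice
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically reports well under 1% trainable
```

From here the wrapped model can be passed to a standard transformers Trainer; gradients flow only through the adapter weights, which is where the memory savings come from.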
Exceptional skill for parameter-efficient fine-tuning of large language models. The description clearly articulates when to use PEFT versus alternatives, and the SKILL.md provides comprehensive code examples for LoRA, QLoRA, multi-adapter serving, and integration with popular frameworks. It includes actionable parameter-selection guides, performance benchmarks, troubleshooting, and best practices. The structure is logical, with good use of tables and code snippets, though the main file is quite long. The skill addresses a high-value, token-intensive task (fine-tuning 7B-70B models) that a CLI agent would struggle to accomplish without this structured guidance, making it highly novel and cost-effective.
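Multi-adapter serving, called out in both the description and the review, works by keeping one copy of the base weights resident and swapping lightweight adapters per request. A minimal sketch using PEFT's adapter API, assuming two already-trained adapters at hypothetical local paths:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# One copy of the base weights is shared by every adapter.
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # assumed base model
    device_map="auto",
)

# Attach the first adapter, then load a second one alongside it.
model = PeftModel.from_pretrained(base_model, "adapters/task-a", adapter_name="task_a")
model.load_adapter("adapters/task-b", adapter_name="task_b")

model.set_adapter("task_a")  # route requests through adapter A
# ... run inference for task A ...
model.set_adapter("task_b")  # hot-swap to adapter B; base weights stay in memory
```

Because each adapter is only a few megabytes to a few hundred megabytes, this pattern serves many fine-tuned variants from a single GPU-resident base model.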