TacoSkill LAB

nanogpt

7.0 · by zechenzhangAGI

92 Favorites · 176 Upvotes · 0 Downvotes

Educational GPT implementation in ~300 lines. Reproduces GPT-2 (124M) on OpenWebText. Clean, hackable code for learning transformers. By Andrej Karpathy. Perfect for understanding GPT architecture from scratch. Train on Shakespeare (CPU) or OpenWebText (multi-GPU).
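At the heart of the ~300-line implementation is causal self-attention, stacked inside each transformer block. A minimal single-head sketch in NumPy (illustrative only; the real code uses PyTorch with multi-head attention, dropout, and an output projection):

```python
import numpy as np

def causal_self_attention(x, w_q, w_k, w_v):
    """Single-head causal self-attention, simplified sketch.
    x: (T, C) token embeddings; w_q/w_k/w_v: (C, C) projection weights."""
    T, C = x.shape
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    att = (q @ k.T) / np.sqrt(C)                 # scaled dot-product scores (T, T)
    mask = np.triu(np.ones((T, T), dtype=bool), k=1)
    att[mask] = -np.inf                          # causal mask: no attending to future tokens
    att = np.exp(att - att.max(axis=-1, keepdims=True))
    att = att / att.sum(axis=-1, keepdims=True)  # row-wise softmax over keys
    return att @ v                               # weighted sum of values, shape (T, C)

rng = np.random.default_rng(0)
T, C = 4, 8
x = rng.normal(size=(T, C))
w = [rng.normal(size=(C, C)) for _ in range(3)]
y = causal_self_attention(x, *w)
print(y.shape)  # (4, 8)
```

The triangular mask is what makes the model autoregressive: each position can only attend to itself and earlier positions, so perturbing a later token never changes an earlier output.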

transformer

Rating: 7.0
Installs: 0
Category: AI & LLM

Quick Review

Excellent educational skill for GPT training, with comprehensive workflows, clear examples, and proper structure. The SKILL.md provides complete command sequences for multiple use cases (Shakespeare training, GPT-2 reproduction, fine-tuning, custom datasets) with realistic outputs and troubleshooting. Documentation is well organized, with references separated appropriately. The description accurately reflects the skill's capabilities, and a CLI agent could invoke any workflow based solely on the main documentation. The novelty score is moderate: because the underlying code is intentionally simple (~300 lines), a capable agent could construct a similar solution itself, though the skill still saves significant setup and experimentation time. Minor deductions apply only to the novelty dimension; all other aspects are exemplary for an educational implementation skill.
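One of the workflows the review mentions is training on custom character-level datasets. As a rough illustration of what that preparation involves (a simplified sketch, not nanoGPT's actual prepare.py, which also serializes the splits to .bin files and a meta.pkl vocabulary file):

```python
import numpy as np

def prepare_char_dataset(text, train_frac=0.9):
    """Simplified character-level dataset prep: build a vocabulary,
    encode the text as integer ids, and split into train/val."""
    chars = sorted(set(text))                     # vocabulary = unique characters
    stoi = {ch: i for i, ch in enumerate(chars)}  # char -> integer id
    ids = np.array([stoi[c] for c in text], dtype=np.uint16)
    n = int(train_frac * len(ids))
    return ids[:n], ids[n:], stoi                 # train split, val split, vocab map

train, val, stoi = prepare_char_dataset("to be or not to be")
print(len(stoi), len(train), len(val))  # 7 16 2
```

The training loop then samples fixed-length windows of these ids as (input, target) pairs, with targets shifted one position ahead of inputs.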

LLM Signals

Description coverage: 9
Task knowledge: 9
Structure: 9
Novelty: 6

GitHub Signals

891 · 74 · 19 · 2 · Last commit: 0 days ago

Publisher

zechenzhangAGI (Skill Author)



Related Skills

rag-architect (Jeffallan, 7.0)
prompt-engineer (Jeffallan, 7.0)
fine-tuning-expert (Jeffallan, 6.4)
mcp-developer (Jeffallan, 6.4)