TacoSkill LAB
© 2026 TacoSkill LAB

nanogpt

7.0

by zechenzhangAGI

82 Favorites · 186 Upvotes · 0 Downvotes

Educational GPT implementation in ~300 lines. Reproduces GPT-2 (124M) on OpenWebText. Clean, hackable code for learning transformers. By Andrej Karpathy. Perfect for understanding GPT architecture from scratch. Train on Shakespeare (CPU) or OpenWebText (multi-GPU).
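The "understanding GPT architecture from scratch" angle centers on one core operation that nanoGPT implements in PyTorch: causal self-attention. Below is a minimal NumPy sketch of that operation; the random projection weights stand in for learned parameters and are purely illustrative, not nanoGPT's actual code.

```python
import numpy as np

def causal_self_attention(x, n_head):
    """Multi-head causal self-attention over a (T, C) sequence of embeddings.

    Illustrative only: weights are random (fixed seed) rather than learned.
    """
    T, C = x.shape
    hs = C // n_head  # per-head dimension
    rng = np.random.default_rng(0)
    # Random projections stand in for the learned Wq/Wk/Wv matrices.
    Wq, Wk, Wv = (rng.standard_normal((C, C)) / np.sqrt(C) for _ in range(3))
    # Project and split into heads: (T, C) -> (n_head, T, hs)
    q = (x @ Wq).reshape(T, n_head, hs).transpose(1, 0, 2)
    k = (x @ Wk).reshape(T, n_head, hs).transpose(1, 0, 2)
    v = (x @ Wv).reshape(T, n_head, hs).transpose(1, 0, 2)
    # Scaled dot-product scores: (n_head, T, T)
    att = q @ k.transpose(0, 2, 1) / np.sqrt(hs)
    # Causal mask: position t may only attend to positions <= t.
    mask = np.triu(np.ones((T, T), dtype=bool), k=1)
    att[:, mask] = -np.inf
    # Numerically stable softmax over the key dimension.
    att = np.exp(att - att.max(axis=-1, keepdims=True))
    att = att / att.sum(axis=-1, keepdims=True)
    y = att @ v  # weighted sum of values: (n_head, T, hs)
    # Merge heads back: (T, C)
    return y.transpose(1, 0, 2).reshape(T, C)
```

The upper-triangular mask is what makes the model autoregressive: perturbing a later token cannot change the output at earlier positions, which is the property GPT-style training relies on.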

gpt

Rating: 7.0
Installs: 0
Category: AI & LLM

Quick Review

Excellent educational skill for GPT implementation with comprehensive workflows covering Shakespeare training, GPT-2 reproduction, fine-tuning, and custom datasets. The description is clear and actionable, with detailed code examples, configs, and troubleshooting. Structure is well-organized with logical sections and appropriate references to separate files for advanced topics. Task knowledge is thorough with complete end-to-end pipelines, hardware requirements, and performance benchmarks. Novelty is moderate-to-good: while the underlying nanoGPT is well-documented externally, this skill provides valuable workflow orchestration, troubleshooting guidance, and decision frameworks that reduce token usage compared to researching documentation. Minor improvement area: could benefit from more explicit CLI invocation patterns for agents to follow programmatically.
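On the review's note about explicit CLI invocation patterns: a typical nanoGPT quickstart follows the shape below. The commands are paraphrased from the upstream nanoGPT README; exact flags and config names may differ by version, so treat this as a sketch rather than the skill's canonical interface.

```shell
# Prepare the character-level Shakespeare dataset (small enough for CPU).
python data/shakespeare_char/prepare.py

# Train a small model; on CPU, disable torch.compile.
python train.py config/train_shakespeare_char.py --device=cpu --compile=False

# Sample from the trained checkpoint.
python sample.py --out_dir=out-shakespeare-char
```

Reproducing GPT-2 (124M) on OpenWebText follows the same prepare/train/sample pattern with a different dataset and a multi-GPU launch.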

LLM Signals

Description coverage: 9
Task knowledge: 9
Structure: 8
Novelty: 7

GitHub Signals

957
83
19
2
Last commit 2 days ago

Publisher

zechenzhangAGI

Skill Author



Related Skills

- rag-architect · Jeffallan · 7.0
- prompt-engineer · Jeffallan · 7.0
- fine-tuning-expert · Jeffallan · 6.4
- mcp-developer · Jeffallan · 6.4