Use this skill for reinforcement learning tasks including training RL agents (PPO, SAC, DQN, TD3, DDPG, A2C, etc.), creating custom Gym environments, implementing callbacks for monitoring and control, using vectorized environments for parallel training, and integrating with deep RL workflows. This skill should be used when users request RL algorithm implementation, agent training, environment design, or RL experimentation.
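As a rough illustration of the kind of workflow the description refers to, a minimal Stable Baselines3 training run with a vectorized environment might look like the sketch below. It assumes `stable-baselines3` and `gymnasium` are installed; the environment id, hyperparameters, and save path are illustrative, not taken from the skill itself.

```python
import gymnasium as gym
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env

# Vectorized environments run several copies of the env in parallel,
# which is how "parallel training" is typically realized in SB3.
vec_env = make_vec_env("CartPole-v1", n_envs=4)

# Instantiate and train the agent; hyperparameters here are defaults/illustrative.
model = PPO("MlpPolicy", vec_env, verbose=1)
model.learn(total_timesteps=100_000)

# Save and reload the trained policy.
model.save("ppo_cartpole")
model = PPO.load("ppo_cartpole", env=vec_env)
```

Swapping `PPO` for `SAC`, `DQN`, `TD3`, `DDPG`, or `A2C` follows the same pattern, subject to each algorithm's action-space requirements.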
Rating: 8.7
Installs: 0
Category: Machine Learning
Excellent reinforcement learning skill with comprehensive coverage of Stable Baselines3 capabilities. The description clearly articulates when to use the skill (RL training, custom environments, callbacks, vectorization). Task knowledge is outstanding with detailed code examples, gotchas, and workflow guidance covering training, evaluation, custom environments, and advanced features. Structure is clean with a well-organized SKILL.md that provides inline examples while delegating detailed references and templates to separate files. High novelty as RL workflows involve complex multi-step processes (environment validation, algorithm selection, hyperparameter tuning, callback configuration) that would consume many tokens if done from scratch by a CLI agent. Minor room for improvement: could slightly expand the description to mention evaluation/monitoring capabilities explicitly, though current coverage is strong.
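To make the review's points about environment validation and callback configuration concrete, here is a hedged sketch of a toy custom Gymnasium environment checked with `check_env` and paired with an `EvalCallback`. The `ToyEnv` class, its reward, and the log path are invented placeholders and not part of the skill.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3.common.env_checker import check_env
from stable_baselines3.common.callbacks import EvalCallback

class ToyEnv(gym.Env):
    """Placeholder environment: move toward the origin on a 1-D line."""

    def __init__(self):
        super().__init__()
        self.observation_space = spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32)
        self.action_space = spaces.Discrete(2)  # 0: step left, 1: step right
        self._pos = np.zeros(1, dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self._pos = self.np_random.uniform(-1.0, 1.0, size=1).astype(np.float32)
        return self._pos.copy(), {}

    def step(self, action):
        self._pos += 0.1 if action == 1 else -0.1
        self._pos = np.clip(self._pos, -1.0, 1.0)
        reward = float(-abs(self._pos[0]))           # closer to 0 is better
        terminated = bool(abs(self._pos[0]) < 0.05)  # reached the goal region
        return self._pos.copy(), reward, terminated, False, {}

# Validate the environment against the Gymnasium API before any training.
check_env(ToyEnv(), warn=True)

# An EvalCallback periodically evaluates the agent and saves the best model.
eval_callback = EvalCallback(ToyEnv(), best_model_save_path="./logs/",
                             eval_freq=1_000, n_eval_episodes=5)
```

The callback would then be passed to `model.learn(..., callback=eval_callback)`; this is one common way the monitoring and control described above is wired in.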