Build AI gateway services for routing and managing LLM requests. Use when implementing API proxies, rate limiting, or multi-provider AI services.
Rating: 3.7 · Installs: 0 · Category: AI & LLM
The skill provides basic provider-switching logic and configuration examples for multiple LLM providers (Ollama, Anthropic, HuggingFace). However, the description promises 'AI gateway services for routing and managing LLM requests' with features like 'API proxies, rate limiting, or multi-provider AI services', while the implementation only shows provider selection via environment variables — none of the true gateway functionality such as request routing, rate limiting, load balancing, or proxy services.

The taskKnowledge dimension scores higher thanks to concrete code examples and configuration patterns. The structure is adequate for the limited scope, but would benefit from separation of concerns if the skill were expanded. Novelty is moderate: the actual implementation is straightforward environment-based switching that a CLI agent could write directly, and it lacks the complex gateway features (rate limiting, request queuing, failover, monitoring) that would justify a dedicated skill.
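To make the review's distinction concrete, the pattern it describes amounts to roughly the following. This is a hypothetical sketch, not code from the skill itself: the `LLM_PROVIDER` variable name, the provider table, and the default values are all assumptions for illustration.

```python
import os

# Hypothetical provider table; endpoints and model names are illustrative
# assumptions, not taken from the reviewed skill.
PROVIDERS = {
    "ollama": {"base_url": "http://localhost:11434", "model": "llama3"},
    "anthropic": {"base_url": "https://api.anthropic.com", "model": "claude"},
    "huggingface": {"base_url": "https://api-inference.huggingface.co", "model": "mistral"},
}

def select_provider() -> dict:
    """Pick a provider config from the LLM_PROVIDER env var (default: ollama)."""
    name = os.environ.get("LLM_PROVIDER", "ollama").lower()
    if name not in PROVIDERS:
        raise ValueError(f"Unknown provider: {name!r}")
    return {"name": name, **PROVIDERS[name]}
```

As the review notes, this is the whole mechanism: a dictionary lookup keyed on an environment variable. A real gateway would sit in the request path and add per-client rate limiting, retries with failover to a secondary provider, and request/response logging, none of which this pattern provides.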