Data framework for building LLM applications with RAG. Specializes in document ingestion (300+ connectors), indexing, and querying. Features vector indices, query engines, agents, and multi-modal support. Use for document Q&A, chatbots, knowledge retrieval, or building RAG pipelines. Best for data-centric LLM applications.
Rating: 7.6
Installs: 0
Category: AI & LLM
Exceptional skill documentation for LlamaIndex. The description clearly articulates when to use this skill (RAG applications, document Q&A, multi-source ingestion), making it easy for a CLI agent to decide when to invoke it. Task knowledge is comprehensive, with working code examples covering all major use cases: basic RAG, agents, chat engines, vector stores, multi-modal, and evaluation. Structure is excellent, with clear sections, a comparison table against LangChain, and references to separate guides for advanced topics. Novelty is strong: building production RAG systems with 300+ connectors, proper chunking, metadata filtering, and evaluation would require significant tokens and expertise from a general CLI agent, so the skill meaningfully reduces complexity for data-centric LLM applications. Only minor improvement seems possible: the SKILL.md is detailed yet well organized, and the referenced files appear to hold the deeper technical details, maintaining a good separation of concerns.