Multi-Agent AI Framework
Python · C++ · Hugging Face · llama.cpp · OpenAI API · Google Gemini · BLIP-2 · CLIP · Whisper · Streamlit · Next.js · React
Overview
An orchestration system where a MainAgent routes tasks to LLMAgent, VisionAgent, and ToolAgent, with backend-aware execution (API, llama.cpp, GPU) and multimodal I/O.
A modular, extensible multi-agent AI framework. A general-purpose LLM-driven MainAgent interprets user intent and dispatches tasks to specialized SubAgents (LLMAgent, VisionAgent, ToolAgent). A BackendSelector weighs latency, cost, and available resources to choose among cloud APIs, local llama.cpp, and GPU backends. The system supports text, image, and voice input, enables task chaining, provides a verbose debug mode with structured logs, and exposes a lightweight UI for demos.
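A minimal sketch of the routing flow, assuming hypothetical names and method signatures (`Task`, `parse_intent`, `choose`, `run`); the framework's actual interfaces may differ:

```python
from dataclasses import dataclass


@dataclass
class Task:
    """Hypothetical task description produced by intent parsing."""
    kind: str      # "text", "vision", or "tool"
    payload: dict  # prompt, image path, tool arguments, ...


class BackendSelector:
    def choose(self, task: Task) -> str:
        """Pick a backend from rough latency/cost/resource heuristics (sketch)."""
        if task.kind == "vision":
            return "gpu"        # assume vision models run on a local GPU
        if len(task.payload.get("prompt", "")) < 200:
            return "llama.cpp"  # short prompts: cheap local CPU inference
        return "api"            # otherwise fall back to a cloud API


class MainAgent:
    def __init__(self, sub_agents: dict, selector: BackendSelector):
        self.sub_agents = sub_agents  # e.g. {"text": LLMAgent(), "vision": VisionAgent(), ...}
        self.selector = selector

    def parse_intent(self, user_input: str) -> Task:
        # Placeholder: the real MainAgent would use an LLM to classify intent.
        kind = "vision" if user_input.startswith("describe image:") else "text"
        return Task(kind=kind, payload={"prompt": user_input})

    def handle(self, user_input: str):
        task = self.parse_intent(user_input)
        backend = self.selector.choose(task)
        agent = self.sub_agents[task.kind]
        return agent.run(task, backend=backend)
```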
Features
- MainAgent intent parsing and dynamic task routing
- Backend-aware execution: API vs local CPU (llama.cpp) vs GPU
- Multimodal input: text, images, voice; task chaining support (see the sketch after this list)
- Verbose/debug logging of routing and backend decisions
- Optional fine-tuning pipeline & model registry
- Lightweight demo UI (CLI or web) for interaction
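
As referenced in the feature list, a sketch of how task chaining with verbose routing logs might look, assuming a hypothetical `run_chain` helper on top of the `MainAgent.handle` interface sketched above and Python's standard `logging` module:

```python
import logging

logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
)
log = logging.getLogger("framework")


def run_chain(agent, steps):
    """Run a list of prompts, feeding each result into the next step (sketch)."""
    result = None
    for i, step in enumerate(steps):
        prompt = step if result is None else f"{step}\n\nPrevious result:\n{result}"
        log.debug("chain step %d: routing %r", i, step)
        result = agent.handle(prompt)
        log.debug("chain step %d: done", i)
    return result


# Example chain: caption an image, then summarize the caption.
# result = run_chain(main_agent, [
#     "describe image: demo.jpg",
#     "Summarize the description in one sentence.",
# ])
```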