- Optimized local LLM inference with the MLX runtime for Apple Silicon
- MCP Server and Remote MCP Providers for tool sharing and aggregation
- OpenAI, Anthropic, and Ollama API compatibility for local and remote models
- Autonomous Work Mode with issue tracking, parallel tasks, and file operations
- Customizable AI Agents with unique prompts, tools, themes, and memory
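Since the project advertises OpenAI API compatibility, a client can talk to it the same way it would talk to any OpenAI-style endpoint. The sketch below builds a standard `/chat/completions` request body; the base URL, port, and model name are hypothetical placeholders, not confirmed values from this project.

```python
import json

# Hypothetical local server address; the actual host/port depends on
# how the app is configured.
BASE_URL = "http://localhost:8080/v1"

def build_chat_request(model: str, prompt: str, stream: bool = False) -> dict:
    """Build an OpenAI-style /chat/completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,
    }

payload = build_chat_request("example-local-model", "Hello!")
print(json.dumps(payload))
```

Because the payload follows the OpenAI schema, existing OpenAI SDKs can be pointed at the local server by overriding their base URL.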
3,564 GitHub stars