OpenAI-compatible API endpoint generation for local LLM inference
Multi-instance support with isolated model storage and configuration
Cross-container DNS integration via the overthink bridge network
Automated GPU detection and driver-optimized container selection
Full lifecycle management (start, stop, restart, status) for LocalAI instances
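Since each instance exposes an OpenAI-compatible `/v1` API, a client can talk to it with plain HTTP. A minimal sketch, assuming an instance reachable at `localhost:8080` (the host, port, and model name here are hypothetical; substitute your own instance's address and a model you have loaded):

```python
import json
from urllib import request

# Hypothetical instance address; adjust to match your port mapping.
BASE_URL = "http://localhost:8080/v1"

def build_chat_request(model: str, prompt: str) -> tuple[str, bytes]:
    """Build an OpenAI-style chat completion request body and URL."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return f"{BASE_URL}/chat/completions", json.dumps(payload).encode()

def chat(model: str, prompt: str) -> str:
    """POST the request to the instance and return the reply text."""
    url, body = build_chat_request(model, prompt)
    req = request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]
```

Because the wire format matches OpenAI's, existing OpenAI SDKs can also be pointed at the instance by overriding their base URL.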