- Query multiple OpenAI-compatible LLM providers simultaneously (see the fan-out sketch below)
- Maintain conversation context across multiple messages and providers
- Receive responses from all configured LLMs at once (Duck Council)
- Cache API responses to avoid duplicate calls
- Monitor LLM provider health with automatic failover (see the caching and failover sketch below)
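
The fan-out behind the first and third features can be sketched with nothing more than the standard OpenAI-compatible `/chat/completions` route and `Promise.allSettled`. This is a minimal illustration, not the project's actual code: the provider names, base URLs, models, and environment variables below are placeholder assumptions.

```typescript
// Fan one prompt out to several OpenAI-compatible providers at once.
// Provider entries are placeholders; adapt them to your own configuration.

interface Provider {
  name: string;
  baseUrl: string; // any OpenAI-compatible endpoint
  apiKey: string;
  model: string;
}

const providers: Provider[] = [
  {
    name: "openai",
    baseUrl: "https://api.openai.com/v1",
    apiKey: process.env.OPENAI_API_KEY ?? "",
    model: "gpt-4o-mini",
  },
  {
    name: "groq",
    baseUrl: "https://api.groq.com/openai/v1",
    apiKey: process.env.GROQ_API_KEY ?? "",
    model: "llama-3.1-8b-instant",
  },
];

// Ask one provider via the standard /chat/completions route.
async function ask(p: Provider, prompt: string): Promise<string> {
  const res = await fetch(`${p.baseUrl}/chat/completions`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${p.apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: p.model,
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`${p.name}: HTTP ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}

// Fan out to every provider and collect whatever comes back,
// without letting one failure sink the rest.
async function council(prompt: string): Promise<Record<string, string>> {
  const results = await Promise.allSettled(providers.map((p) => ask(p, prompt)));
  const answers: Record<string, string> = {};
  results.forEach((r, i) => {
    answers[providers[i].name] =
      r.status === "fulfilled" ? r.value : `error: ${r.reason}`;
  });
  return answers;
}

council("Explain rubber duck debugging in one sentence.").then(console.log);
```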
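
Caching and health-based failover can be layered on top of the same helper. This sketch reuses the `Provider` type, `providers` list, and `ask` function from the block above; the cache key, TTL, and the `/models` health probe are illustrative assumptions, not the project's real implementation.

```typescript
// Cache responses and fall back to the next healthy provider on failure.
// TTL and probe strategy here are illustrative choices.

const cache = new Map<string, { value: string; expires: number }>();
const TTL_MS = 5 * 60 * 1000; // keep cache entries for five minutes

// Return a cached answer when one is fresh, to avoid a duplicate API call.
async function cachedAsk(p: Provider, prompt: string): Promise<string> {
  const key = `${p.name}:${prompt}`;
  const hit = cache.get(key);
  if (hit && hit.expires > Date.now()) return hit.value;
  const value = await ask(p, prompt);
  cache.set(key, { value, expires: Date.now() + TTL_MS });
  return value;
}

// Treat a provider as healthy if its model-list endpoint answers.
async function isHealthy(p: Provider): Promise<boolean> {
  try {
    const res = await fetch(`${p.baseUrl}/models`, {
      headers: { Authorization: `Bearer ${p.apiKey}` },
    });
    return res.ok;
  } catch {
    return false;
  }
}

// Try providers in order, skipping any that fail the health probe.
async function askWithFailover(prompt: string): Promise<string> {
  for (const p of providers) {
    if (await isHealthy(p)) {
      try {
        return await cachedAsk(p, prompt);
      } catch {
        // this provider errored mid-request; fall through to the next one
      }
    }
  }
  throw new Error("no healthy providers available");
}
```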