- Multi-model support (WarpGBM on GPU and LightGBM on CPU)
- Smart caching keyed by `artifact_id` for sub-100ms inference
- Export portable joblib model artifacts for use anywhere
- Native Model Context Protocol (MCP) integration for AI agents
- Stateless architecture: no data is stored server-side, and users retain ownership of their models
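The `artifact_id` caching idea above can be sketched with the standard library alone. This is an illustrative assumption, not the project's actual implementation: here `artifact_id` is a content hash of the serialized model, and a module-level dict stands in for the real cache, so repeated calls skip deserialization entirely.

```python
import hashlib
import pickle

# In-memory cache keyed by artifact_id: repeat inference calls reuse the
# already-deserialized model, which is the idea behind sub-100ms latency.
# (Hypothetical sketch; the real service's cache layer may differ.)
_MODEL_CACHE: dict = {}

def make_artifact_id(artifact_bytes: bytes) -> str:
    """Derive a stable artifact_id from the serialized model bytes."""
    return hashlib.sha256(artifact_bytes).hexdigest()[:16]

def load_model(artifact_bytes: bytes):
    """Deserialize once, then serve every later call from the cache."""
    artifact_id = make_artifact_id(artifact_bytes)
    if artifact_id not in _MODEL_CACHE:
        _MODEL_CACHE[artifact_id] = pickle.loads(artifact_bytes)
    return artifact_id, _MODEL_CACHE[artifact_id]

# Toy "model": a plain dict of learned weights stands in for a GBM artifact.
weights = {"feature_a": 0.7, "feature_b": 0.3}
blob = pickle.dumps(weights)

aid1, model1 = load_model(blob)   # first call: deserializes and caches
aid2, model2 = load_model(blob)   # second call: pure cache hit
print(aid1 == aid2, model1 is model2)
```

The same pattern works with joblib artifacts: hash the artifact bytes once on upload, then key every subsequent inference request on that id.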