Supports extended context lengths for improved performance
Implements 4-bit quantization for efficient training
Enables exporting models to various formats (GGUF, Hugging Face, etc.)
Provides a simple API for model loading, fine-tuning, and inference
Optimizes fine-tuning for Llama, Mistral, Phi, and Gemma models
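The memory saving behind 4-bit quantization can be illustrated with a minimal absmax scheme: scale each weight group by its largest magnitude, then store weights as small integers in [-7, 7] (4 bits including sign). This is a generic sketch of the idea, not the library's actual quantization code, and the function names here are illustrative:

```python
def quantize_4bit(values):
    """Absmax 4-bit quantization: map floats to integers in [-7, 7].

    Each value is divided by a per-group scale so the largest magnitude
    maps to +/-7, then rounded to the nearest integer.
    """
    scale = max(abs(v) for v in values) / 7.0 or 1.0  # avoid divide-by-zero
    quantized = [round(v / scale) for v in values]
    return quantized, scale


def dequantize_4bit(quantized, scale):
    """Recover approximate float values by multiplying back by the scale."""
    return [q * scale for q in quantized]


weights = [0.5, -1.2, 0.03, 2.1]
q, scale = quantize_4bit(weights)
restored = dequantize_4bit(q, scale)
# Each stored value now fits in 4 bits instead of 32, at the cost of a
# small rounding error on dequantization.
```

In practice a 4-bit scheme like this cuts weight memory roughly 8x versus float32, which is what makes fine-tuning large models feasible on a single GPU.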