- Memory-efficient fine-tuning with native LoRA and QLoRA support (see the LoRA sketch after this list)
- Clean, single-file model implementations for research and education
- Integrated workflows for pretraining, quantization, and GGUF conversion
- Access to 20+ pretrained architectures, including Llama, Gemma, and Phi
- Scalable multi-GPU training via FSDP and Lightning AI (see the FSDP sketch below)
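
To make the LoRA item concrete, here is a minimal sketch of the idea behind LoRA fine-tuning: the pretrained weight matrix is frozen, and a trainable low-rank update `(alpha / r) * B @ A` is learned alongside it. This is a generic PyTorch illustration of the technique, not this library's implementation; the `LoRALinear` class and the `r` and `alpha` values are hypothetical choices for the example.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer augmented with a trainable low-rank update.

    Effective weight: W + (alpha / r) * B @ A, where only A and B are trained.
    (Illustrative sketch; not this library's actual implementation.)
    """

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pretrained weights
            p.requires_grad = False
        # A starts with small random values and B with zeros, so the adapter
        # is initially a no-op and the model's starting outputs are unchanged.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


# Usage: wrap a projection layer and fine-tune only the adapter weights.
proj = nn.Linear(512, 512)
layer = LoRALinear(proj, r=8, alpha=16.0)
out = layer(torch.randn(4, 512))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable params: {trainable}")  # 8192 adapter params vs. 262,656 total
```

QLoRA follows the same adapter scheme, but stores the frozen base weights in 4-bit quantized form, which is where most of the additional memory savings come from.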
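For the multi-GPU item, the sketch below shows how FSDP shards a model so that parameters, gradients, and optimizer state are split across ranks instead of being replicated on every GPU. It uses only core `torch.distributed` APIs; the model, hyperparameters, and script name are placeholders, and in practice the library's own training entry points handle this setup.

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def main():
    # Launch via `torchrun --nproc_per_node=<num_gpus> fsdp_sketch.py`;
    # torchrun sets RANK, WORLD_SIZE, and LOCAL_RANK for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model; a real run would build the full transformer here.
    model = nn.Sequential(
        nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024)
    ).cuda()

    # FSDP shards parameters, gradients, and optimizer state across ranks,
    # so each GPU holds only its slice of the model between computations.
    model = FSDP(model)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):  # toy training loop with random data
        x = torch.randn(8, 1024, device="cuda")
        loss = model(x).pow(2).mean()
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Because each rank materializes full parameters only for the layer it is currently computing, peak per-GPU memory scales down roughly with the number of ranks, which is what makes this approach practical for models too large to fit on a single device.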