LoRA Training

Low-Rank Adaptation (LoRA) is a technique for efficiently fine-tuning large language models: the pretrained weights are frozen and only small low-rank update matrices are trained. This cuts the number of trainable parameters by several orders of magnitude (typically from billions down to millions or fewer), reduces memory requirements, and speeds up training, while matching the quality of full fine-tuning on many tasks. The reference implementation by Microsoft has seen broad adoption, and the technique has become fundamental to making model customization accessible. Open source.
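The core idea can be sketched in a few lines. Instead of updating a full weight matrix `W`, LoRA trains a low-rank product `B @ A` and adds it to the frozen `W` at forward time. The dimensions, rank, and scaling factor below are illustrative assumptions, not values from any particular model:

```python
import numpy as np

# Minimal sketch of the LoRA idea. A frozen weight W (d_out x d_in) is
# augmented with a trainable low-rank update B @ A, where B is
# (d_out x r) and A is (r x d_in) with r << min(d_out, d_in).
# All sizes here are hypothetical, chosen only to show the math.

d_out, d_in, rank, alpha = 1024, 1024, 8, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))        # frozen pretrained weight
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable, small random init
B = np.zeros((d_out, rank))                   # trainable, zero init: update starts at 0

def lora_forward(x):
    # Adapted forward pass: W x + (alpha / r) * B (A x).
    # Computing A x first keeps the extra cost O(r * (d_in + d_out)).
    return W @ x + (alpha / rank) * (B @ (A @ x))

full_params = W.size            # what full fine-tuning would train
lora_params = A.size + B.size   # what LoRA trains instead
print(f"full fine-tune params: {full_params:,}")   # 1,048,576
print(f"LoRA trainable params: {lora_params:,}")   # 16,384
print(f"reduction factor:      {full_params // lora_params}x")  # 64x
```

Because `B` starts at zero, the adapted model is identical to the pretrained one before training, and after training the update `(alpha / rank) * B @ A` can be merged into `W` so inference adds no latency.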

