Technique
LoRA (Low-Rank Adaptation)
A parameter-efficient fine-tuning technique that freezes a model's pretrained weights and trains small low-rank matrices added alongside them, rather than updating all of the original parameters.
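The idea can be illustrated with a minimal sketch: the pretrained weight `W` stays frozen, and only two small factors `A` and `B` (whose product has rank at most `r`) are trainable. All names, shapes, and the `alpha` scaling below are illustrative assumptions, not a specific library's API.

```python
import numpy as np

# Minimal LoRA sketch (illustrative names and shapes).
d_out, d_in, r = 64, 64, 4                   # r << d_out, d_in: adapter rank
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))       # frozen pretrained weight (not trained)
A = rng.standard_normal((r, d_in)) * 0.01    # trainable low-rank factor
B = np.zeros((d_out, r))                     # trainable; zero-init so B @ A = 0 at start
alpha = 8.0                                  # scaling hyperparameter (assumed value)

def lora_forward(x):
    # Effective weight is W + (alpha / r) * B @ A; W itself never changes.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
y = lora_forward(x)

full_params = W.size          # parameters touched by full fine-tuning: 64 * 64 = 4096
lora_params = A.size + B.size # trainable parameters with LoRA: 4*64 + 64*4 = 512
```

Because `B` is zero-initialized, the adapter contributes nothing at the start of training, so the model's initial behavior matches the frozen pretrained model; only 512 of the 4096 parameters are ever updated in this sketch.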