Technique · Updated 2026-04

LoRA

Low-Rank Adaptation
Definition

LoRA is an efficient fine-tuning technique that adapts an AI model by freezing its original weights and training only a small number of added parameters, drastically reducing the cost of adaptation.

Frequently Asked Questions

Why LoRA instead of classic fine-tuning?
Classic fine-tuning updates all model parameters (often billions). LoRA freezes those weights and instead trains a small pair of low-rank matrices per adapted layer — 100 to 1000x fewer trainable parameters, which makes fine-tuning feasible on a consumer GPU.
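The low-rank idea above can be sketched in a few lines. This is a minimal NumPy illustration (not a training loop): the frozen weight matrix W is adapted as W + (alpha / r) · B @ A, where B and A are the small trainable matrices and r is the rank. The dimensions and the scaling factor alpha are illustrative choices, not values from this document.

```python
import numpy as np

def lora_update(W, A, B, alpha=1.0):
    """Return the adapted weight W' = W + (alpha / r) * B @ A.

    W: frozen base weight, shape (d_out, d_in)
    B: trainable, shape (d_out, r); A: trainable, shape (r, d_in)
    """
    r = A.shape[0]  # the low rank shared by A and B
    return W + (alpha / r) * (B @ A)

# Example dimensions (hypothetical): a 4096x4096 layer, rank 8.
d, r = 4096, 8
W = np.zeros((d, d))
A = np.zeros((r, d))  # in practice initialized from a Gaussian
B = np.zeros((d, r))  # in practice initialized to zero, so W' starts equal to W
W_adapted = lora_update(W, A, B, alpha=16.0)

# Parameter count: full fine-tuning touches d*d values,
# LoRA trains only d*r + r*d -- here a 256x reduction.
full_params = d * d
lora_params = d * r + r * d
print(full_params // lora_params)  # → 256
```

Because B starts at zero, the adapted model initially behaves exactly like the base model; training then moves only A and B while W stays frozen.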
Is LoRA only for images?
No. LoRA works on LLMs (text) and diffusion models (images). On Stable Diffusion, LoRAs are very popular for learning styles or characters.