Technique — Updated 2026-04
LoRA
Low-Rank Adaptation
Definition
LoRA is a parameter-efficient fine-tuning technique that adapts an AI model by training only small low-rank update matrices while the original weights stay frozen, drastically reducing compute and memory cost.
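The idea above can be sketched in a few lines. This is an illustrative toy, not a library API: a frozen weight matrix W is augmented with a low-rank product B @ A, and only A and B would be trained. All dimensions and the scaling factor are hypothetical choices.

```python
import numpy as np

d_out, d_in, r = 512, 512, 8                # r << d: the "low rank"
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))      # pretrained weight, frozen
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, initialized small
B = np.zeros((d_out, r))                    # trainable, zero-init so the update starts at 0
alpha = 16                                  # scaling hyperparameter

def lora_forward(x):
    # Effective weight: W + (alpha / r) * (B @ A), applied without materializing it
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B = 0, the adapted layer matches the original exactly.
assert np.allclose(lora_forward(x), W @ x)

# Only A and B are trainable: 2 * r * d parameters instead of d * d.
lora_params = A.size + B.size
full_params = W.size
print(lora_params, full_params)  # 8192 262144
```

Zero-initializing B is the standard trick: it guarantees the adapted model starts out identical to the pretrained one, so training begins from a known-good point.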
See also in the glossary
Fine-tuning
Fine-tuning is the process of retraining an existing AI model on a specific dataset to adapt it to a particular domain or task.
LLM (Large Language Model)
An LLM is an AI model trained on billions of texts, capable of understanding and generating human language.
Foundation Model
A foundation model is a large AI model pre-trained on massive data, adaptable to multiple tasks.
Text-to-Image
Text-to-Image refers to generating images from text descriptions using generative AI models.
Frequently asked questions
Why LoRA instead of classic fine-tuning?
Classic fine-tuning updates all model parameters (often billions). LoRA trains only a pair of small low-rank matrices per adapted layer, typically 100 to 1000x fewer trainable parameters, which makes it feasible on a consumer GPU.
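The order of magnitude is simple arithmetic. Assuming a hypothetical square layer of hidden size 4096 and rank 8 (both illustrative values, not taken from any specific model):

```python
# Back-of-envelope trainable-parameter count for one layer.
d, r = 4096, 8                   # hypothetical hidden size and low rank
full = d * d                     # full fine-tuning updates every weight entry
lora = 2 * d * r                 # LoRA trains only A (r x d) and B (d x r)
print(full, lora, full // lora)  # 16777216 65536 256
```

At rank 8 this single layer already sees a ~256x reduction; lower ranks or larger hidden sizes push the ratio further toward the 1000x end.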
Is LoRA only for images?
No. LoRA works on both LLMs (text) and diffusion models (images). With Stable Diffusion, LoRAs are widely used to teach the model specific styles or characters.