Technique · Updated 2026-04
Model Distillation
Definition
Distillation transfers knowledge from a large model (the teacher) to a smaller model (the student), retaining most of the teacher's performance at a much lower inference cost.
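In the classic recipe, the student is trained to match the teacher's temperature-softened output distribution alongside the ground-truth labels. Below is a minimal sketch of that loss in PyTorch; the `teacher`/`student` models, the temperature `T`, and the weight `alpha` are illustrative assumptions, not a specific library's API.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend a soft-target KL term with hard-label cross-entropy (Hinton et al., 2015)."""
    # Soften both output distributions with temperature T, then compare them.
    soft_student = F.log_softmax(student_logits / T, dim=-1)
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    # T**2 rescales gradients so the soft term keeps its magnitude as T grows.
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * (T ** 2)
    # Standard cross-entropy on the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# One training step (teacher frozen, student learns):
# with torch.no_grad():
#     teacher_logits = teacher(batch)
# loss = distillation_loss(student(batch), teacher_logits, labels)
# loss.backward(); optimizer.step()
```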
See also in the glossary
LLM (Large Language Model)
An LLM is an AI model trained on vast amounts of text (billions of words), capable of understanding and generating human language.
SLM (Small Language Model)
An SLM is a compact language model optimized to run on local devices with targeted performance on specific tasks.
Fine-tuning
Fine-tuning is the process of retraining an existing AI model on a specific dataset to adapt it to a particular domain or task.
Quantization
Quantization reduces the precision of numbers in an AI model to make it smaller and faster, with minimal quality loss.
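To make that definition concrete, here is a minimal NumPy sketch of textbook affine int8 quantization; production toolchains add per-channel scales and calibration, and the function names here are illustrative, not from any particular library.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Affine (asymmetric) quantization of float weights to int8."""
    w_min, w_max = float(w.min()), float(w.max())
    scale = max((w_max - w_min) / 255.0, 1e-8)  # map the float range onto 256 int8 levels
    zero_point = round(-128 - w_min / scale)    # int8 code that represents float 0.0
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s, z = quantize_int8(w)
print("max reconstruction error:", np.abs(w - dequantize(q, s, z)).max())
```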
Tools that use model distillation
DeepSeek
The Chinese open-source model at GPT-4 level
4.7/5
Mistral Le Chat
The sovereign European AI, GDPR-compliant
4.5/5
OpenClaw
The open-source AI agent that turns your LLMs into autonomous workers
4.5/5
Replit
Cloud IDE with built-in AI for coding from anywhere
4.5/5
Frequently asked questions
Why distill instead of fine-tune?
Fine-tuning adapts an existing model. Distillation creates a new smaller model that mimics a larger one. The result is faster and cheaper to run.
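The difference is visible in code: fine-tuning updates an existing model's own weights, while distillation trains a separate, smaller student against the larger model's outputs. A minimal PyTorch sketch with toy models; all names and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(32, 16)                 # toy batch of inputs
labels = torch.randint(0, 4, (32,))     # toy class labels

# Fine-tuning: adapt the EXISTING model by updating (a subset of) its weights.
base_model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
for p in base_model[0].parameters():
    p.requires_grad = False             # freeze early layers, train the rest
opt = torch.optim.AdamW([p for p in base_model.parameters() if p.requires_grad], lr=1e-3)
loss = nn.functional.cross_entropy(base_model(x), labels)
loss.backward(); opt.step()

# Distillation: the large model only supplies targets; a NEW, smaller student
# is trained to mimic it, and the student is what you ship.
teacher = nn.Sequential(nn.Linear(16, 256), nn.ReLU(), nn.Linear(256, 4)).eval()
student = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4))
opt = torch.optim.AdamW(student.parameters(), lr=1e-3)
with torch.no_grad():
    soft_targets = teacher(x).softmax(dim=-1)
loss = nn.functional.kl_div(student(x).log_softmax(dim=-1), soft_targets,
                            reduction="batchmean")
loss.backward(); opt.step()
```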
Does DeepSeek use distillation?
Yes. DeepSeek used distillation to create compact, high-performing models, a key factor in their strong price-performance ratio.