Updated 2026-04
Foundation Model
Definition
A foundation model is a large AI model pre-trained on massive amounts of data and adaptable to many downstream tasks.
See also in the glossary
LLM (Large Language Model)
An LLM is an AI model trained on vast text corpora, capable of understanding and generating human language.
Fine-tuning
Fine-tuning is the process of retraining an existing AI model on a specific dataset to adapt it to a particular domain or task.
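The mechanism behind fine-tuning, continuing gradient descent on new data from a model that has already been trained, can be sketched in miniature. This is a conceptual illustration only: a one-parameter linear model stands in for a large pre-trained network, and the datasets are invented.

```python
# Conceptual sketch of fine-tuning (assumption: a one-parameter linear
# model y = w * x stands in for a large pre-trained network).

def train(w, data, lr=0.1, steps=300):
    """Plain gradient descent on mean squared error for y = w * x."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# "Pre-training": broad data following the relation y = 3x
general = [(x / 10, 3 * x / 10) for x in range(-10, 11)]
w = train(0.0, general)

# "Fine-tuning": a small domain dataset with a slightly different
# relation (y = 3.5x); training continues from the pre-trained weight
domain = [(x / 10, 3.5 * x / 10) for x in range(-5, 6)]
w_ft = train(w, domain)
# w settles near 3.0 after pre-training; w_ft moves toward 3.5
```

The key point the sketch preserves is that fine-tuning starts from the pre-trained parameters rather than from scratch, which is why a small domain dataset is enough to shift the model's behavior.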
Deep Learning
Deep Learning is a subset of Machine Learning using multi-layered neural networks to learn complex representations from raw data.
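Why stacking layers matters can be shown with the classic XOR function, which no single linear layer can compute. The sketch below uses hand-set weights rather than learned ones, purely to illustrate what one hidden layer buys.

```python
# Minimal sketch of why depth matters (assumption: weights are set by
# hand rather than learned). XOR is not linearly separable, so no
# single-layer linear model can compute it, but one hidden ReLU layer can.

def relu(x):
    return max(0.0, x)

def xor_net(x1, x2):
    """Two-layer network: a hidden ReLU layer, then a linear output."""
    h1 = relu(x1 + x2)          # fires when either input is on
    h2 = relu(x1 + x2 - 1.0)    # fires only when both inputs are on
    return h1 - 2.0 * h2        # cancel the "both on" case

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, xor_net(a, b))  # reproduces the XOR truth table
```

Each hidden unit computes a simple intermediate feature, and the output layer combines them; deep networks repeat this idea over many layers to build up complex representations from raw data.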
Transformer
The Transformer is the neural network architecture behind virtually all modern LLMs, introduced by Google researchers in the 2017 paper "Attention Is All You Need".
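The core operation of the Transformer, scaled dot-product attention, fits in a few lines. The vectors below are hand-picked toy values standing in for learned query, key, and value projections.

```python
import math

# Minimal sketch of scaled dot-product attention (assumption: tiny
# hand-written 2-D vectors stand in for learned projections).

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Weight each value by how well its key matches the query."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

query = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0]]     # the first key aligns with the query
values = [[10.0, 0.0], [0.0, 10.0]]
out = attention(query, keys, values)
# out leans toward the first value, whose key matches the query best
```

In a real Transformer this operation runs in parallel over every token, letting each position draw information from the positions most relevant to it.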
Frequently Asked Questions
Why are they called foundation models?
Because they serve as a foundation for many different applications: a single model can be adapted for writing, code, analysis, and more.
Who creates foundation models?
OpenAI (GPT), Anthropic (Claude), Google (Gemini), Meta (Llama), and Mistral. Training costs run into hundreds of millions of dollars.