Updated 2026-04
AI Hallucination
Definition
An AI hallucination is a response generated by an AI model that appears plausible but is factually incorrect or fabricated.
See also in the glossary
LLM (Large Language Model)
An LLM is an AI model trained on massive text corpora, capable of understanding and generating human language.
RAG (Retrieval-Augmented Generation)
RAG is a technique that connects an LLM to external data sources to generate more accurate and up-to-date answers.
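The RAG pattern described above can be sketched in a few lines. This is a minimal illustration with a hypothetical two-document corpus: a real system would use vector embeddings for retrieval and pass the prompt to an actual LLM, whereas here the retriever is naive word overlap and the "generation" step is just assembling the grounded prompt.

```python
import re

def words(text):
    """Lowercased word set, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question, corpus, k=1):
    """Rank documents by word overlap with the question (toy retriever)."""
    ranked = sorted(corpus, key=lambda d: len(words(question) & words(d)),
                    reverse=True)
    return ranked[:k]

def build_prompt(question, corpus):
    """Embed the retrieved context so the model answers from it, not from memory."""
    context = "\n".join(retrieve(question, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Hypothetical mini-corpus for the example.
corpus = [
    "The Eiffel Tower is 330 metres tall.",
    "Python was created by Guido van Rossum.",
]
print(build_prompt("How tall is the Eiffel Tower?", corpus))
```

Because the model is told to answer only from the retrieved context, its response stays anchored to verified source material, which is what reduces hallucination.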
Prompt
A prompt is the instruction or question you give an AI to get a response. It's the interface between you and the model.
Generative AI
Generative AI refers to artificial intelligence systems capable of creating original content: text, images, video, audio, code.
Frequently asked questions
Why do LLMs hallucinate?
Because they generate text by predicting the most probable next word, not the most truthful one. They have no concept of truth, only of statistical plausibility.
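This mechanism can be shown with a toy next-word predictor over a tiny hypothetical corpus. Like an LLM at a vastly smaller scale, it picks the statistically most likely continuation; it has no way to know that the deliberately false sentence in its training data ("the capital of australia is sydney") is wrong.

```python
from collections import Counter

# Tiny hypothetical training corpus; the last sentence is false
# (Canberra, not Sydney, is the capital of Australia).
corpus = (
    "the capital of france is paris . "
    "the capital of france is paris . "
    "the capital of australia is sydney ."
).split()

# Count trigrams: how often word c follows the pair (a, b).
trigrams = Counter(zip(corpus, corpus[1:], corpus[2:]))

def next_word(w1, w2):
    """Return the most probable word after the two-word context (w1, w2)."""
    candidates = {c: n for (a, b, c), n in trigrams.items() if (a, b) == (w1, w2)}
    return max(candidates, key=candidates.get)

print(next_word("france", "is"))     # -> "paris" (true, and frequent)
print(next_word("australia", "is"))  # -> "sydney" (fluent, but false)
```

The model reproduces the false fact just as confidently as the true one: both are simply the most probable continuations it has seen.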
How can hallucinations be avoided?
RAG (grounding the AI in verified sources), human verification, and tools that cite their sources (such as Perplexity) significantly reduce the risk.