Behavior (Updated 2026-04)

AI Hallucination

Definition

An AI hallucination is a response generated by an AI model that appears plausible but is factually incorrect or fabricated.

Frequently Asked Questions

Why do LLMs hallucinate?
Because they generate text by predicting the most probable next word, not the most truthful one. They have no concept of truth, only statistical plausibility: if a false statement is a common pattern in the training data, it can outrank the correct one.
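This can be sketched with a toy example. The probabilities below are invented for illustration, but the mechanism is real: greedy decoding picks the statistically most likely token, regardless of whether it is factually correct.

```python
# Sketch of why next-token prediction can "hallucinate": the model picks
# the most probable continuation; it has no notion of truth.

def pick_next_token(probs: dict[str, float]) -> str:
    """Greedy decoding: return the token with the highest probability."""
    return max(probs, key=probs.get)

# Hypothetical distribution for "The capital of Australia is ..."
# (made-up numbers; the famous city dominates the training text).
next_token_probs = {
    "Sydney": 0.55,    # statistically common, factually wrong
    "Canberra": 0.40,  # the correct answer
    "Melbourne": 0.05,
}

print(pick_next_token(next_token_probs))  # plausible, but wrong: "Sydney"
```

The model is not "lying": under this objective, a fluent wrong answer and a fluent right answer are scored the same way.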
How can hallucinations be avoided?
Retrieval-augmented generation (RAG, which grounds the model in verified sources), human verification of outputs, and tools that cite their sources (such as Perplexity) significantly reduce the risk.
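The RAG idea can be shown in miniature: retrieve the most relevant verified snippet and instruct the model to answer only from it. The tiny corpus and the word-overlap scoring below are toy assumptions, not a production retriever (real systems use embeddings and vector search).

```python
# Minimal RAG-style sketch: ground the answer in a retrieved source
# instead of relying on the model's statistical memory.
# CORPUS and the scoring function are illustrative assumptions.

CORPUS = [
    "Canberra is the capital of Australia.",
    "The Eiffel Tower is located in Paris.",
]

def retrieve(question: str, corpus: list[str]) -> str:
    """Return the snippet sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(corpus, key=lambda doc: len(q_words & set(doc.lower().split())))

def build_grounded_prompt(question: str) -> str:
    """Prepend the retrieved source so the model answers from it."""
    context = retrieve(question, CORPUS)
    return f"Answer using only this source:\n{context}\n\nQuestion: {question}"

print(build_grounded_prompt("What is the capital of Australia?"))
```

The grounding does two things: it biases generation toward the verified text, and it gives a human reviewer a citation to check.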