LLM Hallucination

When a language model generates plausible-sounding but factually incorrect information, presenting fabricated claims as if they were established facts.

LLM hallucination refers to the tendency of language models to generate content that sounds authoritative but contains fabricated facts, invented citations, or incorrect claims. This happens because LLMs are trained to predict likely text sequences, not to verify truth. They can confidently state statistics that don't exist, attribute quotes to people who never said them, or describe studies that were never conducted.

For content operations, hallucination risk is managed across the pipeline: research phases ground the content in real data, briefs constrain what the model should cover, and editorial review scoring checks factual accuracy. The accuracy dimension in review scoring specifically targets hallucination by evaluating whether claims are supported by the research that informed the brief.
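To make the idea concrete, the sketch below shows one way an accuracy check could flag unsupported claims by comparing each claim against the research snippets behind the brief. It is a minimal, hypothetical example: the function names, the lexical-overlap heuristic, and the threshold are assumptions for illustration, not RankDraft's implementation, and a production scorer would more likely use semantic matching or an LLM judge.

```python
# Hypothetical sketch: flag draft claims with no support in the research notes.
# score_accuracy, min_overlap, and the data shapes are illustrative, not RankDraft's API.

def tokenize(text: str) -> set[str]:
    """Lowercase word tokens, skipping very short words."""
    return {w for w in text.lower().split() if len(w) > 3}

def score_accuracy(claims: list[str], research_snippets: list[str],
                   min_overlap: float = 0.5) -> list[dict]:
    """Return one result per claim: its best support score and whether it is flagged."""
    snippet_tokens = [tokenize(s) for s in research_snippets]
    results = []
    for claim in claims:
        c_tokens = tokenize(claim)
        # Overlap ratio against each research snippet; the best match decides support.
        best = max(
            (len(c_tokens & s) / len(c_tokens) if c_tokens else 0.0
             for s in snippet_tokens),
            default=0.0,
        )
        results.append({
            "claim": claim,
            "support": round(best, 2),
            "flagged": best < min_overlap,  # no grounding found: possible hallucination
        })
    return results

claims = ["The survey covered 2,000 marketers across Europe."]
research = ["2024 survey of 2,000 marketers in Europe on content budgets"]
print(score_accuracy(claims, research))
```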

RankDraft's review phase scores accuracy to catch hallucinated claims before publication.