A technique that cross-references LLM outputs with external data sources and search results to generate factually grounded responses. A core method for reducing hallucinations.
Grounding anchors an LLM's outputs in external, trusted data sources so that responses are factually accurate. It is a core approach for reducing hallucinations (responses that are plausible but factually incorrect).
RAG (Retrieval-Augmented Generation) is the most common method for achieving grounding. By retrieving relevant information from external databases or documents and passing it to the model as context, the model can base its answers on verified material rather than solely on its training data, which also makes it easier for the model to recognize what it does not know.
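The retrieve-then-prompt flow can be sketched as follows. This is a minimal illustration: the keyword-overlap retriever, the function names, and the prompt wording are all hypothetical stand-ins for a real vector store and LLM call.

```python
# Minimal RAG-for-grounding sketch. The retriever and prompt format here
# are illustrative assumptions, not a specific library's API.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    query_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def build_grounded_prompt(query: str, documents: list[str]) -> str:
    """Append retrieved passages as context so the model answers from them."""
    context_block = "\n".join(
        f"- {passage}" for passage in retrieve(query, documents)
    )
    return (
        "Answer using ONLY the context below. "
        "If the context is insufficient, say you do not know.\n"
        f"Context:\n{context_block}\n"
        f"Question: {query}"
    )


docs = [
    "The Tokyo office opens at 9:00 on weekdays.",
    "Refunds are processed within 5 business days.",
    "The Osaka office is closed on national holidays.",
]
prompt = build_grounded_prompt("When does the Tokyo office open?", docs)
```

In a production system, `retrieve` would query an embedding-based vector store, and the returned prompt would be sent to the LLM; the "answer only from context" instruction is what ties the response to the retrieved material.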
However, grounding is not complete with RAG alone: if the quality of the retrieved results is poor, the model risks being grounded on incorrect information.
Effective grounding therefore combines multiple layers, such as retrieval quality checks, prompt-level citation requirements, and post-hoc verification of the generated output.
Simply requiring the model to "respond with citations" already reduces the hallucination rate significantly. However, since LLMs can also fabricate citations, the system should include a post-processing step that verifies whether the cited URLs actually exist.
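Such a post-processing step might look like the sketch below. The function name and the injectable `url_exists` checker are hypothetical design choices; in production the checker would issue a real HTTP request (e.g. a HEAD request), while here it is kept abstract so the logic stays self-contained.

```python
import re
from typing import Callable

# Rough URL matcher; real systems may need a stricter pattern.
URL_PATTERN = re.compile(r"https?://[^\s)\]>\"']+")


def find_unverified_citations(
    response: str,
    url_exists: Callable[[str], bool],
) -> list[str]:
    """Return cited URLs that the checker could not confirm.

    `url_exists` is injected so it can be a real HTTP check in
    production and a stub in tests (hypothetical design, not a fixed API).
    """
    cited = URL_PATTERN.findall(response)
    return [url for url in cited if not url_exists(url)]


# Example with a stubbed checker standing in for a real HTTP request:
known = {"https://example.com/report"}
response = (
    "See https://example.com/report and "
    "https://example.com/fabricated for details."
)
bad = find_unverified_citations(response, lambda u: u in known)
# bad == ["https://example.com/fabricated"]
```

Responses containing unverifiable URLs can then be flagged, regenerated, or returned with the suspect citations stripped.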


Hallucination refers to the phenomenon in which an AI model generates information that is not based on facts as if it were correct. It stems from the mechanism by which LLMs generate "plausible" text from patterns in training data, and is considered difficult to eliminate entirely.

Chain-of-Thought (CoT) prompting is a technique that improves accuracy on complex tasks by having the LLM explicitly generate intermediate reasoning steps before the final answer.
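In its simplest form, this amounts to instructing the model to show its intermediate steps. The template below is a hedged sketch; the exact wording is an assumption, and the prompt would be sent to whatever LLM the system uses.

```python
# Hypothetical chain-of-thought prompt template; the wording is
# illustrative, not a specific vendor's recommended format.

def build_cot_prompt(question: str) -> str:
    """Wrap a question in an instruction to reason step by step."""
    return (
        "Solve the problem step by step, showing each intermediate "
        "calculation, then state the final answer on a new line "
        "prefixed with 'Answer:'.\n"
        f"Problem: {question}"
    )


prompt = build_cot_prompt(
    "A shop sells pens at 120 yen each. How much do 7 pens cost?"
)
```

Asking for a machine-readable marker such as `Answer:` also makes it easier to parse the final answer out of the model's reasoning.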

RAG (Retrieval-Augmented Generation) is a technique that improves the accuracy and currency of responses by retrieving relevant information from external knowledge sources and appending the results to the input of an LLM.

