Hallucination refers to the phenomenon in which an AI model presents information that has no factual basis as if it were correct. It stems from the mechanism by which LLMs generate "plausible" text from patterns in their training data, and is considered difficult to eliminate entirely.
## Why It Is Called "Hallucination"

Just as human hallucinations involve perceiving things that do not exist, AI hallucinations generate "facts" that do not exist. The analogy breaks down in one fundamental respect, however: an LLM has no mechanism for determining whether something is factual. It simply generates the most probable next token in a chain, and the result may happen to align with reality or may be entirely fabricated.

## Typical Patterns

Hallucinations take several characteristic forms. Representative examples include citations of non-existent papers (complete with fictitious author names and DOIs), false biographical details attributed to real individuals, and fabricated but plausible-sounding numerical data. What makes this particularly troublesome is that hallucinated output is grammatically correct and blends naturally into its context. Obvious errors are easy to spot; it is the "90% correct, 10% false" pattern that makes detection so difficult.

## Mitigation Approaches

The most promising mitigation approach at present is RAG (Retrieval-Augmented Generation). Before the model generates a response, it retrieves relevant information from an external knowledge base and uses that information as the basis for its answer, thereby increasing the probability of factually grounded output.

Another direction is incorporating HITL (Human-in-the-Loop). Designing a workflow in which humans review AI output reduces the risk of hallucinations making their way into final deliverables. In fields such as medicine and law, where the cost of misinformation is high, combining retrieval grounding with human review is becoming the de facto standard.
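The retrieve-then-generate flow behind RAG can be sketched as follows. This is a minimal illustration, not a specific library's API: the word-overlap scoring stands in for a real vector search, and the prompt format is an assumption.

```python
# Minimal sketch of the RAG flow: retrieve supporting passages first,
# then build a grounded prompt for the model to answer from.

def retrieve(query: str, corpus: list[str], top_k: int = 2) -> list[str]:
    """Rank passages by naive word overlap with the query
    (a stand-in for a real vector search over a knowledge base)."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(query: str, corpus: list[str]) -> str:
    """Prepend retrieved passages so the model answers from evidence,
    not solely from its parametric memory."""
    context = "\n".join(f"- {p}" for p in retrieve(query, corpus))
    return (
        "Answer using ONLY the context below. "
        "If the context is insufficient, say so.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

corpus = [
    "The Eiffel Tower was completed in 1889.",
    "Mount Fuji is the highest mountain in Japan.",
    "Photosynthesis converts light energy into chemical energy.",
]
prompt = build_grounded_prompt("When was the Eiffel Tower completed?", corpus)
```

Because the answer must come from retrieved context, a fabricated date is less likely; the explicit "if insufficient, say so" instruction also gives the model a sanctioned way to decline rather than hallucinate.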


A2A (Agent-to-Agent Protocol), published by Google in April 2025, is a communication protocol that enables different AI agents to perform capability discovery, task delegation, and state synchronization.
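Capability discovery in this style of protocol can be sketched as an agent publishing a machine-readable card of its skills, which a peer inspects before delegating a task. The field names and card structure below are simplified assumptions for illustration, not the full A2A schema.

```python
# Hedged sketch of agent-to-agent capability discovery: a peer reads
# an agent's published "card" to decide whether to delegate a task.
# Card fields here are illustrative assumptions, not the A2A spec.
import json

agent_card_json = json.dumps({
    "name": "translation-agent",
    "url": "https://agents.example.com/translate",  # hypothetical endpoint
    "skills": [
        {"id": "translate-text",
         "description": "Translate text between languages"},
    ],
})

def can_handle(card_json: str, skill_id: str) -> bool:
    """Capability discovery: check a peer agent's card for a skill
    before delegating a task to it."""
    card = json.loads(card_json)
    return any(s["id"] == skill_id for s in card.get("skills", []))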

Acceptance testing is a testing method that verifies whether developed features meet business requirements and user stories, from the perspective of the product owner and stakeholders.
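An acceptance test of the kind described above is typically written against a user story rather than an implementation detail. The sketch below assumes a hypothetical story ("As a shopper, I can apply a discount code so my total is reduced") and a hypothetical `ShoppingCart` system-under-test, defined inline to keep the example self-contained.

```python
# Acceptance-test sketch in Given/When/Then form against a user story.
# ShoppingCart is a hypothetical system-under-test for illustration.

class ShoppingCart:
    def __init__(self):
        self.items: list[float] = []
        self.discount = 0.0

    def add_item(self, price: float) -> None:
        self.items.append(price)

    def apply_code(self, code: str) -> None:
        if code == "SAVE10":  # assumed business rule: 10% off
            self.discount = 0.10

    def total(self) -> float:
        return round(sum(self.items) * (1 - self.discount), 2)

def test_discount_code_reduces_total():
    # Given a cart with items worth 100.00
    cart = ShoppingCart()
    cart.add_item(60.0)
    cart.add_item(40.0)
    # When the shopper applies a valid discount code
    cart.apply_code("SAVE10")
    # Then the total reflects the 10% discount
    assert cart.total() == 90.0
```

Note that the test names the business outcome (the discounted total), not internal mechanics, which is what distinguishes acceptance testing from unit testing.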

Agent Skills are reusable instruction sets that give an AI agent a specific task capability or area of expertise, functioning as modular units that extend the capabilities of an agent.
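The "modular instruction set" idea can be sketched as follows. The structure (a name, trigger keywords, and instructions merged into the agent's prompt) is an assumed design for illustration, not any specific framework's API.

```python
# Sketch of agent skills as modular, reusable instruction sets that
# extend an agent's base prompt when a task matches them.
from dataclasses import dataclass

@dataclass
class Skill:
    name: str
    trigger_keywords: list[str]   # when to activate this skill
    instructions: str             # text prepended to the agent's prompt

    def matches(self, task: str) -> bool:
        return any(kw in task.lower() for kw in self.trigger_keywords)

skills = [
    Skill("code-review", ["review", "pull request"],
          "Check diffs for bugs, style issues, and missing tests."),
    Skill("summarize", ["summarize", "tl;dr"],
          "Produce a concise summary preserving key facts."),
]

def extend_prompt(task: str, base_prompt: str) -> str:
    """Extend the agent's base prompt with any skills matching the task."""
    extra = [s.instructions for s in skills if s.matches(task)]
    return "\n".join([base_prompt, *extra])
```

Because each skill is self-contained, it can be added, removed, or shared between agents without touching the agent's core logic.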


Agentic AI is a general term for AI systems that interpret goals and autonomously repeat the cycle of planning, executing, and verifying actions without requiring step-by-step human instruction.
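The plan-execute-verify cycle can be sketched with toy stand-ins. In a real system the planner, executor, and verifier would call an LLM and external tools; here they are trivial functions chosen only to show the loop's shape, with a step budget in place of per-step human instruction.

```python
# Sketch of the agentic plan -> execute -> verify loop. The three
# stages are toy stand-ins; a real agent would invoke an LLM and
# tools at each step.

def plan(goal: int, current: int) -> str:
    """Decide the next action toward the goal."""
    return "increment" if current < goal else "done"

def execute(action: str, current: int) -> int:
    """Carry out the chosen action."""
    return current + 1 if action == "increment" else current

def verify(goal: int, current: int) -> bool:
    """Check whether the goal has been reached."""
    return current >= goal

def run_agent(goal: int, max_steps: int = 10) -> int:
    """Autonomously repeat the cycle until the goal is met or the
    step budget runs out, with no human input between steps."""
    state = 0
    for _ in range(max_steps):
        action = plan(goal, state)
        state = execute(action, state)
        if verify(goal, state):
            break
    return state
```

The defining trait is that the loop itself, not a human, decides when to act again and when to stop.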