A prompting technique that improves accuracy on complex tasks by having the LLM explicitly generate intermediate reasoning steps.
## What is Chain of Thought

Chain of Thought (CoT) is a prompting technique that improves accuracy on complex tasks by explicitly having an LLM generate intermediate reasoning steps.

### Understanding Through a Concrete Example

For a problem such as "There are 3 apples and 5 oranges. What is the total?", instead of having the LLM answer "8" directly, it is guided to output the intermediate process: "3 apples + 5 oranges = 8." The difference is hard to notice with simple addition, but accuracy improves significantly for problems involving multi-step reasoning or conditional branching—such as determining whether legal requirements are satisfied.

Simply adding "Please think step by step" to a prompt can be effective. This is called Zero-shot CoT.

### Relationship with Reasoning Models

Reasoning models are designed with CoT built into the model itself, automatically generating a chain of thought without any prompting. On the other hand, CoT can also be elicited from standard LLMs through prompt engineering, so the practical approach is to first try it on the prompt side, and switch to a reasoning model if the accuracy is insufficient.

One important caveat: CoT increases the number of output tokens, which raises costs. Rather than applying it to all requests, the smart operational approach is to limit its use to queries where accuracy is critical.
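The selective-use advice above can be sketched in code. This is a minimal, hypothetical example (the function name and the model-client call are assumptions, not part of any specific API): the Zero-shot CoT trigger phrase is appended only when a query is flagged as needing multi-step reasoning.

```python
# Minimal sketch of Zero-shot CoT prompting.
# Assumption: you send `prompt` to some LLM client afterwards;
# that client call is omitted here to keep the sketch self-contained.

COT_TRIGGER = "Please think step by step."

def build_prompt(question: str, needs_reasoning: bool = True) -> str:
    """Append the Zero-shot CoT trigger only when requested, since CoT
    increases output tokens and therefore cost."""
    if needs_reasoning:
        return f"{question}\n{COT_TRIGGER}"
    return question

# Multi-step question: enable CoT.
print(build_prompt("There are 3 apples and 5 oranges. What is the total?"))

# Simple lookup: skip CoT to save output tokens.
print(build_prompt("What is the capital of France?", needs_reasoning=False))
```

Gating the trigger phrase behind a flag (rather than hard-coding it into every request) is one way to implement the cost guidance above: route only accuracy-critical queries through the CoT path.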


A2A (Agent-to-Agent Protocol) is a communication protocol that enables different AI agents to perform capability discovery, task delegation, and state synchronization, published by Google in April 2025.

Acceptance testing is a testing method that verifies whether developed features meet business requirements and user stories, from the perspective of the product owner and stakeholders.

A mechanism that controls task distribution, state management, and coordination flows among multiple AI agents.

Harness engineering is a design method that structurally prevents errors by AI agents.

Agent Skills are reusable instruction sets defined to enable AI agents to perform specific tasks or areas of expertise, functioning as modular units that extend the capabilities of an agent.