"In the Loop" is a collaborative mode in which humans review and correct AI agent outputs one by one. While quality control is reliable, it tends to create a bottleneck where human review cannot keep pace with the agent's generation speed.
One of the three collaboration modes between humans and AI agents (Outside the Loop / In the Loop / On the Loop) described by Birgitta Böckeler, writing with Martin Fowler. It refers to a way of working in which humans review each piece of code or output an agent generates and directly correct any issues found.

### Why It Becomes a Bottleneck

An agent can generate code in seconds, but human review takes anywhere from a few minutes to tens of minutes. This asymmetry is the fundamental limitation of In the Loop: a queue of items awaiting review piles up, making it impossible to fully leverage the agent's high-speed generation. In the author's own experience, it was not uncommon for an agent to open five PRs in 30 minutes, only for the reviews to take half a day.

### When In the Loop Is Appropriate

That said, there are cases where In the Loop is effective: infrastructure changes that directly affect production environments, authentication and authorization implementations with security implications, and financial or medical code subject to regulatory compliance. In these domains, the cost of having a human verify each step is worth paying.

The key is to **not operate everything under In the Loop**. A practical balance is to apply In the Loop only to high-risk changes and shift everything else to On the Loop.
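The risk-based split described here can be sketched as a simple routing rule. This is a minimal illustration, not anything from the article: the risk categories and the `AgentChange` type are hypothetical stand-ins for a team's own review policy.

```python
from dataclasses import dataclass

# Hypothetical high-risk areas for illustration; real criteria would
# come from a team's own review policy (production infra, auth,
# regulated domains, etc.).
HIGH_RISK_AREAS = {"infrastructure", "auth", "payments", "medical"}


@dataclass
class AgentChange:
    title: str
    area: str  # e.g. "auth", "docs", "refactor"


def review_mode(change: AgentChange) -> str:
    """Route high-risk changes to In the Loop (per-change human review);
    everything else goes to On the Loop (humans improve the harness
    instead of reviewing each individual output)."""
    if change.area in HIGH_RISK_AREAS:
        return "in-the-loop"
    return "on-the-loop"


# An auth change gets per-change review; a docs change does not.
assert review_mode(AgentChange("rotate tokens", "auth")) == "in-the-loop"
assert review_mode(AgentChange("fix typo", "docs")) == "on-the-loop"
```

The point of the sketch is only that the routing decision is made once per change, cheaply, so the expensive per-change human review is reserved for the small high-risk subset.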


"On the Loop" is a collaboration mode that focuses on improving the harness (operating environment, constraints, and tools) rather than individual outputs of AI agents, and represents the recommended human position in the practice of harness engineering.

"Outside the Loop" is a collaboration mode in which humans specify only the outcome requirements and delegate all implementation details to AI agents; it is also known as vibe coding.

HITL (Human-in-the-Loop) is an approach that incorporates into the design a process by which humans review, correct, and approve the outputs of AI systems. Rather than full automation, it establishes human intervention points based on the criticality of decisions, thereby ensuring accuracy and reliability.
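One way to picture such an intervention point is a gate that only blocks on a human when a decision's criticality crosses a threshold. This is a minimal sketch under assumptions of my own: the 0.0–1.0 criticality scale, the threshold value, and the callback names are all hypothetical.

```python
from typing import Callable


def apply_with_hitl(
    action: Callable[[], str],
    criticality: float,
    human_approves: Callable[[], bool],
    threshold: float = 0.7,  # hypothetical cut-off, not from the text
) -> str:
    """Run an AI-proposed action, inserting a human approval gate
    only when the decision's criticality (hypothetical 0.0-1.0
    scale) reaches the threshold."""
    if criticality >= threshold:
        if not human_approves():
            return "rejected by human reviewer"
    return action()


# Low-criticality actions run unattended; high-criticality ones
# wait for the human decision.
print(apply_with_hitl(lambda: "deployed", 0.2, lambda: False))  # → deployed
print(apply_with_hitl(lambda: "deployed", 0.9, lambda: False))  # → rejected by human reviewer
```

The design choice this illustrates is that full automation and full review are not the only options: the threshold parameterizes how much of the workload receives human intervention.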


What is Harness Engineering? A Design Method to Structurally Prevent AI Agent Errors

Closing the "Invisible Attack Vector" in AI Chat — An Implementation Guide to Preventing Prompt Injection via DB

What is Human-in-the-Loop (HITL)? The Basics of "Human Participation" Design for Establishing AI-Driven Business Process Automation