"On the Loop" is a collaboration mode in which humans focus on improving the harness (the operating environment, constraints, and tools) rather than on individual AI agent outputs; it is the recommended human position in the practice of harness engineering.
This is the third mode recommended by Bockeler. Rather than delegating entirely, as in Outside the Loop, or checking every step, as in In the Loop, it concentrates human effort on building the environment in which the agent can operate correctly.

### The Core Question: What to Fix

The discipline of On the Loop is tested when frustration arises. When an agent's output contains a mistake, the most natural reaction is to fix the artifact directly. On the Loop suppresses that impulse in favor of modifying the harness instead. Adding rules to CLAUDE.md, adjusting linter settings, adding test cases: these investments in the environment pay off not just for the single issue at hand, but for every subsequent output.

Bockeler calls this virtuous cycle the "Agentic Flywheel." Improving the harness raises the quality of the agent's output; higher quality expands the range of tasks that can be delegated to the agent; a wider range reveals further opportunities to improve the harness. Eventually, the agent itself begins to suggest harness improvements, and a self-reinforcing system takes hold.

### The Difficulty of Staying On the Loop

The concept is simple, but the practice requires discipline. When fixing a bug directly is clearly faster, the decision to modify the harness instead carries a high psychological barrier. Whether the trade-off between short-term efficiency and long-term quality is understood and shared across the whole team determines whether this approach takes root.
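The "fix the harness, not the artifact" move can be made concrete with a small sketch. Instead of hand-correcting an agent's faulty output, the mistake is encoded as a permanent regression test that every future run must pass. The function and its behavior here are hypothetical, purely for illustration:

```python
# Hypothetical example: an agent repeatedly produced a user-ID
# normalizer that forgot to trim whitespace. Rather than patching
# the output once, we add the failure to the harness as a test.

def normalize_user_id(raw: str) -> str:
    """Illustrative function the agent kept getting wrong."""
    return raw.strip().lower()

def test_user_id_is_trimmed_and_lowercased():
    # This check now guards every subsequent agent-generated change.
    assert normalize_user_id("  Alice42 ") == "alice42"
```

The one-time cost of writing the test is repaid on every later run: the same class of mistake can no longer slip through silently.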


"In the Loop" is a collaborative mode in which humans review and correct AI agent outputs one by one. While quality control is reliable, it tends to create a bottleneck where human review cannot keep pace with the agent's generation speed.

"Outside the Loop" is a collaboration mode in which humans specify only the outcome requirements and delegate all implementation details to AI agents; it is also known as vibe coding.

Harness engineering is a methodology for designing structural constraints—such as prompts, tool definitions, and CI/CD pipelines—to prevent AI agents from malfunctioning.
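As a minimal sketch of such a structural constraint, a CI/CD gate can reject agent-generated changes that touch protected paths, so the guarantee is enforced by the pipeline rather than by human review. The path prefixes and function names below are hypothetical:

```python
# Sketch of a harness constraint enforced in CI (hypothetical paths):
# an agent may edit application code, but never migrations or secrets.

PROTECTED_PREFIXES = ("migrations/", "secrets/")

def protected_violations(changed_files: list[str]) -> list[str]:
    """Return the changed files the agent is not allowed to touch."""
    return [f for f in changed_files if f.startswith(PROTECTED_PREFIXES)]

def ci_gate(changed_files: list[str]) -> None:
    """Fail the build if any protected file was modified."""
    bad = protected_violations(changed_files)
    if bad:
        raise SystemExit(f"harness constraint violated: {bad}")
```

Because the check runs on every change, the constraint holds even when no human is watching the individual output.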


What is Harness Engineering? A Design Method to Structurally Prevent AI Agent Errors

HITL (Human-in-the-Loop) is an approach that incorporates into the design a process by which humans review, correct, and approve the outputs of AI systems. Rather than full automation, it establishes human intervention points based on the criticality of decisions, thereby ensuring accuracy and reliability.
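The idea of intervention points based on criticality can be sketched as a small routing function: low-risk outputs are applied automatically, while high-risk ones are held for human approval. The tiers and action strings are illustrative assumptions, not part of any specific framework:

```python
# Minimal sketch of a criticality-based HITL intervention point.
# The criticality labels ("low"/"high") are illustrative.

def route(action: str, criticality: str) -> str:
    """Auto-approve low-risk outputs; queue high-risk ones for review."""
    if criticality == "high":
        return "needs-human-approval"
    return "auto-approved"
```

In practice the criticality label would come from a policy (e.g., which systems or data an action touches), and "needs-human-approval" would place the item in a review queue rather than executing it.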