Prompt engineering is the practice of designing the structure, phrasing, and context of input text (prompts) in order to elicit desired outputs from large language models (LLMs).
LLMs operate by predicting the continuation of input text. For the same question, whether you include preconditions or an output-format specification can significantly change the accuracy and usefulness of the response. Simply adding an instruction like "provide three suggestions in bullet points in Japanese" yields far more practical answers than asking vaguely.
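The effect of an explicit format specification can be sketched as a small prompt-building helper. This is an illustrative example, not any particular library's API; `build_prompt` and its parameters are hypothetical names.

```python
def build_prompt(question: str, format_spec: str = "") -> str:
    """Compose a prompt, optionally appending an explicit output-format instruction."""
    parts = [question]
    if format_spec:
        parts.append(f"Output format: {format_spec}")
    return "\n".join(parts)

# A vague prompt vs. one with an explicit format specification
vague = build_prompt("How can we speed up our test suite?")
specific = build_prompt(
    "How can we speed up our test suite?",
    format_spec="provide three suggestions in bullet points",
)
print(specific)
```

The only difference is the appended format line, yet in practice it is often the difference between a rambling answer and a usable one.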
Zero-shot / Few-shot: Giving instructions without examples is Zero-shot; providing one to several concrete examples is Few-shot. Few-shot tends to produce more consistent results for classification tasks and format specification.
Chain-of-Thought (CoT): A technique that makes the model explicitly show its reasoning process by instructing it to "think step by step." It is known to improve accuracy on math and logic problems.
Role Prompting: Assigning a role such as "You are a senior engineer." Used to control the tone and level of expertise in the output.
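The three techniques above can be combined in a single prompt. The sketch below is a hedged, self-contained example of a few-shot classification prompt with role prompting and an optional CoT trigger; the task, labels, and function names are assumptions for illustration only.

```python
# Role prompting: fix the model's persona and expertise level.
ROLE = "You are a senior engineer reviewing bug reports."

# Few-shot: one to several concrete input/label pairs.
FEW_SHOT_EXAMPLES = [
    ("App crashes when I tap save", "bug"),
    ("Please add dark mode", "feature-request"),
]

def classification_prompt(text: str, cot: bool = False) -> str:
    """Build a few-shot classification prompt, optionally with a CoT instruction."""
    lines = [ROLE, "Classify each report as 'bug' or 'feature-request'.", ""]
    for example, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Report: {example}")
        lines.append(f"Label: {label}")
        lines.append("")
    lines.append(f"Report: {text}")
    if cot:
        # Chain-of-Thought: ask the model to reason before answering.
        lines.append("Let's think step by step before giving the label.")
    lines.append("Label:")
    return "\n".join(lines)

print(classification_prompt("Login button does nothing", cot=True))
```

With `cot=False` this is a plain few-shot prompt; the examples alone usually keep the output label format consistent.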
Around 2025, attention shifted beyond one-off prompt design toward how to structure the information (context) passed to an AI system as a whole. This area is called context engineering, and it encompasses the injection of external knowledge via RAG, the structuring of tool definitions, and the management of conversation history. Prompt engineering is increasingly regarded as just one component within it.
That said, the fundamental principles of prompting—clear instructions, appropriate examples, and output format specification—also form the foundation of context engineering. You cannot design an entire system without first mastering the basics.
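Context engineering can be pictured as assembling those pieces, system prompt, retrieved documents, and conversation history, under a size budget. The sketch below is a minimal illustration under stated assumptions: the function name is hypothetical, and the 4-characters-per-token ratio is a crude heuristic, not a real tokenizer.

```python
def assemble_context(system, retrieved_docs, history, budget_tokens=1000):
    """Combine system prompt, RAG documents, and history under a rough budget,
    dropping the oldest conversation turns first when space runs out."""
    budget_chars = budget_tokens * 4  # crude chars-per-token heuristic (assumption)
    history = list(history)
    while True:
        parts = [system]
        parts += [f"[reference]\n{doc}" for doc in retrieved_docs]  # RAG knowledge
        parts += history                                            # recent turns
        context = "\n\n".join(parts)
        if len(context) <= budget_chars or not history:
            return context
        history.pop(0)  # evict the oldest turn before the system prompt or docs
```

The eviction order is the design choice worth noting: instructions and retrieved knowledge are kept intact while stale conversation is the first thing sacrificed.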


Chain-of-Thought (CoT): A prompting technique that improves accuracy on complex tasks by having the LLM explicitly generate intermediate reasoning steps.

Harness engineering is a methodology for designing structural constraints—such as prompts, tool definitions, and CI/CD pipelines—to prevent AI agents from malfunctioning.

LLM (Large Language Model) is a general term for neural network models pre-trained on massive amounts of text data, containing billions to trillions of parameters, capable of understanding and generating natural language with high accuracy.


Context Engineering is a technical discipline focused on systematically designing and optimizing the context provided to AI models — including codebase structure, commit history, design intent, and domain knowledge.