A system prompt is an instruction given to an LLM before the start of a conversation with a user, defining the AI's role, tone, constraints, and other parameters to control the overall behavior of the application.
A System Prompt is an instruction given to an LLM before the start of a conversation with a user, defining the AI's role, tone, constraints, and other parameters to control the overall behavior of an application.
When integrating generative AI into real services or workflows, product quality is not determined solely by the model's capabilities. Even when using the same model, the user experience can vary significantly depending on how the system prompt is designed. For a customer support AI chatbot, constraints such as "respond using polite formal language and do not answer questions outside the scope of the product warranty" can be embedded in advance. For a coding assistance tool, behaviors such as "output only TypeScript code and keep explanations to a minimum" can be specified. In this way, the system prompt functions as a "blueprint" that bridges the model and the actual application.
When constructing a conversation, most LLMs process messages by distinguishing them according to role. The typical structure is as follows:
The system prompt is placed in the system role and is fixed at the top of the conversation history. The model uses these instructions as a premise when interpreting and processing subsequent user messages. Because it occupies the beginning of the context window, it holds high priority within the model's attention mechanism.
When combined with RAG or tool calling (Function Calling), the system prompt also includes orchestration instructions such as "which tools to use and when" and "how to handle externally retrieved information." In the context of AI agents, it can be regarded as the central configuration file for agent orchestration.
The following are important perspectives to keep in mind when designing a system prompt.
Clarity and specificity: Rather than vague instructions like "please answer kindly," writing concretely—such as "keep responses to three sentences or fewer and always include supplementary explanations for technical terms"—tends to produce more consistent output.
Explicit constraints: Clearly stating what the model must not do helps suppress unintended outputs. Combining this with AI Guardrails enables more robust control.
Role definition: Assigning a role such as "you are an expert in ○○" is effective for standardizing the expertise and tone of responses.
At the same time, countermeasures against Prompt Injection are essential. This is an attack technique in which a malicious user attempts to override the constraints defined in the system prompt, and security design—particularly for systems that handle external input—should be carried out with reference to OWASP guidelines.
While prompt engineering is a technique for optimizing individual inputs, the system prompt serves to establish the "baseline" for the entire application. In recent years, the concept of context engineering has also emerged, spreading the idea of treating not just the system prompt but the entire context—including conversation history, external knowledge, and tool definitions—as the subject of design.
A system prompt is not something written once and left unchanged; it should be continuously refined through observation of actual outputs. Treating it as something to be managed and version-controlled as a team—from the PoC (Proof of Concept) stage through to production—is the most direct path to improving the quality of AI utilization.



A2A (Agent-to-Agent Protocol) is a communication protocol that enables different AI agents to perform capability discovery, task delegation, and state synchronization, published by Google in April 2025.

Acceptance testing is a testing method that verifies whether developed features meet business requirements and user stories, from the perspective of the product owner and stakeholders.

AES-256 is the highest-strength encryption algorithm using a 256-bit key length within AES (Advanced Encryption Standard), a symmetric-key cryptographic scheme standardized by the National Institute of Standards and Technology (NIST).

A mechanism that controls task distribution, state management, and coordination flows among multiple AI agents.

Agent Skills are reusable instruction sets defined to enable AI agents to perform specific tasks or areas of expertise, functioning as modular units that extend the capabilities of an agent.