Prompt engineering is the practice of designing the structure, phrasing, and context of input text (prompts) in order to elicit desired outputs from LLMs (Large Language Models).
## Why Does the Way You Write Prompts Change the Results?

LLMs operate by predicting the continuation of input text. Even for the same question, whether or not you include preconditions or output format specifications can significantly affect the accuracy and usefulness of the response. Simply specifying something like "provide three suggestions in bullet points in Japanese" yields far more practical answers than asking the question vaguely.

## Representative Techniques

**Zero-shot / Few-shot**: Giving instructions without examples is Zero-shot; providing one to several concrete examples is Few-shot. Few-shot tends to produce more consistent results for classification tasks and format specification.

**Chain-of-Thought (CoT)**: A technique that makes the model explicitly show its reasoning process by instructing it to "think step by step." It is known to improve accuracy on math and logic problems.

**Role Prompting**: Assigning a role such as "You are a senior engineer." Used to control the tone and level of expertise in the output.

## Evolution into Context Engineering

Around 2025, attention shifted beyond one-off prompt design toward how to structure the information (context) passed to an AI system as a whole. This area is called context engineering, and it encompasses the injection of external knowledge via RAG, the structuring of tool definitions, and the management of conversation history. Prompt engineering is increasingly regarded as just one component within it. That said, the fundamental principles of prompting (clear instructions, appropriate examples, and output format specification) also form the foundation of context engineering. You cannot design an entire system without first mastering the basics.
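The Few-shot technique above can be sketched as simple prompt assembly. This is a minimal illustration, not tied to any specific LLM API; the `build_few_shot_prompt` helper and the sentiment examples are hypothetical, and the resulting string would be passed to whatever model client you use.

```python
# Minimal sketch: assembling a Few-shot classification prompt.
# The helper and examples are illustrative, not from any library.

def build_few_shot_prompt(instruction: str, examples: list[tuple[str, str]], query: str) -> str:
    """Concatenate an instruction, labeled examples, and the new input."""
    parts = [instruction]
    for text, label in examples:
        parts.append(f"Input: {text}\nLabel: {label}")
    # End with an unlabeled input so the model continues with the label.
    parts.append(f"Input: {query}\nLabel:")
    return "\n\n".join(parts)

examples = [
    ("The delivery arrived two days late.", "negative"),
    ("Setup took under a minute. Great product!", "positive"),
]
prompt = build_few_shot_prompt(
    "Classify the sentiment of each input as positive or negative.",
    examples,
    "The manual was confusing, but support resolved it quickly.",
)
print(prompt)
```

Because the examples fix both the task and the output format, the model's completion tends to be a single consistent label rather than free-form prose.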


A2A (Agent-to-Agent Protocol), published by Google in April 2025, is a communication protocol that enables different AI agents to discover each other's capabilities, delegate tasks, and synchronize state.

Acceptance testing is a testing method that verifies whether developed features meet business requirements and user stories, from the perspective of the product owner and stakeholders.

Agent Skills are reusable instruction sets defined to enable AI agents to perform specific tasks or areas of expertise, functioning as modular units that extend the capabilities of an agent.



Agentic AI is a general term for AI systems that interpret goals and autonomously repeat the cycle of planning, executing, and verifying actions without requiring step-by-step human instruction.
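The plan-execute-verify cycle can be sketched as a simple control loop. In a real system each of the three callbacks would be backed by an LLM or a tool call; here they are plain, hypothetical Python functions used only to show the loop's shape.

```python
# Minimal sketch of the agentic plan -> execute -> verify loop.
# The plan/execute/verify callables are illustrative stand-ins for
# LLM-backed components or tool invocations.

def run_agent(goal, plan, execute, verify, max_iterations=5):
    """Repeat plan/execute/verify until the goal is met or retries run out."""
    for _ in range(max_iterations):
        step = plan(goal)          # decide the next action toward the goal
        result = execute(step)     # perform it (tool call, API request, etc.)
        if verify(goal, result):   # check whether the goal is now satisfied
            return result
    return None                    # gave up after max_iterations

# Toy usage: the "goal" is reaching at least 10 by repeatedly doubling a counter.
state = {"value": 1}

def execute_double(step):
    state["value"] *= 2
    return state["value"]

result = run_agent(
    goal=10,
    plan=lambda goal: "double",
    execute=execute_double,
    verify=lambda goal, result: result >= goal,
)
print(result)  # 16
```

The key contrast with a scripted workflow is that the loop itself decides when to stop: the verifier, not a fixed sequence of steps, determines whether more iterations are needed.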