The EU AI Act (EU Artificial Intelligence Act) is a comprehensive European Union regulation that establishes legal obligations based on the risk level of AI systems. It classifies AI into four tiers — "unacceptable risk," "high risk," "limited risk," and "minimal risk" — imposing stricter requirements as the risk level increases.
The EU AI Act was established as the world's first comprehensive legal framework targeting AI. Just as the GDPR effectively set the international standard for data protection, the "Brussels Effect"—whereby the EU proactively shapes global standards—is anticipated to extend to the domain of AI regulation as well.
The regulation applies to businesses that provide or use AI systems within the EU. Even companies based outside the EU fall within scope if the outputs of their AI systems affect EU citizens, so companies in Japan or Thailand that provide services to the EU market cannot treat the Act as someone else's problem.
A four-tier risk classification forms the backbone of the regulation.
AI systems categorized as posing "unacceptable risk" are prohibited in principle. Social scoring (systems that assign scores to citizens based on their behavior) and real-time facial recognition in public spaces fall into this category.
"High-risk" AI covers systems used in areas that directly affect people's rights or safety, such as recruitment screening, credit assessment, and medical devices. Obligations include maintaining technical documentation, ensuring data governance, and establishing human oversight mechanisms.
Providers of general-purpose AI models (such as GPT and Claude) are subject to separate transparency obligations, including publishing a summary of the content used for training, putting in place a policy to comply with EU copyright law, and maintaining technical documentation.
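To make the tiered structure concrete, here is a minimal Python sketch. The tier names and the examples for the "unacceptable" and "high" tiers come from the text above; the "limited" and "minimal" examples, the simplified treatment strings, and the `obligations_for` helper are illustrative assumptions, not language from the Act itself.

```python
# Illustrative sketch of the EU AI Act's four-tier risk classification.
# Tier names follow the text above; treatments are heavily simplified.

RISK_TIERS = {
    "unacceptable": {
        "examples": ["social scoring", "real-time facial recognition in public spaces"],
        "treatment": "prohibited in principle",
    },
    "high": {
        "examples": ["recruitment screening", "credit assessment", "medical devices"],
        "treatment": "technical documentation, data governance, human oversight",
    },
    "limited": {  # illustrative example: chatbots with transparency duties
        "examples": ["chatbots"],
        "treatment": "transparency obligations",
    },
    "minimal": {  # illustrative example: spam filters
        "examples": ["spam filters"],
        "treatment": "no additional obligations",
    },
}

def obligations_for(tier: str) -> str:
    """Return the (simplified) regulatory treatment for a risk tier."""
    return RISK_TIERS[tier]["treatment"]

print(obligations_for("high"))
# technical documentation, data governance, human oversight
```

A real compliance assessment would of course depend on the specific use case and the Act's annexes, not a four-entry lookup table; the point is only that obligations scale with the tier.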
In the context of AI governance, the EU AI Act is not the only regulation to consider. Thailand's PDPA regulates AI input and output data from a data protection perspective, while Japan's AI Business Guidelines function as soft law, encouraging voluntary efforts by businesses.
In practice, rather than addressing each of these regulations in isolation, companies are better served by building an integrated AI governance framework and taking an approach that maps the requirements of each regulation accordingly.
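One way to implement the mapping approach described above is to maintain an internal control catalog annotated with the regulations each control helps satisfy. The control names and mappings below are hypothetical illustrations, not requirements taken from the regulations themselves.

```python
# Hypothetical control-to-regulation map for an integrated AI governance
# framework. Control names are invented for illustration only.

CONTROL_MAP = {
    "ai-inventory": ["EU AI Act", "Japan AI Business Guidelines"],
    "training-data-documentation": ["EU AI Act"],
    "personal-data-handling": ["Thailand PDPA", "EU AI Act"],
    "human-oversight-procedures": ["EU AI Act", "Japan AI Business Guidelines"],
}

def controls_for(regulation: str) -> list[str]:
    """List the internal controls that address a given regulation."""
    return sorted(c for c, regs in CONTROL_MAP.items() if regulation in regs)

print(controls_for("Thailand PDPA"))
# ['personal-data-handling']
```

Maintaining one catalog and querying it per regulation, rather than running a separate compliance project per law, is the practical payoff of the integrated approach.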


AI literacy refers to the knowledge and skills needed to understand the basic concepts, limitations, and risks of AI and to use it appropriately in the workplace. Organizations are required to ensure it under the EU AI Act.

AI governance refers to the organizational policies, processes, and oversight mechanisms that ensure ethics, transparency, and accountability in AI system development and operation.

Shadow AI is a collective term for AI tools and services that employees use in their work without the approval of the company's IT department or management. It carries risks of information leakage and compliance violations.



An autonomous AI agent that takes on a specific business role and continuously performs tasks in the same manner as a human employee. It differs from conventional AI assistants in that it holds a defined scope of responsibility as a job function, rather than simply responding to one-off instructions.