The EU AI Act (EU Artificial Intelligence Act) is a comprehensive European Union regulation that establishes legal obligations based on the risk level of AI systems. It classifies AI into four tiers — "unacceptable risk," "high risk," "limited risk," and "minimal risk" — imposing stricter requirements as the risk level increases.
## The World's First Comprehensive AI Regulation

The EU AI Act was established as the world's first comprehensive legal framework targeting AI. Just as the GDPR effectively set the international standard for data protection, the "Brussels Effect" (whereby the EU proactively shapes global standards) is anticipated to extend to the domain of AI regulation as well.

The regulation applies to businesses that provide or use AI systems within the EU. Even companies based outside the EU are subject to it if the outputs of their AI affect EU citizens. Companies in Japan or Thailand that provide services to the EU cannot treat this as someone else's concern.

## The Risk Classification Framework

A four-tier risk classification forms the backbone of the regulation. AI systems categorized as posing "unacceptable risk" are prohibited in principle. Social scoring (systems that assign scores to citizens based on their behavior) and real-time facial recognition in public spaces fall into this category.

"High-risk" AI covers systems used in areas that directly affect people's rights or safety, such as recruitment screening, credit assessment, and medical devices. Obligations include maintaining technical documentation, ensuring data governance, and establishing human oversight mechanisms.

Providers of general-purpose AI models (such as GPT and Claude) are subject to separate transparency obligations, including disclosure of an overview of training data, compliance with copyright law, and publication of technical documentation.

## Relationship with Other Regulations

In the context of AI governance, the EU AI Act is not the only regulation to consider. Thailand's PDPA regulates AI input and output data from a data protection perspective, while Japan's AI Business Guidelines function as soft law, encouraging voluntary efforts by businesses.
In practice, rather than addressing each of these regulations in isolation, companies are better served by building an integrated AI governance framework and taking an approach that maps the requirements of each regulation accordingly.
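The mapping approach described above can be sketched as a simple data structure. This is an illustrative example only: the control IDs and the specific mappings below are hypothetical, not an official compliance matrix for any of these regulations.

```python
from dataclasses import dataclass, field


@dataclass
class Control:
    """An internal governance control mapped to the external regulations it addresses."""
    control_id: str
    description: str
    regulations: list[str] = field(default_factory=list)


# A single internal control can satisfy overlapping obligations from
# several regimes; the entries here are hypothetical examples.
CONTROLS = [
    Control("GOV-01", "Maintain technical documentation for high-risk systems",
            ["EU AI Act"]),
    Control("GOV-02", "Record lawful basis for personal data used in training sets",
            ["Thailand PDPA", "EU AI Act"]),
    Control("GOV-03", "Human oversight checkpoints for AI-assisted decisions",
            ["EU AI Act", "Japan AI Business Guidelines"]),
]


def controls_for(regulation: str) -> list[str]:
    """Return the IDs of all controls mapped to a given regulation."""
    return [c.control_id for c in CONTROLS if regulation in c.regulations]


print(controls_for("EU AI Act"))      # ['GOV-01', 'GOV-02', 'GOV-03']
print(controls_for("Thailand PDPA"))  # ['GOV-02']
```

The design point is that each control is defined once and mapped to every regulation it satisfies, so a new regulation typically means extending the mappings rather than building a parallel compliance process.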


AI governance refers to the organizational policies, processes, and oversight mechanisms that ensure ethics, transparency, and accountability in AI system development and operation.

Ambient AI refers to an AI system that is seamlessly embedded in the user's environment, continuously monitoring sensor data and events to proactively take action without requiring explicit instructions.

An AI agent is an AI system that autonomously formulates plans toward given goals and executes tasks by invoking external tools.

