
Now that AI systems are deeply involved in business decision-making, the absence of governance translates directly into legal risk and lost trust. The EU AI Act, which entered into force in August 2024 and is being applied in phases, has extraterritorial reach and is directly relevant to companies operating in Thailand and Southeast Asia. This article explains the fundamental concepts of AI governance, practical compliance with the EU AI Act, and concrete steps for establishing internal rules. The goal is for practitioners who are unsure where to even begin to finish reading ready to start building their organization's governance framework.
Disclaimer: This article is intended for informational purposes only and does not constitute legal advice. Please consult a qualified attorney for specific legal matters.

AI governance is an organizational framework for managing risks and ensuring fairness, transparency, and accountability in the development, deployment, and operation of AI systems. While traditional IT governance frameworks such as COBIT and ITIL focus on infrastructure and service management, AI governance is fundamentally distinct in that it includes the model's decision-making process itself as a subject of oversight.
| Perspective | IT Governance | AI Governance |
|---|---|---|
| Managed Objects | Infrastructure, networks, applications | Models, training data, inference results |
| Nature of Risk | Availability, confidentiality, integrity | Bias, hallucination, lack of explainability |
| Rate of Change | Planned release cycles | Behavior changes unpredictably with model updates |
| Accountability | Developers and operators are clearly defined | Distributed among data providers, model developers, and users |
| Regulatory Framework | ISO 27001, SOC 2 | EU AI Act, NIST AI RMF, ISO 42001 |
Attempting to manage AI as an extension of IT governance risks overlooking a fundamental difference: model outputs are probabilistic. Server uptime can be pinned to a number such as 99.9%, but the "correctness" of an AI model is context-dependent, and the same input can yield different outputs.
The business use of generative AI is expanding rapidly, and the need for governance has shifted from a theoretical to a practical concern. In our AI consulting work in Thailand, we are increasingly hearing cases such as "employees were inputting customer data into ChatGPT" and "AI-generated contracts were sent out without noticing errors."
The risks of absent governance can be broadly categorized into three areas:

The EU AI Act (Artificial Intelligence Act) is the world's first comprehensive AI regulatory law and, like the GDPR, has extraterritorial application. Companies that provide AI services to users within the EU are subject to regulation regardless of where their headquarters are located.
The core of the EU AI Act lies in classifying AI systems into four risk levels and imposing different obligations on each.
1. Prohibited (Unacceptable Risk): Social credit scoring, real-time remote biometric identification (with exceptions for law enforcement), emotion recognition in workplaces and educational institutions, and subliminal techniques that manipulate vulnerable groups. Penalties for violations can reach up to €35 million or 7% of global annual turnover.
2. High Risk: Recruitment and HR evaluation, credit scoring, educational admissions decisions, safety management of critical infrastructure, law enforcement, immigration control, and more. Conformity assessments, CE marking, and registration in the EU database are required.
3. Limited Risk: Chatbots, deepfake generation, and similar applications. Transparency obligations apply (i.e., disclosure that the system is AI).
4. Minimal Risk: Spam filters, gaming AI, and the like. No special obligations.
In my conversations with companies across Southeast Asia, the most common source of confusion is determining which category their own AI systems fall under. For example, an internal recruitment screening tool would be classified as "High Risk," whereas an internal chatbot would remain in the "Limited Risk" category. It is easy to overlook the fact that even within the same umbrella of "AI tools," the level of regulation differs depending on the use case.
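The classification exercise described above can be sketched as a simple lookup table. The use-case names and tier assignments below are illustrative assumptions, not an official taxonomy; real classification requires legal review against Annex III of the Act:

```python
# Illustrative mapping of internal AI use cases to EU AI Act risk tiers.
# Use-case keys are hypothetical examples; actual classification depends on
# the concrete deployment context and legal review.
RISK_TIERS = {
    "social_scoring": "prohibited",
    "recruitment_screening": "high",   # Annex III: employment
    "credit_scoring": "high",          # Annex III: access to essential services
    "customer_chatbot": "limited",     # transparency obligations apply
    "internal_chatbot": "limited",
    "spam_filter": "minimal",
}

def classify(use_case: str) -> str:
    """Return the risk tier for a use case; unknown cases need manual review."""
    return RISK_TIERS.get(use_case, "unclassified: manual review required")

print(classify("recruitment_screening"))  # high
print(classify("internal_chatbot"))       # limited
```

Even a crude table like this forces the useful conversation: every tool in the inventory must end up in exactly one tier, and anything "unclassified" is a review task, not a default pass.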
The EU AI Act is being enforced in phases, with different application dates depending on the type of AI system covered.
| Target | Applies From | Obligations |
|---|---|---|
| Prohibited AI | 2 February 2025 | Prohibition provisions such as social credit scoring |
| General-Purpose AI (GPAI) | 2 August 2025 | Providers must disclose technical documentation and copyright policies |
| High-Risk AI (Annex III) | 2 August 2026 | Conformity assessment, CE marking, and EU database registration |
| Annex I product-embedded AI | 2 August 2027 | Obligations integrated with product safety regulations |
Providers of general-purpose AI models (GPT, Claude, Gemini, etc.) are required to prepare technical documentation, disclose copyright policies, and publish summaries of training data. Companies that use these models are also subject to obligations, including disclosure requirements that AI is being used (for limited risk or above), as well as a duty to cooperate with conformity assessments for high-risk applications.
"We're not based in the EU, so it doesn't apply to us" is a misconception. Broadly following Article 2 of the Act, extraterritorial application arises in cases such as:
- Placing an AI system or general-purpose AI model on the EU market, regardless of where the provider is established
- Providing AI systems to users located within the EU
- Providers or deployers established outside the EU, where the output produced by the AI system is used within the EU
If a Thai export company uses an AI-based quality inspection system for EU buyers, that system may fall under the scope of the EU AI Act. Among our own clients, we are seeing a growing number of cases where companies operating e-commerce platforms for European markets come to us with questions about disclosure obligations for AI recommendations.
Furthermore, the Thai government is also considering its own AI regulations. Thailand's MDES (Ministry of Digital Economy and Society) has published AI governance guidelines, and legislative development referencing the EU AI Act is underway. By proactively working toward EU AI Act compliance now, companies will also be well-positioned to smoothly adapt to future domestic regulations in Thailand.

The EU AI Act is not the only guideline for AI governance. It is necessary to understand multiple frameworks and select the one that best suits your organization's situation.
| Framework | Issuer | Nature | Key Features |
|---|---|---|---|
| EU AI Act | EU | Legally binding | Risk-based classification, fines applicable |
| NIST AI RMF | U.S. NIST | Voluntary guidelines | Four functions: Map, Measure, Manage, Govern |
| ISO/IEC 42001 | ISO | Certification standard | International standard for AI management systems |
| Singapore IMDA AI Governance | Singapore Government | Guidelines | Highly practical for the ASEAN region |
| Thailand AI Ethics Guideline | Thailand MDES | Guidelines | Targeted at Thai domestic companies, centered on ethical principles |
For small and medium-sized enterprises, pursuing ISO 42001 certification from the outset is not cost-effective. A more rational approach is to first organize internal processes based on the NIST AI RMF, then address EU AI Act high-risk requirements as needed.
There are three criteria for selection:
In practice, frameworks are something to be "combined" rather than "chosen." A realistic roadmap would be to use the EU AI Act's risk classification as a foundation, apply the NIST AI RMF's management processes, and then formalize everything with ISO 42001 in the future.

Understanding frameworks alone is not enough. To make governance actually function within an organization, it is necessary to establish three elements: policy, process, and people. The following outlines a step-by-step approach.
The first step is to understand which AI tools are being used internally, by whom, and for what purpose. "Shadow AI" (AI usage that the IT department is unaware of) has become a problem in many companies.
Inventory Checklist:
In the case of one manufacturing client, an inventory revealed that 42 AI tools were in use across 17 departments, of which 28 were unknown to the IT department. There was even a case where an image inspection AI independently introduced by the quality control department was being used for products destined for the EU — a situation in which, without an inventory, it was impossible to even begin risk classification under the EU AI Act.
Based on the inventory results, assess the risk level of each AI use case and formulate an internal AI policy.
Items to include in the AI policy:
| Category | Specific Items |
|---|---|
| Scope of Use | Permitted AI tools, prohibited use cases (e.g., fully automated hiring decisions) |
| Data Handling | Classification criteria for data that may be input into AI, anonymization requirements for personal information |
| Quality Control | Standards for human review of AI outputs, frequency of accuracy monitoring |
| Transparency | Standards and methods for disclosing AI use to customers and business partners |
| Accountability | Decision-makers for AI-related matters, escalation flow in the event of an incident |
| Training | AI literacy training for all employees, specialized training by department |
The policy does not need to be perfect from the start. A practical approach is to first clarify what must not be done, then refine the details incrementally. An initial version that is concise enough to fit on 2–3 A4 pages is sufficient.
A responsible organizational structure is essential for making policies effective. Below are three models based on company size.
Small companies (~50 employees): The existing IT manager concurrently handles AI governance. AI usage is reviewed on a monthly basis.
Mid-sized companies (50–500 employees): An AI governance committee is established, composed of representatives from IT, legal, and business units, with policies reviewed quarterly.
Large companies (500+ employees): A dedicated AI Governance Officer (Chief AI Officer) is appointed. An AI ethics committee is established, and a pre-approval process is applied to the introduction or modification of high-risk AI.
The most critical aspect of organizational design is ensuring that AI governance does not become the sole responsibility of the IT department. The impact of AI spans multiple functions, including business decisions, legal, HR, and marketing. Without executive-level sponsorship, governance risks becoming a mere formality.
A governance framework is not something you build once and forget. AI models degrade in accuracy over time (model drift), and the regulatory environment continues to evolve.
Elements of Continuous Monitoring:
For quarterly governance reviews, the following KPIs are recommended for tracking:

Based on practical experience, this article organizes the common pitfalls companies tend to fall into when implementing AI governance.
It is not uncommon for overly strict governance to discourage frontline staff from using AI. At one client, a three-stage approval process was introduced for AI tool adoption, resulting in an average approval time of 45 days—ultimately leading a department head to abandon AI adoption altogether, concluding that "Excel is good enough."
The purpose of governance is not to prohibit AI, but to promote its use while managing risk. A "risk-based" approach is key: low-risk AI use cases (such as translation, summarization, and code completion) should be available without prior approval, while review processes are applied only to high-risk applications.
A common pattern: an impressive AI policy document is created, but it never gets communicated internally and nobody reads it. The effectiveness of a policy depends on education and systematization.
AI governance sits at the intersection of technology and legal affairs. When pursued by the technical team alone, legal risks tend to be overlooked; when driven solely by legal, the resulting rules often prove technically unrealistic.
An effective solution is the role of an "AI governance translator" — someone who understands both technology and legal affairs and can communicate in the language of each. Where a dedicated individual is difficult to appoint, another approach is to form an AI governance task force by pairing one representative from the technical side with one from the legal side.
When we carry out AI governance support projects, we always ensure that representatives from three departments — IT, Legal, and Corporate Planning — are present at the initial workshop. The gap in understanding between departments is larger than one might expect; it is common to find a disconnect between senior management's concern that "AI is making decisions on its own" and the technical team's perception that "everything is controlled by rule-based logic."

Here is a checklist of items to verify when deploying and operating AI systems. It integrates the EU AI Act's high-risk requirements with NIST AI RMF best practices.
This checklist is not exhaustive and needs to be customized according to your organization's industry, size, and degree of AI usage. What matters more than the existence of the checklist itself is embedding within the organization the habit of reviewing it on a regular basis.

This section answers frequently asked questions about AI governance.
It is necessary. However, the approach differs. Companies with fewer than 50 employees do not need to obtain ISO 42001 certification. At a minimum, simply formalizing two rules—"criteria for data that must not be entered into AI" and "a verification process for when AI output is used in final decision-making"—can significantly reduce risk. The EU AI Act also provides simplified compliance measures for small and medium-sized enterprises (regulatory sandboxes, guidance documents).
First, establish usage guidelines. Specifically:
A practical starting point is to consolidate these into a single A4 page and roll them out company-wide.
Three tiers of fines are established depending on the type of violation:
- Violations of prohibited AI practices: up to €35 million or 7% of global annual turnover, whichever is higher
- Violations of other obligations, including high-risk requirements: up to €15 million or 3% of global annual turnover
- Supplying incorrect, incomplete, or misleading information to authorities: up to €7.5 million or 1% of global annual turnover
Proportionally lower caps apply to SMEs and startups. However, fines are not the only risk. Exclusion from the EU market, loss of trust from business partners, and reputational damage from media coverage may have an even greater impact on business operations.
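To make the scale concrete: the top-tier cap is the higher of €35 million or 7% of worldwide annual turnover, so for large groups the percentage dominates. A quick calculation:

```python
def fine_cap(annual_turnover_eur: float, fixed_cap: float = 35e6,
             pct: float = 0.07) -> float:
    """EU AI Act top-tier cap: the higher of the fixed amount or pct of turnover."""
    return max(fixed_cap, pct * annual_turnover_eur)

print(fine_cap(100_000_000))    # 35000000.0  (fixed cap dominates)
print(fine_cap(1_000_000_000))  # 70000000.0  (7% of turnover dominates)
```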

AI governance is not "just for large corporations": it is a foundational requirement for every organization that uses AI in its operations. With the EU AI Act now in force and its obligations taking effect in phases, the legal risks of operating without governance have become unambiguously clear.
3 actions you can start today:
1. Inventory the AI tools in use across the company, including shadow AI that IT has never reviewed
2. Draft a concise initial AI policy that, at minimum, defines prohibited uses and the data that must not be entered into AI
3. Assign a clear owner for AI governance and put a regular review cadence on the calendar
Taking these three steps alone allows most organizations to take their first meaningful stride toward AI governance. There is no need to build a perfect framework from the outset. Starting small and expanding in response to incidents and regulatory changes is the most effective approach when resources are limited.
We support companies in Thailand and Southeast Asia in building AI governance frameworks. From inventory and policy development to organizational design, if you are interested in developing a roadmap tailored to your specific situation, please feel free to reach out.
Yusuke Ishihara
Started programming at age 13 with MSX. After graduating from Musashi University, worked on large-scale system development including airline core systems and Japan's first Windows server hosting/VPS infrastructure. Co-founded Site Engine Inc. in 2008. Founded Unimon Inc. in 2010 and Enison Inc. in 2025, leading development of business systems, NLP, and platform solutions. Currently focuses on product development and AI/DX initiatives leveraging generative AI and large language models (LLMs).