
For companies operating in Thailand and implementing AI, compliance with the Personal Data Protection Act (PDPA) is unavoidable. PDPA violations can result in fines of up to 5 million baht and imprisonment, and regulatory enforcement continues to intensify. This article provides a checklist for achieving both AI utilization and compliance across each stage of data collection, processing, and storage. It is intended as a practical guide for IT managers, legal officers, and executives when reviewing their organization's AI initiatives.
Disclaimer: This article is for informational purposes only and does not constitute legal advice. For specific compliance measures, please consult a lawyer or law firm well-versed in Thai law.
Thailand's Personal Data Protection Act B.E. 2562 (2019) (PDPA) is a law that establishes comprehensive rules governing the collection, use, and disclosure of personal data in Thailand. Since AI systems process large volumes of personal data, they frequently intersect directly with the provisions of the PDPA. The following outlines the scope of application and the key obligations that should be particularly noted in AI operations.
The PDPA applies to all organizations that collect, use, or disclose personal data within Thailand. Even businesses operating outside Thailand are subject to the PDPA if they offer goods or services to data subjects in Thailand, or monitor the behavior of individuals within Thailand.
Checklist for Determining Applicability:
- Your organization collects, uses, or discloses personal data within Thailand.
- Your organization offers goods or services to data subjects in Thailand (including from outside Thailand).
- Your organization monitors the behavior of individuals within Thailand.
If any of the above apply, there is a high likelihood that you are subject to the PDPA.
Overview of Penalties:
| Type | Maximum |
|---|---|
| Administrative fine | Up to 5 million baht |
| Criminal fine | Up to 1 million baht |
| Imprisonment | Up to 1 year |
| Civil damages | Up to 2x the actual damages |
The PDPC (Personal Data Protection Committee) is stepping up its enforcement efforts, with the primary grounds for action being inadequate security measures, failure to appoint a DPO (Data Protection Officer), deficiencies in contracts with data processors, and delays in reporting data breaches. The assumption that "we probably won't get caught" is no longer a viable position to hold.
There are six key PDPA obligations that require particular attention when operating AI systems.
1. Ensuring a Lawful Basis for Collection
Collecting personal data requires either consent or a statutory exception (such as contract performance, legitimate interests, or legal obligations). Even when using data for AI training, it must be confirmed that such use aligns with the purpose stated at the time of collection.
2. Purpose Specification and Use Limitation
Collected personal data may only be used within the scope of the purposes previously notified. If "AI model training" was not included in the original collection purpose, additional consent must be obtained.
3. Notification Obligations to Data Subjects
Data controllers must notify data subjects in advance of the purpose of collection, the retention period, and the rights of data subjects. Where AI-based processing is involved, this should be explicitly stated in the privacy policy.
4. Strict Management of Sensitive Data
Sensitive data — including race, ethnicity, political opinions, religion, biometric data, and health information — may not be collected without explicit consent. The use of facial recognition or voice recognition in AI systems may directly conflict with this provision.
5. Contractual Obligations with Data Processors (Article 40)
When outsourcing data processing to an external AI service provider, a written contract incorporating PDPA requirements is necessary. The scope of processing, security measures, and data breach reporting obligations must be clearly defined.
6. Implementation of Security Measures
Technical and organizational measures are required to prevent leakage or loss of, and unauthorized access to, personal data. Where an AI model retains large volumes of personal data, implementing encryption and access controls is essential.
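For instance, the following is a minimal sketch of field-level encryption, assuming the third-party `cryptography` package; key management (a KMS, key rotation) is deliberately out of scope here:

```python
# Minimal sketch: field-level encryption of personal data at rest.
# Assumes the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, load this from a secrets manager
fernet = Fernet(key)

def encrypt_field(value: str) -> bytes:
    """Encrypt a single personal-data field (e.g., an email address)."""
    return fernet.encrypt(value.encode("utf-8"))

def decrypt_field(token: bytes) -> str:
    """Decrypt a field; in production, gate this behind access controls and logging."""
    return fernet.decrypt(token).decode("utf-8")

token = encrypt_field("somchai@example.co.th")
print(decrypt_field(token))
```

Encrypting at the field level, rather than only at the disk level, limits what an attacker (or an over-permissioned AI pipeline) can read even after gaining access to the datastore.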

The starting point of any AI project is data collection. Under the PDPA, "lawfulness at the time of collection" forms the foundation for all subsequent processing — meaning that any deficiencies at this stage will leave legal risks lingering no matter how many measures are taken in later steps.
Checklist:
Example of Non-Compliance: Embedding a checkbox stating "I consent to data analysis including AI processing" at the end of the Terms of Service. Under the PDPA, consent must be obtained in a "clearly distinguishable form," and mixing consent clauses with other clauses risks invalidating the consent.
Items to Include in the Privacy Policy:
At a minimum, the Privacy Policy must state the purposes of collection (explicitly mentioning AI-based processing where it occurs), the retention period, and the rights of data subjects.
When using personal data to train AI models, the central question is whether such use falls within the scope of the purpose declared at the time of collection.
Checklist:
The difference between anonymization and pseudonymization:
In the context of the PDPA, a clear distinction must be drawn between the two. "Anonymization" is a state in which individuals can no longer be identified; truly anonymized data falls outside the scope of the PDPA. "Pseudonymization" is a state in which identification remains possible when combined with additional information; pseudonymized data remains fully within scope.
When using pseudonymized data for AI training, continued compliance with the provisions of the PDPA is required. The assumption that "hashing makes it safe" is dangerous. Even if the original data cannot be restored from a hash value, there are many cases where re-identification is possible by cross-referencing with other data. The sufficiency of anonymization should be carefully evaluated taking into account available technologies and the accessibility of additional information.
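To make the re-identification risk concrete, the following sketch uses a hypothetical phone-number format to show how an unsalted hash over a small, enumerable input space can be reversed by simple enumeration:

```python
# Illustrative sketch: why hashing alone is pseudonymization, not anonymization.
# A hash over a small, enumerable input space (here: hypothetical 10-digit
# phone numbers starting with "08") can be reversed by brute force.
import hashlib

def pseudonymize(phone: str) -> str:
    return hashlib.sha256(phone.encode("utf-8")).hexdigest()

stored_hash = pseudonymize("0812345678")  # what a "hashed" dataset contains

# Anyone who knows the format can enumerate all candidates and re-identify:
for i in range(10**8):
    candidate = f"08{i:08d}"
    if pseudonymize(candidate) == stored_hash:
        print("re-identified:", candidate)
        break
```

A salt kept secret from the attacker raises the bar, but as long as a mapping back to an individual exists somewhere, the data is still pseudonymized, not anonymized.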

After collecting data, the process moves into the AI processing and analysis phase. Here, the focus is on whether the scope of processing deviates from the purpose at the time of collection, and whether the rights of data subjects are being violated.
The PDPA does not contain an explicit provision equivalent to Article 22 of the EU's GDPR, which establishes the "right to object to decisions based solely on automated processing." However, this does not mean that profiling is entirely unregulated.
Checklist:
While not an explicit obligation under the PDPA, a mechanism for final human judgment is strongly recommended as a practical risk mitigation measure from an accountability standpoint. Since the GDPR explicitly enshrines the right to object to automated decision-making, the possibility that Thailand will introduce a similar provision in the future cannot be ruled out. Proactively designing a human oversight framework also serves as a safeguard against the risk of regulatory change.
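One possible shape for such a framework (the names and threshold below are hypothetical) routes adverse or low-confidence automated outcomes to a human reviewer before they take effect:

```python
# Hypothetical sketch of a human-in-the-loop gate for automated decisions.
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    outcome: str        # e.g., "approve" / "reject"
    confidence: float   # model confidence in [0, 1]

REVIEW_THRESHOLD = 0.9  # hypothetical; tune per use case

def requires_human_review(d: Decision) -> bool:
    # Route all adverse outcomes, plus any low-confidence ones, to a human.
    return d.outcome == "reject" or d.confidence < REVIEW_THRESHOLD

def finalize(d: Decision) -> str:
    if requires_human_review(d):
        return f"{d.subject_id}: queued for human review"
    return f"{d.subject_id}: auto-{d.outcome}"

print(finalize(Decision("u-001", "reject", 0.97)))   # queued for human review
print(finalize(Decision("u-002", "approve", 0.95)))  # auto-approve
```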
It should also be noted that the PDPC has published guidelines requiring organizations that process data involving profiling to appoint a DPO. Organizations conducting AI-based profiling should give serious consideration to appointing a DPO.
The PDPA stipulates that the collection of personal data must be "limited to what is necessary." This principle can easily come into conflict with AI, which tends to improve in accuracy the more data it is fed.
Checklist:
Approaches to balancing data minimization with AI accuracy:
The notion that "more data is better" remains deeply ingrained in AI development, yet not every data item contributes equally to predictive accuracy. By combining technical methods (for example, feature selection, aggregation, and pseudonymization), it is possible to maintain model utility while limiting the use of personal information.
Each of these approaches involves technical trade-offs. The balance between accuracy requirements and compliance requirements must be assessed individually for each use case.
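As a simple illustration of one such method (the records and column names are hypothetical), direct identifiers can be dropped and quasi-identifiers coarsened before data ever reaches a training set:

```python
# Hypothetical sketch: minimize personal data before it enters a training set.
records = [
    {"name": "Somchai P.", "phone": "0812345678", "age": 34,
     "province": "Bangkok", "spend": 1200},
    {"name": "Malee K.", "phone": "0898765432", "age": 58,
     "province": "Chiang Mai", "spend": 430},
]

DIRECT_IDENTIFIERS = {"name", "phone"}  # not needed for model accuracy here

def minimize(record: dict) -> dict:
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    out["age_band"] = f"{(out.pop('age') // 10) * 10}s"  # coarsen quasi-identifier
    return out

training_rows = [minimize(r) for r in records]
print(training_rows)
# [{'province': 'Bangkok', 'spend': 1200, 'age_band': '30s'}, ...]
```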

After AI model operations have commenced, ongoing management of retention periods and responses to the exercise of data subject rights continue to be required. In particular, since regulations surrounding cross-border data transfers remain fluid, the latest developments must be closely monitored.
Retention Period Checklist:
Cross-Border Transfer Checklist:
When using overseas AI services (cloud APIs, SaaS-based AI tools, etc.), personal data is transferred outside the country. Compliance measures based on PDPA Articles 28 and 29 are required.
The PDPC has not yet published an adequacy decision list (as of the time of writing). Therefore, establishing SCCs or BCRs currently represents the standard practical approach. For SCCs, the common practice is to use the ASEAN Model Contractual Clauses or the EU Standard Contractual Clauses as a base, supplemented with Thailand-specific requirements (such as data breach notification within 72 hours).
The PDPA grants data subjects multiple rights. When operating AI systems, it is necessary to design in advance how to handle the exercise of these rights.
Checklist:
AI-Specific Challenges — The Right to Erasure and Trained Models:
"Machine Unlearning" — the complete removal of a specific individual's data from a trained AI model — is a technically evolving field. The following are practical approaches to consider:
In all cases, it is important to honestly explain to the data subject both the measures taken and any technical limitations. Rather than simply responding with "it cannot be done," the appropriate approach is to present alternative solutions and proceed in a manner that the data subject can accept.

In addition to the key items on the checklist, there are points that tend to be overlooked in practice. In particular, the use of external AI services and the auditing of internal AI tools are prone to becoming blind spots.
When introducing generative AI services or image/speech recognition APIs into business operations, your organization will often act as the Data Controller, while the service provider acts as the Data Processor.
Checklist:
One aspect that tends to be overlooked is employees' day-to-day use of generative AI. Cases where customers' personal data is entered into prompts are more common than one might expect. It is essential to establish internal usage guidelines and explicitly document rules on the input of personal data (e.g., prohibition or mandatory anonymization). Letting these tools be used unthinkingly because they are "convenient", only for the issue to surface during a PDPC investigation, is a situation you cannot afford to find yourself in.
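One illustrative safeguard (the regex patterns below are hypothetical and deliberately simple) is to screen prompts for obvious personal data before they leave the organization; this complements, but does not replace, written guidelines and training:

```python
# Hypothetical sketch: screen prompts for obvious personal data before they
# are sent to an external generative AI API. Regex patterns are illustrative
# and not exhaustive (e.g., names will still pass through undetected).
import re

PATTERNS = {
    "phone": re.compile(r"\b0\d{8,9}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "national_id": re.compile(r"\b\d{13}\b"),
}

def redact(prompt: str) -> str:
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

raw = "Customer Somchai (somchai@example.co.th, 0812345678) complained about..."
print(redact(raw))
# Customer Somchai ([EMAIL REDACTED], [PHONE REDACTED]) complained about...
```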
The PDPA imposes record-keeping obligations on data controllers regarding data processing activities. When operating AI tools in-house, fulfilling these obligations in practice presents a real operational challenge.
Checklist:
Audit logs also serve as evidence when responding to investigation requests from the PDPC or to rights exercise requests from data subjects. It is advisable to set the log retention period to at least the equivalent of the data retention period.
In practice, manually managing all usage logs for AI tools is not realistic. It is recommended to build in a mechanism at the API gateway or proxy layer to automatically record request/response metadata (timestamps, user IDs, and processing purposes).
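A minimal sketch of that idea (the `call_ai_service` function and log fields are hypothetical) wraps each AI call so that metadata, and only metadata, is recorded automatically:

```python
# Hypothetical sketch: automatically record audit metadata for every AI call.
# Logs metadata only (no prompt contents), in line with data minimization.
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def audited(purpose: str):
    """Decorator recording who called which AI service, when, and for what purpose."""
    def wrap(func):
        def inner(user_id: str, *args, **kwargs):
            entry = {
                "request_id": str(uuid.uuid4()),
                "timestamp": time.time(),
                "user_id": user_id,
                "purpose": purpose,
                "service": func.__name__,
            }
            try:
                return func(user_id, *args, **kwargs)
            finally:
                audit_log.info(json.dumps(entry))
        return inner
    return wrap

@audited(purpose="customer_support_drafting")
def call_ai_service(user_id: str, prompt: str) -> str:
    return "..."  # placeholder for the actual external API call

call_ai_service("emp-042", "Draft a reply to ticket #123")
```

The same pattern scales naturally to a shared gateway or reverse proxy, where it can be enforced for every team rather than per application.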

Q1: What are the differences between PDPA and GDPR? What should be noted from an AI utilization perspective?
The PDPA was enacted with reference to the GDPR, but there are several important differences. The most significant is that the PDPA does not contain an explicit provision equivalent to Article 22 of the GDPR, which establishes the "right not to be subject to a decision based solely on automated processing." However, this does not mean that automated processing by AI is permitted without restriction. Since the PDPA does include a right to object, including to processing based on legitimate interests, careful consideration is still required when designing AI operational frameworks.
Q2: If an employee inputs personal data into a generative AI tool for work purposes, does this constitute a PDPA violation?
It depends on the circumstances. If personal data entered into a prompt is transmitted to the service provider's servers, this may constitute "disclosure" of personal data. If the purpose stated at the time of collection did not include "processing by external AI services," there is a risk of a PDPA violation as a use beyond the original purpose. Countermeasures include establishing an internal policy prohibiting the input of personal data, anonymizing data when using APIs, and enabling data non-retention settings under enterprise plans.
Q3: Does anonymization place data outside the scope of the PDPA?
If true "anonymization"—a state in which individuals can no longer be identified—is achieved, the data falls outside the scope of the PDPA. However, "pseudonymization" (a state in which identification remains possible when combined with additional information) remains within the scope of the PDPA. Simple hashing alone may be insufficient to qualify as anonymization. The adequacy of anonymization must be assessed on a case-by-case basis, taking into account available technologies and the accessibility of additional information.
Q4: What measures are required when using cloud-based AI services located outside Thailand?
Compliance measures based on Article 28 (adequacy decisions) or Article 29 (appropriate safeguards) of the PDPA are required. Since the PDPC has not yet published an adequacy decision list, the practical recommendation is to establish Standard Contractual Clauses (SCCs) or Binding Corporate Rules (BCRs). It is important to incorporate data protection clauses into contracts with AI service providers, and to clearly define the location of data processing, security measures, and data breach reporting procedures.

Thailand's PDPA does not prohibit the use of AI. By securing an appropriate legal basis, clarifying purposes, protecting data subjects' rights, and implementing security measures, compliance and AI utilization can be fully reconciled.
The checklists introduced in this article cover each stage of the data lifecycle: collection, processing and analysis, and storage and ongoing operation.
Enforcement by the PDPC is intensifying year by year. Use this checklist to regularly review your organization's compliance status, and build a framework that enables rapid adaptation to legislative amendments and new guidelines. For specific implementation measures, it is strongly recommended to consult a law firm well-versed in Thai law and receive advice tailored to your organization's circumstances.
Yusuke Ishihara
Started programming at age 13 with MSX. After graduating from Musashi University, worked on large-scale system development including airline core systems and Japan's first Windows server hosting/VPS infrastructure. Co-founded Site Engine Inc. in 2008. Founded Unimon Inc. in 2010 and Enison Inc. in 2025, leading development of business systems, NLP, and platform solutions. Currently focuses on product development and AI/DX initiatives leveraging generative AI and large language models (LLMs).