A design approach that structurally eliminates the risk of personal data leakage by physically or logically isolating AI systems and data processing infrastructure. Typical examples include tenant separation and on-premises operation.
The concept of "Privacy by Design" is already widely known — it is the principle of embedding privacy protection from the earliest stages of system design. Privacy by Isolation can be considered the most physical and direct approach among its implementation methods. By isolating data and systems, it creates a state in which leakage is "structurally impossible" in the first place.
Conventional privacy protection measures such as encryption and access control are effective as long as they are properly operated. However, it is difficult to completely eliminate human factors such as misconfigurations, privilege creep, and insider threats.
Isolation provides a structural answer to this problem. If data resides in a physically or logically separate domain, it remains unreachable even in the event of misconfigured access permissions. As regulations across countries — including the EU AI Act and the PDPA (Thailand's Personal Data Protection Act) — become increasingly stringent, isolation architecture also proves advantageous from the perspective of "ease of demonstrating" compliance.
In practice, isolation is applied at three primary levels of granularity.
Tenant Separation — The most common pattern in SaaS environments. Data is separated per customer using individual database schemas or instances, with logical isolation enforced through Row Level Security (RLS) or dedicated schemas. Balancing cost efficiency and isolation strength is the key design consideration.
On-Premises / VPC Separation — When handling highly sensitive data (such as medical records or financial transactions), systems are confined to dedicated on-premises environments or VPCs (Virtual Private Clouds) rather than shared cloud infrastructure. AI model inference is also executed within the same isolation boundary as the data.
Edge Processing — Processing is completed entirely on-device as edge AI, without transmitting data to the cloud. This is an effective approach for streaming data containing personal information, such as camera footage analysis and speech recognition.
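Of the three levels, tenant separation is the one most often implemented in application code. The following is a minimal sketch of the idea behind Row Level Security: every query passes through a wrapper that injects the tenant predicate itself, so a caller can never forget (or bypass) the isolation filter. It uses SQLite purely for illustration; the class and column names (`TenantScopedDB`, `tenant_id`) are hypothetical, and a production system would enforce the same rule inside the database with RLS policies or dedicated schemas rather than in application code.

```python
import sqlite3

class TenantScopedDB:
    """Illustrative wrapper that scopes every query to one tenant."""

    def __init__(self, conn, tenant_id):
        self.conn = conn
        self.tenant_id = tenant_id

    def fetch_records(self):
        # The tenant predicate is added by the wrapper, never by the
        # caller, so a forgotten WHERE clause cannot leak another
        # tenant's rows -- the same guarantee RLS gives at the DB layer.
        cur = self.conn.execute(
            "SELECT payload FROM records WHERE tenant_id = ?",
            (self.tenant_id,),
        )
        return [row[0] for row in cur.fetchall()]

# Demo data: two tenants sharing one table (the shared-schema pattern).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (tenant_id TEXT, payload TEXT)")
conn.executemany(
    "INSERT INTO records VALUES (?, ?)",
    [("tenant_a", "a-1"), ("tenant_a", "a-2"), ("tenant_b", "b-1")],
)

db_a = TenantScopedDB(conn, "tenant_a")
print(db_a.fetch_records())  # only tenant_a rows are visible
```

The design trade-off mentioned above is visible here: a shared table with an enforced filter is cheap but depends on the wrapper being airtight, whereas per-tenant schemas or instances cost more and isolate more strongly.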
The proliferation of generative AI has made isolation even more critical. New threats have emerged that were not anticipated by conventional database design alone — including the risk of data submitted as prompts to LLMs being used for model training, and the risk of context leaking across tenants.
As a practice of responsible AI, measures such as inference environment separation (assigning dedicated inference instances per tenant), non-persistence of prompt data, and validation of isolation boundaries through AI red teaming are now required.
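Two of these measures can be sketched together: per-tenant routing to dedicated inference instances, and non-persistence of prompt data, where only a digest of the prompt is retained for auditing. Everything here is a hypothetical stand-in (`TENANT_ENDPOINTS`, `run_inference`, the audit-log shape), intended only to show the pattern, not a real inference API.

```python
import hashlib

# Hypothetical per-tenant inference endpoints: each tenant's prompts
# stay inside its own isolation boundary.
TENANT_ENDPOINTS = {
    "tenant_a": "https://inference-a.internal",
    "tenant_b": "https://inference-b.internal",
}

audit_log = []  # stores prompt digests only, never raw prompt text

def run_inference(endpoint, prompt):
    # Placeholder for a real model call executed inside the tenant's
    # dedicated instance.
    return f"response from {endpoint}"

def handle_prompt(tenant_id, prompt):
    endpoint = TENANT_ENDPOINTS[tenant_id]  # dedicated instance per tenant
    response = run_inference(endpoint, prompt)
    # Non-persistence: only a SHA-256 digest is written to the audit
    # trail; the raw prompt never leaves this function's scope.
    audit_log.append({
        "tenant": tenant_id,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    })
    return response

handle_prompt("tenant_a", "patient record: ...")
```

Validating that such boundaries actually hold (for example, that no raw prompt ever appears in logs, and that tenant A's context can never reach tenant B's instance) is precisely what AI red teaming exercises probe.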



An architecture that runs AI inference on-device rather than in the cloud. It enables low latency, privacy protection, and offline operation.

Shadow AI is the collective term for AI tools and services that employees use in their work without the approval of the company's IT department or management. It carries risks of information leakage and compliance violations.

AI governance refers to the organizational policies, processes, and oversight mechanisms that ensure ethics, transparency, and accountability in AI system development and operation.

An evaluation method that systematically tests AI system vulnerabilities from an attacker's perspective to proactively identify safety risks.

The knowledge and skills needed to understand the basic concepts, limitations, and risks of AI, and to use it appropriately in the workplace. The EU AI Act requires organizations to ensure such literacy.