Knowledge and skills to understand the basic concepts, limitations, and risks of AI, and to appropriately utilize it in the workplace. Organizations are required to ensure this under the EU AI Act.
AI literacy refers to the collective knowledge and skills required to understand the fundamental concepts, capabilities and limitations, risks, and ethical issues of AI, and to appropriately leverage AI in professional and everyday contexts.
AI literacy is not exclusive to engineers. A sales representative forwarding AI-generated output directly to a customer, or an accounting staff member including unverified AI-aggregated figures in a report—risks like these are prevented not by technical skill, but by understanding the limitations of AI.
The EU AI Act, whose AI literacy obligations took effect in February 2025, requires organizations to ensure AI literacy. Providers and deployers of AI systems must take measures—typically training programs—so that employees can perform their duties with a sufficient understanding of AI and its risks.
There is no need to turn every employee into an AI engineer. A three-tiered approach is effective in practice.
Level 1 (All employees): Understanding what AI can and cannot do, awareness of hallucinations, and the risks of inputting confidential information
Level 2 (Department leaders): Workflow design for AI utilization, ROI evaluation, and foundational knowledge for vendor selection
Level 3 (AI promotion leads): Prompt engineering, RAG implementation, and evaluation metric design
The author believes the highest ROI comes from rolling out Level 1 company-wide in a half-day training session, then progressively offering Levels 2 and 3 to those who wish to advance.


The EU AI Act (Artificial Intelligence Act) is a comprehensive European Union regulation that establishes legal obligations based on the risk level of AI systems. It classifies AI into four tiers—"unacceptable risk," "high risk," "limited risk," and "minimal risk"—imposing stricter requirements as the risk level increases.

AI governance refers to the organizational policies, processes, and oversight mechanisms that ensure ethics, transparency, and accountability in AI system development and operation.

Shadow AI is the collective term for AI tools and services that employees use in their work without the approval of the company's IT department or management. It carries risks of information leakage and compliance violations.


What is AI Governance? A Practical Guide from EU AI Act Compliance to Internal Policy Development