
AI Red Teaming (AI Red Teaming)
An evaluation method that systematically tests AI system vulnerabilities from an attacker's perspective to proactively identify safety risks.
Clear explanations of AI, DX, and technology terminology

An evaluation method that systematically tests AI system vulnerabilities from an attacker's perspective to proactively identify safety risks.

A prompting technique that improves accuracy on complex tasks by having the LLM explicitly generate intermediate reasoning steps.

Context Engineering is a technical discipline focused on systematically designing and optimizing the context provided to AI models — including codebase structure, commit history, design intent, and domain knowledge.


A technique that cross-references LLM outputs with external data sources and search results to generate factually grounded responses. A core method for reducing hallucinations.