E2E testing (End-to-End testing) is a testing methodology that simulates user interactions to drive requests through the entire system via a browser or API, verifying that the expected results are produced.
E2E tests drive an application from the "outside." They automate a browser to fill in forms, click buttons, and verify everything end-to-end — from screen transitions to data persistence. Tools like Playwright and Cypress control a headless browser to reproduce human interactions through scripts.
Databases and API servers that were replaced with mocks in unit tests and functional tests run as real instances in E2E tests. Because authentication flows, permission checks, and data reads and writes all pass through actual infrastructure, integration-level defects can be detected.
Execution time can be hundreds of times longer than for unit tests, due to the combined overhead of browser startup, page rendering, and network communication. E2E tests are also fragile: minor UI changes break selectors and assertions (brittleness), and timing or network variability causes intermittent, non-deterministic failures (flakiness). For these reasons, a practical operational approach is to limit E2E tests to critical user flows and keep the total count in the range of tens to a few hundred.
In the test pyramid model, E2E tests sit at the apex and are kept to a minimum. A stable configuration covers the majority of logic with unit tests at the base, addresses integration points with functional tests in the middle layer, and reserves E2E tests solely for critical paths.
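The "outside-in" idea above can be sketched at the API level using only the Python standard library: boot a real server, hit it over a real socket, and assert on the response, exactly as an external client would. The `/health` endpoint and port are hypothetical; a real suite would target the deployed application, typically via Playwright or Cypress for browser flows.

```python
# Minimal sketch of an "outside-in" E2E check at the API level.
# The /health endpoint and the port are hypothetical placeholders.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class AppHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep test output quiet
        pass

def e2e_health_check(port: int = 8799) -> dict:
    # Boot the real server (no mocks), request it over a real socket,
    # and verify the response, just as an external client would.
    server = HTTPServer(("127.0.0.1", port), AppHandler)
    thread = threading.Thread(target=server.serve_forever, daemon=True)
    thread.start()
    try:
        with urlopen(f"http://127.0.0.1:{port}/health") as resp:
            assert resp.status == 200
            return json.loads(resp.read())
    finally:
        server.shutdown()

print(e2e_health_check())  # → {'status': 'ok'}
```

Note the cost visible even in this toy: a thread, a socket, and a full request/response round trip per check, which is why the pyramid keeps this layer thin.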


Acceptance testing is a testing method that verifies whether developed features meet business requirements and user stories, from the perspective of the product owner and stakeholders.
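One way to make a business requirement checkable is to phrase an acceptance criterion in Given/When/Then form and back it with an executable assertion. The story, the `Cart` class, and the 10% tax rate below are all hypothetical; teams often write these scenarios in Gherkin (e.g. with Cucumber) and review them with stakeholders.

```python
# Hedged sketch: one acceptance criterion expressed as an executable
# check. The story, Cart class, and 10% tax rate are hypothetical.

class Cart:
    def __init__(self):
        self.items = []

    def add(self, name: str, price: int) -> None:
        self.items.append((name, price))

    def total_with_tax(self, tax_rate: float = 0.10) -> int:
        subtotal = sum(price for _, price in self.items)
        return round(subtotal * (1 + tax_rate))

# Story: "As a shopper, I see my cart total including tax."
# Given a cart with two items
cart = Cart()
cart.add("book", 1200)
cart.add("pen", 300)
# When I view the total
total = cart.total_with_tax()
# Then it includes 10% tax
assert total == 1650
```

The value of this style is that the product owner can read the Given/When/Then comments and confirm the scenario matches the user story before anyone checks the implementation.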

Functional testing (feature testing) is a testing method that verifies system behavior in terms of specific features or use cases. It covers a broader scope than unit testing, confirming that multiple modules work together correctly.
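As a sketch of that middle layer, the test below exercises one feature ("register a user") through several collaborating modules, wiring the real in-process pieces together rather than mocking them. All names here are hypothetical.

```python
# Hedged sketch of a functional test: several modules cooperating on
# one feature, verified together. All names are hypothetical.

class Validator:
    def check(self, email: str) -> bool:
        return "@" in email and "." in email.split("@")[-1]

class InMemoryRepo:
    def __init__(self):
        self.users = {}

    def save(self, email: str) -> None:
        self.users[email] = {"email": email}

class RegistrationService:
    def __init__(self, validator: Validator, repo: InMemoryRepo):
        self.validator = validator
        self.repo = repo

    def register(self, email: str) -> bool:
        if not self.validator.check(email):
            return False
        self.repo.save(email)
        return True

# Functional test: wire the real modules together and verify the
# feature as a whole within the process boundary.
repo = InMemoryRepo()
service = RegistrationService(Validator(), repo)
assert service.register("alice@example.com") is True
assert "alice@example.com" in repo.users
assert service.register("not-an-email") is False
```

Unlike a unit test, a failure here can originate in any of the three modules or in their wiring, which is precisely the class of defect this layer exists to catch.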

Unit testing is a testing method that individually verifies the smallest units of a program, such as functions and methods. By replacing external dependencies with mocks, it allows for rapid validation of the target logic in isolation.
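The isolation technique can be sketched with `unittest.mock` from the standard library: the external dependency (here, a hypothetical payment gateway) is replaced with a mock so only the target function's logic is exercised. The `charge_order` function and the gateway interface are assumptions for illustration.

```python
# Hedged sketch: a unit test that isolates one function by replacing
# its external dependency with a mock. The charge_order function and
# the gateway interface are hypothetical.
from unittest.mock import Mock

def charge_order(gateway, amount: int) -> str:
    # Unit under test: delegates the actual charge to an external gateway.
    if amount <= 0:
        raise ValueError("amount must be positive")
    response = gateway.charge(amount)
    return "paid" if response["ok"] else "declined"

# Replace the real gateway with a mock so the test runs fast and in
# isolation, with no network or real payment provider involved.
gateway = Mock()
gateway.charge.return_value = {"ok": True}

assert charge_order(gateway, 500) == "paid"
gateway.charge.assert_called_once_with(500)
```

Because nothing leaves the process, such a test runs in microseconds, which is what lets the base of the test pyramid hold thousands of them.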


Closing the "Invisible Attack Vector" in AI Chat — An Implementation Guide to Preventing Prompt Injection via DB

Context Engineering is a technical discipline focused on systematically designing and optimizing the context provided to AI models — including codebase structure, commit history, design intent, and domain knowledge.