Fuzzing (fuzz testing) is a technique that automatically generates large volumes of random or malformed inputs, feeds them to a target program one after another, and detects vulnerabilities and bugs by observing crashes or other unexpected behavior. Typical inputs include random byte sequences, extremely long strings, values near boundary conditions, and malformed files. When the program crashes or hangs, there is a good chance that a latent vulnerability lurks there.
There are two broad categories of approaches. Mutation-based fuzzing takes valid inputs and modifies them incrementally to produce abnormal inputs. While easy to set up, it struggles to reach deep processing paths because it does not understand the structure of the input. Generation-based fuzzing defines the grammar or protocol specification of the input and generates data that is syntactically correct but semantically invalid. For structured inputs such as HTTP requests or image files, the latter approach is more efficient.
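The mutation-based approach can be sketched in a few lines of Python. The `parse_length` target below is a hypothetical stand-in for real parsing code, and the mutation operators are a minimal illustrative set; real fuzzers use many more.

```python
import random

def parse_length(data: bytes) -> int:
    """Hypothetical target: a 1-byte length prefix followed by that many bytes."""
    if not data:
        raise ValueError("empty input")
    n = data[0]
    payload = data[1:]
    if len(payload) < n:
        # A real parser with this bug might read out of bounds instead.
        raise IndexError("length prefix exceeds payload size")
    return n

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Mutation-based step: flip, insert, or delete random bytes of a valid seed."""
    data = bytearray(seed)
    op = rng.choice(["flip", "insert", "delete"])
    if op == "flip" and data:
        i = rng.randrange(len(data))
        data[i] ^= 1 << rng.randrange(8)          # flip one bit
    elif op == "insert":
        data.insert(rng.randrange(len(data) + 1), rng.randrange(256))
    elif op == "delete" and data:
        del data[rng.randrange(len(data))]
    return bytes(data)

def fuzz(seed: bytes, trials: int = 1000) -> list[bytes]:
    rng = random.Random(0)  # fixed seed for reproducibility
    crashes = []
    for _ in range(trials):
        candidate = mutate(seed, rng)
        try:
            parse_length(candidate)
        except Exception:
            crashes.append(candidate)  # an exception stands in for a crash
    return crashes

crashes = fuzz(b"\x03abc")  # seed: valid input "length 3, payload abc"
print(f"{len(crashes)} crashing inputs found")
```

Note how even this toy mutator quickly desynchronizes the length prefix from the payload, the kind of inconsistency a valid seed corpus alone would never exhibit.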
The dominant approach in modern fuzzing tools is coverage-guided fuzzing, a field significantly advanced by AFL (American Fuzzy Lop), released as open source by Michał Zalewski. The program under test is instrumented to record which code paths each input traverses. Inputs that discover new code paths are deemed "interesting" and are mutated further and fed back in. This evolutionary feedback loop enables efficient discovery of bugs in deep processing logic that purely random inputs would almost never reach.
Google's OSS-Fuzz project continuously runs coverage-guided fuzzing against more than 1,000 OSS projects and has discovered over 10,000 vulnerabilities to date. There is also a growing movement to integrate fuzzing into DevSecOps CI pipelines.
Fuzzing is powerful, but it is not a silver bullet. The case in which Claude Mythos discovered a 16-year-old bug in FFmpeg that had slipped past 5 million automated tests illustrates that some classes of defects are structurally out of fuzzing's reach. Because fuzzing probes the "input→output" boundary, it struggles to detect logic vulnerabilities that span multiple components, or bugs that manifest only under specific combinations of state transitions.
AI-driven code analysis, such as Project Glasswing, reads source code with full context and finds vulnerabilities at a different level of abstraction than fuzzing. The two are not competing approaches but complementary ones. In shift-left practice, integrating fuzzing into CI while periodically conducting deep AI-driven analysis is becoming the realistic approach of choice.



A2A (Agent-to-Agent Protocol) is a communication protocol, published by Google in April 2025, that enables different AI agents to perform capability discovery, task delegation, and state synchronization.
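Capability discovery in A2A centers on an "Agent Card", a JSON document an agent publishes (conventionally at a well-known URL) describing what it can do. The sketch below models one in Python; the agent itself is hypothetical and the exact field set should be checked against the current A2A specification.

```python
import json

# Illustrative Agent Card for a hypothetical translation agent.
# Field names are representative, not authoritative.
agent_card = {
    "name": "translation-agent",
    "description": "Translates documents between languages.",
    "url": "https://agent.example.com/a2a",
    "version": "1.0.0",
    "capabilities": {"streaming": True},
    "skills": [
        {
            "id": "translate-text",
            "name": "Translate text",
            "description": "Translate plain text between supported languages.",
        }
    ],
}

# Client-side discovery reduces to fetching and parsing this document,
# then matching the advertised skills against the task at hand.
card = json.loads(json.dumps(agent_card))  # round-trip stands in for an HTTP fetch
skill_ids = [s["id"] for s in card["skills"]]
print(skill_ids)
```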

Acceptance testing is a testing method that verifies whether developed features meet business requirements and user stories, from the perspective of the product owner and stakeholders.
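Acceptance tests are commonly phrased in Given/When/Then form so that stakeholders can read them against the user story. The sketch below, with an invented `Cart` class and discount rule, shows the shape; real projects typically drive such tests through a framework like pytest or Cucumber.

```python
# User story (hypothetical): "As a shopper, I can apply a discount code
# so that my order total is reduced."

class Cart:
    def __init__(self):
        self.items = []
        self.discount = 0.0

    def add(self, name: str, price: float) -> None:
        self.items.append((name, price))

    def apply_discount_code(self, code: str) -> None:
        if code == "SAVE10":        # simplified business rule: 10% off
            self.discount = 0.10

    def total(self) -> float:
        subtotal = sum(price for _, price in self.items)
        return round(subtotal * (1 - self.discount), 2)

def test_discount_code_reduces_total():
    # Given a cart containing one item priced 50.00
    cart = Cart()
    cart.add("book", 50.00)
    # When the shopper applies the SAVE10 code
    cart.apply_discount_code("SAVE10")
    # Then the total reflects the 10% discount
    assert cart.total() == 45.00

test_discount_code_reduces_total()
```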

AES-256 is the strongest variant of AES (Advanced Encryption Standard), the symmetric-key cipher standardized by the National Institute of Standards and Technology (NIST), using a 256-bit key.
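A minimal usage sketch with the third-party `cryptography` package (`pip install cryptography`) is shown below. AES-GCM is chosen here because it provides authenticated encryption on top of AES-256; the key and nonce are generated fresh per run.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit key → AES-256
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # 96-bit nonce; never reuse with the same key

ciphertext = aesgcm.encrypt(nonce, b"top secret", None)  # None = no associated data
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == b"top secret"
```

Note that the nonce must be stored or transmitted alongside the ciphertext; it is not secret, but reusing it with the same key breaks GCM's security guarantees.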

Agent orchestration is a mechanism that controls task distribution, state management, and coordination flows among multiple AI agents.
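A minimal sketch of the idea: a coordinator registers specialized agents, routes each subtask to the agent with the matching skill, and tracks per-task state. The agents and routing rules here are invented for illustration.

```python
from typing import Callable

def research_agent(task: str) -> str:
    return f"notes on {task}"

def writing_agent(task: str) -> str:
    return f"draft about {task}"

class Orchestrator:
    def __init__(self):
        self.agents: dict[str, Callable[[str], str]] = {}
        self.state: dict[str, str] = {}   # task -> "pending" / "done"

    def register(self, skill: str, agent: Callable[[str], str]) -> None:
        self.agents[skill] = agent

    def run(self, plan: list[tuple[str, str]]) -> list[str]:
        results = []
        for skill, task in plan:          # plan: ordered (skill, task) pairs
            self.state[task] = "pending"
            results.append(self.agents[skill](task))  # delegate to the right agent
            self.state[task] = "done"
        return results

orch = Orchestrator()
orch.register("research", research_agent)
orch.register("write", writing_agent)
out = orch.run([("research", "fuzzing"), ("write", "fuzzing")])
print(out)
```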

Agent Skills are reusable instruction sets defined to enable AI agents to perform specific tasks or areas of expertise, functioning as modular units that extend the capabilities of an agent.