AI Cybersecurity Model Finds Hundreds of Vulnerabilities in Firefox

A new artificial intelligence system developed by Anthropic has reportedly uncovered hundreds of security issues in the Mozilla Firefox browser, highlighting the growing role of AI in cybersecurity.

According to Mozilla, the model—called Claude Mythos Preview—identified a total of 271 potential vulnerabilities. All of these issues were addressed before the release of Firefox 150, which is scheduled to roll out this week.

Security Fixes and CVEs

While the AI system flagged hundreds of potential weaknesses, the official update for Firefox 150 includes fixes for more than 40 documented CVEs (Common Vulnerabilities and Exposures).

Only three of those—CVE-2026-6746, CVE-2026-6757, and CVE-2026-6758—were directly attributed to the AI findings in the official disclosure.

This gap suggests that many of the identified issues were considered lower risk or related to edge-case scenarios, such as internal safeguards or non-exploitable code paths.

Human vs AI Discovery

Bobby Holley, Chief Technology Officer for Firefox, noted that the results were encouraging but not entirely surprising.

He explained that none of the vulnerabilities discovered by the AI appeared to be beyond what highly skilled human researchers could have identified. While some experts speculate that future AI systems may uncover entirely new classes of security flaws, Mozilla does not currently see evidence of that happening.

Limited Access to the Technology

Due to the advanced capabilities of the model, Anthropic has chosen not to release it publicly. Instead, access is restricted to a select group of major organizations through an initiative known as Project Glasswing.

Participants reportedly include Amazon Web Services, Apple, Google, Microsoft, Nvidia, and the Linux Foundation, among others.

Faster and More Complex Security Testing

Early testing results shared by Palo Alto Networks suggest that the AI system dramatically accelerates vulnerability discovery.

According to the company, the model was able to perform the equivalent of a full year of penetration testing in less than three weeks.

Additionally, the AI demonstrated the ability to chain multiple low- and medium-risk issues into more serious exploit scenarios, a kind of combination that traditional automated tools rarely surface.

Growing Concerns About AI in Cybersecurity

Industry experts believe that tools like this could soon become widespread.

Lee Klarich, Chief Product and Technology Officer at Palo Alto Networks, warned that organizations that fail to adapt may face entirely new categories of risk, especially as AI-driven threats evolve.

He also pointed out that other companies are likely to develop similar systems, which may not be as tightly controlled. There are already reports suggesting that unauthorized access to such tools may be occurring.

Looking Ahead

The emergence of AI-powered cybersecurity tools marks a significant shift in how vulnerabilities are discovered and managed. While current systems still operate within the boundaries of human expertise, their speed and scale could reshape digital security in the near future.