Anthropic has announced a new security-focused initiative called Project Glasswing, designed to use its upcoming AI model to help companies detect and fix vulnerabilities in software systems.
What Project Glasswing is
Project Glasswing is positioned as a framework that allows organizations to apply Anthropic’s new model—Mythos Preview—to:
- Identify security vulnerabilities in operating systems
- Scan for weaknesses in web browsers
- Assist in patching and remediation workflows
- Support broader “secure-by-design” software development
Core idea
The goal is to shift AI use in security from passive analysis to more active assistance, where the model can:
- Simulate attacker behavior (to find exploits before real attackers do)
- Analyze code paths for hidden vulnerabilities
- Help engineers prioritize fixes based on real-world risk
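The last capability, risk-based fix prioritization, can be sketched as a toy scoring function. Everything here is illustrative: the `Finding` fields, the CVSS-like severity scale, and the doubling for internet-facing assets are assumptions, not anything Anthropic has described.

```python
# Hypothetical sketch of risk-based fix prioritization. A scanner (AI-driven
# or otherwise) is assumed to emit findings with a severity and an estimated
# exploitability; real-world tools use far richer signals.
from dataclasses import dataclass

@dataclass
class Finding:
    identifier: str
    severity: float        # 0.0-10.0, CVSS-like base score (assumed scale)
    exploitability: float  # 0.0-1.0, estimated likelihood of real-world exploit
    internet_facing: bool  # exposed assets are weighted up

def risk_score(f: Finding) -> float:
    """Weight severity by exploitability; double internet-facing assets."""
    score = f.severity * f.exploitability
    return score * 2.0 if f.internet_facing else score

def prioritize(findings: list[Finding]) -> list[Finding]:
    """Return findings ordered by descending risk score."""
    return sorted(findings, key=risk_score, reverse=True)

findings = [
    Finding("CVE-A", severity=9.8, exploitability=0.2, internet_facing=False),
    Finding("CVE-B", severity=7.5, exploitability=0.9, internet_facing=True),
    Finding("CVE-C", severity=5.0, exploitability=0.5, internet_facing=False),
]
for f in prioritize(findings):
    print(f.identifier, round(risk_score(f), 2))
```

The point of the sketch is the ordering logic: a medium-severity but highly exploitable, exposed bug (CVE-B) outranks a near-critical bug with low exploitability (CVE-A), which matches how "real-world risk" prioritization differs from sorting by severity alone.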
Why it matters
If widely adopted, tools like Project Glasswing could:
- Speed up vulnerability discovery in large codebases
- Reduce reliance on manual security audits
- Improve browser and OS patch cycles, which are often slow and complex
At the same time, systems like this also raise concerns about:
- False positives/negatives in security analysis
- Over-reliance on AI for critical security decisions
- Potential misuse if the same capabilities are reverse-engineered
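The false-positive/negative concern is usually quantified with precision and recall when any automated scanner is evaluated. A toy illustration with made-up numbers (nothing here comes from Anthropic's materials):

```python
# Illustrative arithmetic only: standard metrics for scanner quality.
def precision(tp: int, fp: int) -> float:
    """Of everything flagged, what fraction was a real vulnerability?"""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Of all real vulnerabilities, what fraction was flagged?"""
    return tp / (tp + fn)

# Suppose a scan flags 120 findings, 90 of them real (tp=90, fp=30),
# while 10 real vulnerabilities go undetected (fn=10).
tp, fp, fn = 90, 30, 10
print(f"precision = {precision(tp, fp):.2f}")  # 0.75
print(f"recall    = {recall(tp, fn):.2f}")     # 0.90
```

In security triage the two failure modes are not symmetric: false positives waste engineering time, while false negatives leave exploitable flaws in place, which is why "thousands of zero-days" claims are hard to assess without both numbers.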
If Anthropic’s claims and rollout details are accurate, this is a fairly significant shift in how high-end AI security tooling is being positioned.
Here’s a clean breakdown of what’s being described:
Mythos Preview + Project Glasswing (what it is)
Anthropic is presenting Mythos Preview as a highly advanced AI model focused on cybersecurity, deployed through a program called Project Glasswing.
The key claim is that the model can:
- Discover zero-day vulnerabilities (previously unknown security flaws)
- Work across major operating systems and browsers
- Assist in both detection and exploitation modeling (for defensive analysis)
The “thousands of zero-days” claim
Anthropic says Mythos Preview has already identified:
- Thousands of zero-day vulnerabilities
- Issues across all major OS and browser ecosystems
That’s a bold claim, and in practice it would imply the model is operating at or beyond elite human security researcher level in certain domains.
Strategic framing from Anthropic
The company is explicitly framing this as a defensive race:
AI is now strong enough that it could outperform most human experts at vulnerability discovery and exploitation.
Their argument is:
- These capabilities will eventually become widespread
- So they should be used early by trusted organizations for defense
Limited-access rollout (not public)
Mythos Preview is not open to the public. Instead, access is restricted to select partners with heavy security needs.
Initial partners include:
- Apple
- Amazon Web Services
- Microsoft
- Google
- NVIDIA
- Cisco
- CrowdStrike
- Palo Alto Networks
- Broadcom
- JPMorganChase
- Linux Foundation
What this means for Apple specifically
Apple is expected to use Mythos Preview in defensive roles such as:
- Finding vulnerabilities in iOS, iPadOS, macOS, watchOS, tvOS, visionOS
- Strengthening Safari
- Possibly preparing for future platforms like rumored homeOS
The emphasis is important: this is described as defensive security use, not product-facing AI features.
Cost and control
- Anthropic is offering up to $100 million in usage credits
- After that, partners pay for continued access
- This suggests long-term commercialization of high-end AI security tooling
Big picture implication
If even partially true, this signals:
- A shift from manual security research to AI-assisted vulnerability discovery at scale
- Faster patch cycles across major platforms
- But also a new arms race, since the same capability class could be dual-use (defense and offense)
If you want, I can explain how this would realistically fit into Apple’s current security stack (XProtect, Rapid Security Response, etc.) and where AI would actually plug in.
