The Day Cybersecurity Changed Forever

What Anthropic’s Project Glasswing Means – and What Canadian Organizations Must Do Right Now

By Andrew Buckles, EVP, Services, ISA Cybersecurity

For those close to me, this won’t come as a surprise: I think about AI every single day. The novelty of it fascinates me – not just because of what it can do today, but because of where it could be headed tomorrow, both the extraordinarily good and (hopefully not) the terrifyingly bad. We are watching something emerge that has never existed before, and I’m so curious I can’t look away.

If I’m honest, this curiosity started long before AI. Even as a kid, I was the one asking too many questions, sometimes to the annoyance of others. I’ve always been drawn to things I didn’t understand – compelled to break them apart, learn how they work, and put them back together (most of the time!). That curiosity never left, and AI has thrown fuel on it.

But what makes AI different from other things I’ve been curious about is that it connects two ideas I’ve held since university. First, that Moore’s law is exponential while government operates linearly – and those lines inevitably diverge. Second, that a fundamental function of government is to regulate away the negative implications of technology, which is by its nature dual-use. That job gets harder every year, and AI is accelerating the problem faster than anything before it.

Reading the research, listening to the people actually building these systems, following the debates – this has been the most exciting stretch of my career. It doesn’t feel like work; it feels like something I’d be paying close attention to no matter what industry I worked in. AI is central to my work, to the direction I set for our teams, and to the advice our teams give our clients. But when Anthropic’s Project Glasswing announcement landed on April 7, my first thought was: you’ll remember where you were when you first heard this news. Not because AI took another leap forward – but because the cybersecurity landscape just shifted under all of our feet, forever.

What is Mythos Preview? What is Project Glasswing?

On April 7, 2026, Anthropic announced that its unreleased frontier model – Claude Mythos Preview – had autonomously discovered thousands of high-severity zero-day vulnerabilities in critical software worldwide, including flaws in every major operating system, every major web browser, and a range of other widely used software. Some of these flaws had survived decades of human code review and millions of automated security tests. In one case, Claude Mythos Preview found a 27-year-old vulnerability in OpenBSD – a system specifically engineered for security hardening – that allowed an attacker to remotely crash any machine just by connecting to it. In another, it identified a 16-year-old flaw in FFmpeg that automated testing tools had executed five million times without catching. As Nicholas Carlini, a researcher working with Anthropic, noted, he had found more bugs with this model in a few weeks than in the rest of his life combined. Anthropic deemed the model’s capabilities so dangerous that it chose not to release Claude Mythos Preview publicly – a nearly unprecedented move in the AI industry.

Instead, the company launched Project Glasswing: a coalition of twelve major technology organizations – including Microsoft, Apple, AWS, Google, CrowdStrike, Cisco, and Palo Alto Networks – along with over 40 additional organizations that maintain critical software infrastructure. Anthropic committed up to $100 million (all figures USD) in model usage credits across the initiative, plus $4 million in direct donations to open-source security organizations, to give defenders a head start before these capabilities inevitably reach hostile actors.

“The dangers of getting this wrong are obvious, but if we get it right, there is a real opportunity to create a fundamentally more secure internet and world than we had before the advent of AI-powered cyber capabilities.”

 

— Dario Amodei, CEO, Anthropic (April 7, 2026)

Why This Is a Defining Moment

Let me be direct about what this means. Anthropic has effectively demonstrated a master key – a capability that can find a way into virtually any digital system, including the most secure software on earth. This makes anyone with access to a Mythos-class model incredibly powerful – arguably more powerful than any single nation-state, given how digitally interconnected the global economy has become. Even if you deployed every best-in-class cybersecurity tool available today, it wouldn’t matter, because these capabilities can find ways around them. The cost, time, and expertise required to discover and weaponize sophisticated, nation-state-calibre exploits will effectively collapse. That means every organization now needs a security program with significant defence in depth, with AI on top of it. Not eventually. Now.

The Uncomfortable Truth About Project Glasswing

Some of the messaging surrounding Project Glasswing has framed the initiative around the notion that you cannot have AI without security. That is not wrong, but it misses the deeper point. These twelve companies weren’t selected at random: they were selected because their software constitutes the digital infrastructure that billions of people depend on every day. Operating systems, browsers, cloud platforms, network hardware, financial systems, cybersecurity software – this is the foundation of the connected world. Anthropic gave them early access for a very specific reason: their footprint is so vast that flaws in their software could have devastating consequences on a global scale if exploited before being patched.

That is not a criticism of those companies. They employ some of the most talented security teams on the planet. But the fact that even those teams – backed by billions of dollars in resources – can’t find what Claude Mythos Preview can tells us something profound about the limits of human-scale defence. If the most heavily fortified software in the world is exposed, the rest of us need to take a very hard look at our own programs.

“The window between a vulnerability being discovered and being exploited by an adversary has collapsed – what once took months now happens in minutes with AI.”

— Elia Zaitsev, CTO, CrowdStrike (April 7, 2026)

We Are All Low-Hanging Fruit Now

For years, organizations with solid security programs could take some comfort in the knowledge that sophisticated threat actors would generally bypass a well-defended target in favour of easier prey. Good cyber hygiene – patching, access controls, training, compliance, monitoring – made you a harder target, and harder targets got skipped. Smaller organizations could reasonably assume they weren’t worth the effort, and not every organization faced sophisticated, well-financed nation-state attention.

That calculus has fundamentally changed. AI does not get discouraged; it does not sleep; it does not take coffee breaks. When it can autonomously scan millions of lines of code, chain together three, four, or five individually minor vulnerabilities into a sophisticated exploit, and do so in minutes rather than months – there is no such thing as an unattractive target. A strong security foundation remains absolutely necessary, of course. But it is no longer sufficient on its own. At machine speed, everything is low-hanging fruit.

And for organizations that have relied on security by obscurity – non-standard ports, custom code, unusual configurations designed to confuse attackers – that too is now obsolete. AI will methodically work through every obfuscation until it finds what it is looking for. Whether your defence relied on strong fundamentals or on being hard to find, both strategies now need AI-powered defence on top of them.

The Window Is Closing

Whether it is Anthropic, another AI company, or a nation-state actor, capabilities at the Claude Mythos Preview level will become mainstream. As Anthropic CEO Dario Amodei stated on April 7, “more powerful models are going to come from us and from others.” Anthropic’s own Frontier Red Team Cyber Lead, Newton Cheng, told VentureBeat that frontier AI capabilities are likely to advance substantially over just the next few months. I believe we have roughly six months before these capabilities proliferate broadly. If you are a defender, the time to act is right now – not next quarter, not next fiscal year. Every organization needs to re-evaluate its risk posture through this lens immediately – with threat scenarios informed by these capabilities as the first input – before that window closes.

What AI Defence Actually Looks Like

I believe there is a significant misconception circulating about what it means to use AI defensively. It is not necessarily about pitting one model against another model – some kind of AI-versus-AI cage match. That framing fundamentally misunderstands the problem. Effective AI-powered defence requires two things working together. First, you need a mature foundation of controls – people, processes, technology, key performance indicators, and assurance mechanisms over those controls. Second, you need to leverage agentic AI to operate those controls, with human-in-the-loop processes where necessary, creating a Security Operations Centre (SOC) that functions at the speed and scale the threat environment will soon demand. It’s that simple. Mature your controls, develop AI systems to operate them, train your people to operate and tune the AI systems.
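To make the human-in-the-loop idea concrete, here is a minimal, hypothetical sketch of how an Agentic SOC might route alerts: the AI agent acts autonomously on low-risk findings at machine speed, while high-impact response actions are gated behind a human analyst. Every name here (Alert, triage, HUMAN_APPROVAL_THRESHOLD) is illustrative, not a real product API.

```python
# Illustrative sketch only: an agentic triage loop with a human-in-the-loop
# gate. In a real Agentic SOC, the risk score would come from an AI model
# correlating telemetry; here it is a stubbed input value.

from dataclasses import dataclass

# Assumed policy: risk scores at or above this threshold require human sign-off.
HUMAN_APPROVAL_THRESHOLD = 0.7


@dataclass
class Alert:
    source: str        # e.g. "edr", "idp", "siem"
    description: str
    risk_score: float  # 0.0-1.0, produced upstream by the AI model (stubbed)


def triage(alert: Alert) -> str:
    """Route an alert: auto-contain low-risk findings, escalate the rest."""
    if alert.risk_score < HUMAN_APPROVAL_THRESHOLD:
        # The agent acts immediately at machine speed (e.g. isolate a host,
        # revoke a token) and logs the action for later human review.
        return f"auto-contained: {alert.description}"
    # High-impact responses (e.g. taking down a production system) wait
    # for an analyst's approval before execution.
    return f"escalated to analyst: {alert.description}"


if __name__ == "__main__":
    alerts = [
        Alert("edr", "suspicious PowerShell on workstation-12", 0.4),
        Alert("idp", "impossible-travel login for a domain admin", 0.9),
    ]
    for a in alerts:
        print(triage(a))
```

The design choice worth noting is the explicit threshold: the point of the Agentic SOC is not to remove humans, but to reserve human attention for the decisions where the blast radius of a wrong action is largest.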

That combination – mature controls, AI-augmented operations, human oversight – is what I call the Agentic SOC. And here is the part that should give defenders real optimism: the Agentic SOC democratizes cybersecurity. Historically, world-class security operations have been the exclusive domain of large enterprises with the largest budgets. The Agentic SOC changes that equation. When AI handles the heavy lifting of continuous monitoring, correlation, and response at machine speed, organizations of every size gain access to a calibre of defence that was previously out of reach. In a world where every organization is an equal target, that’s essential.

“The Agentic SOC is how we level the playing field. In a post-Mythos world, top-tier cybersecurity defence cannot be reserved for organizations with the biggest budgets – it has to be accessible to everyone.”

— Andrew Buckles, EVP, Services, ISA Cybersecurity

What Happens If Organizations Don’t Act

Organizations that fail to adapt to this higher level of risk face potentially catastrophic consequences. That might sound like hyperbole, but it is a practical risk assessment: when sophisticated exploits can be discovered and weaponized at near-zero cost, an immature security program is no longer just a liability; it is an existential risk to the business. A saying comes to mind as the clock ticks: nine mothers can’t make a baby in a month. The same goes for security program maturity and AI deployments. Act now – because the solution won’t just be deploying one tool to fight another. The answer will be your people, empowered by AI, armed with playbooks for every plausible scenario, backed by mature controls, with response actions measured in minutes.

Where ISA Cybersecurity Fits

At ISA Cybersecurity, we have spent over three decades helping Canadian organizations build and operate security programs that evolve with the threat landscape. We run a 24/7/365 SOC 2 Type 2-compliant Security Operations Centre (SOC), and I lead a dedicated AI team focused on integrating AI capabilities for our clients, safely and securely. Through our AI 360 and Cyber 360 offerings, we address governance, risk assessment, managed detection, incident response, and more. The world changed on April 7. We are ready to help you change with it.
