What Project Glasswing Means for Companies That Aren't In It
This week Anthropic announced Project Glasswing, a coalition of roughly fifty organizations getting early access to a frontier model purpose-built to find vulnerabilities at machine speed. The partner list reads like a Who's Who of hyperscalers and platform vendors: AWS, Apple, Google, Microsoft, CrowdStrike, and a handful of others. The stated investment is around one hundred million dollars. The early reporting describes autonomous zero-day discovery against real codebases, with the coalition members already remediating findings before disclosure.
If you are a CISO at a mid-market company, my phone has probably been ringing for the same reason yours has. The question on every call sounds the same. We are not in that group of fifty. What does this mean for us?
Here is the honest answer.
To be clear up front: CYBER AI SECURITY is not in the Glasswing coalition. We do not have access to the model. Anyone telling you they do, outside the named partners, is either confused or selling something. Everything in this post is built on what Anthropic has stated publicly and what we already know about how capabilities like this propagate.
The asymmetry the announcement creates
Glasswing is structured as a closed program for a reason. The capability is significant enough that Anthropic and its partners want to find and patch the worst issues before the techniques are public. That is the right call. It is also a temporary state. Capability of this kind does not stay in fifty organizations forever. Research papers get written. Methods get described at conferences. Open-weight models continue to improve. Adversaries with resources will replicate enough of this to matter, and they will not announce when they do.
What the announcement does, in practice, is split the world into two groups for the next twelve to twenty-four months.
The fifty organizations inside the coalition will get measurably more secure. Their codebases are being audited by something that does not get tired, does not miss obvious classes of bug, and does not need a context window briefing every Monday morning. They are pre-patching the kinds of issues that historically sit in production for years before anyone notices.
Everyone else, including most of the mid-market, gets the opposite trajectory. The same techniques are being developed in adversarial labs that are not publishing, not coordinating disclosure, and not on anyone's friendly partner list. The gap between when a vulnerability becomes findable and when a vulnerability becomes exploitable is collapsing. That gap used to be measured in months. It is now measured in days, and for some classes of issue, hours.
What this actually changes for a security program
I want to be careful here, because this is the part where security marketing usually goes off the rails. Glasswing does not mean every CISO needs to panic-buy a new platform. It does not mean your existing controls suddenly stopped working. The fundamentals still matter and most organizations still have plenty of unfinished work on the fundamentals.
What it does change is the math on a few specific things.
Time to patch matters more than it used to. If an attacker can take a CVE and produce a working exploit in a day, then a thirty-day patch SLA is a thirty-day window of exposure. Vulnerability management programs that were tolerable last year are now the soft underbelly. The work is not glamorous. It is also the highest-leverage thing most teams can do this quarter.
Your AI deployments are part of the attack surface now, not later. Every chatbot, every RAG pipeline, every coding copilot, every internal agent with tool access is a target. The same capabilities that let a coalition find zero-days in production codebases will be turned on prompt injection, indirect prompt injection through documents, and credential exfiltration from copilot context. If you have not adversarially tested your AI surface, you do not know what is there.
Vendor reports need to be treated as inputs, not conclusions. When attacker tooling improves faster than defender tooling, the gap between what your MDR is detecting and what is actually happening in your environment widens. The right response is not to fire your vendor. It is to verify them, periodically, with someone who is not financially incentivized to find nothing.
Detection engineering needs to assume the attacker is automated. Playbooks built around the cadence of a human red teamer will not catch an attacker that pivots in seconds, cleans up in minutes, and never gets bored. The detections you write this year should be the ones that fire on behaviors, not on artifacts.
Practical recommendations for the next ninety days
Not everything needs to happen at once. If I were sitting in your seat, this is the order I would work in.
1. Inventory your AI surface honestly. Not the official inventory. The real one. Every Copilot license, every shadow LangChain script, every internal chatbot a product team stood up over a long weekend. You cannot defend what you do not know exists. If your last AI inventory is more than six months old, it is fiction.
2. Run adversarial testing against the AI systems that touch real data or take real actions. Not a vendor questionnaire. Actual prompt injection, actual data exfiltration attempts, actual tool-abuse scenarios. The bar is whether someone with hostile intent and a few hours could get your system to do something it should not. This is what we do for clients, and I will tell you plainly that the first round is almost always uncomfortable.
3. Tighten your patch cycle on internet-facing assets. Not every system. The ones that are reachable from the internet, that handle authentication, or that touch payment or PII. Get the time-to-patch under seven days for criticals. If you cannot, understand why and fix the why.
4. Verify your MDR or SOC vendor with an independent set of eyes. Pick a recent incident. Ask for the raw evidence and the methodology, not the executive summary. Have someone outside the vendor walk through whether the closure criteria were met. We have written about this elsewhere on the blog and I will not repeat the whole thing here, but it is the cheapest high-signal exercise a security program can run.
5. Plan for the disclosure cadence to change. When the Glasswing coalition starts publishing the classes of bug they are finding, your patch backlog is going to spike. Decide now who owns triage, how you will prioritize, and what your communication path looks like. Doing that planning under pressure is how mistakes get made.
6. Talk to your board about the asymmetry, plainly. Not as a budget request. As context. The board needs to understand that the threat environment is shifting in a way that favors organizations with frontier-model access, and that everyone else needs to compensate with discipline, verification, and faster cycles. That conversation lands better when it is not attached to a purchase order.
What we are doing about it
We built CYBER AI SECURITY for exactly this asymmetry. The bet is that one experienced operator, working with a focused agent squad, can give a company without coalition access the same kind of rigor that historically required a much larger team. We do AI security testing. We verify MDR vendors. We do AI-aware threat modeling. None of those things require coalition membership. They require somebody who has done the work, is willing to be honest about what they find, and is not selling you a platform.
If any of the recommendations above describes a gap you know you have, we are easy to reach. If they describe a gap you suspect you have but are not sure, that is also a conversation worth having. And if you read this post and conclude you have it covered, that is a perfectly good outcome too. The point of this post is not to sell you anything. It is to make sure you are not sitting at your desk next month wondering whether you should have done something this month.
The fifty organizations in Glasswing are going to be fine. The thousands of organizations that are not in it have a window of time, and a list of unglamorous work, and a choice about whether to start now or later. My strong recommendation is now.
Want a second opinion on your AI security posture or your MDR vendor?
We do honest, evidence-based assessments for security teams. No platform pitch, no fear-mongering. Just the work.
Request a Consultation