On April 7, Anthropic announced that its latest version of the large language model (LLM) Claude, dubbed Mythos, was here and displaying a shocking ability to find and exploit software vulnerabilities at machine speed and industrial scale. The implications of an AI red teamer on the loose, potentially accessible to threat actors and able to be turned against any system in the world in an instant, have alarmed governments and the wider cybersecurity sector.
According to Anthropic, the Claude Mythos model can find and exploit zero-day bugs in “every major operating system and every major Web browser.” To prove the point, the company said the model was quickly able to identify a 27-year-old flaw in OpenBSD.
Enter Project Glasswing: a consortium of some of the biggest software providers in the world that will endeavor to use the model for cybersecurity defense first, putting it to work on their own software before adversaries can get hold of the tool.
Three reporters, Dark Reading’s Becky Bracken, Cybersecurity Dive’s Eric Geller, and TechTarget SearchSecurity’s Phil Sweeney, open up their notebooks and share what their top sources are saying in reaction to reports that Anthropic’s Mythos can find and exploit vulnerabilities at machine speed. They also cover the consortium of software power players that have come together to test Mythos under Project Glasswing.
Learn more in the video, and also check out our full Reporters’ Notebook series, which is designed to bring together insights and coverage from across Informa TechTarget’s network of cybersecurity sister sites.
Becky Bracken, Phil Sweeney & Eric Geller: Full Video Transcript
This transcript has been edited for clarity and length. For the full experience, please watch the video.
Dark Reading’s Becky Bracken: Hello everybody, and welcome to Reporters’ Notebook. I am Becky Bracken, and I am here with my two colleagues to discuss this month’s big blockbuster story, “Mythos, the AI Model to End All Cybersecurity,” and Glasswing, the forum established to help industry and government wrap their heads around it. I’m joined today by Eric Geller, senior reporter with Cybersecurity Dive, as well as Phil Sweeney, who is a reporter with TechTarget SearchSecurity. I’m sorry, we’ve rebranded, is that correct?
TechTarget SearchSecurity’s Phil Sweeney: You got it.
DR’s Becky Bracken: All right, well, welcome both of you. I figured this was a pretty easy one for us to tackle. Do you want to walk us through the background as you understand it?
TTSS’s Phil Sweeney: Anthropic developed the Mythos preview and had some pretty startling success with it, things they did not expect. Before release, they said, OK, we can’t do this. We can’t release this. We need to talk about this and its implications, especially security-wise. The model found incredible volumes of zero-days, unknown vulnerabilities, and they weren’t just a few outliers; many, Anthropic said, are 10, 20 years old, undiscovered for all that time, and the LLM found them in almost no time at all. So it was quite a jolt and, as a result, Anthropic has reached out to partners across the IT industry to try to come to some kind of consensus about what we are going to do about this before it becomes a major security crisis.
DR’s Becky Bracken: Eric, what’s the headline for you here?
Cybersecurity Dive’s Eric Geller: To me, this is a story about how the government is going to be increasingly dependent on the technology companies in a way that wasn’t even really true in earlier phases of this kind of government-industry relationship. We think about cybersecurity as a domain where the private sector, because it runs the infrastructure, has the best visibility; and the government is really dependent on it to understand cyberattacks. I think in the AI space, that reliance is even stronger because now it’s not just that the AI companies have all this information about how hackers are trying to launch cyberattacks using their products. And you see Anthropic putting out that report last year about the first AI-powered cyberattack. So, they have that visibility. They also have the ability, unlike say critical infrastructure operators, to actually define the terms of the battlefield, because it’s their products that are being used to do some of this work.
That’s not to say AI is the only thing that hackers are using or the only thing that they need, but it is increasingly going to be part of the initial phase of an attack to use AI to figure out if your target has any vulnerabilities. And so it’s incumbent on the vendors to do as much as they can to prevent their tools from being weaponized in a way that really isn’t true with a lot of other technology out there (with the exception of pen-testing software where we know that hackers use things like Mimikatz for attacks). This is a totally different ballgame and the government is entirely dependent on the vendors to not only make products that are not capable of being weaponized, but also to proactively share with the government what they’re finding and what their partners are finding with these tools.
You know, we’re going to talk about Project Glasswing, and what I’ll be looking for there is, as companies use Mythos and discover vulnerabilities, what is the tempo of information-sharing with federal agencies like CISA? Is there something formally in place that says a Glasswing partner has to tell CISA when it finds a vulnerability? I don’t think so. So we’re really seeing an environment where these relationships haven’t been well defined, and how quickly that stuff gets ironed out is going to go a long way toward answering the question of how rocky the next few years are going to be as we try to prevent weaponization of these tools.
DR’s Becky Bracken: Eric, you are the person I look to for a read on the Washington, D.C., tea leaves about what’s going on in cyber. So, what’s your analysis of where we are? The executive branch has been very clear that it wants AI to run rampant and will do nothing to hamper any kind of innovation. How do you see this playing out in, let’s say, the next six months? I think that’s a pretty long runway in the AI world.
CD’s Eric Geller: I mean, I do think that there’s no real appetite in Washington to regulate what a company like Anthropic can do, in part because how would you define the boundaries of the regulation? How would you define safe behavior and unsafe behavior, safe coding and unsafe coding? I mean, if you define it based on the output, i.e., can this tool help a hacker find a vulnerability, then you’re going to be prohibiting a lot of behavior that we actually want to see because any tool that can help a hacker find a vulnerability can also help a defender find a vulnerability — the technology is agnostic. There’s no way to create an AI model that checks who you are, peers into your soul and based on that decides whether it’s going to tell you about a CVE in an Internet-facing network appliance or what have you.
That would be what we would want in a fantasy world, but that doesn’t exist. So, you can’t regulate the problem out of existence. That’s not to say you can’t have any regulation, and I’m not taking a stance here, but the idea that you can solve this particular problem through a regulatory framework doesn’t hold up; it’s not like environmental pollution. You can’t say only do the good things and don’t do the bad things. That’s not how the technology works.
And I think you see policymakers recognizing that. In the absence of a regulatory answer, the next best option is close conversations and collaboration so that as Anthropic is finding out that its product can do something potentially dangerous, they’re telling the government, and the government is deciding whether it ought to warn critical infrastructure operators.
It does at least speak to this idea of, OK, harm is going to happen. The best thing we can do to get ahead of it is talk to each other as we’re learning about the harm. And that’s not a satisfying answer, but I think it’s kind of the best that Washington has at this point.
DR’s Becky Bracken: A healthy answer. People talking to each other is not exactly happening everywhere right now, so to see it happening here is important. But there is also a real dearth of talent and expertise in government at the moment; the experts who do exist are working in private-sector businesses. Would you agree with that?
CD’s Eric Geller: Yeah, especially with all the layoffs that we’ve seen recently. And I’m going to be looking to see what happens with NIST, the National Institute of Standards and Technology. They have an AI Safety Institute that was created in the last administration. It’s been refocused now to really look at these core technical issues of AI models and the double-edged sword of this technology. So, I’m going to be very curious to see if that agency gets more involved in working side by side with the vendors to understand the implications of their products.
DR’s Becky Bracken: And on the expertise front, Phil, enter Project Glasswing. And so, this is the roundtable at which this conversation that Eric’s been referring to is happening, correct? Tell us a little bit about what it is and what its parameters are.
TTSS’s Phil Sweeney: Right, right. It is a group of 12 companies or organizations involved at the point of the spear. Forty or so others are going to be involved in other ways. But yeah, the big ones. You’re talking about your cloud providers: AWS is involved here, Google is involved here. Microsoft. Anthropic itself, Apple, Cisco, CrowdStrike, JP Morgan Chase. It’s a dozen big, powerful players in IT and finance and security and, you name it. So, they are getting access to the preview before any kind of public release, the idea being to give them some sort of head start on fixing these vulnerabilities [before they’re weaponized].
It’s an unusual level of cooperation. Rival companies will sometimes cooperate on cybersecurity standards, interoperability, that sort of thing. There is the Linux Foundation, the Cloud Native Computing Foundation. They have cooperative relationships across industries. But there’s a boldness here, an urgency that feels different, and it’s coordination on a scale that is rarely seen. So among rivals, bitter rivals in some cases, competitors, they’re saying this can’t be fixed in Washington. It can’t be fixed by individual companies. There has to be some sort of collective action. CrowdStrike’s CTO said something to the effect that defenders need to unify and put these capabilities to work now, before the adversaries can become involved in a serious way. Someone from Cisco supporting Glasswing said the work is just too important and too urgent to do alone. So, there’s a sense here that this is a massive risk that’s going to require a massive effort to address.
DR’s Becky Bracken: I wonder what you all make of the notion that this might be a bit overhyped. It’s not lost on me that it’s called “Mythos.” It’s not lost on me that a lot of this is very secret squirrel; it’s really big, but you can’t see it. The AI Security Institute in the UK did a technical evaluation of Mythos that found that maybe it’s not as potent a tool as it’s being made out to be. A lot of the criticism was that what they ran it against wasn’t particularly well-defended, really not as well-defended as even a mid-size organization would be. I wonder what you all make of this idea that maybe this is overhyped or that people are falling for what’s essentially a marketing scheme, because I have heard that.
CD’s Eric Geller: Well, I think it is partly true, in that the way you defend yourself from the kinds of attacks this tool can find is the same as the way you defend yourself from an attack that a human discovers, weaponizes and launches. Really, what we’re talking about here is not a new kind of attack, not for the most part anyway. It’s the democratization of being able to do that work.
If you’re using default passwords, if you have a network appliance with out-of-date firmware, a human can exploit that if they know how to do so. But AI is making it easier to do. So, it’s not as if AI has created new forms of attack. It’s made it easier for more kinds of people with less knowledge to launch those attacks.
You still need to be doing the same kinds of things you were doing in the past in terms of verifying your network perimeter, checking to make sure your user accounts are not being abused, employing “identity as the perimeter,” all these buzzwords that we know about from going to conferences over the years. These are still the things you need to do … you need strong passwords just as you always have.
DR’s Becky Bracken: Strong security hygiene, all the things.
CD’s Eric Geller: Absolutely, you need to do the same things you’ve always needed to do. It’s just that now you have to worry about more people trying to exploit your failure to do those things.
DR’s Becky Bracken: That’s a great point. Phil, did you have anything to add to that?
TTSS’s Phil Sweeney: Just to add to that, yeah, there is, I think, a range of opinion and thought here. Because this is somewhat unprecedented, you can’t look at previous examples and say, this is just like that. So, there’s going to be some optimism, some skepticism, some cynicism even. I get that. But what I would add is that if we take Anthropic at its word, it has said that engineers with no formal security training could just work with Mythos Preview, say, find remote code-execution (RCE) vulnerabilities, and then boom, the next morning, there would be a complete working exploit right there waiting for them. So, it certainly lowers the bar for sophistication. If what Anthropic is saying is accurate, this can find vulnerabilities and also link them together and chain them in a way that usually requires a lot of expertise. That certainly changes things and makes cybercrime a pretty low bar for entry.
DR’s Becky Bracken: Another smart point. One of the nuggets out of this was the timing; there was an acknowledgement that the model could do some pretty amazing things, and then a major hack of Chinese data. There was maybe more twittering than hard reporting that the two were linked, but here is a tool of unprecedented danger that falls into the lap of the American government, and the next thing you know, the Chinese are getting their data swiped in a big way. Are you hearing that there’s any connection, Eric?
CD’s Eric Geller: I’d be very interested to learn more about that situation. I don’t think at this point we have any reason to think it’s connected to this tool, in part because of what that Chinese organization is as a target of nation-state espionage. If you think about the range of organizations that want to hack into that entity, it includes the best hackers in the world. So, the idea that somebody could break into that organization, you don’t need the advent of Claude Mythos to explain that. If that had happened a year ago, two years ago, I would not have been surprised, because the people trying to get in are the best in the world. So, I do think the timing is just a matter of coincidence, because you don’t need Claude Mythos to get in there if you are the typical group trying to get in, which is NSA, CIA, British intelligence. And I doubt that they are relying on Claude Mythos to do their attacks.
DR’s Becky Bracken: OK, that is a more reasonable take. And so, Phil, what are some of the questions that you are hoping your reporting will be able to answer about this moving forward?
TTSS’s Phil Sweeney: Right. I think it’ll be interesting to see how the typical security organization, CISOs and their teams, responds to this, how they react. If they’re not among the special invitees for this endeavor, what do they do to prepare, and how do they guess how all this branches out and spreads throughout the security ecosystem? There was something interesting that came out from the Cloud Security Alliance just the other day in response to all this. They wanted to give CISOs and boards and executives something to hang onto and say, OK, this is how you should be thinking about it, even if you’re not directly involved. This is going to change your life in a significant way, perhaps. They had some thoughts about sounding the alarm and being ready; they said, prepare now, ask for more budget, hire more people, do more automation, because there will be a very shortened window between when a vulnerability is disclosed and when it can be exploited, and security teams are going to have to be ready to step in and act quickly.
DR’s Becky Bracken: I heard that at RSA quite a bit, “attack at machine speed.” And to me, the biggest question that teams are going to have to answer is the patching problem. They are going to have to get patching done at machine speed. And I think there are still a lot of questions about how that is going to happen, and it needs to happen yesterday. I think practitioners are pretty well aware of that fact. It’s just a matter of catching up to reality.
Eric, I want to give you the final word here. What questions are you looking to get answered in your reporting?
CD’s Eric Geller: I’m very curious to see if this changes how the government thinks about its role in overseeing the sprawl of AI technology. President Biden tried to get these companies to report to the government when they were doing red-teaming tests and basically provide the results of those tests so that the government could, in real time, understand what’s happening with the security audits that the companies are doing. President Trump got rid of that requirement. He described it as anti-innovation and too onerous and burdensome.
But I’ll be interested to see if the Trump administration rethinks the hands-off approach it’s taken to AI. I don’t think it’s going to completely rethink it, but I think there might be some folks advocating for a little bit more looking over the shoulder of some of these AI companies. Not with regulation, but with just some degree of oversight and input.
DR’s Becky Bracken: Where would that come from? I mean, we’re looking at the Incredible Shrinking CISA. You know, they’re standing up a State Department quasi-cyber wing. There’s NSA. Where is this sort of thought leadership shift going to come from?
CD’s Eric Geller: Well, I don’t think there’s a lot of appetite right now for it from anywhere, but there are some agencies that would be a natural fit to have these kinds of interactions with the AI companies. NIST is the one that comes to mind because it’s not regulatory. So, if you have the companies provide their reports to NIST, they’re not worried that NIST is going to prosecute them or file a civil case. It’s not like the FTC or the Justice Department, where if you tell them about something, they might look at it and say, you know what? You violated the law here; we’re going to take you to court. That’s not going to happen if you go to NIST, because that’s not the culture of the agency. So, I think it would be a good fit if they wanted to bring something like the Biden executive order back into force. But again, I emphasize, I don’t think there’s a lot of appetite anywhere in the government for doing exactly what President Biden had in mind.
DR’s Becky Bracken: Makes sense. Well, gentlemen, I’ve learned a lot today. Thank you so much for helping me understand this topic better and helping our audience understand as well. Eric, where can we find more of your thoughtful, deep reporting on this and other topics?
CD’s Eric Geller: You can just go to cybersecuritydive.com.
DR’s Becky Bracken: And Phil, tell us where we can find you.
TTSS’s Phil Sweeney: I’m at techtarget.com/searchsecurity.
DR’s Becky Bracken: My name is Becky Bracken. I am a senior editor with Dark Reading. You can find this, along with all of our other podcasts and videos and, of course, our deep, thorough reporting, at darkreading.com. Thank you all for listening. This has been another episode of Reporters’ Notebook. We’ll see you next time.

