Sam Altman doesn’t find Anthropic’s Claude Mythos all that frightening. Instead, the OpenAI CEO believes the alarm around it is a marketing tactic meant to keep powerful AI systems locked up under tight control.
Speaking on the Core Memory podcast, hosted by tech journalist Ashlee Vance, Altman argued that Anthropic is engaging in what he calls “fear-based marketing”: making people scared enough that they hand over control to a select few. He acknowledged that some safety concerns are real, but said Anthropic’s pitch feels like a sales spiel. He put it this way: it’s like someone claiming they built a bomb, then offering to sell you a bomb shelter for $100 million. The catch is that only a chosen few get to buy the shelter.
What Is Claude Mythos?
Anthropic’s Claude Mythos model has stirred up a lot of talk since it was revealed last month. Researchers, governments, and cybersecurity experts are paying close attention. Early tests showed it can find software bugs on its own and run complex cyber attacks. But the model isn’t available to just anyone. It’s part of a restricted program called Project Glasswing, which gives access only to certain organizations like Amazon, Apple, and Microsoft.
Anthropic frames Mythos as both a defensive tool, useful for spotting critical flaws quickly, and a potential weapon if it falls into the wrong hands. The company says that by controlling access, it is helping defenders catch up before the technology spreads. During testing, the model found hundreds of vulnerabilities in Mozilla’s Firefox browser. It also handled multi-stage attack simulations.
The Debate Over Control
Altman’s criticism points to a wider split in the AI industry. Some firms, like Anthropic, favor tightly controlled releases. Others, like OpenAI, argue for broader distribution so more people can study and improve the technology. Altman admitted it’s not always easy to balance new capabilities with the belief that AI should be widely accessible.
Security experts have warned that the same techniques Mythos uses to find vulnerabilities could be used to hack systems. The UK’s AI Security Institute confirmed the model can complete sophisticated cyber operations. But last week, a group of researchers claimed to have reproduced Mythos’ results using publicly available models, raising questions about how unique it really is.
Despite concerns from some parts of the U.S. government—including warnings about its use in warfare—the National Security Agency has started testing a preview version on classified networks. On the prediction market Myriad, users give a 49% chance that Claude Mythos will be released to the public by June 30.
Altman seems to think we’ll hear a lot more talk about dangerous models as time goes on. But he urges caution. “There will be a lot more rhetoric about models that are too dangerous to release,” he said. “There will also be very dangerous models that will have to be released in different ways.” He added that Mythos is probably great for cybersecurity, but OpenAI has its own plan for safely releasing similar capabilities.
He also pushed back on rumors that OpenAI is cutting back on expensive data centers. “People really want to write the story of pulling back,” Altman said. “But very soon it will be again like ‘OpenAI is so reckless. How can they be spending this crazy amount?’”