Clément Hervé

Claude Mythos and the hype around LLMs

Anthropic’s latest model announcement made me laugh: "A system so extraordinarily capable, yet too powerful to be broadly released". The accompanying paper, dense, lengthy, and filled with ambitious claims, positions the model as a major step forward in AI capability, particularly in areas like cybersecurity. There is, however, only one question worth asking: where does genuine scientific disclosure end and strategic marketing begin?

The message is paradoxical. On one hand, the company emphasizes the model’s advanced abilities: its potential to identify vulnerabilities, accelerate research, and outperform previous systems. On the other, it stresses restricted access, citing safety concerns and the need for controlled deployment. This dual narrative, “it’s powerful, but we can’t show you” (in other words, “trust me bro”), is hilarious. Without broad, verifiable benchmarks or open evaluation, claims of unprecedented capability are weightless.

That said, the underlying concern isn’t unfounded. As with earlier generations of large language models, improved reasoning and technical fluency do have real implications for cybersecurity. A sufficiently skilled user could use such tools to accelerate vulnerability discovery or automate parts of exploit development. Yet this dynamic is not new: each major leap in model capability has carried similar dual-use considerations, and the industry has been grappling with them for years.

This comes at a time when the economics of advanced AI are under increasing pressure. High-end subscriptions offer remarkable value to users, but they also raise questions about long-term sustainability. As usage grows and costs scale, companies are forced to balance accessibility with profitability, leading to tighter limits, revised quotas, or more segmented offerings.

Finally, this paper comes from the very same company that leaked its CLI’s source code the week before, due to a misconfiguration in its bundler.

At some point, we ought to ask ourselves: when did we stop using our brains and start letting the AI do all the work?