In brief
- OpenMythos is a ground-up recreation of the Claude Mythos framework, pieced together solely from publicly available research and informed speculation.
- Claude Mythos is Anthropic’s most formidable AI, sealed away under Project Glasswing after independently uncovering 271 Firefox flaws and completing a simulated 32-step penetration attack.
- The codebase serves as conceptual scaffolding—architecture without pre-trained weights. This mirrors a parallel initiative by Vidoc Security that recreated Mythos’s vulnerability discoveries using commodity models.
If Anthropic refuses to pull back the curtain on its riskiest AI, someone on GitHub will take their best shot at it.
Developer Kye Gomez has released OpenMythos, an open-source blueprint reconstructing what he believes Claude Mythos looks like internally. Within weeks of launch, it surged past 10,000 GitHub stars and comes alongside a detailed “readme” packed with formulas, references, and a courteous disclaimer clarifying it has zero affiliation with Anthropic.
It’s guesswork. But it’s methodical guesswork—expressed in actual code.
Here’s a quick recap of the Mythos story: Mythos entered the public eye in late March, after Anthropic inadvertently published draft documentation labeling it the company’s most advanced model yet, a class above Opus. Its follow-up, Mythos Preview, turned out to be so extraordinarily capable in cybersecurity that a public release was off the table.
According to Anthropic, Mythos detected 271 security flaws in Firefox during Mozilla testing. It became the first AI system to successfully complete a simulated 32-step corporate network attack. Anthropic confined it within Project Glasswing, a carefully screened group of roughly 40 partners spanning Microsoft, Apple, Amazon, and the NSA.
The general public never gets near it. So Gomez set out to reverse-engineer its inner workings.
OpenMythos’s core hypothesis is that Mythos is a Recurrent-Depth Transformer, sometimes called a looped transformer. Conventional models stack many distinct layers on top of each other; looped models instead reuse a smaller block of layers, running the hidden state through that same block several times within each forward pass.
Put simply, the same weights are applied again and again, letting the model reason more deeply in a continuous latent space before it produces any output.
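To make the idea concrete, here is a minimal PyTorch sketch of a recurrent-depth block. The class name, layer counts, and dimensions are illustrative assumptions for this article, not details drawn from OpenMythos or Anthropic.

```python
import torch
import torch.nn as nn

class LoopedBlock(nn.Module):
    """Toy recurrent-depth ("looped") transformer core.

    A conventional decoder stacks many distinct layers; here a small shared
    block is applied `num_loops` times to the same hidden state, so effective
    depth comes from compute rather than from extra parameters. Causal masking
    and the output head are omitted for brevity.
    """

    def __init__(self, d_model=512, n_heads=8, n_shared_layers=4, num_loops=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=4 * d_model,
            batch_first=True, norm_first=True,
        )
        # The small block whose weights are reused on every loop iteration.
        self.shared_block = nn.TransformerEncoder(layer, num_layers=n_shared_layers)
        self.num_loops = num_loops
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):
        # x: (batch, seq_len, d_model) token embeddings.
        h = x
        for _ in range(self.num_loops):
            # Same weights on every pass: 4 layers looped 8 times gives
            # roughly 32 layers of computation with only 4 layers of parameters.
            h = self.shared_block(h)
        return self.norm(h)

model = LoopedBlock()
print(model(torch.randn(2, 16, 512)).shape)  # torch.Size([2, 16, 512])
```

The loop count is a knob rather than a fixed property of the network, which is what gives these models their "think longer in latent space" character.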
The project contends this structure would account for Mythos’s two most unusual attributes: it reasons through novel challenges that stymie all other models, yet its capacity for rote memorization is inconsistent. That’s precisely the kind of behavioral signature looping produces—prioritizing synthesis over recall.
OpenMythos references Parcae, an April 2026 study co-authored by researchers from UC San Diego and Together AI, which solved the persistent instability issue that has plagued looped models. A 770-million-parameter Parcae model achieves quality comparable to a fixed-depth transformer with 1.3 billion parameters, along with clear scaling laws for how many loops to execute. The project also integrates DeepSeek’s Multi-Latent Attention for memory compression and a Mixture-of-Experts configuration to manage wide-ranging knowledge domains.
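The Mixture-of-Experts piece is standard as well. Below is a toy top-k-routed MoE feed-forward layer in that spirit; the expert count, routing depth, and names (TopKMoE, router) are invented for illustration and are not taken from the OpenMythos readme.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Toy Mixture-of-Experts feed-forward layer with top-k routing."""

    def __init__(self, d_model=512, d_ff=2048, n_experts=8, k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])
        self.k = k

    def forward(self, x):
        # x: (batch, seq, d_model) -> flatten tokens for routing.
        tokens = x.reshape(-1, x.shape[-1])
        gate = self.router(tokens)                # (n_tokens, n_experts)
        weights, idx = gate.topk(self.k, dim=-1)  # choose k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(tokens)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e          # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(tokens[mask])
        return out.reshape_as(x)
```

Each token activates only a couple of experts per forward pass, which is how MoE models cover wide-ranging knowledge domains without paying the full parameter cost on every token.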
What it lacks is trained weights—making it, effectively, a blueprint without a functioning engine.
OpenMythos is purely theoretical. The code sketches model configurations ranging from 1 billion to 1 trillion parameters, but you'd have to train them from scratch yourself; the accompanying readme links to a training script for a 3-billion-parameter model on FineWeb-Edu, targeting a Chinchilla-adjusted budget of 30 billion tokens. That's the sort of computing expense that runs into six figures on H100 GPUs. No one has attempted it yet.
So why should anyone care?
Because it’s the second recent attempt to chip away at the secrecy surrounding Mythos. The first was a study from Vidoc Security that recreated several of Mythos’s most concerning vulnerability findings by orchestrating GPT-5.4 and Claude Opus 4.6 inside an open-source agent framework. No Glasswing credentials needed, and each scan cost under $30. Same takeaway from a different direction: The competitive moat Mythos enjoys may be narrower than the marketing implies.
OpenMythos and the Vidoc replication serve separate purposes. Vidoc recreated Mythos’s results—the specific vulnerabilities discovered—using off-the-shelf models. OpenMythos is attempting to recreate the architecture itself—the actual machinery generating those results. One argument says you don’t need Mythos to find the bugs Mythos found. The other suggests that, in time, you may be able to construct something resembling Mythos from scratch.
Anthropic almost certainly doesn’t endorse Gomez’s architectural theories, and several design decisions in OpenMythos are deliberately hedged—the readme is written with enough ambiguity that readers understand this is one plausible approach. It regularly employs words like “likely,” “suspected,” and “almost certainly.” The real Mythos may not be a looped transformer at all—or it could be one with details Gomez hasn’t yet unraveled.
What OpenMythos ultimately proves is that the open research landscape already holds most of the necessary components. Looped transformers, Mixture of Experts, Multi-Latent Attention, Adaptive Computation Time, Parcae’s stability breakthrough—nothing here is proprietary. The project is, above all, a comprehensive catalog of everything publicly known about engineering a Mythos-caliber model.
It carries the MIT license and has already been forked 2,700 times. The training script sits there, waiting for someone with access to a GPU cluster and the ambition to put it to the test.