In brief
- Decentralized data layer Walrus aims to provide a “verifiable data foundation for AI workflows” together with the Sui stack.
- The Sui stack comprises the data availability and provenance layer Walrus, the offchain compute environment Nautilus, and the access control layer Seal.
- Several AI teams have already chosen Walrus as their verifiable data platform, with Walrus functioning as “the data layer in a much larger AI stack.”
AI models are getting faster, larger, and more capable. But as their outputs begin to shape decisions in finance, healthcare, enterprise software, and beyond, an important question needs answering: can we actually verify the data and processes behind those outputs?
“Most AI systems rely on data pipelines that nobody outside the organization can independently verify,” says Rebecca Simmonds, Managing Executive of the Walrus Foundation, the organization supporting the development of the decentralized data layer Walrus.
As she explains, there is no standard way to confirm where data came from, whether it was tampered with, or what was authorized for use in the pipeline. That gap doesn’t just create compliance risk; it erodes trust in the outputs AI produces.
“It’s about moving from ‘trust us’ to ‘verify this,’” Simmonds said, “and that shift matters most in financial, legal, and regulated environments where auditability isn’t optional.”
Why centralized logs aren’t sufficient
Many AI deployments today rely on centralized infrastructure and internal audit logs. While these can provide some visibility, they still require trust in the entity operating the system.
External stakeholders have no choice but to trust that the records haven’t been altered. With a decentralized data layer, integrity is anchored cryptographically, so independent parties can verify the data without relying on a single operator.
This is where Walrus positions itself: as the data foundation within a broader architecture known as the Sui Stack. Sui itself is a layer-1 blockchain network that records policy events and receipts onchain, coordinating access and logging verifiable activity across the stack.
“Walrus is the data availability and provenance layer—where each dataset gets a unique ID derived from its contents,” Simmonds explained. “If the data changes by even a single byte, the ID changes. That makes it possible to verify that the data in a pipeline is exactly what it claims to be, hasn’t been altered, and remains available.”
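The content-derived ID idea Simmonds describes can be sketched in a few lines of Python. This is a generic hash-based illustration, not Walrus’s actual blob-ID scheme: the identifier is computed from the bytes themselves, so even a one-byte change yields a different ID.

```python
import hashlib

def content_id(data: bytes) -> str:
    # A content-derived identifier: the ID is a hash of the bytes themselves,
    # so any change to the data produces a different, detectable ID.
    return hashlib.sha256(data).hexdigest()

original = b"training-set-v1: 10,000 labeled records"
tampered = b"training-set-v1: 10,000 labeled records."  # one byte appended

assert content_id(original) == content_id(original)  # deterministic
assert content_id(original) != content_id(tampered)  # any change is visible
```

Because the ID is recomputable by anyone holding the bytes, an independent party can check that a dataset matches its claimed identifier without trusting the operator who stored it.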
Other components of the Sui Stack build on that foundation. Nautilus lets developers run AI workloads in a secure offchain environment and generate proofs that can be checked onchain, while Seal handles access control, letting teams define and enforce who can see or decrypt data, and under what conditions.
“Sui then ties everything together by recording the rules and proofs onchain,” Simmonds said. “That gives developers, auditors, and users a shared record they can independently check.”
“No single layer solves the full AI trust problem,” she added. “But together, they form something important: a verifiable data foundation for AI workflows—data with provable provenance, access you can enforce, computation you can attest to, and an immutable record of how everything was used.”
Several AI teams have already chosen Walrus as their verifiable data platform, Simmonds said, including the open-source AI agent platform elizaOS and the blockchain-native AI intelligence platform Zark Lab.
Autonomous agents making financial decisions on unverifiable data. Think about that for a second.
With Walrus, datasets, models, and content are verifiable by default, so builders can secure AI platforms from potential regulatory non-compliance, inaccurate responses, and erosion…
— Walrus 🦭/acc (@WalrusProtocol) February 18, 2026
Verifiable, not infallible
The phrase “verifiable AI” can sound bold. But Simmonds is careful about what it does, and doesn’t, imply.
“Verifiable AI doesn’t explain how a model reasons or guarantee the truth of its outputs,” she said. But it can “anchor workflows to datasets with provable provenance, integrity, and availability.” Instead of relying on vendor claims, she explained, teams can point to a cryptographic record of what data was available and authorized. When data is stored with content-derived identifiers, every modification produces a new, traceable version, allowing independent parties to check what inputs were used and how they were handled.
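The versioning property, where every modification produces a new traceable version, can be illustrated with a minimal provenance record. This is a hypothetical sketch for illustration only (the `Version` type and `record_version` helper are invented here, not part of Walrus): each version stores the hash of its own bytes plus the ID of the version it was derived from.

```python
import hashlib
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Version:
    content_id: str  # hash of this version's bytes
    parent_id: str   # content_id of the previous version ("" for the first)

def record_version(data: bytes, parent: Optional[Version] = None) -> Version:
    # Each edit yields a fresh content hash linked back to its predecessor,
    # forming an auditable lineage.
    cid = hashlib.sha256(data).hexdigest()
    return Version(cid, parent.content_id if parent else "")

v1 = record_version(b"dataset rev A")
v2 = record_version(b"dataset rev B", parent=v1)

# An auditor holding the bytes can independently confirm lineage and integrity:
assert v2.parent_id == v1.content_id
assert hashlib.sha256(b"dataset rev B").hexdigest() == v2.content_id
```

The point of the chain is that no party can silently rewrite history: altering any version’s bytes changes its hash, which breaks the link every later version records.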
This distinction matters. Verifiability isn’t about promising perfect outcomes. It’s about making the lifecycle of data (how it was stored, accessed, and modified) transparent and auditable. And as AI systems move into regulated or high-stakes environments, that transparency becomes increasingly important.
Why does @WalrusProtocol exist.
Because businesses that need programmable storage with verifiable data integrity and guaranteed availability had nowhere to go.
We built it and they keep showing up. Simple as that!! pic.twitter.com/Ygxe8CFenh
— rebecca simmonds 🦭/acc (@RJ_Simmonds) February 12, 2026
“Finance is a pressing use case,” Simmonds said, where “small data errors” can turn into real losses because of opaque data pipelines. “Being able to prove data provenance and integrity across those pipelines is a meaningful step toward the kind of trust these systems demand,” she said, adding that the need “isn’t limited to finance. Any domain where decisions have consequences—healthcare, legal—benefits from infrastructure that can show what data was available and authorized.”
A practical starting point
For teams interested in experimenting with verifiable infrastructure, Simmonds suggests starting with the data layer as a “first step” rather than attempting a wholesale overhaul.
“Many AI deployments rely on centralized storage that is difficult for external stakeholders to independently audit,” she said. “By moving critical datasets onto content-addressed storage like Walrus, organizations can establish verifiable data provenance and availability—which is the foundation everything else builds on.”
In the coming year, one focus for Walrus is expanding the partners and builders on the platform. “Some of the most exciting stuff is what we’re seeing developers build—from decentralized AI agent memory systems to new tools for prototyping and publishing on verifiable infrastructure,” she said. “In some ways, the community is leading the charge, organically.”
“We see Walrus as the data layer in a much larger AI stack,” Simmonds added. “We’re not trying to be the whole answer—we’re building the verifiable foundation that the rest of the stack depends on. When that layer is right, new kinds of AI workflows become possible.”