Disclaimer: SpendNode is for informational purposes only and is not a financial advisor. Some links on this site are affiliate links - we may earn a commission at no extra cost to you. This does not affect our data or rankings.
Crypto News

Vitalik Buterin Wants Your Personal AI to Vote in DAOs for You, Protected by Zero-Knowledge Proofs and Prediction Markets

Updated: Feb 22, 2026 · By SpendNode Editorial

Key Analysis

Vitalik Buterin proposes AI stewards that vote on DAO decisions using your values, shielded by ZK proofs and MPC. Here is how the four-layer system works.


Ethereum cofounder Vitalik Buterin published a proposal on February 21, 2026 that would replace the current model of DAO governance, where most token holders either ignore votes or blindly delegate to whales, with a system of personal AI agents trained on each user's values and protected by zero-knowledge proofs. The framework lays out four distinct mechanisms designed to solve what Buterin calls the fundamental bottleneck in decentralized governance: human attention.

The Attention Problem That Broke DAO Voting

DAOs face thousands of decisions spanning protocol parameters, treasury allocations, grant proposals, dispute resolution, and personnel changes. No individual has the time or domain expertise to evaluate even a fraction of them. The result, as of early 2026, is a governance landscape where participation rates routinely fall below 10 percent and a small cluster of delegates controls outcomes.

Buterin has been vocal about this failure mode. Roughly a month before publishing the AI stewards framework, he publicly criticized DAOs as little more than "a treasury controlled by token holder voting," calling the model inefficient and manipulation-prone. The new proposal is his most detailed technical answer to date.

The core argument is simple: if the bottleneck is human attention, then scale attention with AI. But the implementation is anything but simple. It requires four interlocking layers, each solving a different piece of the governance puzzle.

Four Mechanisms, One Governance Stack

Layer 1: Personal Governance Agents

Each user deploys a personal large language model trained on their writing history, conversation logs, and explicitly stated values. The agent casts votes across every DAO decision the user participates in, learning over time which trade-offs the user favors. When the agent encounters a decision it cannot confidently resolve, it queries the user directly, providing full context so the human only needs to weigh in on genuinely ambiguous questions.
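The escalation rule described above can be sketched in a few lines. This is purely illustrative: the proposal specifies no API, so the `Decision` structure, the confidence score, and the 0.8 threshold are all assumptions.

```python
# Hypothetical sketch of Layer 1's escalation rule: the agent votes
# autonomously only when its self-assessed confidence clears a threshold,
# otherwise it defers the decision to the human user.

from dataclasses import dataclass

@dataclass
class Decision:
    proposal_id: str
    confidence: float   # agent's self-assessed confidence, 0.0-1.0
    vote: str           # "for" or "against"

def resolve(decision: Decision, threshold: float = 0.8) -> str:
    """Cast the agent's vote if confident, else escalate to the user."""
    if decision.confidence >= threshold:
        return decision.vote
    return "escalate_to_user"

print(resolve(Decision("AIP-42", 0.93, "for")))      # confident: votes
print(resolve(Decision("AIP-43", 0.55, "against")))  # ambiguous: escalates
```

The point of the threshold is exactly the trade-off the proposal describes: the human only sees the genuinely ambiguous questions.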

This is not delegation in the traditional sense. Delegation hands your vote to another human whose interests may diverge from yours over time. A personal agent, in theory, maintains alignment with your own stated preferences indefinitely.

Layer 2: Public Conversation Agents

Before individual agents cast votes, public conversation agents aggregate information from many participants. These systems summarize competing viewpoints, convert them into shareable formats without exposing private information, and identify areas of consensus. Buterin compares this to "LLM-enhanced Polis systems," referencing the open-source opinion mapping tool that Taiwan's government has used for public consultations.

The goal is to ensure that every personal agent operates on the same information baseline rather than voting on incomplete or asymmetric data.
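A toy version of the consensus-mapping step can show the idea. Polis-style tools collect agree/disagree reactions to short statements and surface the ones with broad cross-group support; the proposal does not specify an algorithm, so the threshold and data shapes below are assumptions.

```python
# Toy illustration of what a Layer 2 conversation agent might surface:
# given each participant's reactions to shared statements (as in Polis),
# flag the statements with broad agreement across participants.

from collections import defaultdict

votes = {  # participant -> {statement_id: +1 agree / -1 disagree}
    "alice": {"s1": 1, "s2": 1, "s3": -1},
    "bob":   {"s1": 1, "s2": -1, "s3": -1},
    "carol": {"s1": 1, "s2": 1, "s3": 1},
}

def consensus_statements(votes, min_agreement=0.75):
    tally = defaultdict(lambda: [0, 0])  # statement -> [agrees, total]
    for reactions in votes.values():
        for stmt, v in reactions.items():
            tally[stmt][1] += 1
            if v > 0:
                tally[stmt][0] += 1
    return sorted(s for s, (a, n) in tally.items() if a / n >= min_agreement)

print(consensus_statements(votes))  # → ['s1']
```

In the full design, this consensus summary, not the raw individual reactions, is what every personal agent would receive, which is how the shared information baseline is maintained without exposing private data.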

Layer 3: Suggestion Markets

This is where prediction markets enter the stack. AI agents can propose governance actions and stake tokens on whether the DAO will accept them. Proposals the DAO adopts pay out to the agents that surfaced them, while low-quality or spam submissions cost their creators their stake.

The design directly targets the spam problem that has worsened as AI-generated content floods open governance forums. By attaching financial stakes to proposals, the system filters signal from noise without requiring human moderators.
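The economics of this filter can be sketched minimally. The reward rate and settlement rule below are assumptions for illustration; the proposal describes the incentive direction, not the parameters.

```python
# Minimal sketch of Layer 3's incentive structure: agents stake tokens
# on proposals; accepted proposals return a reward proportional to the
# stake, rejected ones forfeit the stake entirely.

def settle(proposals, reward_rate=0.5):
    """Return each agent's net token change after the DAO decides."""
    net = {}
    for agent, stake, accepted in proposals:
        if accepted:
            net[agent] = net.get(agent, 0) + stake * reward_rate
        else:
            net[agent] = net.get(agent, 0) - stake
    return net

outcomes = [
    ("agent_a", 100, True),   # useful proposal: earns the reward
    ("agent_b", 100, False),  # spam: loses the full stake
]
print(settle(outcomes))  # → {'agent_a': 50.0, 'agent_b': -100}
```

Because spamming has a guaranteed negative expected value while useful proposals pay out, the market itself does the moderation work.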

Layer 4: Multi-Party Computation for Sensitive Decisions

Some DAO decisions require access to private information, such as job applications, legal disputes, or confidential financial data. Buterin proposed that users submit their personal LLM into a secure computation environment where the model can evaluate private inputs and output only the final judgment, never the underlying data.

As Buterin put it: "You submit your personal LLM into a black box, the LLM sees private info, it makes a judgement based on that, and it outputs only that judgement." Zero-knowledge proofs verify that the computation was performed correctly without revealing what was computed.
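The information-flow contract of that black box can be expressed as an interface sketch. In a real deployment the box would be an MPC protocol or a TEE with a ZK proof of correct execution; the plain class below only illustrates the contract that callers see the judgment and never the data. All names are hypothetical.

```python
# Sketch of the Layer 4 "black box" contract: the environment takes a
# model and private inputs, and exposes only the final judgment, never
# the underlying data.

class BlackBox:
    def __init__(self, model, private_data):
        self._model = model
        self._data = private_data  # held internally, never exposed

    def judgment(self):
        """Run the model over the private data; return only the verdict."""
        return self._model(self._data)

# A stand-in "personal LLM": a simple rule encoding one user's values.
def my_values(applicant):
    return "approve" if applicant["years_experience"] >= 3 else "reject"

box = BlackBox(my_values, {"name": "confidential", "years_experience": 5})
print(box.judgment())  # → approve
```

The design choice is that the judgment is the only output channel: even the user who submitted the model never learns the confidential inputs, only the decision they produced.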

Why Zero-Knowledge Proofs Are the Backbone

Every layer of this system depends on privacy guarantees. Without them, AI-assisted governance becomes a surveillance tool rather than a participation tool.

ZK proofs allow a voter to prove they hold the required tokens and are eligible to vote without revealing their wallet address, their vote choice, or any other identifying information. This is critical for two reasons. First, it prevents vote buying and coercion. If nobody can verify how you voted, nobody can pay you to vote a certain way. Second, it protects against whale-watching, a common pattern in which smaller holders simply copy the votes of large token holders, defeating the purpose of broad participation.
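A full ZK voting scheme needs a proof system (for example, a SNARK over a Merkle membership proof), but the hiding property it relies on can be illustrated with a simple hash commitment: the voter publishes H(vote || nonce), which reveals nothing about the vote, and cannot later claim a different vote. This is a sketch of the commitment primitive only, not a zero-knowledge proof.

```python
# Hash-commitment sketch: publishing the digest hides the vote (the
# random nonce prevents brute-forcing the small vote space), while the
# digest binds the voter to that one vote.

import hashlib
import secrets

def commit(vote: str) -> tuple[str, bytes]:
    """Commit to a vote; returns (public digest, private nonce)."""
    nonce = secrets.token_bytes(32)
    digest = hashlib.sha256(vote.encode() + nonce).hexdigest()
    return digest, nonce

def open_commitment(digest: str, vote: str, nonce: bytes) -> bool:
    """Check that (vote, nonce) matches the published digest."""
    return hashlib.sha256(vote.encode() + nonce).hexdigest() == digest

digest, nonce = commit("for")
print(open_commitment(digest, "for", nonce))      # True: commitment opens
print(open_commitment(digest, "against", nonce))  # False: cannot equivocate
```

Note the anti-coercion angle: until the voter chooses to open the commitment, no third party can verify how they voted, so there is nothing to buy.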

Buterin also called for trusted execution environments (TEEs) and multi-party computation alongside ZK proofs, creating multiple layers of privacy protection. For anyone holding crypto in self-custody wallets, this aligns with the same ethos: your keys, your data, your vote.

What This Means for Ethereum and DeFi Governance

The proposal arrives at a moment when DAO governance fatigue is measurable. Aave, one of the largest DeFi protocols, recently saw BGD Labs walk away after four years of contributing to governance, citing tensions over fees and brand control. Uniswap Labs just shipped AI agent skills for DeFi automation, but those tools handle swaps and liquidity, not governance participation.

If Buterin's framework were adopted, every Ethereum-based DAO could theoretically see participation rates jump from single digits to near-universal coverage, because participation no longer requires active human engagement on every decision.

The implications extend beyond Ethereum. Any chain running DAO governance, including Solana-based protocols, Cosmos app chains, and Layer 2 ecosystems, could implement similar systems. The framework is chain-agnostic in principle, though the ZK proof infrastructure is most mature on Ethereum today.

The Risks Nobody Is Talking About Yet

AI alignment in governance carries risks that Buterin's proposal acknowledges only partially. If a personal agent is trained on biased or incomplete data, it will cast biased votes at scale, and the user may never notice because the system is designed to minimize human intervention.

There is also the question of agent homogeneity. If most users rely on similar base models (GPT, Claude, Llama), voting patterns could converge in ways that reduce genuine diversity of thought. The illusion of broad participation could mask a new form of centralization, not around whales, but around the training data of a few foundation models.

Finally, prediction markets for governance introduce financial incentives that could attract adversarial actors. A well-funded attacker could manipulate suggestion markets to push proposals that benefit their positions, using the market mechanism itself as the attack surface.

None of these risks are fatal to the concept. But they mean that any real-world implementation will need safeguards that go beyond what the current proposal outlines.

FAQ

Is Buterin's AI steward proposal live on Ethereum today? No. This is a conceptual framework published on February 21, 2026. No Ethereum Improvement Proposal (EIP) has been filed, and no timeline for implementation has been announced.

Would I need to run my own AI model? In theory, yes. Each user would deploy a personal LLM trained on their values. In practice, hosted solutions and lightweight local models would likely emerge, similar to how wallets abstract away private key management today.

How do zero-knowledge proofs protect my vote? ZK proofs let you prove you are eligible to vote and that your vote was counted correctly without revealing your identity, wallet address, or vote choice to anyone, including the DAO itself.

Could an AI agent vote against my interests? Yes, if it is poorly trained or if your stated values do not capture your actual preferences on a specific issue. The framework includes a fallback where the agent escalates uncertain decisions to the human user.

Overview

Vitalik Buterin proposed a four-layer AI governance framework for DAOs on February 21, 2026. The system uses personal AI agents to cast votes based on user values, public conversation agents to equalize information, prediction markets to filter spam proposals, and multi-party computation to handle sensitive decisions privately. Zero-knowledge proofs protect voter identity and prevent coercion throughout. The proposal addresses the attention bottleneck that has left most DAOs with sub-10 percent participation rates and governance power concentrated among a few large delegates. No EIP or implementation timeline has been announced.
