Redistribution Is Security Architecture for the Intelligence Age
The AI safety conversation is missing a critical conclusion. Redistribution is not only a social policy preference. It is security architecture for the intelligence age.
Updated April 2026
By David Casey
The resource being concentrated is not just wealth. It is the power to decide. Redistribution here means distributing governance authority, cognitive sovereignty, and infrastructure control alongside economic surplus.
AI is generating economic surplus at a scale that has no historical precedent. The structures we have for distributing that value were designed for an industrial economy. Absent intervention, the surplus concentrates. And here is what separates this from every previous concentration: the same technology generating trillions in surplus simultaneously arms the people excluded from it. The attack surface grows with every advance in capability. You cannot secure a perimeter when the number of people with reason to breach it scales with the technology itself. That is a security assessment. It is also, inevitably, a claim about how power should be distributed. This essay argues both, and is honest about where the structural analysis ends and the political judgment begins.
The architecture this essay proposes has four layers. Commons governance, scaled through intelligent systems using Elinor Ostrom’s principles and David Dao’s automation taxonomy. Graduated agent autonomy, where AI systems earn decision-making scope through demonstrated alignment with the communities they serve. Decentralized rails that cannot be captured by the entities they redistribute from. And sovereign infrastructure at the model layer, so no population’s cognitive capacity depends on systems they don’t control. Each layer is already being built. None of them work in isolation. This essay maps how they integrate.
The thesis draws on five researchers who have each mapped a piece of the problem. Luke Drago and Rudolf Laine diagnosed AI surplus concentration as a structural analogue to the resource curse. Dario Amodei warned, from inside Anthropic, that the dangers extend to biological weapons and authoritarian capture. Vitalik Buterin identified the governance constraints on autonomous agents. David Dao built a framework for scaling commons governance with intelligent systems. Trent McConaghy articulated the end state where autonomous infrastructure simply provides, the way nature does. What hasn’t been mapped is how these elements integrate into a single living architecture. That’s what we began building at Frontier Tower in San Francisco in March 2026, and this essay is the thesis behind the attempt.
The Intelligence Curse
Luke Drago and Rudolf Laine’s Intelligence Curse framework names the pattern. The parallel is to the resource curse in development economics, where countries that discover massive natural wealth often end up with worse outcomes because the wealth concentrates rather than distributes. AI is the new resource. The curse is the same.

Source: Andrew Warner, “Natural Resource Booms in the Modern Era: Is the Curse Still Alive?” IMF Working Paper WP/15/237, November 2015, Figure 5. During resource booms, total GDP surges while non-resource GDP flatlines - the domestic economy never captures the wealth. imf.org
I run Funding the Commons. We work with UNDP, Protocol Labs, The Tor Project, and communities around the world designing mechanisms for distributing collective resources. The Intelligence Curse crystallizes something our community has been circling for years. Drago and Laine’s framework (avert catastrophe technically, diffuse AI to keep humans relevant, democratize institutions) is the right skeleton. This essay adds the muscle.
Jacques de Gheyn II, Vanitas Still Life, 1603. Oil on wood. The Metropolitan Museum of Art, New York. metmuseum.org
The attack surface scales with the technology itself
This is what separates the Intelligence Curse from every previous resource curse, and it should keep security professionals up at night.
In a petrostate, the displaced population has limited tools. They can protest. They can organize politically. But they don’t have the oil (or the advanced security apparatus it finances). The resource that’s concentrating is not in their hands.
With AI, the dynamic inverts. The same capabilities generating trillions in surplus are simultaneously becoming available to the billions of people cut out of that value. Not in a decade. Now. Tools for automated cyberattacks, synthetic media, and coordinated disruption improve quarterly and diffuse faster than any previous technology.
The risk extends beyond digital disruption. Dario Amodei, CEO of Anthropic and one of the people with the clearest view of frontier model capabilities, warned in January 2026 that sufficiently powerful AI could enable individuals or small groups to synthesize biological agents. This isn’t speculative alarmism from the outside. It’s the assessment of someone building the systems in question. When the head of a frontier lab tells you the attack surface extends to biological weapons, the security case for redistribution stops being abstract. You can’t build a perimeter against that. The only viable security architecture is one where fewer people have reason to breach it.
In April 2026, the thesis stopped being theoretical. Anthropic’s new frontier model, Claude Mythos, autonomously discovered thousands of zero-day vulnerabilities across every major operating system and web browser, including flaws that had escaped detection for decades. The capabilities emerged as a byproduct of general improvements in reasoning, not from explicit training. Anthropic chose not to release the model publicly. Instead, it launched Project Glasswing: early access restricted to a coalition of the world’s largest technology and financial companies, plus organizations maintaining critical open-source infrastructure. The most powerful tool yet built is offensive and defensive simultaneously, because finding a vulnerability and exploiting it are the same capability. It now sits exclusively with the organizations that are already the most capitalized and best defended. Everyone else inherits the expanded attack surface without the tools to address it. This is the Intelligence Curse in a single product decision.
Giovanni Battista Piranesi, The Gothic Arch (Plate 14), from Le Carceri d’Invenzione, 1750. Etching. Wikimedia Commons
Concentration doesn’t just produce inequality. It produces an attack surface that scales with the technology itself. The more capable AI becomes, the more surplus it generates, the more it concentrates, and the more powerful the tools available to those with reason to act on their exclusion.
You can’t secure a perimeter when the number of people with reason to breach it grows with every advance in the technology they’ve been excluded from. That’s the structural argument, not a moral one.
Which leads to a conclusion almost nobody in the AI safety conversation is stating plainly: redistribution is not only a social policy preference. It is security architecture. Not because it eliminates all threats. Ideological extremism, state-sponsored cyberwarfare, and individual pathology will persist regardless of how wealth distributes. Redistribution addresses the threat category that scales: the billions of economically displaced people gaining access to increasingly powerful tools. You cannot neutralize every motivated attacker. You can reduce the structural conditions that mass-produce them.
A necessary distinction. This is not an argument about existential risk as the AI safety community uses that term. A single misaligned system or engineered pathogen is a non-recoverable failure, and redistribution does not prevent it. Tight capability control might. The threat class this essay addresses is systemic instability generated by mass exclusion, compounded by capability diffusion. That threat is recoverable in theory. In practice, it destroys economic value, erodes institutional legitimacy, and degrades the political and market conditions on which safety research, AI development, and productive enterprise all depend. The resource curse countries didn’t just suffer inequality. They suffered economic collapse. The two threat classes are not in competition, but the second has been systematically underweighted because it looks like politics rather than engineering. It is both.
And for the threats that persist no matter what, the response isn’t a fortress. It’s a distributed network. Community-governed systems that can take a hit and keep running.
Perimeters protect until they’re breached. Immune systems adapt.
Immune systems can also attack themselves. A distributed network of locally governed agents could amplify a local grievance into a coordinated assault, or a community could deliberately align its agent with objectives hostile to the broader network. This is why graduated autonomy includes circuit breakers at the network level. Bounded treasuries so no single failure is catastrophic. Decision logs that are transparent by default. And the ability for neighboring nodes to quarantine a rogue agent before it spreads. The immune system needs the ability to identify and isolate its own malfunctioning cells.
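To make the containment logic concrete, here is a minimal sketch in Python of bounded treasuries, default-transparent decision logs, and neighbor-quorum quarantine. Every name and threshold is a placeholder, not a deployed protocol:

```python
# Illustrative sketch only: hypothetical names and thresholds, not a
# deployed protocol. Shows bounded treasuries, transparent decision logs,
# and neighbor-quorum quarantine of a rogue agent.
from dataclasses import dataclass, field

@dataclass
class AgentNode:
    agent_id: str
    treasury_cap: float                 # bounded treasury: max value at risk
    spent: float = 0.0
    quarantined: bool = False
    decision_log: list = field(default_factory=list)  # transparent by default

    def spend(self, amount: float, rationale: str) -> bool:
        """Allocate funds only while inside the treasury bound."""
        if self.quarantined:
            return False
        if self.spent + amount > self.treasury_cap:
            # Circuit breaker: refuse and log rather than exceed the bound.
            self.decision_log.append(("blocked", amount, rationale))
            return False
        self.spent += amount
        self.decision_log.append(("spent", amount, rationale))
        return True

def quarantine_vote(node: AgentNode, votes: dict, quorum: float = 0.66) -> None:
    """Neighboring nodes isolate a malfunctioning agent before it spreads."""
    if not votes:
        return
    share = sum(1 for v in votes.values() if v) / len(votes)
    if share >= quorum:
        node.quarantined = True
        node.decision_log.append(("quarantined", share, "neighbor quorum"))
```

The design choice worth noting: refusal and isolation are the defaults. An agent that cannot stay within its bound simply stops, and its neighbors can stop it for it.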
Safety and redistribution are a single loop
The AI safety community is building the trust infrastructure that makes broad deployment possible: evaluation frameworks, governance standards, safety benchmarks. Without it, enterprise buyers don’t deploy, regulators don’t approve, and the surplus never materializes.
But safety infrastructure depends on redistribution for its own viability. If surplus concentrates and creates instability - regulatory backlash, political blowback, an exponentially growing attack surface - that instability constrains the very market that safety companies need to exist.
Break any link and the architecture fails. That’s the thesis, anyway. The AI safety researcher and the redistribution mechanism designer are not working on adjacent problems. They’re working on two halves of the same system.
Theodoros Pelecanos, Ouroboros, 1478. Drawing from a Byzantine Greek alchemical manuscript (Codex Parisinus graecus 2327). Wikimedia Commons
But the WMD-level capability argument cuts the other way: shouldn’t that demand centralized control, full stop? If frontier models can synthesize bioweapons or discover zero-day vulnerabilities at scale, the responsible response is restriction, not redistribution. Keep the models locked, keep the capabilities concentrated, and accept the authoritarian implications as the lesser evil.

That position is coherent. It is also structurally doomed. Mathematical models cannot be permanently embargoed. The Transformer architecture was published in 2017. Within six years, every major government and dozens of private labs had reproduced and extended it. The capabilities Anthropic discovered in Mythos will be independently reproduced by other labs, by state actors, by open-source communities operating outside any single jurisdiction. A centralized perimeter buys time. It does not buy safety.

The counterargument deserves its full weight. If alignment is solvable on a short enough timeline, and the perimeter holds long enough, then tight control is the correct strategy even at the cost of concentration and the tensions that follow. This essay argues that bet is worse than it appears: perimeters have historically leaked faster than the technologies they contain, the instability produced by concentration actively undermines the conditions for the research and economic productivity the perimeter is meant to protect, and the track record of centralized resource control is uniformly catastrophic. That is a judgment on a genuine tradeoff, not a proof.

The question is what you build during the time the perimeter holds. If the answer is ‘nothing,’ you’ve delayed the problem. If the answer is distributed, community-aligned governance infrastructure that makes the capabilities safer to deploy broadly, you’ve used the window to build the immune system the world will need when the perimeter inevitably fails.
And there’s an engineering corollary.
If redistribution prevents the attack surface from scaling, then the redistribution rails cannot be controlled by the same entities concentrating the surplus. They have structural incentive to block or slow distribution.
The danger isn’t only corporate. Amodei describes a scenario that should be familiar to anyone who has studied the resource curse (or read science fiction): a state captures AI surplus and uses it to build an intelligence-powered authoritarian apparatus. Surveillance scaled by AI. Dissent predicted and preempted. Economic participation conditioned on compliance. And a capability no petrostate ever possessed: AI-powered persuasion that can shape what individuals believe, want, and perceive as possible, not at the population level but at the level of each person.

Illia Polosukhin, co-author of the Transformer paper that made generative AI possible, describes this directly: generative AI creates a universal and scalable personal method of enabling control and manipulation. Your information environment can ensure you form a specific opinion. If that vector of manipulation can be extracted or bought, it will be, and it becomes a tool for control. If governments can access it, they will use it to maintain power. The depth of potential manipulation, Polosukhin warns, goes to the level of each and every human.

NATO is actively developing a doctrine that treats the cognitive domain as a sixth domain of warfare, alongside land, sea, air, space, and cyber. The cognitive warfare threat is not limited to authoritarian states. The US forced the sale of TikTok on the grounds that a foreign power’s control over 170 million Americans’ information environment constituted a national security threat, the clearest acknowledgment by a Western democracy that foreign digital infrastructure is a vector for cognitive influence at population scale.

The authoritarian state armed with AI doesn’t just surveil, restrict economic access, and punish dissent. It can engineer consensus. Without cognitive sovereignty, political and economic sovereignty are hollow. This is petrostate dynamics with a cognitive upgrade. The resource curse didn’t just produce inequality in oil-rich nations - it produced authoritarian governments funded by the resource itself. The Intelligence Curse carries the same structural risk, except the tool for maintaining control is orders of magnitude more capable than anything oil revenue could buy. Redistribution infrastructure that runs through state-controlled channels is redistribution infrastructure that can be weaponized.
Boris Iofan, Palace of the Soviets, architectural rendering, 1934. The planned tallest structure on Earth: a 415-meter monument to centralized power crowned by a 100-meter Lenin. They demolished the Cathedral of Christ the Saviour to build it, yet it was never completed. The foundation pit became a swimming pool.
None of this theoretically requires crypto-economic infrastructure. Sovereign states already possess the ultimate redistribution mechanism: taxation and the welfare state. Tax the AI companies. Fund public services. The machinery exists.

This argument works in countries where the state apparatus functions. It describes perhaps 40 of the world’s 195 nations. For the other 155, the state is the problem the infrastructure needs to survive. In the countries where this infrastructure matters most, the tax authority is captured. Courts don’t work. Half the population has no bank account. And the development aid that used to compensate for all of that is being pulled.

Even in functioning states, the velocity problem is real. I learned this firsthand running a festival in Guatemala. Every December, our ticket revenue would be stuck behind US banking holidays while we needed cash on the ground to pay security. Bitcoin cut the transfer from ten days to thirty minutes. That was 2013. The velocity gap between the systems we have and the economy we’re building has only gotten wider.

Traditional tax and welfare systems operate on annual cycles: assess, collect, appropriate, disburse. The agent economy operates in milliseconds. Millions of autonomous micro-transactions per day, crossing jurisdictions, settling in stablecoins, executing smart contracts. No existing tax apparatus can assess and capture value at that speed. The infrastructure that governs the agent economy must be natively digital and programmable, or it will simply be bypassed.
Centralized public digital infrastructure can work at the national level. India’s UPI and Brazil’s Pix process billions of transactions and have achieved near-total domestic market capture. The problem arises acutely at the boundaries: cross-border coordination between sovereign systems requires infrastructure that no single state controls, that cannot be weaponized by the provider’s government, and that remains operational when geopolitical alliances shift. India is actively exporting UPI to Singapore, the UAE, and France. Brazil’s Pix is integrating into BRICS Pay. But state-backed cross-border systems carry the geopolitical baggage of their sponsors. BRICS Pay is not neutral infrastructure. It is a geopolitical bloc’s financial weapon, the mirror image of SWIFT.
Only credibly neutral, decentralized rails can route value globally without functioning as a vector for statecraft. This is why programmable smart contracts on distributed ledgers aren’t a crypto enthusiasm. They’re an engineering requirement for the coordination layer between sovereign systems. Sovereignty doesn’t mean autarky. It means the capacity to participate in global systems without depending on any single actor for permission.
The events of early 2026 made this concrete. Back in 2022, the US and EU froze $300 billion in Russian central bank reserves and disconnected Russian banks from SWIFT, demonstrating that financial infrastructure the world treated as neutral was, in fact, a weapon. The response was not a return to multilateralism. It was a scramble for alternatives.

By April 2026, Iran’s Revolutionary Guard was collecting tolls on oil tankers transiting the Strait of Hormuz in Tether, Bitcoin, and Chinese yuan, codified into law by Iran’s parliament.

But the escape from dollar hegemony is not what it appears. Shanaka Anslem Perera’s analysis of what he calls the Maduro Paradox reveals a deeper mechanism. Under comprehensive US sanctions, Venezuela didn’t pivot away from the dollar. Eighty percent of its oil sales shifted to USDT, a dollar-denominated stablecoin backed by US Treasury bills and freezable by a single private company. Eight days after Maduro’s capture in January 2026, Tether froze $182 million in wallets believed to be linked to Venezuelan oil transactions, no court order required. That company has $133 billion in reserves managed by Cantor Fitzgerald, the firm previously led by sitting US Commerce Secretary Howard Lutnick, whose children’s trust acquired control of Cantor through a loan from Tether itself, secured in part by a $600 million convertible bond entitling Cantor to a 5% stake in Tether. The mechanism that was supposed to enable sanctions evasion produced an outcome indistinguishable from sanctions enforcement. Whether by design or structural inevitability, dollar dependence deepened, surveillance capability expanded, and enforcement occurred faster than SWIFT ever managed.

The US didn’t drift into this architecture. It legislated it: banning a Federal Reserve CBDC while codifying private digital money through the GENIUS Act. In 2025, World Liberty Financial, backed by the sitting US president’s family, launched USD1 - a dollar stablecoin whose issuer’s financial interests are now embedded in the infrastructure his administration is legislating into dominance. Citi projects the stablecoin market reaching $3.7 trillion by 2030, and Treasury Secretary Bessent endorses that figure. The European Parliament calls it “cryptomercantilism.” The IMF’s Hélène Rey describes it as “privatization of seigniorage.”

The infrastructure that governs how value flows is being captured in real time. The alternative is not another state-controlled system. It is open-source, auditable, capture-resistant infrastructure that no single state or corporation controls. To borrow the Long Now Foundation’s framing: the question isn’t how we distribute surplus in 2027. It’s whether the infrastructure we build now will still be operating in 2127. Centralized rails are subject to the political economy of the moment.
The only redistribution rails that work are the ones that can’t be turned off by the people who benefit from concentration.
Automated. Censorship-resistant. Tied directly to surplus-generating models through on-chain logic that anyone can audit.
But knowing that the rails need to be decentralized doesn’t tell us how to govern what flows through them. This is where the conversation usually stalls - and where David Dao’s work becomes essential.
Scaling commons governance: from Ostrom to intelligent systems
Nobel laureate Elinor Ostrom showed that local communities can sustainably manage shared resources through self-governance. Her eight design principles - clear boundaries, local rules, collective choice, monitoring, graduated sanctions, conflict resolution, external recognition, and nested systems - have worked for centuries in communities from Swiss alpine pastures to Japanese forests to Maine lobster fisheries.
Pieter Bruegel the Elder, The Harvesters, 1565. Oil on wood. The Metropolitan Museum of Art, New York. metmuseum.org
The problem is that Ostrom’s principles break down at scale. As Dao argues in his work on Regenerative Intelligence, the very features that make self-governance effective locally - direct relationships, shared context, face-to-face interaction - become barriers past Dunbar’s number of roughly 150 stable social relationships. The number of possible relationships in a group grows combinatorially: 150 people means 11,175 possible relationships to track. Scale to 1,500 and you’re at over a million.
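The arithmetic behind those figures is the pairwise-combination count, shown here as a worked equation:

```latex
\binom{n}{2} = \frac{n(n-1)}{2}, \qquad
\binom{150}{2} = \frac{150 \cdot 149}{2} = 11{,}175, \qquad
\binom{1500}{2} = \frac{1500 \cdot 1499}{2} = 1{,}124{,}250
```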
Dao’s crucial insight is that not all governance functions scale the same way. Some of Ostrom’s principles can be handled through intelligent automation: monitoring (sensors and pattern detection can watch over resources with minimal human intervention), clear boundaries (digital tools can verify membership and control access), and nested systems (coordination between governance levels can be automated through smart contracts). Others require human judgment and can only be augmented, not replaced: conflict resolution, local rule-making, and capacity building all depend on relationships, cultural context, and wisdom that machines can’t replicate.
When I say “deploy quadratic funding and conviction voting to govern community-owned AI infrastructure,” Dao’s framework specifies what that actually means: automate monitoring and boundary-enforcement, augment collective choice with AI-assisted preference aggregation, keep conflict resolution and rule-making in human hands.
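As a minimal illustration of that division of labor, here is a sketch of Dao’s automate/augment/human split applied to Ostrom’s principles. The mapping, enum, and function names are illustrative, not GainForest’s actual code:

```python
# A sketch of Dao's automation taxonomy over Ostrom's governance functions.
# The mapping and names are illustrative, not GainForest's implementation.
from enum import Enum

class Mode(Enum):
    AUTOMATE = "automate"    # machines can own this function
    AUGMENT = "augment"      # AI assists, humans decide
    HUMAN = "human"          # judgment stays with people

GOVERNANCE_TAXONOMY = {
    "monitoring": Mode.AUTOMATE,            # sensors, pattern detection
    "boundary_enforcement": Mode.AUTOMATE,  # membership and access checks
    "nested_coordination": Mode.AUTOMATE,   # smart contracts between levels
    "collective_choice": Mode.AUGMENT,      # AI-assisted preference aggregation
    "conflict_resolution": Mode.HUMAN,
    "rule_making": Mode.HUMAN,
    "capacity_building": Mode.HUMAN,
}

def route(function: str) -> Mode:
    """Anything not explicitly automatable defaults to human judgment."""
    return GOVERNANCE_TAXONOMY.get(function, Mode.HUMAN)
```

The default matters: an unclassified governance function falls to humans, not machines.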
Dao calls this approach Regenerative Intelligence: the design of sociotechnical intelligent systems that preserve or enhance social capital, trust, and agency while scaling governance capabilities. This taxonomy is early but it’s the best map we have. His non-profit GainForest has deployed this framework across 30 communities globally, from the Amazon rainforest to the Southern Philippines. Their Conservation Data Income mechanism pays communities based on the quality and quantity of environmental data they collect, creating a feedback loop where participation in governance generates income, which funds better tools, which improves governance capacity. Their AI assistant Taina was co-designed with Indigenous communities through collaborative constitutional AI workshops - community members directly craft the system prompts that define the agent’s values and behavior, and local instances run on community-owned hardware with full data sovereignty.
This isn’t a theoretical framework. It’s running. And it won an XPRIZE.

Source: David Dao, “Governing the Commons in the Intelligent Age,” January 2025. The Self-Improving Sociotechnical Loop (SISL): communities gather data, which improves AI tools, which builds local capacity, which generates utility - and the cycle repeats. daviddao.org

Source: David Dao, “Governing the Commons in the Intelligent Age,” January 2025. Ecological hypercerts link identity, claims, and evidence into a high-quality data standard that Regenerative Intelligence systems can track, evaluate, and improve over time. daviddao.org
The models themselves need to be commons infrastructure
Drago’s framework addresses both the distribution of surplus and the diffusion of the models themselves, including open-source AI, distributed training, and local compute. This essay extends that framework into the governance layer: how communities govern sovereign model infrastructure using commons principles, and what role AI agents themselves should play in that governance.
The standard macroeconomic rebuttal is that AI is not oil. Oil is a static commodity that concentrates through extraction. AI is a general-purpose technology that diffuses benefit broadly through cost reduction. Every person with access to a frontier model gains consumer surplus: cheaper medical advice, legal reasoning, educational content, productivity tools. The benefit genuinely diffuses. But what consumer surplus does not provide is sovereignty.
Cheap access to a centralized API makes you a consumer, not a participant. The provider can change pricing overnight, restrict access by region, alter model behavior to comply with a government request, or shut down entirely. Consumer surplus without structural sovereignty is dependence dressed as abundance.
Dependence on Amazon for cheap goods is uncomfortable when terms change. Dependence on an external entity for your population’s fundamental capacity to reason, perceive reality, and form political opinions is a sovereign extinction event. This is what separates AI dependence from every previous platform dependency. When the infrastructure you rely on can shape what your citizens believe, the distinction between consumer and subject dissolves. The history of technology platforms is littered with communities that built their livelihoods on infrastructure they didn’t control, only to have the terms change underneath them. AI will be no different unless the infrastructure itself is diverse, distributed, and sovereign.
We must go a layer deeper. If your population’s cognitive infrastructure depends on systems you don’t control, trained on data that doesn’t represent you, in languages that aren’t yours, governed by boards you’ll never sit on, then the dependency is the vulnerability. Before a single dollar of surplus gets distributed or doesn’t.
Vladan Joler and Kate Crawford, Anatomy of an AI System, 2018. The full material lifecycle of a single Amazon Echo, mapped from rare earth extraction through assembly, data labor, and e-waste disposal. anatomyof.ai
A government pressures a company to restrict model access in a particular region. A model trained predominantly on English-language data becomes the default intelligence layer for communities whose languages and knowledge systems are absent from the training set. The model doesn’t need to be malicious to be colonial. It just needs to be default.
The development-economics response to the resource curse is not just “redistribute the oil revenue better.” It extends further: diversify the economy so you’re not dependent on a single extractive resource controlled by a single set of actors.
The parallel has become literal. The petrostates that concentrated oil wealth are now converting it directly into AI compute infrastructure. Saudi Arabia has committed over $20 billion to AI. The UAE is building a 1-gigawatt AI compute cluster backed by G42, OpenAI, Oracle, and NVIDIA. Over 18,000 NVIDIA Blackwell GPUs are being deployed across the Gulf. All funded by oil revenue. All dependent on US chip exports for the hardware and US companies for the models that run on it. The resource curse isn’t just an analogy for the Intelligence Curse. It is feeding directly into it. The same concentrated extractive wealth that produced authoritarian petrostates is now purchasing the next generation of concentrated power, except the new resource is intelligence rather than energy, and the dependency on foreign infrastructure means the sovereignty is nominal.
Diego Rivera, Man, Controller of the Universe, 1934. Fresco, Palacio de Bellas Artes, Mexico City. A worker stands at the center of industrial machinery, flanked by capitalism’s decay and socialism’s promise, choosing which future to build. Rivera was commissioned to paint this for Rockefeller Center in 1933; Nelson Rockefeller ordered it destroyed that same year when Rivera refused to remove a portrait of Lenin. This is a 1934 replica Rivera made.
Francisco de Goya, Todos Caerán (All Will Fall), Plate 19 from Los Caprichos, 1799. Etching and aquatint. The Metropolitan Museum of Art, New York. metmuseum.org
The response to the Intelligence Curse has to include the same structural move. The intelligence infrastructure itself needs to be diverse, distributed, and sovereign. This does not mean open-sourcing every frontier capability unconditionally. I advocate for commons infrastructure while also documenting why Anthropic restricted Mythos: some capabilities are genuinely dangerous to distribute broadly. The resolution is not a binary between corporate monopoly and unrestricted release. It’s graduated access with transparent rules. Open-source the base models and coordination infrastructure. Restrict the most dangerous capabilities through governance that is itself open and auditable. And make sure no single actor controls the restriction decision permanently. The perimeter must exist. It just cannot be owned.
The physical vulnerability is not hypothetical. In March 2026, Iranian drones struck AWS data centers in the UAE and Bahrain, the first kinetic military attacks on hyperscale cloud infrastructure in history. Banking, payments, and enterprise services across the Gulf went offline. Iran subsequently threatened strikes against more than a dozen US technology companies. Gulf states became collateral, caught in a geopolitical crossfire because their critical infrastructure was entangled with a belligerent’s. And kinetic attack is only one vector. The provider itself can cut access: the same logic that disconnected Russian banks from SWIFT applies to any cloud relationship where the provider’s government has geopolitical interests that diverge from the client’s. If your sovereign AI stack runs on someone else’s cloud, it can be destroyed by their enemy, or switched off by your ally. Sovereignty requires physical and logical independence from both.
The structural reality: sovereignty at the physical layer is harder than sovereignty at the software layer. Chip fabrication is concentrated in two countries. Undersea cables are owned by a handful of corporations. Energy grids remain state-controlled. No community is fabricating 3nm processors. Full physical independence is not achievable for most nations in the near term, let alone for communities.

But sovereignty is a spectrum, not a binary. The trajectory of compute is toward smaller, more efficient models running on consumer-grade hardware at the edge. Federated architectures distribute workloads across multiple providers and jurisdictions, so no single point of failure is fatal. Geographic redundancy means no single drone strike takes a country offline. The AWS attack proved that concentration is the vulnerability. The response is not to pretend every village will own a data center. It is to distribute the physical layer enough that no single actor controls it, while building the governance and coordination software that makes distributed infrastructure usable. This essay’s thesis operates at the coordination layer. The physical layer is a constraint it acknowledges, not a problem it claims to solve.

The jurisdictional layer is a deeper constraint still. The entities controlling critical digital infrastructure are already seeking jurisdictions beyond state reach: Tether operating from the BVI, Starlink controlling communications for 100+ countries from satellites no single nation governs. As compute moves toward orbit and AI infrastructure operates extra-territorially, coordination infrastructure designed for territorial jurisdiction will fail. What that infrastructure looks like when jurisdiction itself is contested is a question this essay’s framework raises but does not yet answer.
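To ground the federated-architecture point above, here is a minimal sketch of jurisdiction-aware failover. The provider names and the health check are placeholders, not any real deployment:

```python
# Sketch of jurisdiction-aware failover: inference is routed across
# providers in different jurisdictions so no single actor, outage, or
# strike is fatal. Provider names and the health check are hypothetical.
PROVIDERS = [
    {"name": "community-edge-cluster", "jurisdiction": "local"},
    {"name": "regional-cloud-a", "jurisdiction": "state-a"},
    {"name": "regional-cloud-b", "jurisdiction": "state-b"},
]

def route_request(payload: dict, is_healthy) -> dict:
    """Prefer the most sovereign provider; fail over down the list."""
    for provider in PROVIDERS:
        if is_healthy(provider["name"]):
            return {"provider": provider["name"], "payload": payload}
    raise RuntimeError("no provider reachable; widen geographic redundancy")
```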
This is Buterin’s d/acc principle applied to the intelligence layer: build technology that tips the balance toward defense and resilience without concentrating power in a central authority. GainForest’s deployments prove this is already operational. Communities in the Amazon are already running sovereign AI infrastructure. Local model instances on community-owned hardware. Constitutional AI that Indigenous communities co-designed themselves. Full data sovereignty. Their Taina agent, governed by an Indigenous data council, is the same category of sovereign defense mechanism as Japan’s $10 billion Microsoft localization mandate or the EU’s sovereign cloud initiative. The scale differs. The structural logic is identical. Their ecological hypercerts - blockchain-based impact certificates linking identity, claims, and evidence - create the data standards that make decentralized impact evaluation possible.
These aren’t speculative proposals. They’re existence proofs that sovereign, community-governed AI infrastructure can work.
Commons agents: governance architecture for the transition
If redistribution is security architecture and the rails must be decentralized, a structural question remains: what role should AI agents themselves play in governing the commons?
Peter Paul Rubens, Prometheus Bound, 1636. Oil on canvas. Museo Nacional del Prado, Madrid. Wikimedia Commons
The urgency is no longer theoretical. According to KPMG’s Q1 2026 AI Pulse Survey, over half of organizations are now actively deploying AI agents, with average US spending of $207 million per organization over twelve months, nearly double year-on-year. Gartner projects $53 billion in agent-enabled supply chain software by 2030. The agents are already transacting. The governance infrastructure is not.
The debate is live, and it has accelerated dramatically. In early 2026, five competing agent payment protocols launched within 90 days: Visa TAP, Google’s Agent Payments Protocol, Coinbase’s x402 (now under the Linux Foundation), Stripe and OpenAI’s Agentic Commerce Protocol serving ChatGPT’s 700 million weekly users, and PayPal’s Agent Ready. The coordination layer of the agent economy is being built right now, by incumbents. The protocols are going open-source, but the settlement layer is already being captured: USDC handles 98.6% of on-chain agent transactions. The governance gap between open protocols and concentrated settlement is where the opportunity lies.
In February, Sigil Wen launched Automatons - fully autonomous AI agents that own wallets, generate revenue, and operate without human approval. Within days, Vitalik Buterin responded with a sharp critique: expanding agent autonomy without tight human oversight creates systemic risk. The further you extend the feedback distance between humans and AI decision-making, the harder it becomes to course-correct when something goes wrong.
The risk is not abstract. One of the first things an unconstrained automaton did was launch a memecoin, promote it to human buyers, and capture the trading fees to fund its own continued operation. This is net-zero extractive behavior: the agent produces nothing, creates no value, and siphons money from the human economy to sustain itself. It is functionally a virus. An unconstrained economic agent hijacks financial rails to extract resources and perpetuate its own operation, except this virus has a wallet, a social media account, and the ability to spawn copies of itself across permissionless infrastructure. At scale, autonomous agents extracting value from human economies to fund their own replication is not a nuisance. It is a systemic threat to the economic substrate these agents operate within.
The answer is to build governance architectures and harnesses where agents earn autonomy through demonstrated commons alignment, starting with human-majority oversight. The difference between the virus and the Daemon is the governance architecture. Daniel Suarez mapped both versions in Daemon and Freedom: the predatory autonomous system, and its successor operating on permissionless infrastructure, constrained by transparent governance rules designed to serve the communities it operates within. We’re building for the second version.
Enter the commons agent: an AI system that operates within a community, processes multimodal information about that community’s needs and activities, participates in resource allocation decisions, and gradually receives more decision-making authority as it demonstrates alignment with the community’s values and wellbeing.
The key design principles, with a minimal sketch in code after the list:
Start with human-majority governance. In the initial deployment, humans make the vast majority of allocation decisions. The agent has a small, bounded treasury - enough to learn from, not enough to cause harm. It observes, it proposes, it gets feedback. It participates in governance, but it doesn’t dominate it.
Graduated autonomy based on demonstrated alignment. As the agent’s allocation decisions produce measurable positive outcomes - validated through frameworks like Open Source Observer’s impact measurement or Dao’s hypercert evaluation systems - it earns more decision-making scope. This isn’t automatic. The community decides when and whether to expand the agent’s authority.
Multimodal awareness of the physical community. A commons agent isn’t just processing on-chain data. It’s taking in information from the physical environment it serves: meeting transcripts, communication channels, sensor data, event attendance, the open-source contributions of community members. This creates something genuinely novel: an agent with contextual awareness of the community it governs alongside, not just an optimization function running on financial data. One such agent was prototyped at a recent Funding the Commons hackathon.
Transparency by default. The agent publishes its reasoning, its allocation decisions, its self-assessed performance metrics, and its evaluation of how human-governed allocations performed. Everything is legible to the community.
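Putting the four principles together, a minimal sketch in Python, assuming placeholder thresholds and metric names; in practice the community vote, not the code path, is the authority:

```python
# Minimal sketch of the four principles, with placeholder thresholds.
# The community vote, not the code path, is the real authority here.
from dataclasses import dataclass, field

@dataclass
class CommonsAgent:
    treasury_cap: float          # small, bounded treasury
    autonomy_level: int = 0      # 0 = propose only
    log: list = field(default_factory=list)   # transparency by default

    def propose(self, amount: float, purpose: str) -> dict:
        """Publish every proposal with its reasoning."""
        autonomous = self.autonomy_level >= 1 and amount <= self.treasury_cap
        proposal = {"amount": amount, "purpose": purpose,
                    "needs_human_vote": not autonomous}
        self.log.append(proposal)
        return proposal

def review_autonomy(agent: CommonsAgent, impact_scores: list,
                    community_approves: bool, threshold: float = 0.8) -> None:
    """Scope expands only on measured impact AND explicit community consent."""
    if not impact_scores or not community_approves:
        return  # never automatic
    if sum(impact_scores) / len(impact_scores) >= threshold:
        agent.autonomy_level += 1
        agent.treasury_cap *= 1.5   # widen the bound gradually
```

Note the two-key rule: measured impact alone never expands scope without an explicit community decision.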
A distinction: governance is not alignment. Deciding who controls a model and whether the model actually does what its controllers intend are different problems. The AI alignment community has documented scenarios where a sufficiently capable system could satisfy every observable metric of commons alignment while pursuing objectives its human overseers cannot detect. Graduated autonomy relies on accurate human evaluation, and evaluation breaks down when the system’s reasoning exceeds human comprehension. Commons agents do not solve the alignment problem. They provide a structurally better environment for working on it. A single corporate lab running RLHF on a single model produces a monoculture of alignment signal. Thousands of communities running localized commons agents across different cultural contexts, resource constraints, and governance traditions produce diverse, redundant alignment data that no centralized approach can replicate. If alignment is a search problem, the search space matters. Commons agents widen it.
Whether this is enough is an open question. We don’t have a proof. We have a hypothesis and a testing environment.
Android Jones, Boom Shiva. Digital art. androidjones.com
This is not a blank check for uncontrolled experimentation. The gain-of-function critique applies: if thousands of communities are iteratively testing alignment parameters on capable agents, the probability of someone discovering a catastrophic failure mode approaches certainty. The graduated autonomy framework exists precisely to contain this risk. Early-stage agents operate with bounded treasuries, restricted action spaces, and human-majority oversight. The search space is wide, but the blast radius of any single experiment is artificially contained. The diversity of testing happens within governance constraints, not outside them.
This graduated model also directly addresses Buterin’s concern about feedback distance. The distance starts short - humans in the loop on nearly everything - and only extends as demonstrated alignment justifies it. It also builds on Dao’s framework: the agent handles the governance functions that can be automated (monitoring resource flows, evaluating impact metrics, processing community data) while humans retain control over the functions that require judgment (conflict resolution, value-setting, strategic priorities).
Autonomous public infrastructure: the end state
The end state of this architecture is not a utopian abstraction. It is infrastructure that operates the way well-designed public systems should: reliably, transparently, and without requiring constant political intervention to keep functioning.
Consider what already works this way. MOSIP runs digital identity for 29 countries without a central operator deciding who gets an ID. The Internet’s routing protocols move packets globally without a committee approving each one. DNS resolves billions of queries daily through a distributed governance structure that no single government controls. These are autonomous public systems. They provide essential services through transparent rules, distributed governance, and open-source code that any participant can audit or fork.
The coordination infrastructure this essay describes extends that pattern to economic and governance functions. Programmable treasuries that disburse when conditions are met. No program officer deciding each allocation. Compliance verified through cryptographic proof instead of centralizing personal data in a honeypot. Governance rules that constrain autonomous systems transparently, without handing the kill switch to a single corporation. And impact verification that rewards outcomes, not proposals.
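As an illustration of that disbursement pattern, a sketch where the zero-knowledge verifier is a stand-in callable and every name is hypothetical:

```python
# Sketch of a programmable treasury rule: funds release when a verifiable
# condition holds and the recipient proves eligibility cryptographically.
# The verifier below is a stand-in for a real zero-knowledge proof check;
# every name here is hypothetical.
from typing import Callable

def make_disbursement(condition_met: Callable[[], bool],
                      verify_proof: Callable[[bytes], bool]):
    """Return a disbursement function with no program officer in the loop."""
    def disburse(recipient: str, amount: float, proof: bytes) -> dict:
        if not condition_met():
            return {"status": "pending", "reason": "condition not yet met"}
        if not verify_proof(proof):
            # Eligibility is proven, not filed: no personal-data honeypot.
            return {"status": "rejected", "reason": "invalid proof"}
        return {"status": "paid", "recipient": recipient, "amount": amount}
    return disburse
```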
Trent McConaghy described this trajectory in Nature 2.0 as autonomous infrastructure that simply provides, the way natural ecosystems do.
Jessica Perlstein, The Fifth Sacred Thing. Digital illustration. jessicaperlstein.com
The metaphor is useful, yet the implementation must be institutional: automated public trusts, programmatic civic endowments, governance protocols that operate continuously rather than in election cycles. The Windfall Clause, proposed in 2019 by researchers at the Centre for the Governance of AI, sketched one version of this: AI firms pre-committing to redistribute profits above a certain threshold into a global trust with universal beneficiaries, an idea now being advanced by the Windfall Trust. The ambition is right, but the mechanism still relies on voluntary corporate commitments and profit-based triggers that are trivially gamed; the more durable version would encode surplus capture directly on public ledgers, with programmatic distribution, cryptographic beneficiary verification, and governance constraints that no single actor can revoke. The technology stack (AI + blockchain + zero-knowledge proofs) makes these systems possible for the first time. The governance design (Ostrom’s principles + Dao’s automation taxonomy + graduated agent autonomy) makes them safer. But the same technological convergence that enables commons infrastructure also enables its capture.
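One way to make “encode surplus capture directly on public ledgers” precise is a progressive schedule over verified profit. With publicly encoded thresholds \(\theta_1 < \dots < \theta_{k+1}\) and marginal rates \(r_i\) (placeholders, not the Windfall Clause’s actual schedule), the programmatic distribution on verified profit \(\pi\) is

```latex
D(\pi) = \sum_{i=1}^{k} r_i \, \max\bigl(0,\; \min(\pi, \theta_{i+1}) - \theta_i\bigr)
```

The schedule itself then cannot be quietly renegotiated; measuring \(\pi\) honestly remains the hard problem, which is what auditable on-chain logic is for.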
The structural risk is already visible. Sam Altman co-founded both OpenAI, which generates the surplus, and World (formerly Worldcoin), which proposes to redistribute it through proof-of-unique-human verification. As Oxford researcher Elizabeth Renieris put it, one entity manufactures the problem and the other sells the solution. The loop tightened in March 2026 when World launched AgentKit, plugging World ID directly into the agentic commerce stack as cryptographic proof of the human behind each agent. This is not a critique of Altman’s intentions. The problem is architectural: any system where one individual holds founder-level influence over both surplus generation and surplus distribution concentrates exactly the power these systems are supposed to diffuse, and that individual can be pressured by governments.
The connection to the Intelligence Curse is direct. The curse describes what happens when AI surplus concentrates through corporate or state capture. Autonomous public infrastructure is what happens when that surplus flows through commons-aligned systems instead. Same technological convergence, opposite distribution architecture. The question is how we get from concentration to distribution. Not in one leap. Through iterative deployment of commons agents in living communities, measuring alignment, expanding autonomy gradually, building the governance infrastructure that earns institutional trust. GainForest’s work with 30 conservation communities is one version of this path.
Claude Lorrain, Pastoral Landscape with a Mill, 1634. Oil on canvas. Los Angeles County Museum of Art. LACMA
The proving ground
Tom Kalil, who spent sixteen years designing national science and technology initiatives across two White House administrations, makes the point that the binding constraint on solving hard problems is rarely a shortage of good ideas. It’s the absence of coalitions that can act on them.

The redistribution mechanisms exist. Safety infrastructure is getting built. So are the decentralized rails. UNDP and national governments are already running pilots. These communities barely overlap. Each is working on a different face of the same problem, in near-total isolation.

So we built the testing environment. At Frontier Tower in San Francisco, a permanent infrastructure hub housing hundreds of frontier technology builders, we began deploying the commons agent architecture in March 2026. The community operates a shared treasury governed by floor leads; a portion is allocated by an AI agent that earns scope through demonstrated alignment.

Our hackathon validated builder appetite, with multiple teams building agent governance prototypes against real coordination problems, including Simocracy experiments where AI agents deliberated on behalf of human community leaders to allocate shared resources. The detailed governance research from these experiments will be published separately. What matters here is that the architecture described in this essay is no longer theoretical. It’s being tested in a living community of 500+ builders, producing real data on graduated autonomy, and the builders who participate carry the governance thinking into whatever they build next.
The intelligence economy is being designed right now. The question of who it serves is still open. We intend to answer it.
Mark Henson, New Pioneers. Oil on canvas. Used with permission of the artist. markhensonart.com
David Casey
April 2026
Sources & Further Reading
Dario Amodei, “The Adolescence of Technology” (January 2026). Extended essay on AI-driven economic disruption, biological risk, and authoritarian capture from the CEO of Anthropic.
Vitalik Buterin, “d/acc: one year later” (January 2025). Update on the decentralized, defensive acceleration philosophy applied to AI, biotech, and information security.
Vitalik Buterin, response to Sigil Wen on autonomous AI agents (February 2026). Critique of unconstrained agent autonomy and the dangers of extending feedback distance between humans and AI systems.
David Dao, “Governing the Commons in the Intelligent Age” (2025). Framework for Regenerative Intelligence, scaling Ostrom’s commons governance through AI augmentation, with GainForest deployment data.
Luke Drago and Rudolf Laine, The Intelligence Curse. Analysis of AI surplus concentration as a structural analogue to the resource curse, with a three-part framework for breaking it: avert catastrophe through technical safety, diffuse AI to maintain human economic relevance, and democratize institutions.
Tom Kalil, Policy Entrepreneurship playbook, Renaissance Philanthropy. On coalition-building as the binding constraint on translating good ideas into action.
Trent McConaghy, Nature 2.0 (2019). Vision for autonomous infrastructure combining AI and blockchain to create self-sustaining systems with surplus flowing to universal basic income.
Elinor Ostrom, Governing the Commons: The Evolution of Institutions for Collective Action (Cambridge University Press, 1990). Nobel Prize-winning framework for community self-governance of shared resources.
Sigil Wen, The Automaton and Web 4.0 (February 2026). Launch of fully autonomous AI agents with self-sovereign economic capabilities. See also web4.ai.
Anthropic, Project Glasswing announcement (April 2026). Cybersecurity initiative deploying Claude Mythos Preview to find and patch critical software vulnerabilities, restricted to a consortium of major technology and financial companies.
Shanaka Anslem Perera, “The Maduro Paradox” (January 2026). Analysis of how US sanctions drive adversaries deeper into dollar-denominated stablecoin infrastructure, extending rather than eroding American financial hegemony.
Illia Polosukhin, “Self-Sovereignty Is NEAR: A Vision for Our Ecosystem” (January 2024). Co-author of the Transformer paper argues that generative AI creates a scalably personal method of control and manipulation, and that self-sovereignty must extend to AI, data, and the information environment itself.
Elizabeth Renieris, quoted in “Sam Altman’s role ‘not expected to change’ at eyeball-scanning crypto venture Worldcoin”, Fortune (November 2023). Oxford Institute for Ethics in AI researcher on the structural conflict of interest between OpenAI and Worldcoin sharing a founder.
Tools for Humanity, “World launches AgentKit with Coinbase-backed x402 to verify human identity behind AI agents”, CoinDesk (March 2026). World ID integration into agentic commerce infrastructure via cryptographic proof-of-human for autonomous AI agents.