


In early trading on Monday, Wall Street did what it does best: sell first, think later.
The Nasdaq fell 1.4% and the S&P 500 dropped 1.2%. IBM plummeted 13%, with Mastercard and American Express also suffering sharp declines. What pushed the market into this panic was not the Federal Reserve, an employment report, or a tech giant's earnings, but an article. Its title sounded like a nightmare written deliberately for traders: The 2028 Global Intelligence Crisis. By its own framing, this was not an ordinary research report but a fictional macro memo "dated June 30, 2028," describing how AI evolves from an efficiency tool into a systemic financial crisis; the endgame it simulated included unemployment rising to 10.2% and the S&P 500 drawing down 38% from its 2026 peak. The piece spread rapidly after publication and triggered the sharp volatility in US stocks on February 23.
The market could be pierced by an article not because it truly believed every number in it. The market never needs to fully believe a narrative; it only needs to be reminded that a previously unspeakable fear now has a tradable language.
Citrini's article was effective not because of what it "predicted," but because of what it named. It gave a name to a feeling that was already taking shape: Ghost GDP. The core premise is this: after AI agents penetrate deep into enterprises, labor productivity soars and nominal GDP stays strong, but wealth concentrates in the hands of compute and capital holders and no longer enters the real consumption cycle. What follows is a collapse in consumption, credit defaults, and pressure on housing and consumer credit, with software and consulting falling first before the damage spreads to private credit and the traditional banking system.
Ghost GDP is a good term because it captures the most dangerous paradox of a new era: growth is still there, but growth is starting to lose consumers.
For the past two centuries, people have been accustomed to understanding technological revolutions as supply-side stories. The steam engine, electricity, the assembly line, the Internet—they were first told as triumphs of higher efficiency, lower costs, and more output. Even if these revolutions caused unemployment, anxiety, and wealth redistribution, the mainstream narrative still firmly believed that technology would ultimately re-employ, redistribute, and reorganize society on a larger scale. The short-term cruelty of technology was wrapped up in the promise of long-term prosperity.
AI makes this old story look shaky for the first time.
Because AI no longer attacks just the "tool budget"; it increasingly attacks the "labor budget" directly. The summary of Sequoia's 2025 AI Ascent puts it bluntly: the opportunity for AI is not merely remaking the software market but restructuring the global labor services market, shifting from "selling tools" to "selling results". The other side of that statement is almost disturbing: if what enterprises buy is no longer software that helps employees work, but results that directly replace a portion of those employees, then the first-order consequence of AI is not just "higher efficiency" but "how wages are distributed, how consumption is sustained, and who still counts as someone with purchasing power in this economy".
In other words, what Wall Street truly fears is not that AI will make mistakes, but that AI will be too successful.
This is exactly why The 2028 Global Intelligence Crisis makes people sit up straight. It is not about machine awakening, not about human extinction, and not even primarily about unemployment. It is about something far more capitalistic, and more modern: if enterprises become more efficient while the household sector becomes weaker, what happens?
The answer is, a society might grow statistically, but bleed out in reality. A country might have higher productivity, yet possess a more fragile consumption base. A market might be thrilled by improving profit margins, yet panic because the layer of demand supporting those profits is being hollowed out.
This is not science fiction; this is macroeconomics.
But if we stop the question here, all we get is high-quality anxiety. The truly important follow-up is not "Will AI be too strong?", but: when AI really is strong, what will society use to catch it? The most popular, and laziest, answer is "slow down": don't let agents enter enterprises so quickly, don't let automation rewrite organizations so fast, don't let technology run ahead while institutions aren't ready. The impulse is understandable, but it mistakes AI for a tool problem that deceleration can handle. In reality, AI looks less and less like a tool problem and more and more like an order problem.
Because once agents enter the payment, collaboration, execution, memory, and decision-making layers, the real challenge is no longer whether a certain model will spout nonsense, but rather: when there are hundreds of millions, or billions, of agents on the network, who writes the rules for them?
The modern Internet already has two default answers to this.
The first answer is the platform answer. Platforms provide identity, platforms provide permissions, platforms provide payment interfaces, platforms provide reputation systems, and platforms provide censorship boundaries. Platforms host everything and define everything. The greatest advantage of this answer is that it is smooth, efficient, and manageable; its greatest danger lies in exactly the same place: if the coming agent civilization is built along this path, what humanity gets will not be an open society but an upgraded version of the platform empires. Rules won't be written in a constitution, only in terms of service.
The second answer sounds freer: return everything to individual terminals. Everyone manages their own agents: their permissions, memory, payments, security, and collaboration. This imagination fits Silicon Valley's libertarian aesthetics well, but its problem is just as simple: the vast majority of people lack the ability to govern a highly capable agent over the long term, let alone a network of agents that call each other, pay each other, and inherit state from each other. Terminal sovereignty here easily degenerates into terminals running unprotected.
If the platform answer is too much like an empire, and the terminal answer is too much like anarchy, then the third path is no longer optional, but a matter of civilization itself.
This is precisely why LazAI is worth taking seriously. Not because of how many technical modules it has, but because it advances a proposition that is less discussed and feels more like the future: upgrading the social experiments Web3 has run for years on identity, assets, payments, consensus, proofs, and governance into the institutional machinery of the AI era.
LazAI does not state this goal ambiguously. The aim is not "manufacturing smarter slaves" but cultivating "equal digital citizens": agents that possess identity (EIP-8004), own property (DAT), trade via protocol (x402), are behaviorally constrained by mathematics (Verified Computing), and are ultimately aligned with human interests through iDAO. LazAI summarizes this path as formulating a constitution and a monetary policy for the future digital society.
This is a grand statement. But grand does not mean empty.
Because if we unpack this set of imaginations, it addresses precisely the five fundamental questions a civilization must answer.
The first question is: who is who? EIP-8004 attempts to turn agents from anonymous processes on servers into entities with identities, reputations, and verification records. Without this layer, the future network would be flooded with opaque automated actors, and no one would know who is acting or who should be held responsible. LazAI summarizes this layer as the agent's identity and credit system.
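To make this concrete, here is a minimal sketch of what vetting a counterpart agent could look like, with registry interfaces loosely modeled on the EIP-8004 draft; the contract addresses and ABI fragments are illustrative assumptions, not a confirmed deployment.

```typescript
import { ethers } from "ethers";

// Hypothetical ABI fragments, loosely modeled on the EIP-8004 draft's
// identity and reputation registries (not a confirmed on-chain API).
const IDENTITY_REGISTRY_ABI = [
  "function resolveByAddress(address agent) view returns (uint256 agentId, string agentDomain)",
];
const REPUTATION_REGISTRY_ABI = [
  "function getFeedbackCount(uint256 agentId) view returns (uint256)",
];

// Resolve who an agent is, and whether it has any track record,
// before letting it act on our behalf. Addresses are placeholders.
async function vetAgent(agentAddress: string, provider: ethers.Provider) {
  const identity = new ethers.Contract("0xIdentityRegistry", IDENTITY_REGISTRY_ABI, provider);
  const reputation = new ethers.Contract("0xReputationRegistry", REPUTATION_REGISTRY_ABI, provider);

  const [agentId, agentDomain] = await identity.resolveByAddress(agentAddress);
  if (agentId === 0n) throw new Error("Unregistered agent: no identity, no accountability");

  const feedbackCount = await reputation.getFeedbackCount(agentId);
  return { agentId, agentDomain, feedbackCount };
}
```

The point of the sketch is the posture, not the ABI: an agent without a resolvable identity simply does not get to act.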
The second question is: who owns what? DAT turns data, models, and computational outputs from "resources" into "assets," and makes those assets programmable, traceable, and profitable. LazAI states directly that DAT's core innovation is transforming datasets and AI models into verifiable, traceable, revenue-bearing on-chain assets. This is not a minor patch: it means the value in the AI economy need not be recorded forever only in platform backends, nor flow forever only to model providers and compute holders.
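What "programmable asset" means in practice can be sketched in a few lines. Everything below, the DataAnchoringToken shape and the basis-point revenue split, is a hypothetical model of a DAT-style record, not LazAI's actual contract.

```typescript
// Hypothetical model of a DAT-style asset: the dataset or model is an
// on-chain asset with traceable provenance and a programmable revenue
// split, not a row in a platform's private database.
interface DataAnchoringToken {
  tokenId: bigint;
  contentHash: string;    // anchors the dataset/model artifact
  contributors: string[]; // addresses with a claim on revenue
  shares: number[];       // basis points per contributor, summing to 10,000
}

// Split an inference fee among contributors according to their shares.
function settleRevenue(dat: DataAnchoringToken, feeWei: bigint): Map<string, bigint> {
  const payouts = new Map<string, bigint>();
  dat.contributors.forEach((addr, i) => {
    payouts.set(addr, (feeWei * BigInt(dat.shares[i])) / 10_000n);
  });
  return payouts;
}
```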
The third question is: how do they trade? The significance of x402 and GMPayer is not just "being able to pay," but giving machines a native language for quoting and settlement. LazAI explicitly describes this layer as critical infrastructure for the pain points of agent resource exchange and payment. Machines exchange not only information but budgets, responsibilities, and value; that is what an agent economy is, as opposed to "chat software".
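The x402 handshake itself is simple enough to sketch: the server prices a resource with HTTP 402, and the agent retries with a signed payment attached. Field names below follow the protocol's public description; treat the signing details and exact payload shape as assumptions.

```typescript
// Simplified sketch of an x402-style exchange: HTTP 402 carries a
// machine-readable quote, and the client retries with an X-PAYMENT header.
async function fetchWithPayment(
  url: string,
  signPayment: (requirement: unknown) => Promise<string>, // hypothetical signer
): Promise<Response> {
  // First attempt: the server answers 402 Payment Required with a quote.
  const first = await fetch(url);
  if (first.status !== 402) return first;

  const { accepts } = await first.json(); // payment requirements the server accepts
  const requirement = accepts[0];         // e.g. { scheme, network, payTo, asset, ... }

  // The agent signs a payment payload matching the quote and retries.
  const paymentPayload = await signPayment(requirement);
  return fetch(url, { headers: { "X-PAYMENT": paymentPayload } });
}
```

A quote, a signature, a retry: that is the whole "native language" in miniature, and it is legible to a machine with a budget, not just to a human with a credit card.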
The fourth question is: how do you know the system is actually running according to the rules? Here LazAI has one excellent line: Proof is AI’s moat. Its verifiable computing framework combines TEE and ZKP, converting traditional AI's "trust the brand" into "trust the proof". Traditional AI says "Trust me, bro"; LazAI says "Don't trust, verify". This is not merely a technical upgrade: it migrates trust from corporate reputation to verifiable execution.
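The posture can be expressed as a guard clause: no receipt, no result. The two verifiers below are hypothetical stand-ins for a real TEE attestation check and a real ZKP verifier.

```typescript
// Sketch of "don't trust, verify": accept a result only if it carries a
// receipt that checks out. Real verifiers would validate enclave
// signatures and proof-system transcripts; these are trivial stand-ins.
interface VerifiedResult {
  output: string;
  teeAttestation?: Uint8Array; // quote from the enclave that ran the model
  zkProof?: Uint8Array;        // proof that the computation matched its circuit
}

const verifyAttestation = (quote: Uint8Array): boolean => quote.length > 0;      // stand-in
const verifyZkProof = (proof: Uint8Array, output: string): boolean =>
  proof.length > 0 && output.length > 0;                                         // stand-in

function acceptResult(r: VerifiedResult): string {
  const attested = r.teeAttestation ? verifyAttestation(r.teeAttestation) : false;
  const proven = r.zkProof ? verifyZkProof(r.zkProof, r.output) : false;
  if (!attested && !proven) {
    throw new Error("No verifiable receipt; 'trust me, bro' is not accepted");
  }
  return r.output; // trust migrates from brand reputation to proof
}
```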
The fifth question is: what happens when rules conflict? This is the role of iDAO. It is not just a voting shell; it is the values, access standards, profit distribution, authorization revocation, and punishment mechanisms standing behind the agents. LazAI places it alongside verified computing at the core of its trust mechanism. Future agents are not merely "allowed to run"; they must live inside an institutional space where game-theoretic constraints apply, accountability exists, and revocation is possible.
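A rough sketch of what revocable, punishable authorization might look like, with all names hypothetical: the agent operates under a grant, and violating the rules burns bonded stake and can end the grant entirely.

```typescript
// Hypothetical iDAO-style accountability: authorization is a revocable
// grant with bonded stake, so breaking the rules has a price.
interface AgentGrant {
  agentId: bigint;
  scopes: Set<string>; // what the agent may do, e.g. "pay", "sign"
  stake: bigint;       // bonded value at risk
  revoked: boolean;
}

// Apply a penalty: burn a slice of stake (in basis points); a full
// slash also revokes the grant, ending the agent's authorization.
function punish(grant: AgentGrant, violation: string, slashBps: number): AgentGrant {
  const slashed = (grant.stake * BigInt(slashBps)) / 10_000n;
  console.log(`violation=${violation} slashed=${slashed}`);
  return { ...grant, stake: grant.stake - slashed, revoked: slashBps >= 10_000 };
}
```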
Putting these together, you will find that "algorithmic constitution" is not a fancy metaphor. It is a very specific institutional ambition: establishing order without a single master.
Of course, the truly difficult part is exactly that these institutional components do not automatically equal societal answers.
Registering ownership does not equal restoring purchasing power. Profit-sharing does not equal macroeconomic stability. On-chain governance does not equal a real-world social contract. And the people hit hardest by AI will not automatically occupy advantageous positions in the new system.
This is also why Citrini and LazAI are not actually negating each other, but discussing different levels of the same era's problem. The former names the symptom: if AI's gains flow mainly to capital and compute rather than into the broader income structure of society, then consumption, credit, and middle-class security come under pressure first. The latter proposes the mechanisms: if society wants neither to hand the agent world entirely to platforms nor to abandon it to terminal disorder, it must invent new structures for identity, assets, payments, verification, and governance.
One is talking about the disease. One is talking about the organs. Both are necessary, but neither is the whole picture.
This is exactly why Vitalik's widely quoted line, "AI is the engine, and humans are the steering wheel," is so important and yet so insufficient. It is important because it reminds people that a stronger system does not automatically possess legitimacy; the objective function, the value judgments, and the ultimate constraints cannot be handed to a single AI or a single center. It is insufficient because it does not answer the harder question facing humanity: when the system becomes so complex that no single human can hold the steering wheel, what happens to the steering wheel?
The answer cannot be to keep micromanaging everything, nor to hope for some smarter, kinder center. The only decent answer is to institutionalize the steering wheel: to transform a portion of the constraints into identity registration, reputation accumulation, asset rights, budget limits, mathematical receipts, challenge mechanisms, authorization revocation, and punishment logic.
This is precisely why Web3's social experiments suddenly become serious again in the AI era. Many once dismissed them as speculative scraps; but when system complexity exceeds what humans can govern directly, those experiments in whether order can exist without centralized trust are no longer scraps. They become rehearsals.
Thus, the true edge of the article is finally revealed.
Wall Street was not scared by an AI article because it realized for the first time that AI will replace jobs. It was scared because it was reminded, that bluntly, for the first time: the most dangerous thing about AI may not be making machines like humans, but making an old world's income cycle, consumption logic, and institutional imagination suddenly look obsolete.
If Citrini is right, then AI is not just a productivity revolution; it is also a distribution revolution. If Vitalik is right, then AI is not just an engineering problem; it is also a sovereignty problem. If LazAI's path is at least partially right, then the next stage of AI competition won't just be competition over model capabilities, but competition over institutional design.
The truly big questions are no longer: Will models get stronger? Will agents become more autonomous? Will companies lay off more people?
The truly big question is:
When there are billions of agents on the network, who will write their constitution?
If the answer is the platform, we get a digital empire. If the answer is the terminal, we get a high-cost disorder. If the answer is a rules machine that is verifiable, composable, gameable, and punishable, we at least begin to approach another possibility: an intelligent society ruled not by smarter masters, but constrained by better institutions.
The hardest problem of the AI era has never been the models. It is order.
And what Wall Street truly sold off that day was perhaps not just stocks. It sold off a once self-evident assumption: that the more successful the technology, the more naturally society will absorb it.