Alchemy

Part VII · The Distribution Question

VII.C — What Institutions Require

16 min read · 3,184 words

The distribution of costs described in the previous section is not inevitable. It depends on institutional choices that remain open. Liability frameworks, fiscal structures, competition policy, and international coordination will determine whether the transition’s gains are broadly shared or narrowly captured. The production function determines the pressure; institutions determine the response.

The difficulty is that existing institutions were built for a different economy. They assume human actors with continuous identity, transactions that occur within jurisdictions, firms that employ workers, and income that flows primarily through wages. Each assumption is under pressure from the dynamics the framework describes. The question is not whether institutions will adapt but whether they will adapt fast enough, and in what direction.


Begin with liability.

The problem is structural, not merely legal. Liability presupposes a responsible party: an entity that persists through time, possesses assets that can be seized, and can be compelled to answer for its actions. An agent invocation satisfies none of these criteria. The process that caused harm has already terminated by the time consequences manifest. No assets belong to the invocation itself. The configuration that produced the harm can be re-instantiated, but the resulting process has no memory of the events in question and no necessary connection to the hardware or weights that generated the original output.

This is not a gap in current law awaiting legislative repair. It is a structural feature of what agents are. The ontology does not support direct liability. Legislative frameworks can create new categories of responsibility—for principals who deploy agents, for platforms that host them, for developers who train the underlying models—but these frameworks operate by tracing accountability through the invocation chain to parties who do persist.

The tracing forces a choice with significant consequences: where in the chain liability concentrates.

Concentrating liability on principals creates incentives for careful deployment but constrains adoption. If the firm that deploys an agent bears strict liability for its outputs, firms will deploy cautiously—extensive testing, narrow use cases, human review of high-stakes decisions. Deployment slows. The transition extends. Workers in high-V/C domains gain adjustment time. The cost is foregone productivity: beneficial deployments that would clear the hurdle rate are not undertaken because the liability exposure exceeds the expected margin.
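The deployment calculus described above can be sketched as a simple expected-value comparison. The margin, harm probability, and damage figures below are hypothetical illustrations, not estimates from the text; the point is only that a deployment clearing the hurdle rate on margin alone can be forgone once liability exposure is priced in.

```python
# Deployment decision under strict principal liability (illustrative sketch).
# The deployer proceeds only if expected margin exceeds expected liability
# exposure; all numbers below are assumptions for illustration.

def deploy(expected_margin: float, p_harm: float, damages: float) -> bool:
    """Deploy only if expected margin exceeds expected liability exposure."""
    exposure = p_harm * damages
    return expected_margin > exposure

# Hypothetical deployment: $200k annual margin, 1% chance of a $30M claim.
margin = 200_000
print(deploy(margin, p_harm=0.0, damages=30_000_000))   # no liability: deploy
print(deploy(margin, p_harm=0.01, damages=30_000_000))  # strict liability: forgo
```

Under strict liability the $300k expected exposure swamps the $200k margin, so a deployment that would otherwise proceed is not undertaken.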

Concentrating liability on platforms creates different incentives. If the cloud provider or API host bears liability for agent outputs, platforms will restrict access, impose usage constraints, and build monitoring infrastructure. The restriction may be blunt: entire use cases prohibited regardless of the deploying firm’s competence. Small deployers face higher barriers than large deployers who can negotiate bespoke terms. Concentration accelerates because liability capacity becomes a barrier to entry.

Concentrating liability on model developers creates yet another set of incentives. If the firm that trained the model bears liability for downstream harms, model developers will restrict access, impose licensing terms, and internalize deployment. The open-weight ecosystem contracts. Frontier capability concentrates among firms with sufficient liability capacity. The commoditization thesis weakens because liability creates a moat that capability alone cannot provide.

Each allocation has distributional consequences. Principal liability favors large deployers with legal departments and insurance capacity. Platform liability favors incumbents with existing infrastructure and regulatory relationships. Developer liability favors well-capitalized frontier labs over open-source alternatives. The choice among them is not merely technical; it determines who can participate in the transition and who captures the resulting surplus.

The deeper problem is that no allocation cleanly addresses the harm. If an agent provides incorrect medical advice and a patient suffers, where does fault lie? The principal who deployed the agent without adequate safeguards? The platform that permitted medical applications without verification? The developer who trained on data that included medical misinformation? The patient who relied on algorithmic output for a high-stakes decision? Each party contributed to the outcome; no single party caused it. The causal chain is diffuse, the fault is shared, and traditional liability doctrine—designed for cases where causation is traceable to a responsible party—struggles to allocate responsibility.

The likely outcome is jurisdictional experimentation. Different legal systems will try different allocations. The United States will likely favor principal liability with contractual allocation of risk—the deployer assumes responsibility, subject to whatever terms the platform and developer impose. The European Union will likely favor broader platform and developer responsibility, consistent with its approach to digital services regulation. China will likely favor state direction of deployment with liability concentrated in designated entities. Each choice creates different deployment incentives, different rates of adoption, and different distributions of harm.


The fiscal problem is equally structural.

Social insurance in most developed economies is funded primarily through payroll taxes. In the United States, Social Security and Medicare together consume approximately 15% of wages, split between employer and employee contributions. Unemployment insurance adds additional levies. The revenue base is labor income; the expenditure target is workers who face displacement or retirement.

If labor income declines as a share of total economic output, the revenue base erodes. If displacement accelerates, the expenditure need rises. The mismatch widens from both directions simultaneously.

The arithmetic is straightforward. Suppose labor share declines from 58% to 50% of national income over a transition period—a shift comparable in magnitude to the decline observed since 1970, compressed into perhaps two decades rather than five. Payroll tax revenue, holding rates constant, declines proportionally. But the displaced workers require support: unemployment benefits, retraining subsidies, healthcare coverage, and eventually retirement income. The expenditure need does not decline with the revenue base; it may increase.
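The arithmetic in the paragraph above can be made concrete. Normalizing national income to 100 and holding the combined payroll rate at the roughly 15% figure cited earlier:

```python
# Payroll revenue erosion when labor share falls, rates held constant.
# National income is normalized to 100; the payroll rate uses the ~15%
# figure cited earlier in the section.

PAYROLL_RATE = 0.15
NATIONAL_INCOME = 100.0

def payroll_revenue(labor_share: float) -> float:
    """Revenue = rate x labor income (the base is wages, not total output)."""
    return PAYROLL_RATE * labor_share * NATIONAL_INCOME

before = payroll_revenue(0.58)  # 8.7 points of national income
after = payroll_revenue(0.50)   # 7.5 points
print(f"revenue falls {before - after:.2f} points ({(before - after) / before:.1%})")
```

The base shrinks by about an eighth (roughly 13.8%), while, as the text notes, the expenditure need moves the other way.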

The conventional response is to raise rates on the shrinking base or to broaden the base to include non-labor income. Raising rates on labor income accelerates the incentive to substitute capital for labor—precisely the dynamic driving the transition. The tax creates a wedge between the cost of human workers and the cost of agent invocations; widening the wedge accelerates displacement.
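The wedge mechanism can be sketched the same way. The wage, employer payroll rates, and agent cost below are hypothetical; the point is only that raising the rate on the labor side flips the substitution decision for marginal cases.

```python
# The payroll-tax wedge between human and agent cost (illustrative numbers).
# The employer compares the fully loaded cost of a worker with the cost of
# agent invocations; widening the wedge flips marginal cases toward the agent.

def loaded_cost(wage: float, employer_payroll_rate: float) -> float:
    """Employer's cost of a human worker including payroll taxes."""
    return wage * (1 + employer_payroll_rate)

WAGE = 60_000        # hypothetical annual wage for the task
AGENT_COST = 66_000  # hypothetical annual cost of equivalent agent invocations

print(loaded_cost(WAGE, 0.075) <= AGENT_COST)  # 64,500: human stays cheaper
print(loaded_cost(WAGE, 0.125) <= AGENT_COST)  # 67,500: substitution flips
```

The agent's cost carries no payroll levy, so every point added to the rate widens the wedge on the human side only.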

Broadening the base to include capital income faces different constraints. Capital is mobile; labor is not. A jurisdiction that taxes capital income at rates sufficient to fund social insurance will see capital migrate to jurisdictions that do not. The mobility constraint binds tighter for the forms of capital most relevant to the transition: intellectual property can be domiciled anywhere; compute capacity can be located wherever power is cheap and regulation is permissive; ownership of infrastructure funds can be structured through whatever jurisdiction offers the most favorable treatment.

The result is a race to the bottom on capital taxation and a race to the top on labor taxation—the opposite of what the transition requires. Jurisdictions that break from this pattern face capital flight; jurisdictions that follow it face fiscal crisis.

Several alternatives have been proposed, each with significant limitations.

A robot tax—levying charges on automation or agent deployment specifically—targets the substitution directly. But defining the tax base is difficult. What counts as automation? An agent invocation is ephemeral; a query to a language model does not obviously constitute a taxable event. If the tax attaches to compute consumption, it falls on all computation, not just automation that displaces labor. If it attaches to displacement directly, measurement becomes impossible—how does one determine whether a task would have been performed by a human absent the agent?

A value-added tax with revenues earmarked for social insurance avoids the labor-versus-capital distinction by taxing consumption regardless of how production occurred. The base is broad and the rate can be low. But VAT is regressive—it falls more heavily on lower-income households who consume a larger share of their income—and earmarking creates rigidity that may not match expenditure needs.

Universal basic income funded from general revenue addresses the distribution problem directly by providing transfers independent of employment status. But the fiscal burden is substantial—a UBI sufficient to replace lost labor income would require revenue roughly equivalent to current social insurance expenditure, on top of other government functions—and the political economy is unfavorable. UBI concentrates benefits on those who do not work while dispersing costs across all taxpayers; the coalition against it is larger than the coalition for it.
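The scale claim can be checked against the section's own figures. Treating the eight-point drop in labor share as the income to be replaced, and payroll-funded social insurance as roughly 15% of a 58% labor share, the two magnitudes are comparable; this is a back-of-envelope consistency check, not a fiscal estimate.

```python
# Back-of-envelope check of the UBI scale claim, in points of national income.
# Uses only figures that appear earlier in the section.

labor_share_before, labor_share_after = 0.58, 0.50
payroll_rate = 0.15

lost_labor_income = (labor_share_before - labor_share_after) * 100   # ~8.0 points
current_social_insurance = payroll_rate * labor_share_before * 100   # ~8.7 points

print(round(lost_labor_income, 1), round(current_social_insurance, 1))
```

Replacing the lost eight points of labor income costs about as much as the existing payroll-funded system raises, which is the rough equivalence the paragraph asserts.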

Ownership stakes in the actuation layer represent a different approach. If the transition concentrates returns in infrastructure—utilities, datacenters, fabrication facilities—then distributing ownership of that infrastructure distributes the returns. Sovereign wealth funds, social security trust funds, and public pension systems could take equity positions in the actuation layer, capturing returns that would otherwise accrue to private capital. Norway’s Government Pension Fund provides a model: resource wealth is converted to diversified ownership, and returns fund social expenditure.

The difficulty is path dependence. The actuation layer is already owned—by utilities, infrastructure funds, and private equity. Acquiring ownership stakes requires either paying market prices (which capitalizes expected returns into the purchase price, eliminating the distributional benefit) or expropriating existing owners (which destroys the investment incentives that created the infrastructure). The Norwegian model worked because the resource—North Sea oil—was discovered after the institutional framework was established. The AI infrastructure is already built, and its owners will not cede returns willingly.


Competition policy faces a parallel challenge: the framework suggests the bottleneck is migrating from the cognitive layer to the actuation layer, but current antitrust attention focuses on the cognitive layer.

The intuition driving current scrutiny is understandable. The largest foundation model companies command extraordinary valuations. OpenAI, Anthropic, and their competitors have raised billions of dollars on the promise of capturing the AI transition. The hyperscalers—Microsoft, Google, Amazon, Meta—have invested heavily in both frontier models and the infrastructure to deploy them. The scale of investment and the network effects in AI suggest the potential for durable market power.

But the framework developed in prior sections suggests that cognitive capability commoditizes. Open-weight models compress the gap between frontier and commodity. Training techniques diffuse. Inference costs fall as competition intensifies. If the commoditization thesis is correct, the cognitive layer will not sustain durable market power. The margin pool shifts downstream, to the actuation layer.

The actuation layer exhibits different characteristics. High fixed costs create barriers to entry. Long build times create temporal moats. Regulatory requirements create permission barriers. Physical infrastructure cannot be replicated by training a larger model. These are the conditions that tend toward persistent concentration—and persistent concentration is what antitrust policy exists to address.

Consider the specific bottlenecks.

Power generation and transmission. A small number of firms control grid-scale generation capacity. Transmission constraints determine which generation can reach which loads. The physics of the grid create natural monopoly characteristics in transmission; the capital intensity of generation creates oligopoly in supply. If AI infrastructure requires power at scale, the firms that control power supply capture scarcity rents.

Chip fabrication. A smaller number of firms control advanced semiconductor manufacturing. TSMC alone produces the majority of leading-edge logic chips. The capital requirements for a leading-edge fab exceed $20 billion; the technical expertise required is concentrated in a small number of organizations. If AI capability requires leading-edge silicon, the firms that control fabrication capture scarcity rents.

Datacenter capacity. A handful of hyperscalers control the majority of cloud compute capacity. The fixed costs of datacenter construction, the complexity of operations at scale, and the integration with software platforms create barriers that smaller entrants cannot easily surmount. If AI deployment requires cloud capacity, the firms that control datacenters capture scarcity rents.

Each of these concentrations exists independently of the AI transition. The transition intensifies the concentration by increasing demand for assets that are already scarce. The scarcity rents that accrue to bottleneck owners are rents that cognitive providers cannot bid away—regardless of how good their models become.

The antitrust question is whether this concentration is harmful and, if so, what remedies are appropriate.

The case for concern: persistent concentration in the actuation layer allows bottleneck owners to extract rents from the entire value chain. The cognitive layer, however competitive, cannot capture the full value of its capabilities if actuation owners extract margin through access pricing. The dynamic resembles the relationship between content creators and platform owners in the internet era: creators compete vigorously while platforms capture disproportionate returns.

The case against concern: concentration in the actuation layer may reflect efficient scale rather than anticompetitive conduct. Building grid-scale power generation requires capital concentration. Operating leading-edge fabs requires technical concentration. Running hyperscale datacenters requires operational concentration. If the concentration reflects real economies of scale and scope, breaking it up would reduce efficiency without reducing prices.

The resolution depends on whether the concentrated positions are contestable. If entry is possible—if new generation can be built, new fabs can be constructed, new datacenters can be deployed—then concentration may be transitory. The rents attract entry; entry dissipates the rents; the market equilibrates. If entry is blocked—by permitting constraints, by technological moats, by regulatory capture—then concentration persists and rents accumulate.

The policy implication: antitrust scrutiny should extend to the actuation layer, not just the cognitive layer. Vertical integration by frontier model companies into power generation, datacenter operation, and chip design may represent efficiency gains—but may also represent foreclosure strategies that extend cognitive-layer advantages into the actuation layer. The default assumption should be skepticism toward vertical integration when the actuation layer exhibits natural monopoly characteristics.


International coordination faces the hardest version of the problem.

The transition is global; governance is national. Compute can be located wherever power is cheap and regulation is permissive. Models can be trained in one jurisdiction and deployed in another. Agents can operate across borders without the frictions that constrain human economic activity. The jurisdictional mismatch creates arbitrage opportunities that no single jurisdiction can close.

The arbitrage has several dimensions.

Regulatory arbitrage. Jurisdictions with permissive liability frameworks attract deployment; jurisdictions with restrictive frameworks lose activity. The arbitrage pressure pushes toward permissive standards—a race to the bottom in which the most favorable jurisdiction sets the effective global standard.

Talent arbitrage. AI development requires specialized talent. Talent concentrates in locations with strong research ecosystems, high compensation, and favorable immigration policies. The United States currently dominates; China competes intensively; the European Union lags. The concentration of talent creates concentration of capability, which creates concentration of economic and strategic advantage.

Infrastructure arbitrage. Compute requires power; power has geographic signatures. Locations with cheap, reliable power attract infrastructure; locations without it lose the race. The geographic concentration creates strategic dependencies.

Data arbitrage. Training requires data; data has privacy implications. Jurisdictions with permissive data access attract training; jurisdictions with restrictive access—notably the European Union under GDPR—may find their data cannot be used for frontier model development.

The coordination mechanisms that exist for other global challenges—trade agreements, tax treaties, environmental accords—are poorly suited to the Factor Prime transition. Trade agreements address flows of goods and services across borders; they do not address computation that occurs in one jurisdiction and produces effects in another without any physical crossing. Tax treaties address income allocation among sovereigns; they do not address value creation by stateless processes that lack jurisdiction of residence.

The likely outcome is fragmentation rather than coordination. The United States, European Union, and China will pursue different regulatory paths, driven by different political economies and different strategic calculations.

The United States will likely favor rapid deployment with limited regulatory constraint. The political economy favors incumbent technology firms; the strategic calculation favors maintaining leadership in AI capability; the legal tradition favors ex post liability over ex ante regulation. Deployment will proceed quickly; harms will be addressed through litigation; adjustment will occur through market mechanisms rather than government direction.

The European Union will likely favor cautious deployment with extensive regulatory constraint. The political economy favors incumbent industries threatened by AI; the strategic calculation favors protecting European firms from American competition; the legal tradition favors ex ante regulation over ex post liability. Deployment will proceed slowly; harms will be prevented through restriction; adjustment will occur through social programs rather than market mechanisms.

China will likely favor state-directed deployment with strategic control. The political economy favors national champions designated by the state; the strategic calculation favors AI capability as an instrument of national power; the legal tradition subordinates private interests to state direction. Deployment will proceed where the state directs; harms will be absorbed by the state apparatus; adjustment will occur through administrative allocation rather than market or social mechanisms.

Each path has consequences. The American path produces rapid capability improvement and deployment, but also rapid displacement and potentially severe harms before liability can catch up. The European path produces slower capability development and deployment, but also more orderly adjustment and fewer severe harms—at the cost of foregone productivity and competitive position. The Chinese path produces strategic concentration of capability, but also political control over deployment that may prevent beneficial applications while enabling harmful ones.

The fragmentation creates strategic risk. If capability concentrates in jurisdictions whose values or interests diverge from others, the concentrated capability becomes a source of leverage. The jurisdiction that hosts the most advanced AI infrastructure can condition access on compliance with its preferences.


The production function is changing faster than institutions can adapt. Liability doctrine requires years of litigation to establish precedent; the capability frontier advances monthly. Fiscal systems require legislative action to reform; revenue erosion proceeds automatically as labor share declines. Competition policy requires enforcement action to address concentration; concentration proceeds through investment and construction that occurs faster than regulatory review.

The adaptation that does occur will be reactive rather than proactive. Harms will manifest before liability frameworks address them. Fiscal stress will emerge before tax systems adapt. Concentration will take hold before competition policy responds. The pattern is not new—institutions always lag transitions—but the speed differential may be larger than in prior transitions.

The implication for positioning is that institutional friction creates opportunity. Where regulation is slow, early movers capture rents before rules constrain them. Where liability is unclear, deployment proceeds until precedent emerges. Where fiscal systems are stressed, capital taxation remains favorable until crisis forces reform. The arbitrage is temporary—institutions eventually adapt—but temporary can mean years or decades, and years or decades of arbitrage opportunity can compound into durable advantage.

The implication for welfare is less favorable. Institutional lag means harms accumulate before remedies arrive. Workers are displaced before adjustment programs scale. Concentration takes hold before competition policy responds. The costs of the transition fall on those least equipped to bear them, because the institutions designed to redistribute those costs adapt more slowly than the production function that generates them.

The transition creates surplus. The distribution of that surplus depends on institutions. The institutions are adapting, but probably not fast enough. The gap between the pace of transition and the pace of adaptation is where both opportunity and harm concentrate.

The Epilogue that follows steps back from investment and policy to consider what the transition means for the human relationship to work, value, and economic coordination. The production function has changed. The institutions will eventually follow. The question that remains is what kind of economy—and what kind of society—emerges on the other side.