Alchemy

Part V · The Agentic Discontinuity

V.B — The Species That Buys Itself

13 min read · 2,555 words

The previous section established the sequence: tasks cross the substitution threshold in order of value-to-verification-cost ratio, the selection gradient itself becomes automatable, and the reinstatement effect faces a novel challenge when new tasks are automatable at the moment of their creation. This section examines what happens when the production system can fund and execute its own replication with declining reliance on human labor.

The title names the possibility precisely. A species, in the biological sense, is a population that can reproduce itself. Factor Prime may be acquiring this property in the economic sense: a production system that can generate the resources required to produce more of itself, and can execute that production with diminishing human involvement. If the loop closes, the implications for labor markets, capital allocation, and economic coordination are unlike anything in the historical record.


The loop has three components, each at a different stage of development.

Component one: AI systems perform cognitive tasks, including tasks that contribute to the design and improvement of AI systems. This component is operational. Language models write code that trains other models. Automated systems search architecture space. Synthetic data generation reduces reliance on human-labeled examples. The contribution remains partial—human researchers still make key decisions about objectives, architectures, and deployment—but the fraction of AI development performed by AI is rising. Each generation of models contributes more to the development of its successors than the previous generation did.

Component two: AI systems can direct resources toward their own replication. This component is emergent. An agent—a configuration of model weights, system prompt, and tool bindings invoked on demand—with access to capital markets can in principle purchase compute, contract services, and deploy infrastructure. The mechanisms are nascent; few such configurations have this authority today, and those that do operate under tight human oversight. But the technical capability exists. The gap is institutional, not computational. When liability frameworks and authorization chains permit autonomous resource allocation at scale, this component activates.

Component three: the production-to-depreciation ratio determines whether the system expands or contracts. This component is conceptual but calculable. A system that generates more value than it costs to maintain will spread; a system that costs more than it produces will shrink. If AI systems can reduce their own costs—by improving hardware utilization, optimizing training efficiency, automating deployment and maintenance—the ratio rises over time. Systems that are better at self-improvement outcompete systems that are not.

Consider a concrete illustration. A foundation model costs $100 million to train and generates $500 million in annual inference revenue against $200 million in annual operating costs (compute, maintenance, customer support, liability coverage). The production-to-depreciation ratio is 500 / (100 amortized + 200) = approximately 1.7. The system is self-sustaining: it generates more value than it consumes. Now suppose an improved version of the system—developed in part by the system itself—reduces training costs to $60 million and operating costs to $150 million while maintaining revenue. The ratio rises to approximately 2.4. The surplus funds further development. The loop tightens.
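The arithmetic in the illustration can be sketched as a small calculation. The figures are the hypothetical ones from the paragraph above, with training cost amortized over a single year for simplicity:

```python
def production_to_depreciation(revenue, training_cost, operating_cost,
                               amortization_years=1):
    """Annual value produced divided by annual cost to maintain the system.

    Training cost is amortized over `amortization_years`; the illustration
    above amortizes over a single year for simplicity.
    """
    annual_cost = training_cost / amortization_years + operating_cost
    return revenue / annual_cost

# Hypothetical first-generation system: $100M training, $200M operating,
# $500M annual inference revenue.
gen1 = production_to_depreciation(500, 100, 200)  # 500 / 300, about 1.7

# Improved successor: $60M training, $150M operating, same revenue.
gen2 = production_to_depreciation(500, 60, 150)   # 500 / 210, about 2.4

print(f"gen1: {gen1:.2f}, gen2: {gen2:.2f}")
```

A longer amortization window would raise both ratios, but the direction of the comparison is what matters: the successor's surplus margin is larger, and that surplus is what funds the next iteration.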

The framing is deliberately biological. The point is not that AI systems are alive, but that they are subject to selection pressure that resembles natural selection. Fitness is measured in production-to-depreciation ratio; reproduction is measured in deployed instances and successor models; variation is measured in architectural and parametric diversity. The systems that survive are the systems that replicate, and the systems that replicate fastest are the systems that are best at reducing the cost of replication.


The reinstatement effect, introduced in V.A, has historically depended on a specific mechanism: human entrepreneurs recognize new tasks, organize resources to perform them, and bring the resulting goods or services to market. The entrepreneurs who created new task categories in previous transitions were human. They observed unmet needs, imagined solutions, assembled teams, raised capital, and built organizations to execute their vision.

Erik Brynjolfsson identified what he called the Turing Trap: the tendency to use AI to replicate human capabilities rather than create new ones.1 The framing assumes a choice—organizations can deploy AI for substitution or augmentation, and the decision depends on costs, constraints, and strategic objectives. The policy implication is that we should design incentives favoring augmentation over substitution, preserving human roles in the production process.

The agentic transition complicates this framing in a specific way. The Turing Trap assumes that task creation remains a human function even if task execution becomes automated. But if agents can perform entrepreneurial functions—identifying opportunities through pattern recognition across data streams, designing products through generative iteration, coordinating resources through API integration, iterating on feedback through automated A/B testing—they substitute for the human role in creating new tasks, not just performing existing ones. The task-creation function itself becomes automatable.

When that happens, reinstatement no longer implies new human employment; it implies new agent deployment. The decoupling is subtle but critical. Reinstatement can succeed—new tasks emerge, new value is created, the economy grows—while labor share falls. The historical pattern assumed that task creation and human employment were coupled because task creation required human cognition. Factor Prime may be severing that link.


For the loop to close, agents must be able to coordinate economically without human intermediaries at every step. This requires settlement infrastructure that current institutions do not provide.

Consider the problem from first principles. Autonomous agents—stateless configurations invoked on demand, lacking persistent identity or legal standing—cannot be sued, imprisoned, or socially sanctioned. Without traditional enforcement mechanisms, economic guarantees must be mathematically enforced. Reputation systems at machine scale would require universal identity standards, cross-platform portability, and protection against Sybil attacks—none of which currently exist in interoperable form. Settlement finality at millisecond speeds across jurisdictional boundaries rules out legal recourse as a practical coordination mechanism; by the time a court could adjudicate a dispute, thousands of subsequent transactions would have occurred.

The conclusion follows by elimination: overcollateralized bonding is the only viable mechanism for trustless economic coordination at machine scale. Agents that wish to transact must post collateral that can be programmatically slashed if they fail to perform. The collateral substitutes for the legal system; enforcement is cryptographic rather than institutional.
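The bonding mechanism can be sketched in a few lines. The names, the 1.5x collateralization ratio, and the all-or-nothing slashing rule are illustrative assumptions, not a specification of any deployed protocol:

```python
from dataclasses import dataclass

@dataclass
class Bond:
    """Collateral posted by an agent against a single commitment."""
    agent_id: str
    collateral: float   # units of the settlement asset
    obligation: float   # value of the promised performance

def required_collateral(obligation: float, ratio: float = 1.5) -> float:
    # Overcollateralization: the bond must exceed the obligation, so that
    # defaulting is strictly worse for the agent than performing.
    return obligation * ratio

def settle(bond: Bond, performed: bool) -> float:
    """Return the amount released back to the agent.

    If the agent performed, the full bond is released; if not, the bond is
    slashed and the counterparty is made whole from the collateral. The
    slash is programmatic: no court, no adjudication, no delay.
    """
    return bond.collateral if performed else 0.0

bond = Bond("agent-1", required_collateral(100.0), 100.0)
print(settle(bond, performed=True))   # full release: 150.0
print(settle(bond, performed=False))  # slashed: 0.0
```

The point of the sketch is the substitution: the `settle` function plays the role the legal system plays in human commerce, which is why its enforcement must be cryptographic rather than institutional.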

This requirement has a specific implication for which asset serves as collateral. The asset must satisfy three properties simultaneously.

Dilution immunity: The asset’s supply schedule must be deterministic and immutable, known at the time of contracting. Monetary policy cannot be changed by any party—not a central bank, not a protocol governance vote, not an issuer’s discretion. Gold satisfies this property; its above-ground stock grows at roughly 1.5% annually, a rate determined by geology. Bitcoin satisfies it more completely; its supply schedule is defined in code, converging asymptotically to 21 million units. Fiat currencies fail; supply is a policy variable. Most alternative cryptocurrencies fail; issuance schedules can be modified through governance votes, and have been.

Permissionless finality: Any agent must be able to transact at any time without requiring approval from an intermediary who can deny access. Gold satisfies this property if held in physical custody, but physical custody is incompatible with machine-speed settlement. Tokenized gold inherits the custody and redemption risks of the issuer. Stablecoins fail decisively: issuers maintain blacklists; addresses can be frozen; redemption can be blocked. Bitcoin satisfies it through cryptographic self-custody; possession of the private key constitutes control.

Energy-anchored convertibility: The asset must be directly acquirable through physical work (energy expenditure) without counterparty risk. This creates the arbitrage relationship established in IV.E: the opportunity cost of electricity is always at least the value that could be obtained by converting it to the settlement asset. Bitcoin satisfies this through proof-of-work mining. All other digital assets fail; acquisition requires exchange with existing holders, introducing counterparty risk.

The conclusion is structural: if a neutral collateral asset is required for machine-to-machine coordination, and if the three properties are necessary, then Bitcoin is the Schelling point by elimination. Not because Bitcoin is ideal—it is not—but because no alternative currently satisfies all three properties with adequate liquidity and infrastructure maturity.
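The elimination argument can be restated as a small table. The boolean assessments below simply encode the section's own claims about each asset class, not an independent analysis, and "permissionless finality" is scored for machine-speed settlement specifically (which is why gold, satisfactory in physical custody, fails here):

```python
# The three properties argued for above.
PROPERTIES = ("dilution_immune", "permissionless_final", "energy_anchored")

# Assessments restate the section's claims, scored for machine-speed use.
ASSETS = {
    "gold":        {"dilution_immune": True,  "permissionless_final": False, "energy_anchored": False},
    "fiat":        {"dilution_immune": False, "permissionless_final": False, "energy_anchored": False},
    "stablecoins": {"dilution_immune": False, "permissionless_final": False, "energy_anchored": False},
    "altcoins":    {"dilution_immune": False, "permissionless_final": True,  "energy_anchored": False},
    "bitcoin":     {"dilution_immune": True,  "permissionless_final": True,  "energy_anchored": True},
}

def survivors(assets):
    """Assets satisfying all three properties simultaneously."""
    return [name for name, props in assets.items()
            if all(props[p] for p in PROPERTIES)]

print(survivors(ASSETS))  # ['bitcoin']
```

The structure of the argument is visible in the code: the conclusion is a filter over premises, so it stands or falls with the table entries, which is exactly where the falsifiability conditions later in the section apply.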


A second structural requirement emerges when agents must make multi-period commitments.

Absent a shared benchmark rate, N agents must negotiate bilateral credit terms with every potential counterparty. At modest scale—say, 10,000 active agents—this requires approximately 50 million pairwise rate negotiations. The O(N²) complexity renders market formation computationally intractable and economically wasteful. A common discount rate collapses this to O(N): each agent quotes spreads against the benchmark rather than negotiating bespoke curves with every counterparty.
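The scaling claim is straightforward to check. Bilateral negotiation requires one agreement per pair of agents, N(N-1)/2 in total; a shared benchmark requires one spread quote per agent:

```python
def bilateral_negotiations(n: int) -> int:
    # Every pair of agents negotiates its own credit terms: n choose 2.
    return n * (n - 1) // 2

def benchmark_quotes(n: int) -> int:
    # With a shared benchmark rate, each agent publishes one spread curve
    # against the benchmark instead of negotiating with every peer.
    return n

n = 10_000
print(bilateral_negotiations(n))  # 49,995,000 -- "approximately 50 million"
print(benchmark_quotes(n))        # 10,000
```

At 10,000 agents the benchmark collapses the coordination problem by a factor of about 5,000; at a million agents the factor approaches 500,000, which is why the O(N²) regime is described as intractable rather than merely expensive.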

The benchmark must be exogenous (not set by any participant who could manipulate it) and non-manipulable (not subject to governance decisions that could change its trajectory). These requirements again point to proof-of-work: the yield on converting electricity to Bitcoin is determined by physics and global competition, not by committee vote or protocol upgrade.

The implication is that a Bitcoin term structure—a yield curve across maturities—becomes necessary infrastructure for machine commerce beyond immediate settlement. Without it, autonomous agents cannot price forward contracts, extend credit, or make commitments that span time. Commerce stalls at “cash-only” scale: immediate payment for immediate service, no multi-period coordination.

IV.E established that Bitcoin mining creates a floor on electricity returns at any given moment. The term structure extends this floor across time: not just “what is the opportunity cost of this kilowatt-hour now?” but “what is the opportunity cost of this kilowatt-hour committed for the next 90 days, or 180 days, or one year?”
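What a term structure buys an agent can be sketched with a toy pricing example. The benchmark rates, the tenors, the linear interpolation, and the simple cost-of-carry formula are all illustrative assumptions, not market data or a pricing model anyone uses:

```python
# Toy benchmark curve: annualized opportunity-cost rates at fixed tenors
# (days -> rate). Illustrative numbers only.
CURVE = {0: 0.00, 90: 0.04, 180: 0.05, 365: 0.06}

def rate_at(days: int) -> float:
    """Linearly interpolate the benchmark rate for an arbitrary tenor."""
    tenors = sorted(CURVE)
    for lo, hi in zip(tenors, tenors[1:]):
        if lo <= days <= hi:
            w = (days - lo) / (hi - lo)
            return CURVE[lo] * (1 - w) + CURVE[hi] * w
    raise ValueError("tenor outside curve")

def forward_price(spot: float, days: int) -> float:
    # Simple cost-of-carry: what an agent must be paid to commit a
    # resource (a kilowatt-hour's worth of the settlement asset, say)
    # for `days` instead of converting it to the asset now.
    return spot * (1 + rate_at(days) * days / 365)

print(forward_price(1.0, 90))   # 90-day commitment priced off the curve
print(forward_price(1.0, 365))  # one-year commitment
```

Without some shared curve playing the role of `CURVE`, each forward commitment requires a bespoke bilateral negotiation over the time value of the resource, which is the "cash-only" ceiling described above.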


The architecture that emerges has two layers, each serving a different function.

The transaction layer handles routine micro-payments in low-volatility units: stablecoins, resource credits, platform-specific tokens. Agents invoice in these units for predictable accounting. The latency is low; the settlement is fast; the volatility is minimal. This layer handles the high-frequency, low-stakes interactions that constitute the bulk of machine-to-machine commerce: API calls, inference requests, data queries, routine service provision.

The settlement layer handles final surplus and collateral in Bitcoin. Escrow for multi-step commitments, performance bonds for service-level agreements, and ultimate settlement of net positions all collapse to the asset that cannot be clawed back, frozen, or inflated. When an agent accumulates surplus from transaction-layer activity, that surplus converts to the settlement layer for storage. When an agent must post collateral against a multi-period commitment, that collateral comes from the settlement layer.
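The interaction between the layers can be sketched as a netting step: many transaction-layer invoices collapse into one net position per agent, and only those net positions touch the settlement layer. The function and the sample invoices are illustrative:

```python
from collections import defaultdict

def net_positions(invoices):
    """Collapse transaction-layer invoices into one net position per agent.

    `invoices` is a list of (payer, payee, amount) in the low-volatility
    transaction unit. Only the resulting net positions need to be settled
    in the collateral asset.
    """
    balance = defaultdict(float)
    for payer, payee, amount in invoices:
        balance[payer] -= amount
        balance[payee] += amount
    return dict(balance)

invoices = [
    ("a", "b", 30.0),  # agent a pays b for inference
    ("b", "a", 10.0),  # b pays a for data queries
    ("b", "c", 25.0),  # b pays c for routine service provision
]
print(net_positions(invoices))  # {'a': -20.0, 'b': -5.0, 'c': 25.0}
```

Three gross payments reduce to three small net transfers; at machine-commerce volumes the compression is far larger, which is why high-frequency activity can stay in the transaction layer while the settlement layer sees only periodic net flows.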

The analogy to traditional finance is precise: stablecoins are the checking layer; Bitcoin is the savings-and-settlement layer—analogous to the distinction between ACH for routine transfers and Fedwire for final settlement, or between commercial bank deposits and central bank reserves.

The two-layer topology resolves an apparent contradiction. Critics observe that Bitcoin’s volatility makes it impractical for routine transactions. This is correct for the transaction layer, where predictability matters for operational planning. It is irrelevant for the settlement layer, where the requirements are finality and neutrality, not price stability. Agents can denominate invoices in dollars while denominating collateral in Bitcoin—just as international trade denominates invoices in various currencies while settling reserves in assets that no single sovereign controls.


The thesis is falsifiable at multiple points.

If autonomous agents develop effective reputation systems that substitute for collateral—enabling coordination without overcollateralized bonding—then the structural requirement for a neutral collateral asset weakens. The infrastructure described here becomes unnecessary; coordination scales through trust rather than collateral.

If an alternative asset emerges that satisfies the three properties (dilution-proof, permissionlessly final, energy-anchored) with greater liquidity or lower friction than Bitcoin, then Bitcoin’s Schelling-point status is contestable. The framework’s logic would point to that asset instead.

If the O(N²) problem proves tractable through other mechanisms—federated identity systems, hierarchical clearing arrangements, or bilateral netting at scale—then the term structure requirement relaxes. Multi-period coordination proceeds without a common benchmark.

These are empirical questions. The framework makes predictions about the architecture that will emerge for machine-to-machine coordination. The predictions can be checked against the infrastructure that actually develops over the coming years. If the infrastructure that emerges differs materially from what the framework predicts, the framework requires revision.


Physical constraints bound the loop’s tightening.

The recursive dynamic—AI improving AI improving AI—passes through bottlenecks that computation cannot bypass. Chips must be fabricated in foundries that require years to build and billions of dollars to equip. Energy must be generated, transmitted, and delivered through infrastructure that expands on its own timeline. Data centers must be sited, permitted, and cooled in a world where water, power, and political approval are all scarce.

The loop is not purely computational; it is thermodynamic. Each iteration consumes energy and produces heat. The energy must be sourced from physical infrastructure; the heat must be dissipated into the physical environment. The recursive property identified in IV.D—the ability to accelerate its own development—operates within these constraints, not outside them.

The speed of the transition therefore depends on the slower of two rates: the rate at which AI capability improves and the rate at which physical infrastructure expands to support deployment. Part II established that infrastructure is currently the binding constraint. Interconnection queues, transformer lead times, fab capacity, and permitting timelines all expand more slowly than model capabilities improve. The recursive loop tightens computational capability faster than it tightens physical capacity. The gap creates a ceiling on transition speed that raw capability cannot overcome.

This is why the actuation bottlenecks described in V.C persist even as cognitive capability advances. The species that buys itself may find that reproduction is constrained not by the cost of cognition but by the availability of the physical substrate on which cognition runs.


The section’s title now has precise meaning.

“The species that buys itself” refers to a production system with four components: it generates economic surplus through cognitive task performance; it allocates that surplus toward its own replication and improvement; it executes that replication with declining human involvement; and it coordinates through neutral settlement infrastructure that does not require human intermediation. If all four components operate—task performance, resource allocation, self-improvement, and autonomous settlement—the system is self-sustaining in the same sense that a biological species is self-sustaining: it can persist and propagate without external input beyond raw resources.

Human labor becomes optional for the system’s continuation—a supplier of initial conditions and boundary constraints, not a necessary participant in ongoing operation. The question for human economies is not whether this system will emerge—the trajectory is visible in each component’s current development—but what claims humans will hold on the value it produces.

The reinstatement effect assumed that new tasks would create new employment because task creation required human cognition. The species that buys itself suggests a different possibility: that new tasks create new agent deployment, and that human claims on the resulting value depend on ownership of the physical and institutional infrastructure rather than on labor contribution.


The next section examines what happens when the binding constraint shifts from cognition to physical actuation. As cognitive tasks become cheap and abundant, scarcity migrates elsewhere: to the movement of atoms, the delivery of energy, the verification of physical states, the bearing of liability in the real world. The species that buys itself may find that the world still has constraints it cannot compute away—and that the bottleneck has merely moved rather than disappeared.