Part IV · The Derivation
IV.D — The Recursive Property
IV.A established that computation can improve itself. IV.B showed that selection filters thermodynamic depth into economic value. IV.C tested Factor Prime against Perez’s criteria. But none of these sections addressed the question that matters most for the trajectory of the transition: how fast can the acceleration go, and what determines its limits?
The recursive property—the capacity of computation to reduce the cost of producing additional computation—is not categorically new. Previous key inputs also participated in feedback loops. Steam power improved coal extraction via pumps, rail, and mechanized hauling. Electricity improved its own generation and delivery via electrically driven controls that optimized grids. Oil improved oil production via petrochemicals and better drilling rigs. The loops existed. Humans closed them.
What differs is who—or what—closes the loop.
Paul Romer, the economist whose work on endogenous growth theory earned him the Nobel Prize in 2018, formalized the idea that knowledge differs from ordinary goods (Paul M. Romer, “Endogenous Technological Change,” Journal of Political Economy 98, no. 5 [1990]: S71–S102). Physical goods are rivalrous: if one firm uses a machine, another cannot use it at the same time. Knowledge is non-rivalrous: if one firm uses a formula, another can use it too, without diminishing the first firm’s use. This non-rivalry means that the returns to knowledge can be increasing rather than diminishing. A new idea can be applied across an entire economy; the marginal cost of the millionth application is no higher than that of the first.
Romer’s framework explains why economies that invest in research can grow faster than economies that do not. But it treats the production of knowledge as separate from the production of goods. There is an R&D sector that produces ideas, and a goods sector that uses them. The two are linked—investment in R&D raises productivity in goods production—but they remain distinct, and human researchers occupy both.
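In Romer’s formulation, the research sector’s output is proportional both to the human capital devoted to research and to the existing stock of ideas. A sketch of the key equation, following the 1990 paper’s notation:

$$\dot{A} = \delta \, H_A \, A$$

Here A is the stock of designs, H_A is the human capital employed in research, and δ is a productivity parameter. The linearity in A captures the standing-on-shoulders effect: each idea lowers the cost of the next. But H_A, the human input, remains indispensable, and the research sector cannot produce it.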
Factor Prime collapses this separation. The tools that produce knowledge are themselves computational and increasingly automated. A neural network trained to generate code can write code that trains neural networks. A chip design tool can design the next generation of chip design tools. The output of the process is an input to the process. The question is whether this changes the dynamics of growth—or merely accelerates the same dynamics by a constant factor.
Charles Jones, an economist who has written extensively on the sources of long-run growth, introduced a framework for thinking about precisely this question (Charles I. Jones, “Growth and Ideas,” in Handbook of Economic Growth, ed. Philippe Aghion and Steven N. Durlauf [Elsevier, 2005], 1063–1111). He distinguished between “semi-endogenous” growth, where research productivity depends on the stock of existing knowledge but faces diminishing returns, and “fully endogenous” growth, where the production of ideas can accelerate without bound.
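Jones’s framework nests both cases in a single parameter. Writing S for research effort, a common reduced form (the notation here is a simplification of the handbook chapter’s) is:

$$\dot{A} = \delta \, S^{\lambda} A^{\phi}$$

With φ < 1, each new idea enlarges the stock but lowers the productivity of subsequent research. With φ = 1 and constant effort, the stock grows at a steady exponential rate, Romer’s knife-edge case. With φ > 1, the stock reaches infinity in finite time.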
The distinction is not merely taxonomic. It determines the long-run trajectory.
In semi-endogenous models, the rate of idea production depends on the number of researchers and the existing stock of knowledge, but each additional idea is harder to find than the last. Growth eventually converges to a steady rate determined by population growth or other exogenous factors. The feedback loop exists, but it is damped. The system is stable.
In fully endogenous models, ideas beget ideas without diminishing returns. The production of knowledge can accelerate indefinitely—or at least until some other constraint binds. The feedback loop is undamped. The system can exhibit explosive dynamics.
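A minimal simulation makes the contrast concrete. The sketch below integrates the idea-production equation above for three values of φ with research effort held fixed; the function name and parameter values are illustrative, not calibrated to anything:

```python
def simulate_ideas(phi, delta=0.05, a0=1.0, dt=0.01, t_max=100.0):
    """Integrate dA/dt = delta * A**phi with a forward Euler step.

    phi < 1: growth rate decays toward zero (semi-endogenous, damped).
    phi = 1: steady exponential growth (Romer's knife-edge).
    phi > 1: finite-time blowup (fully endogenous, explosive).
    """
    a, t = a0, 0.0
    while t < t_max:
        a += delta * a**phi * dt
        t += dt
        if a > 1e12:              # treat a runaway stock as a blowup
            return t, float("inf")
    return t_max, a

for phi in (0.5, 1.0, 1.5):
    t_end, a_end = simulate_ideas(phi)
    if a_end == float("inf"):
        print(f"phi={phi}: idea stock explodes at t ~ {t_end:.1f}")
    else:
        print(f"phi={phi}: A({t_end:.0f}) ~ {a_end:.3g}")
```

With φ = 0.5 the stock grows only polynomially and its growth rate decays toward zero; with φ = 1 it compounds at a steady exponential rate; with φ = 1.5 the integration hits the blowup cutoff around t ≈ 40.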
The empirical question is which regime we are in. If Factor Prime production is semi-endogenous, then the current acceleration will eventually level off. The recursion provides a tailwind, but human researchers remain the pacing constraint, and human population growth is slow. If Factor Prime production is closer to fully endogenous over the relevant range, then the acceleration can continue until physical limits—energy, materials, heat dissipation, coordination costs—become binding.
William Nordhaus, the economist who shared the 2018 Nobel with Romer, addressed this question directly in a 2021 paper, “Are We Approaching an Economic Singularity? Information Technology and the Future of Economic Growth” (American Economic Journal: Macroeconomics 13, no. 1 [2021]: 299–332). Nordhaus examined the conditions under which technological progress could accelerate without limit, producing a “singularity” in the growth rate. He identified seven tests that the historical record would need to satisfy for singularity dynamics to be plausible.
His conclusion: the historical record shows no evidence of such acceleration. Growth rates have fluctuated but not systematically increased over the past century. The returns to R&D investment have not risen. The share of the economy devoted to research has grown, but output growth has not accelerated proportionally.
But Nordhaus did not rule out the possibility. The key variable, in his analysis, is whether the automation of innovation itself becomes possible. If machines can contribute to invention, and if the machines that invent can be improved by the machines they help create, then the feedback loop can tighten beyond any historical precedent. The historical record would cease to be a reliable guide.
This is precisely the question Factor Prime raises. The recursion is not new. What may be new is the degree to which computation—rather than human cognition—closes the loop.
Most technological revolutions preserve a human pacing function. The rate of improvement in steam engines depended on how fast human engineers could design, test, and build better engines. The rate of improvement in automobiles depended on how fast human designers and factory workers could iterate on the product. Humans set the tempo, and the tempo was limited by human cognition, human coordination, and human lifespans.
In previous regimes, the feedback loops were human-cognition-limited: an engineer had to observe a problem, design a solution, build a prototype, test it, and iterate. The cycle time was measured in months or years. The bandwidth was constrained by the number of skilled humans available and the speed at which they could think and coordinate.
Factor Prime tightens the computational portions of these loops. Cycle times compress from months to days or hours where computation substitutes for cognition. Parallelism increases: thousands of experiments can run simultaneously. The feedback bandwidth can rise dramatically, even if key decision points still require human oversight and even if physical portions—fab construction, power plant permitting, grid interconnection—remain human-paced.
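How much the aggregate loop can speed up is bounded by the portions that do not compress. An Amdahl’s-law-style sketch makes the bound explicit; the 20 percent human-paced share below is an assumed figure for illustration, not a measurement:

```python
def loop_speedup(fixed_fraction: float, compute_speedup: float) -> float:
    """Effective speedup of one improvement cycle when only the
    computational portion compresses (Amdahl's-law form)."""
    compressible = 1.0 - fixed_fraction
    return 1.0 / (fixed_fraction + compressible / compute_speedup)

# If 20% of cycle time stays human- or construction-paced, even an
# unbounded compute speedup caps the overall loop at 5x.
for s in (10, 100, 1_000_000):
    print(f"compute {s:>9,}x faster -> loop {loop_speedup(0.2, s):.2f}x faster")
```

The asymmetry is the point: driving the computational portion from 10x to 1,000,000x faster moves the whole loop only from about 3.6x to 5x. Past a point, shrinking the human-paced fraction matters more than accelerating the computational one.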
The question is whether the computational portions of the loop are becoming large enough to change the aggregate dynamics. If they are, then the growth regime may be shifting from semi-endogenous toward something closer to fully endogenous—at least over some range, until physical constraints bind.
This framing yields specific empirical tests. The claim that Factor Prime is shifting the growth regime from semi-endogenous toward fully endogenous implies observable consequences:
Test 1: Research productivity. If the recursion is tightening, research productivity—measured as capability gain per researcher-hour or per dollar of R&D spending—should be rising, not falling. The historical pattern shows declining research productivity across most fields. If AI reverses this trend in AI research itself, that would be evidence for regime shift. Falsifier: if research productivity in AI continues to decline at historical rates despite increasing compute, the recursion is not strong enough to matter.
Test 2: Cycle time compression. If computation is substituting for human cognition in the improvement loop, iteration cycles should be compressing. The time between major model generations, the time to adapt architectures to new domains, the time to identify and fix capability gaps—all should be falling faster than historical trends in other fields. Falsifier: if iteration cycles plateau or lengthen, the human-paced portions of the loop remain dominant.
Test 3: Parallelism scaling. If the recursion is computational rather than human-limited, the number of simultaneous experiments and research directions should scale with compute investment, not with researcher headcount. Organizations with more compute should explore more of the design space per unit time. Falsifier: if research output remains proportional to headcount regardless of compute, the recursion is not automating the relevant bottleneck.
Test 4: Physical constraint binding. If the recursion is strong, physical constraints—energy, chips, cooling, interconnect—should bind before cognitive constraints. Investment should flow toward infrastructure rather than toward hiring researchers. Falsifier: if talent remains the binding constraint and infrastructure is abundant, the recursion has not shifted the bottleneck.
Test 5: Acceleration in the growth rate. Nordhaus’s ultimate test. If the recursion is producing fully endogenous dynamics, the growth rate of relevant metrics—capability, efficiency, deployment—should itself be increasing, not merely high. Falsifier: if growth rates are high but stable, the dynamics remain semi-endogenous.
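Test 5 can be operationalized simply: fit a quadratic in time to the log of the capability metric and examine the leading coefficient. A significantly positive coefficient means the growth rate itself is rising; a coefficient near zero is consistent with steady exponential growth. The sketch below uses synthetic series purely for illustration; no real capability data is embedded:

```python
import numpy as np

def acceleration_coefficient(years, log_metric):
    """Quadratic coefficient of a degree-2 fit of log(metric) vs time.
    > 0: growth rate rising (accelerating); ~ 0: steady exponential."""
    return np.polyfit(years, log_metric, deg=2)[0]

# Synthetic examples, not measurements:
t = np.arange(0, 10.0, 0.5)
steady = 0.7 * t                       # constant exponential growth
accelerating = 0.7 * t + 0.05 * t**2   # growth rate itself rising

print("steady:      ", round(acceleration_coefficient(t, steady), 4))
print("accelerating:", round(acceleration_coefficient(t, accelerating), 4))
```

In practice the hard part is choosing the metric, not running the fit.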
The evidence is mixed. Research productivity in AI appears to be rising by some measures—capability per dollar of training compute has improved faster than in most historical fields—but the metrics are contested and the time series is short. Cycle times between major model generations have compressed from years to months. Parallelism has scaled dramatically; frontier labs run thousands of experiments simultaneously. Physical constraints are increasingly cited as binding; infrastructure investment is outpacing researcher hiring at the margin.
But the growth rate of capability has not obviously accelerated. Scaling laws suggest predictable, not accelerating, improvement with compute. The laws could steepen or saturate at any point; they are empirical regularities, not physical necessities. The recursion creates the possibility of acceleration; the evidence does not yet confirm it.
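The canonical reported form (e.g., Kaplan et al., “Scaling Laws for Neural Language Models,” 2020) is a power law in training compute:

$$L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}$$

where L is loss, C is compute, and C_c and α_C are fitted constants, with α_C on the order of 0.05 in that study. Each doubling of compute buys the same multiplicative reduction in loss: smooth and predictable, not accelerating.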
The uncertainty is genuine, and the manuscript does not claim to resolve it. What can be said is this: Factor Prime introduces a structural possibility that previous paradigms did not. If computation can contribute to its own improvement, then the pacing function that has governed all previous technological transitions may shift. The rate of progress would depend not only on human effort but on machine effort, and machine effort can scale in ways that human effort cannot.
Whether it will scale, and how far, remains an open question. Energy must be generated, transmitted, and dissipated. Chips must be fabricated, packaged, and connected. Data centers must be sited, powered, and cooled. Coordination costs rise with scale. Regulatory and social constraints may bind before physical ones do. The recursive property creates the possibility of acceleration; physics, economics, and institutions will determine the outcome.
The next section examines what this structural possibility implies for an existence proof: a case where thermodynamically grounded computation has already created durable economic value.