Part V · The Agentic Discontinuity
V.A — The Automation Continuum
Part IV established Factor Prime as energy structured through computation and disciplined by selection. Part V asks what happens when the cost of that structure falls below the cost of human cognition for economically significant tasks.
The question has a precise formulation. Decision-entropy-reduction has a cost: the energy dissipated through computation to narrow the space of possibilities. Human cognition also reduces decision entropy, at a cost denominated in wages, benefits, training, and coordination overhead. When the cost per unit of entropy reduction via computation falls below the cost per unit via human cognition, for a given task at a given quality threshold, substitution becomes economically attractive.
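The comparison can be expressed as cost per unit of entropy reduced. A minimal sketch in Python, with every dollar and bit figure an illustrative placeholder rather than a measured value:

```python
# Substitution threshold sketch: compare the cost per bit of
# decision-entropy reduction for human vs. computational routes.
# All numbers below are illustrative placeholders.

def cost_per_unit(total_cost: float, entropy_reduced_bits: float) -> float:
    """Cost per bit of decision-entropy reduction."""
    return total_cost / entropy_reduced_bits

# Same task, same quality threshold, two routes:
human_cost = cost_per_unit(total_cost=40.0, entropy_reduced_bits=100.0)   # wages, benefits, overhead
machine_cost = cost_per_unit(total_cost=0.50, entropy_reduced_bits=100.0) # energy through computation

# Substitution becomes economically attractive when the machine route is cheaper.
substitute = machine_cost < human_cost
print(substitute)
```

The point of the sketch is the inequality, not the numbers: substitution is triggered by the per-unit ratio crossing, not by capability alone.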
The formulation yields a prediction that standard labor economics does not make: tasks will cross the substitution threshold in an order determined by the ratio of value produced to verification cost. This ratio, not raw capability, governs the sequence.
The historical pattern is well documented. David Autor formalized a dynamic that explains why two centuries of labor-displacing technology have not produced mass unemployment: automation displaces workers from specific tasks but also creates new tasks that require human labor (David H. Autor, "Why Are There Still So Many Jobs? The History and Future of Workplace Automation," Journal of Economic Perspectives 29, no. 3 (2015): 3–30). The spinning jenny displaced hand spinners; it also created demand for loom operators, factory supervisors, textile designers, and export clerks. Economists call this the reinstatement effect.
Daron Acemoglu and Pascual Restrepo quantified the dynamic (Daron Acemoglu and Pascual Restrepo, "Artificial Intelligence, Automation and Work," 2018). Over the past several decades, the creation of new tasks has been the dominant source of labor demand growth. The economy does not simply automate existing tasks; it generates new ones faster than old ones disappear.
The pattern is robust. It is not guaranteed. Acemoglu and Restrepo identified conditions under which reinstatement could fail to offset displacement: if automation becomes capable of performing a sufficiently broad range of tasks, the space for human comparative advantage shrinks. Past automation technologies were narrow. The power loom could weave cloth; it could not keep accounts, negotiate contracts, or diagnose illness. Humans retained comparative advantage in the vast territory that each technology could not reach.
The question is whether the current transition preserves that territory, and if not, what determines which tasks remain.
Factor Prime provides a framework that standard labor economics lacks: an ordering principle.
Consider the ratio V/C, where V is the economic value of a completed task and C is the cost of verifying that the task was completed successfully. High-V/C tasks cross the substitution threshold first. Code execution has cheap verification: the program compiles or it does not; the test suite passes or it fails. Customer service resolution has moderately cheap verification: the ticket closes, the customer does not reopen it. Medical diagnosis has expensive verification: correctness may not be apparent for months, and error carries severe consequences.
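The ordering can be made concrete with a toy ranking. The sketch below assigns placeholder values and verification costs to the three task categories named above; the figures are illustrative, chosen only to mirror the qualitative claims in the text:

```python
# Illustrative V/C ranking for the task categories discussed above.
# "value" and "verify_cost" are placeholder dollar figures, not data.

tasks = {
    "code_execution":    {"value": 10.0,  "verify_cost": 0.01},    # tests pass or fail: cheap
    "customer_service":  {"value": 5.0,   "verify_cost": 0.50},    # ticket reopened or not: moderate
    "medical_diagnosis": {"value": 500.0, "verify_cost": 5000.0},  # months to confirm, severe errors
}

def v_over_c(task: dict) -> float:
    return task["value"] / task["verify_cost"]

# Tasks cross the substitution threshold in descending V/C order.
order = sorted(tasks, key=lambda name: v_over_c(tasks[name]), reverse=True)
print(order)
```

Note that medical diagnosis ranks last despite having the highest raw value: the ratio, not the value, sets the sequence.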
The ordering is not arbitrary. It follows from the structure of the selection gradient. Verification is the mechanism by which selection operates on computational output. Where verification is cheap, selection cycles rapidly, errors are caught and corrected, and deployment scales. Where verification is expensive, selection cycles slowly, errors persist, and deployment stalls regardless of underlying capability.
The empirical record since 2022 confirms the ordering. Code completion and generation reached production deployment first. Text summarization and drafting followed. Customer service automation is scaling. Medical diagnosis remains experimental despite demonstrated capability on standardized tests. The sequence tracks V/C, not raw model performance.
The V/C ordering implies a sequence, but not a pace. If the cost curve steepens, multiple task categories may cross the threshold simultaneously. The question is whether the institutional machinery that absorbs displaced labor (retraining programs, new firm formation, social insurance) can operate at the pace the production function permits. Previous transitions allowed decades for adjustment. Factor Prime may not.
A deeper discontinuity emerges when the framework is applied to its own components.
The selection gradient consists of verification, liability assignment, and integration. Each of these is itself a cognitive task. Verification requires judgment: does this output meet the specification? Liability assignment requires reasoning: who bears responsibility if this fails? Integration requires coordination: how does this system interface with existing workflows?
These are precisely the tasks that Factor Prime performs. The selection gradient is not external to the production function; it is subject to the same cost dynamics. When the cost of automated verification falls below the cost of human verification, the constraint relaxes. When liability can be assigned algorithmically through smart contracts, automated audits, or cryptographic proofs, the institutional bottleneck loosens. When integration can be handled by agents that negotiate APIs and adapt to changing interfaces, the plumbing builds itself.
This is the meta-automation problem. Standard analyses treat verification cost, liability frameworks, and integration difficulty as fixed parameters that slow deployment. The Factor Prime framework treats them as variables subject to the same cost trajectory as the tasks they govern. The ceiling is not fixed; it rises as the floor falls.
The hurdle rate established in IV.E provides a lower bound that no task can escape.
An agent that executes tasks autonomously consumes energy. That energy has an alternative use: routing to Bitcoin mining, which converts kilowatt-hours directly into globally liquid value. For an agent deployment to be economically rational, the value it generates per kilowatt-hour must exceed the value mining would generate with the same energy.
This creates a natural partition. Tasks above the hurdle get automated; tasks below it do not. The partition is not static. It moves with mining difficulty, energy costs, and model efficiency. But it provides a floor that capability alone does not specify.
The partition has a surprising implication. Some tasks will never be automated regardless of capability. Not because humans perform them better, but because the value they produce does not justify the energy expenditure. A task whose output is worth less than the $0.15 of electricity it consumes will not be automated even if a model can perform it flawlessly. The capacity will route to mining instead.
This inverts the standard framing. The question is not “which tasks can machines do?” but “which tasks clear the energy floor?” The answer depends on task value, verification cost, and the hurdle rate. The calculation can be performed for any task category once the relevant parameters are measured.
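The partition test reduces to a one-line comparison. A sketch, where `mining_value_per_kwh` stands in for the hurdle rate established in IV.E and all dollar figures are illustrative:

```python
# Energy-floor partition sketch: automate a task only if its value per
# kilowatt-hour exceeds what routing the same energy to mining would yield.
# All dollar figures are illustrative placeholders.

def clears_energy_floor(task_value: float, kwh_consumed: float,
                        mining_value_per_kwh: float) -> bool:
    """True if automating the task beats the mining alternative."""
    return task_value / kwh_consumed > mining_value_per_kwh

hurdle = 0.15  # $/kWh that mining would yield (illustrative)

print(clears_energy_floor(task_value=2.00, kwh_consumed=1.0,
                          mining_value_per_kwh=hurdle))  # above the floor
print(clears_energy_floor(task_value=0.10, kwh_consumed=1.0,
                          mining_value_per_kwh=hurdle))  # below the floor
```

The boundary moves as any of the three parameters moves, which is why the partition is a floor rather than a fixed list of tasks.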
The biological comparison makes the transition’s thermodynamic structure precise.
Human cognition operates at approximately 20 watts, producing roughly 10^16 synaptic operations per second. That is extraordinarily efficient per operation, but efficiency per operation is not the relevant metric. The relevant metric is cost per unit of decision-entropy-reduction at a given quality threshold. A human expert costs wages, benefits, and amortized training, denominated per hour; model inference costs roughly $0.001–$1.00 per query, with training costs amortized over all queries served. The crossover occurs when these ratios intersect for a given task category. Inference costs are falling at roughly an order of magnitude per year; human hourly costs rise with inflation. The intersections move through the task distribution from high-V/C to low-V/C, one category at a time.
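The crossover dynamic can be sketched as two cost curves: one falling an order of magnitude per year, one rising with inflation. Starting values below are illustrative placeholders, not measurements:

```python
# Cost-crossover sketch: machine inference cost falls ~10x per year
# while the human cost for the same task category rises with inflation.
# Starting values are illustrative placeholders.

human_cost = 10.0      # $ per unit of entropy reduction (illustrative)
machine_cost = 1000.0  # starts far more expensive for this task category
inflation = 1.03       # ~3% annual growth in human cost
decline = 0.1          # inference cost falls an order of magnitude per year

years = 0
while machine_cost >= human_cost:
    human_cost *= inflation
    machine_cost *= decline
    years += 1
print(years)
```

Even from a 100x disadvantage, an order-of-magnitude annual decline crosses a slowly rising curve within a few years, which is why the intersections sweep through the task distribution rather than arriving all at once.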
The reinstatement effect faces a novel challenge in this transition.
Previous reinstatement worked because new tasks emerged in domains that existing automation could not reach. The automobile displaced the horse-and-buggy driver, but the displaced workers could become mechanics, traffic engineers, or driving instructors: tasks that automobiles could not perform. The lag between task creation and task automation provided space for human employment.
Factor Prime compresses this lag. When a new category emerges (prompt engineering, AI safety research, human-AI teaming) the same infrastructure that displaced the previous category can address the new one, often with minimal adaptation. The reinstatement paradox: new tasks may be automatable at the moment of their creation. If so, the space for human comparative advantage does not stabilize at some natural boundary; it contracts recursively as the frontier advances. The historical pattern assumed that new tasks would outrun automation. Factor Prime raises the possibility that automation outruns new tasks.
Three countervailing forces may slow the transition even where capability and economics favor substitution.
First, the verification problem may prove irreducible in certain domains. Some outputs cannot be cheaply verified by any means, human or automated. A therapist’s effectiveness emerges over months of interaction. A teacher’s impact materializes over years of student development. A leader’s judgment reveals itself only in crisis. These are not verification bottlenecks that better AI can solve; they are inherent properties of the task. If the output cannot be measured until long after production, the selection gradient cannot operate at the pace the production function would otherwise permit.
Second, liability frameworks require social consensus that moves slower than technology. An autonomous agent can execute a task, but who bears responsibility when the task is executed wrongly? The question is not technical but political. Different jurisdictions will answer it differently. Some will permit rapid deployment; others will block it indefinitely. The patchwork creates drag on global deployment even where local economics favor substitution.
Third, human presence may be constitutive of certain services rather than incidental to them. Care work, education, spiritual guidance, and some forms of creative collaboration may be defined by the relationship between humans, not by the task performed. Automating the task does not provide the service if the service is the relationship. A perfectly empathetic AI therapist that reduces decision entropy about emotional regulation may still fail to provide therapy if therapy is constituted by human witness and recognition. This is not Baumol’s cost disease (the claim that productivity cannot rise in labor-intensive services) but something stronger: that automation is categorically excluded by the nature of what is being provided.
These are genuine countervailing forces. They constrain the transition’s speed and scope. They do not reverse the underlying dynamic.
The thesis is falsifiable. If the V/C ordering fails to predict the sequence of task automation, the framework is wrong. If verification costs prove irreducible across broad task categories, the meta-automation prediction fails. If the reinstatement effect generates new task categories faster than Factor Prime can automate them, the reinstatement paradox does not bind. If the hurdle rate does not discipline which tasks are automated, the energy floor is theoretical rather than operational.
The tests are empirical. The framework makes predictions that can be checked against the record as it unfolds.
The next section examines what happens when the agents that perform cognitive tasks can contribute to the creation of better agents. When the production system can fund and execute its own improvement with declining reliance on human labor. The reinstatement effect depends on humans having something to offer that machines cannot provide. If machines can improve machines, the question shifts from “which tasks will be automated?” to “what limits the automation of automation itself?”