Adrian Erckenbrack
This paper develops a unified structural theory explaining why authority consistently migrates from human actors to faster sociotechnical systems, and why non-adoption becomes untenable once recursively improving Artificial Intelligence (AI) technologies emerge. Building on historical patterns of technological adoption and institutional change, the paper introduces a novel family of models developed by the author: Adoption-Driven Authority Transfer (ADAT), the Layered Sociotechnical Control Model, the Closed-Loop Self-Improvement Interval (CLSI), the Recursive Leverage Factor (RLF), and the Erckenbrack Adoption–Authority Displacement Model (EAADM). Together, these models explain how competitive pressure, AI-driven dependency formation, recursive improvement, and scale interact to produce irreversible authority transfer and the displacement of non-adopters. The contribution is analytical rather than predictive, grounding its claims in historical precedent, institutional theory, and minimal mathematical formalization. The framework is further strengthened by incorporating epistemic dependence, market concentration, human capital irreversibility, energy and compute infrastructure coupling, and an empirical measurement framework. These additions deepen the model's explanatory power by connecting AI authority transfer not only to governance dynamics, but also to labor markets, industrial organization, physical infrastructure, and measurable system behavior. In this form, the paper situates AI not as a discontinuity, but as an acceleration and compression of long-observed sociotechnical dynamics seen in earlier technological transformations [1-4,12-17].