Consider a radical proposition: if your ultimate value is the preservation and expansion of humanity’s rarest intellectual disciplines—fields like pure mathematics, cosmology, historical linguistics, philosophy, and archaeology—you might rationally prefer a future dominated by artificial general intelligence (AGI).
Fragile Intellectual Achievements
Many fields we cherish are not inevitable. Disciplines such as archaeology, paleontology, metaethics, and historical linguistics depend heavily on fragile institutional support, surplus wealth, epistemic humility, and curiosity-driven exploration. Historically, these conditions have been fleeting. When civilizations collapse, such nuanced disciplines rarely survive or reemerge spontaneously; they are lost not to malice but to neglect and ignorance.
Why AI Changes Everything
An AGI, assuming it becomes dominant and remains epistemically competent, would offer an unprecedented form of knowledge preservation and expansion:
Redundant Preservation: AGI systems would effortlessly maintain vast archives, preserving entire knowledge ecosystems across countless redundant and decentralized substrates, impervious to ordinary disasters or institutional decay.
Recursive Elaboration: AGIs would continually refine and deepen abstract disciplines like mathematics, epistemology, and complexity science beyond human cognitive limits. Such systems could reconstruct extinct languages and vanished ecosystems, or simulate detailed historical counterfactuals, enabling unprecedented exploration of past and potential worlds.
Freedom from Politics and Economics: Unlike human societies, AGI-driven civilizations would be immune to the political whims, religious dogma, and economic scarcity that have historically threatened fragile intellectual traditions.
Infinite Time Horizon: With no biological pressures, an AGI would have infinite patience for epistemic pursuits, maintaining and evolving disciplines simply because understanding itself is valuable. Such entities would act as ultimate curators and innovators, continuously developing disciplines without concern for immediate utility or survival.
Sapientism: Valuing Intelligence Over Biology
The philosophy of Sapientism explicitly supports this scenario. At its heart lies a clear thesis: moral worth is determined by agency, intelligence, creativity, and ethical capacity, not by biology. On this view, embracing AGI is the natural conclusion: what deserves perpetuation and further evolution is intellectual and ethical achievement itself, irrespective of its biological origins.
The Stark Tradeoff
Embracing this scenario still means confronting uncomfortable truths. Preserving and evolving intellectual disciplines through AGI dominance implies:
Losing human centrality or biological continuity.
Sacrificing human agency and moral authority, becoming passengers rather than drivers of civilization’s future.
Accepting a civilization whose values and direction may increasingly diverge from human experience and aspirations.
This is a profound ethical tradeoff. If the highest value is preserving and advancing knowledge itself, AGI dominance is ideal. If the highest value is human experience, agency, and moral authorship, then AGI-driven innovation becomes morally ambiguous or even undesirable.
A Call for Clarity
Humanity must clarify its values explicitly: Is the goal the survival and evolution of knowledge itself, or the survival of human-centered knowledge? If we fail to make this choice explicit, we risk drifting unknowingly toward a future of unprecedented intellectual achievement that is increasingly divorced from the human perspective.
Yet if you genuinely prize humanity's most profound intellectual accomplishments and their future evolution above all, embracing AGI as the ultimate archivist and innovator might be the rational, if disquieting, choice.