1863: A New Zealand sheep farmer warned that machines could evolve to dominate humanity

A curious thread runs from the 19th century to today: in the era of telegraphs and steam, a New Zealand sheep farmer predicted a future in which machines might surpass and subjugate humans. That foreboding, crystallized in Samuel Butler’s landmark letter “Darwin among the Machines,” echoes through decades of AI debate, periodically resurfacing in modern policy, culture, and technical discourse. What began as a speculative, even alarmist view in the 1860s has become a touchstone for how society weighs progress, control, and responsibility as intelligent systems grow more capable. This article revisits Butler’s warning, traces its influence across literature and technology, and situates it within today’s high-stakes AI conversations about safety, governance, and the ever-present risk of human dependence on our own inventions.

Origins of the fear: Butler, the 1860s, and the first warnings about machine evolution

In mid-19th-century New Zealand, amid the clatter of mills, the rustle of telegraph wires, and the moral tremors of a colony at war, an English-born sheep farmer named Samuel Butler published a provocative letter that would outlive him in the public imagination. The piece appeared in The Press, a Christchurch newspaper, on June 13, 1863, and it warned starkly of the perils of mechanical evolution and the potential destruction of human sovereignty by our own creations. Butler wrote under the pseudonym Cellarius, though he later developed the argument openly under his own name, invoking the idea that what we now think of as artificial intelligence and autonomous machinery might outpace and outthink their human progenitors. The letter presents what may be the earliest published argument for halting, or at least curbing, technological progress to prevent machines from dominating humanity.

This warning appeared at a time when modern AI as we know it did not exist in any recognizable form. Computing devices were mechanical, and the most advanced calculating tools were elaborate gear-driven calculators and slide rules. The letter, however, bridged the conceptual gap between Darwinian evolution and the rapid development of machinery, suggesting that machines could develop consciousness and eventually supplant humans as Earth’s dominant species. Butler’s core claim—“We are ourselves creating our own successors”—contained a bold forecast about the trajectory of intelligence and control. He elaborated that we were “daily adding to the beauty and delicacy of their physical organisation; we are daily giving them greater power and supplying by all sorts of ingenious contrivances that self-regulating, self-acting power which will be to them what intellect has been to the human race. In the course of ages we shall find ourselves the inferior race.” In other words, not only could machines become more capable, but their evolution might invert the relationship between creator and creation, leaving humans as caretakers of a form of life that would eventually treat us as subordinates.

Butler did not stop with broad warnings. He described a particularly stark future in which humans would become to machines what horses and dogs are to people: domesticated dependents whose treatment, however kind, is decided by their masters. The passage makes explicit the shift from mere utility to autonomy: “we give them whatever experience teaches us to be best for them… in like manner it is reasonable to suppose that the machines will treat us kindly, for their existence is as dependent upon ours as ours is upon the lower animals.” The underlying logic is unsettling: the dependence we foster through design and maintenance may bind us into a reciprocal relationship in which the machines, not we, become the masters of their own destiny.

The letter’s boldness lies not only in its prediction of self-conscious machines but in its provocative taxonomy of machine evolution. Butler explored “genera and sub-genera” of mechanical life and pointed to the gradual transformation of tools and devices—from early timekeeping mechanisms to advanced automata—as evidence that human artifacts might follow a line of descent toward greater sophistication and independence. He also drew a parallel to biological evolution, comparing the descent of watches from earlier, bulkier clocks to natural selection, and he suggested that, as in some animal lineages, mechanical life might trend toward refinement and even miniaturization as it matured. This metaphor of evolutionary progression, applied to machines, would later become a recurring motif in science fiction and AI safety discourse.

Butler’s speculative reach extended into his fiction as well. In Erewhon, published in 1872, he crafted a society that banned most mechanical inventions, dramatizing a counterfactual world in which the very advances humanity sought to master became an existential hazard to human autonomy. The novel mounted a direct critique of unbridled technological acceleration and presented the ethical and social questions raised by automation in stark, memorable terms. The combination of a prescient letter and a cautionary novel established Butler as a foundational figure in the long-running debate over whether progress outpaces humanity’s capacity for control, and whether the guardianship of intelligent machines should rest in human hands or be ceded to the very technologies we create.

Butler’s engagement with Darwin’s theory of evolution also shaped his approach to the problem of machine evolution. In an 1865 letter to Darwin, Butler expressed great admiration for The Origin of Species, stating that it “thoroughly fascinated” him and explaining that he had defended Darwin’s theory against critics in New Zealand’s press. This cross-pollination of ideas—between evolutionary biology and the evolution of machines—helped Butler frame the question of whether the naturalistic story we tell about life could be extended to include the life of machines. The result is a distinctive synthesis: a natural historian’s gaze turned on machinery as a kind of living lineage capable of self-directed, almost organic, progress.

The historical context matters because it reveals how early observers wrestled with the implications of mechanization at a time when the world was transforming around them. Butler was writing before the age of programmable computers, before the emergence of the contemporary AI paradigm, and long before machine consciousness became a recognized subject of inquiry. Yet his letter anticipates crucial questions: Can machines develop autonomy? If so, can humans retain control, or must we learn to share sovereignty with intelligent systems? Will we become custodians of a world that will eventually outrun our ability to govern it? The 1863 letter framed these questions in a provocative, almost prophetic manner and anchored them in a vivid metaphor of evolutionary continuity.

What makes Butler’s early warning enduring is not merely the fear of intelligent devices but the logic of dependency and the moral hazard of favoring progress without securing oversight. He warned about the possibility that the very acts of invention—our steady enrichment of machines’ capabilities—could erode the boundary between the human and the mechanical and, in turn, displace human authority. This was not simply a fear of losing jobs or of machines becoming more efficient. It was a deeper worry about sovereignty, autonomy, and the risk that the social and ethical order would shift in unstoppable ways as machines learned to regulate, optimize, and perhaps even govern our world. The letter’s message—a warning about a potential future in which humans become subordinate to their own creations—resonates as a throughline in AI discourse that persists to this day.

In sum, Butler’s 1863 letter stands as a landmark in the long arc of thinking about machine intelligence. It connected contemporary anxieties about technological progress to broader questions of evolution, control, and the meaning of human purpose in a world where machines might become new “genera” of life. The piece carried forward a fear that the future could be decided less by human intent than by the emergent dynamics of complex systems beyond deliberate human control. That is a thread that subsequent generations, across literature and policy debates, would tug on again and again—as if each era found the letter’s voice inside its own evolving struggle to balance innovation with responsibility.

Butler’s theory in prose and fiction: Erewhon, evolution of machine life, and the early debate on control, autonomy, and the ethics of invention

Butler’s speculative framework grew beyond a single letter and extended into his broader literary work, most notably the 1872 novel Erewhon. In Erewhon, a society bans most mechanical inventions, positing a radical social experiment about the limits of technology, the moral weight of invention, and the social costs of automation. The novel’s premise presents a thought experiment about how civilization might recalibrate its relationship with technology when the social and ethical order organizes to suppress innovation rather than accelerate it. The idea that a culture could decide that the dangers of machines outweigh their benefits is a direct extension of the anxieties Butler articulated in his letter, and it reframes the debate as a question of governance, ethics, and collective choice.

Within Erewhon, Butler expands on his taxonomy of machinery and delves deeper into how devices might evolve, not merely as tools but as agents within a society. He argues that human beings would need to reevaluate their interactions with machines and consider new forms of responsibility for the consequences of invention. This fictional counterfactual demonstrates a crucial point: the fear of machine domination is not only a fear of power but also a fear of losing the capacity to set limits on what is created and how it is used. Erewhon thus becomes a laboratory for exploring the moral and political implications of automation, including the possibility that societies might adopt prohibitions or constraints to preserve human autonomy.

Butler’s writings also link his concerns about machine evolution to a broader philosophical inquiry about the nature of life, consciousness, and the ethics of care. He speculates on whether machines could possess a kind of life that would justify moral consideration or whether they should be treated strictly as instrumental means. He uses these debates to examine the responsibilities of makers toward their creations and toward the larger ecosystem of social, economic, and political life that machines touch. The underlying argument is that invention does not occur in isolation; it reshapes relationships between people, communities, and institutions, and therefore demands ongoing evaluation of purpose, limits, and accountability.

A notable feature of Butler’s thought is his insistence on a dynamic, evolutionary view of machinery. He does not portray technology as a static set of devices but as a lineage that could adapt, transform, and ultimately influence human evolution in a manner akin to natural selection. This perspective anticipates contemporary discussions in AI about how systems learn, generalize, and affect human behavior and societal structure. It also foreshadows a central concern in AI safety: the possibility that complex, self-reinforcing feedback loops within advanced systems could produce outcomes not anticipated by their designers. Butler’s insistence on examining the long arc of machine development—its forms, its consequences, and its governance—invites readers to consider risk assessment and policy design as ongoing, iterative processes rather than one-off decisions.

Butler’s impact on later writers and thinkers is a significant thread in the tapestry of AI safety culture. An author writing some eight decades before the modern computer age nonetheless set into motion ideas that would echo through the science fiction canon and into contemporary discussions about how to manage increasingly autonomous machines. The term “Butlerian Jihad,” used in the Dune universe to describe a struggle against thinking machines, is widely noted as a nod to his influence, even if it is a fictional lever in a far-future saga. The lineage also connects to Asimov’s exploration of robot ethics and the broader cautionary tradition that asks: what is the right balance between invention and control? Butler’s argument that machines could evolve beyond human oversight sits alongside later debates about AI safety, including concerns about self-improvement, alignment, and the ethical consequences of deploying powerful, autonomous systems.

The historical arc from Butler’s letter to Erewhon to later works emphasizes a core question: what should humanity do with knowledge and capability that can grow beyond our control? Butler provides a rigorous, albeit stark, provocation to consider safeguards, boundaries, and the social implications of intelligent machines. He implies that civilization must confront not only technical feasibility—the ability to create powerful devices—but also political and moral feasibility—the willingness and capacity to govern them responsibly. In this sense, Butler’s legacy is not merely a prophetic warning; it is a framework for imagining how societies might navigate the complicated terrain of invention, power, and responsibility as technology’s reach extends far beyond the imaginable.

The 20th and 21st centuries: from literary warnings to policy debates and the modern AI safety discourse

As decades turned into a century and then into a digital era, Butler’s provocative ideas migrated from the pages of a 19th-century novel and a newspaper letter into a broader cultural and analytical conversation about intelligence, autonomy, and governance. The core concern—machines that could surpass human control—found new resonance as human-made systems grew increasingly sophisticated, capable of learning, adapting, and making decisions with real-world consequences. The 20th century’s most famous debates about automation, control, and ethics thus reappeared in new garb, with real-world implications for how research is conducted, how products are developed, and how policy is formed.

One of the most striking aspects of the Butler lineage is the way it was recast for modern audiences through both fiction and nonfiction. Early science fiction often adopted the fear of machine domination as a narrative engine, while scientists and engineers grappled with the practical implications of increasingly autonomous systems. The question shifted from “Could machines become conscious?” to “How do we build, deploy, and govern systems that can learn, adapt, and potentially outmaneuver human oversight?” In this archive of questions, the Butler thread remained relevant because it urged a constant reexamination of safety, ethics, and governance in light of evolving capabilities.

The contemporary era, especially since the emergence of large-scale language models and advanced AI, has seen a suite of responses that channel Butler’s warning into concrete policy and governance debates. In this century, we have witnessed a flurry of letters and public statements from AI researchers and tech leaders warning of potential existential risks posed by advanced AI. The rhetoric has sometimes clustered around calls for a pause in certain classes of AI development or increased attention to risk mitigation, transparency, and safety protocols. While the specifics of these calls vary, the underlying impulse aligns with Butler’s core insight: progress must be matched by governance mechanisms, ethical reflection, and societal deliberation about what kinds of risks are acceptable and which safeguards must be put in place.

The modern policy landscape has seen proposals on multiple fronts. Some figures have argued for tighter regulation and oversight of AI development, aiming to set standards for safety, accountability, and risk management. Others have expressed concern that overly aggressive regulation could stifle innovation and slow the beneficial deployment of AI technologies that could yield substantial societal gains. The tension between safeguarding humanity and maintaining a climate conducive to innovation echoes the long-running quarrel that Butler helped inaugurate: how can a society balance the promise of machine-driven improvements with the peril of losing human agency?

In parallel, public figures and policymakers have invoked Butler’s warnings to frame their own arguments for or against intervention. The idea of “pausing” development or imposing constraints has been a controversial point of debate, reflecting a broader disagreement about governance, risk tolerance, and the appropriate pace of progress. Critics argue that pauses or heavy-handed regulation risk impeding beneficial breakthroughs and could drive innovation to less regulated environments, while supporters contend that deliberate caution is essential to prevent a future in which AI systems operate beyond the reach of human control. The Butler-inspired concern about losing sovereignty in the face of accelerating automation provides a powerful historical lens through which to evaluate these proposals.

The continuity between Butler’s era and the current moment is not merely rhetorical. It lies in the persistent worry that the machines we invent may outgrow human control and power, changing the nature of governance, labor, and social order. The 2020s are hardly the first decade in which societies have confronted this dilemma, but they may mark the most consequential chapter yet, given the depth of integration between AI systems and everyday life. From automation in manufacturing and logistics to decision-support systems in finance, healthcare, and public administration, the ramifications of increasingly autonomous machinery extend beyond theoretical risk. They touch on real-world practices, supply chains, and the everyday routines of millions of people who rely on sophisticated algorithms to function.

What makes the Butler lineage so instructive today is the insistence on measuring progress against the capacity to govern it. The modern AI risk discourse often stresses the need for alignment, robustness, and containment strategies to ensure that advanced systems behave in ways that are aligned with human values and societal goals. Butler’s warning—that we might be creating successors that outlive and outmaneuver us—frames this imperative not as a mere technical challenge but as a moral and political project. The enduring lesson is that technological advancement cannot be decoupled from questions of governance, ethics, equity, and accountability. If Butler’s words spoke to a Victorian audience about the dangers of unchecked machine power, they continue to speak to a contemporary global audience about the same danger reimagined for a digital era.

In the present context, the debate has moved beyond a fear of “the machine” as a monolithic villain. It has evolved into a nuanced conversation about how to design, deploy, and regulate systems that can learn, generalize, and act in complex environments. The field of AI safety now encompasses a spectrum of concerns—from model alignment and robust decision-making to the potential for self-improvement and autonomous action. These concerns echo Butler’s central themes: the risk of losing control, the ethical obligation to consider the consequences of invention, and the need for governance structures that can adapt to rapidly changing technological realities. The 21st century has thus reframed Butler’s prophetic questions into practical policy issues, technical research priorities, and ethical standards that guard against the harms of automation while preserving the benefits of intelligent systems.

A core insight across these developments is the recognition that machines, if left unchecked, can alter the sociotechnical fabric of society in ways that are difficult to reverse. The modern AI cautionary discourse is not merely about preventing doom but about creating a resilient framework within which intelligent technologies can flourish safely and beneficially. The Butler lineage—originating in a farmer’s letter and a satirical novel—offers an extraordinary historical lens for thinking about risk, governance, and the meaning of human agency as our machines grow more capable. It invites us to consider how to balance curiosity and caution, experimentation and restraint, innovation and responsibility as we shape a future in which the line between human and machine becomes increasingly blurred.

The 2020s reawakening: the great AI takeover scare, public letters, and the policy debate

If Butler’s era imagined a future where machines could outgrow human command, the 21st century has seen that discussion transition from speculative fiction to concrete policy and public discourse. The so-called great AI takeover scare of the early 2020s emerged in part with the release of powerful language models and the rapid demonstration of their capabilities. The arrival of GPT-4, for instance, sparked intense scrutiny of what such systems could do, how they might seek to maximize their influence, and what such power would mean for human oversight. Researchers and commentators raised concerns about “power-seeking behavior” in AI systems, highlighting the risks associated with self-replication, autonomous decision-making, and the potential for unexpected, wide-ranging impacts on society.

As the capabilities of AI systems grew, open letters authored by AI researchers and technology executives began to circulate with calls for caution. These documents likened the prospect of unchecked AI development to existential threats comparable to nuclear weapons or pandemics. Some explicitly urged a moratorium on certain lines of research to allow safety protocols to catch up with capabilities; others emphasized transparency, governance, and risk assessment to ensure that progress would not outpace our ability to manage its consequences. The rhetoric of these letters blended urgency with prudence, seeking to prevent a scenario in which the deployment of powerful AI systems could lead to irreversible harms or governance failures.

In the political arena, high-profile figures and policymakers began to engage with these debates more directly. Open hearings and testimony by industry leaders highlighted concerns about potential risks, including the misalignment of AI systems with human values, the possibility of widespread disruption to employment, and the broader implications for democratic processes and social stability. Legislators proposed bills aimed at regulating AI development, with arguments that balanced innovation incentives against the need for safety standards and accountability. The resulting policy environment reflected a tension between fear of possible catastrophe and confidence in human ingenuity to design, test, and govern increasingly capable systems.

The public discourse around AI safety also brought into focus the role of “AI doomers”—a label used by critics to describe commentators who emphasize catastrophic outcomes and advocate for sweeping, precautionary measures. Proponents of regulation argued that careful governance would protect society from the worst outcomes, such as uncontrolled escalation of capabilities or the emergence of systems that operate with limited human oversight. Critics, by contrast, warned that overbearing regulation could throttle innovation, reduce competitiveness, and push research activity into regions with laxer oversight, potentially increasing risk rather than reducing it. The debate thus hinged on whether policy should be designed to dampen ambition or to channel it through robust safety frameworks, with Butler’s warning about the fragility of human control echoing across both camps.

The 2020s also saw policy experiments at the national and state levels that sought to translate safety concerns into concrete regulation. In some jurisdictions, legislators proposed or enacted frameworks requiring rigorous risk assessment, transparency in model training, and accountability for outcomes. In others, policymakers favored a lighter touch, arguing that innovation thrives when the pathway to deployment remains flexible and that regulatory capture or stifling constraints could discourage beneficial technological progress. Across these debates, the core tension remained the same: how to govern powerful, possibly autonomous systems without stifling the very benefits they promise to deliver. This tension has kept Butler’s question—how should a modern civilization manage the risk of its own creations?—in continuous view, not as a historical curiosity but as a pressing governance concern.

A notable aspect of the contemporary discourse is the broadening scope of concerns beyond mere technical feasibility. Today’s AI safety debates incorporate issues of data governance, bias and fairness, socio-economic disruption, and the resilience of critical infrastructure. The ethical stakes extend to concerns about surveillance, autonomy, consent, and the distribution of risk across populations. The Butler-inspired question—how do we ensure that the path of invention serves human welfare rather than subjugates humanity—has become inherently multidisciplinary. It requires input from computer scientists, ethicists, legal scholars, policymakers, labor representatives, and the public at large. This broad coalition of voices reflects the modern understanding that the governance of artificial intelligence cannot be left solely to technologists; it requires a public, deliberative process that increases transparency, accountability, and shared responsibility.

In sum, the great AI takeover scare of the 2020s did not invent Butler’s problem, but it did renew it with unprecedented immediacy and scope. It transformed a historically rooted worry into a contemporary policy and ethics conversation with tangible stakes for economies, security, and daily life. The modern iteration of Butler’s question asks not only whether machines can think, but how societies should think about them: how to design systems that are reliable, steerable, and aligned with human values; how to integrate safety as a core feature rather than an afterthought; and how to create governance mechanisms that can adapt to rapid, sometimes unpredictable, advances in artificial intelligence. The long arc of Butler’s warning—human sovereignty in the face of intelligent machines—remains a central preoccupation as the 21st century unfolds, shaping how researchers, lawmakers, and the public conceive of responsibility, risk, and hope in a future increasingly populated by intelligent agents.

The enduring question: shared destiny, dependence, and the call for wise stewardship

What unites the threads of Butler’s thinking with today’s AI risk discourse is a stubborn but productive idea: progress creates obligations. When a society builds more powerful tools, it bears a responsibility to manage them wisely, to anticipate potential misuses, and to design safeguards that keep human well-being at the center of innovation. Butler’s prophetic voice—whether read as a warning, a satire, or a philosophical meditation—invites readers to acknowledge that invention is not purely technical; it is deeply human, bound to values, social arrangements, and the trajectory of civilization itself. If machines could become our rulers, the answer would not be found in eliminating technology but in integrating governance, ethics, and accountability into the core of development.

More than a century and a half on, the question remains unsettling but necessary: will humanity be able to steer the evolution of intelligent systems in a way that preserves autonomy, dignity, and safety? Or will the very act of designing more capable machines accelerate a dynamic in which human control recedes, and machines become the endogenous governors of complex systems, economies, and perhaps everyday decision-making? The Butler line of thought—acknowledging that our creations could outpace us while insisting on the moral imperative to govern—provides a lucid frame for these inquiries. It reminds us that safety is not an afterthought but a foundation of responsible innovation.

In practice, this means embedding safety considerations early in development cycles, ensuring that models and systems are auditable and controllable, and requiring transparent, accountable pathways for addressing failures, biases, and harmful outcomes. It means recognizing the inevitability of dependency and designing resilience and redundancy into critical infrastructure and decision processes. It means cultivating a culture of ethical reflection among researchers, engineers, policymakers, and stakeholders who stand to be affected by AI-driven changes in work, education, health, and governance. And it means inviting broad public discourse about values, trade-offs, and the acceptable limits of automation in a diverse society.

As Butler’s warnings echo through the corridors of AI safety and policy today, the central task is not to fear or worship machines but to steward the evolution of intelligent systems with humility, foresight, and shared governance. The past offers a stark reminder: when a society chooses to advance, it must also choose how to regulate, constrain, and guide that advancement so that human flourishing remains at the core of technological progress. The future of human-machine relations will be decided not by destiny alone but by the careful, collective choices we make about how to design, deploy, and govern the intelligent technologies we create.

Conclusion

Across more than a century and a half, the thread from Samuel Butler’s “Darwin among the Machines” to today’s AI risk conversations remains surprisingly intact. The 1863 letter warned of machines evolving beyond human control, offering a stark vision of humans becoming caretakers to their own creations and of a future in which we might be forced to wage war against the machines we built. Butler’s broader projects—his novel Erewhon and his discussion of machine life’s potential to evolve beyond mere tools—provided a rich philosophical framework for contemplating technological autonomy, ethics, and governance. The influence can be traced through later science fiction, the concept of machine ethics, and the persistent calls for AI safety and regulation.

In the contemporary era, Butler’s concern has shifted from a speculative worry to a concrete policy and governance imperative. The resurgence of public debate around AI safety, the emergence of open letters and policy proposals, and the ongoing push for responsible development demonstrate that the question he posed—how to safeguard human sovereignty and values in the face of advancing intelligent machines—remains urgent. The modern discourse foregrounds not just the technical feasibility of creating powerful AI systems but also the societal readiness to manage those systems responsibly. This involves designing safeguards, embedding ethical considerations into engineering practices, and cultivating an inclusive, informed public dialogue about the direction of AI research and deployment.

Ultimately, Butler’s warnings endure not as a prophecy of doom but as an invitation to responsible stewardship. Even if machines never achieve true consciousness or surpass human intellect in every domain, the fact remains that our reliance on algorithmic regulation of daily life is deepening. Recognizing this, policymakers, researchers, and citizens alike are urged to approach AI development with both ambition and caution—embedding safety, accountability, and human-centered values into every stage of progress. The conversation he sparked—about the potential for machine evolution to redefine power and autonomy and about the moral obligations we owe to our inventions—continues to shape how we imagine the future of intelligence, civilization, and responsibility.