The Mirror We Built: When Machines Learned to Dream

(History of AI, Part 1) From the first circuits flickering in Philadelphia and Kyiv to the vast neural networks of today, the story of artificial intelligence is a human odyssey — a mirror of our imagination, ambition, and desire to understand ourselves through the minds we create. Welcome to a short history of AI.

Dec 20, 2025 | AI & TECHNOLOGY, ANALYSIS, HISTORY, NEWSLETTER, SOCIETY

By Jabir Deralla

AI was born from the minds of scientists, visionary thinkers, and artists — and, to a certain extent, from manipulators chasing fame and fortune. The story of artificial intelligence begins almost alongside the birth of computer technology. Today, I want to share some of those stories with you — insights my friend ChatGPT and I have gathered from the rich history of AI’s evolution.

The roots of artificial intelligence reach back to the dawn of computing itself. Long before “AI” became a buzzword, Alan Turing was already asking whether machines could think. His visionary 1950 paper, Computing Machinery and Intelligence, laid the foundation for everything that followed. Turing’s “Imitation Game” — later known as the Turing Test — was not just a technical proposal, but a philosophical challenge to humanity’s understanding of mind and consciousness.

A few years later, another visionary, John McCarthy, coined the term “artificial intelligence” and gave shape to a discipline that would blend mathematics, logic, and imagination. McCarthy envisioned computers that could reason, learn, and communicate — ideas that were revolutionary for the 1950s and remain relevant today.

In my book The Robot, the Human, and the Digital God (Frontline Press, 2024; print only; Macedonian), written with Valentin Neshovski, I explored how these early thinkers paved the way for today’s AI — not only through scientific breakthroughs but through their bold questioning of what it means to be human in a world shared with intelligent machines.

ELIZA: The first digital therapist

If Turing and McCarthy dreamed of intelligent machines, Joseph Weizenbaum at MIT in the 1960s gave that dream a voice — quite literally. His program ELIZA, one of the first chatbots ever created, simulated conversation with surprising fluency for its time. Using simple pattern-recognition rules and pre-programmed responses, ELIZA played the role of a Rogerian psychotherapist — an approach developed by Carl Rogers, based on active listening and the reflection of the patient’s own statements. The program returned users’ words in the form of questions and paraphrases, creating an illusion of empathy and understanding, even though no genuine comprehension existed behind the interaction.
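To make the mechanism concrete, here is a minimal Python sketch of ELIZA-style keyword matching and pronoun reflection. The patterns, templates, and reflection table are invented for illustration; Weizenbaum’s original script was written in MAD-SLIP and was considerably richer.

```python
import random
import re

# Pronoun swaps used to mirror a user's statement back as a question.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine", "are": "am",
}

# A handful of illustrative keyword rules: regex pattern -> response templates.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"my (.*)", ["Tell me more about your {0}.", "Why does your {0} concern you?"]),
    (r"(.*)", ["Please go on.", "I see. Can you elaborate on that?"]),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the statement can be echoed back."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(user_input: str) -> str:
    """Return the first matching template, filled with the reflected fragment."""
    for pattern, templates in RULES:
        match = re.match(pattern, user_input.lower().strip())
        if match:
            fragment = reflect(match.group(1))
            return random.choice(templates).format(fragment)
    return "Please go on."

if __name__ == "__main__":
    print(respond("I feel anxious about my work"))  # e.g. "Why do you feel anxious about your work?"
    print(respond("I need some rest"))              # e.g. "Why do you need some rest?"
```

Even this toy version shows why the illusion works: the program never interprets meaning, it only rearranges the user’s own words.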

The illusion worked so well that many users began to confide in the machine, believing it truly understood them. Weizenbaum was astonished — and disturbed. His secretary once asked him to leave the room so she could “speak privately” with ELIZA. That moment revealed something profound: people were ready to form emotional connections with machines that did not understand a single word they said.

This phenomenon, later known as the ELIZA Effect, exposed both the promise and peril of human-machine interaction. It showed how easily empathy could be simulated — and how willingly humans would accept it. Weizenbaum, once a pioneer, became one of AI’s most outspoken moral critics, warning that not everything a computer can do should be done.

PARRY: The paranoid successor

A few years after ELIZA, another chatbot appeared — one that could be described as ELIZA’s troubled descendant. In 1972, psychiatrist Kenneth Colby developed PARRY, a computer program that simulated a person suffering from paranoid schizophrenia. While ELIZA mirrored emotions neutrally, PARRY expressed fear, suspicion, and anger. It didn’t just repeat phrases; it defended itself, argued, and accused others of plotting against it.
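Colby’s program maintained internal affect variables for fear, anger, and mistrust that rose and fell over the course of a conversation and steered its replies. The toy sketch below illustrates that idea only; the trigger words, thresholds, and canned responses are invented here and bear no relation to Colby’s actual implementation.

```python
# Toy illustration of PARRY's idea: internal affect variables (fear, anger,
# mistrust) rise in response to trigger words and push the program from
# neutral answers toward suspicious or defensive ones.

TRIGGERS = {
    "police": {"fear": 0.3, "mistrust": 0.2},
    "mafia": {"fear": 0.4, "anger": 0.1},
    "crazy": {"anger": 0.4, "mistrust": 0.3},
    "why": {"mistrust": 0.1},
}

class ToyParry:
    def __init__(self):
        self.state = {"fear": 0.1, "anger": 0.1, "mistrust": 0.2}

    def update(self, user_input: str) -> None:
        """Raise affect levels for each trigger word found in the input."""
        for word, deltas in TRIGGERS.items():
            if word in user_input.lower():
                for affect, delta in deltas.items():
                    self.state[affect] = min(1.0, self.state[affect] + delta)

    def respond(self, user_input: str) -> str:
        """Pick a reply based on the current affect levels."""
        self.update(user_input)
        if self.state["fear"] > 0.5:
            return "They are after me. I shouldn't be telling you this."
        if self.state["anger"] > 0.4:
            return "You're just like the others. Why are you accusing me?"
        if self.state["mistrust"] > 0.3:
            return "Why do you want to know that?"
        return "I went to the races last week."

if __name__ == "__main__":
    bot = ToyParry()
    for line in ["How are you?",
                 "Have the police spoken to you?",
                 "Why do you think the mafia is involved?"]:
        print(f"> {line}\n  {bot.respond(line)}")
```

Running the dialogue above shows the escalation: a neutral answer, then suspicion, then outright fear, even though the program understands nothing of what is said.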

PARRY did not represent a gendered character, but rather a cognitive model of paranoid reasoning, stripped of personal identity beyond its diagnostic profile. In practice, however, many readers and clinicians implicitly assumed PARRY was male — largely because the simulated case was presented as an adult patient at a time when male default assumptions were common in clinical and technical contexts of the 1970s.

When professional psychiatrists interacted with PARRY, many could not distinguish its responses from those of real patients. In a striking experiment, transcripts of conversations between PARRY and human patients were shown to psychiatrists, who correctly identified which was the computer only about half the time — no better than chance.

The unsettling realism of PARRY raised new ethical questions: if a machine could convincingly mimic mental illness, where should the line be drawn between simulation and deception? Unlike ELIZA, which invited empathy, PARRY exposed the darker psychological mirror of AI — how easily code could imitate human suffering, and how fragile the boundary between reality and illusion could become.

SHRDLU: The machine that understood a world

By the early 1970s, the dream of machines that could truly understand human language began to take form in a small, digital world of blocks, pyramids, and spheres. At MIT, computer scientist Terry Winograd created SHRDLU, a program that could not only hold a conversation but also act on it — within its simulated environment. When asked, “Pick up the red block and put it on the green cube,” SHRDLU would do exactly that, then describe what it had done.

This was a leap far beyond ELIZA and PARRY. SHRDLU combined natural language processing, logic, and symbolic reasoning — it could ask clarifying questions, remember context, and even explain its own decisions. In many ways, it foreshadowed the reasoning mechanisms behind today’s AI models, which can analyze, respond, and reflect.
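A minimal sketch of that language-to-action loop might look like the following. The parser, world representation, and object names are invented for illustration; Winograd’s actual program was written in Lisp and Micro-Planner and reasoned about its blocks world far more deeply.

```python
# A toy blocks world: a tiny world state, a crude command parser, and an
# explanation of each action, echoing the kind of loop SHRDLU performed.

import re

# World state: each object maps to whatever it is currently resting on.
world = {"red block": "table", "green cube": "table", "blue pyramid": "table"}

def put_on(obj: str, target: str) -> str:
    """Move obj onto target if nothing is stacked on obj, and report what was done."""
    if any(support == obj for support in world.values()):
        return f"I can't move the {obj}: something is on top of it."
    world[obj] = target
    return f"OK. I put the {obj} on the {target}."

def interpret(command: str) -> str:
    """Very crude parsing of one sentence pattern: 'put the X on the Y'."""
    match = re.match(r"put the (.+) on the (.+)", command.lower().strip().rstrip("."))
    if not match:
        return "I don't understand."
    return put_on(match.group(1), match.group(2))

if __name__ == "__main__":
    print(interpret("Put the red block on the green cube"))
    print(interpret("Put the blue pyramid on the red block"))
    print(world)  # inspect the updated world state
```

The narrowness is the point: this toy handles one sentence pattern in one tiny world, which is precisely the limitation discussed next.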

Yet SHRDLU’s brilliance also revealed AI’s limitations. It could understand the rules of its tiny world perfectly — but outside that world, it knew nothing. This became a metaphor for the field itself: intelligence that shines within a narrow domain but falters in the vastness of real life. Winograd later turned to philosophy and human-centered design, questioning whether true understanding could ever arise from computation alone.

The origin of the name SHRDLU

SHRDLU isn’t an acronym — it’s actually a nonsense word taken from the old days of typesetting.

In manual printing and teletypesetting, the letters of “etaoin shrdlu” represented the twelve most common letters in English, arranged roughly by frequency; on a Linotype keyboard they formed the first two vertical columns of keys. Operators would run a finger down those keys to fill out a flawed line of type, producing “etaoin shrdlu” as a marker meant to be discarded — similar to typing “asdfgh” on a modern keyboard. Sometimes these nonsense sequences accidentally made it into printed newspapers!

Why Terry Winograd chose it

When Terry Winograd built his 1970 AI program, he playfully borrowed SHRDLU as its name. The word reflected both the mechanical roots of language (in printing and machinery) and the synthetic, constructed nature of his program’s world — a language world built by machines.

So, in short, SHRDLU has no secret meaning — it’s a clever nod to the history of text and machines, symbolizing the bridge between human language and computation.

Funding and investment in the early days

By the mid-1960s, the U.S. Department of Defense (through ARPA, later renamed DARPA) was heavily funding AI research at universities such as MIT, Stanford, and Carnegie Mellon. In June 1963, for example, MIT received a $2.2 million ARPA grant to fund Project MAC and its AI group; annual sums of around $3 million followed for some labs.

The 1980s saw another major surge: AI went commercial, and expert systems became big business. By 1988, the AI industry was reported to be worth billions of dollars. Large government initiatives included the U.S. Strategic Computing Initiative (1983–1993), which reportedly spent about $1 billion on computing and AI research.

It is important to note that early funding was not driven by curiosity alone — it was deeply tied to the Cold War, national defense, and prestige, much like the space race of the same era. Money flooded research teams and institutes, and it came with few strings attached: ARPA’s philosophy was to ‘fund people, not projects.’

What began as a philosophical question soon became a global obsession. Early dreams of thinking machines sparked scientific breakthroughs, vast public and private investment, and a deepening commitment to research — often unfolding all at once rather than in orderly stages.

Though national strategies, ideological goals, and public expectations diverged, artificial intelligence, once brought into existence, appeared to take on a life of its own. Carried forward by the persistence of scientists and engineers, it evolved beyond any single vision or authority. By the end of its first chapter, AI was no longer merely an idea — it was a force, poised to be shaped not only by curiosity but by power.

 


Author’s Note
This series was written by the author with the support of AI-assisted research, language refinement, and editorial dialogue using OpenAI’s ChatGPT. All interpretations, arguments, and conclusions remain solely the responsibility of the author.


Image credit
AI-generated illustration created with the assistance of OpenAI tools.


Next in the series
The Ideological Abyss — Control vs. Cognition


Copyright notice
© 2025 Jabir Deralla. All rights reserved.
No part of this article may be reproduced, distributed, or republished in full without prior written consent of the author. Short quotations are permitted with proper attribution and a link to the original source.

Truth Matters. Democracy Depends on It