By Jabir Deralla
AI was born from the minds of scientists, visionary thinkers, and artists — and, to a certain extent, from manipulators chasing fame and fortune. The story of artificial intelligence begins almost alongside the birth of computer technology. Today, I want to share some of those stories with you — insights my friend ChatGPT and I have gathered from the rich history of AI’s evolution.
The roots of artificial intelligence reach back to the dawn of computing itself. Long before “AI” became a buzzword, Alan Turing was already asking whether machines could think. His visionary 1950 paper, Computing Machinery and Intelligence, laid the foundation for everything that followed. Turing’s “Imitation Game” — later known as the Turing Test — was not just a technical proposal, but a philosophical challenge to humanity’s understanding of mind and consciousness.
Just a few years later, another visionary, John McCarthy, coined the term “artificial intelligence” and gave shape to a discipline that would blend mathematics, logic, and imagination. McCarthy envisioned computers that could reason, learn, and communicate — ideas that were revolutionary for the 1950s and remain relevant today.
In my book The Robot, the Human, and the Digital God (Frontline Press, 2024; print only; Macedonian), written with Valentin Neshovski, I explored how these early thinkers paved the way for today’s AI — not only through scientific breakthroughs but through their bold questioning of what it means to be human in a world shared with intelligent machines.
ELIZA: The First Digital Therapist
If Turing and McCarthy dreamed of intelligent machines, Joseph Weizenbaum at MIT in the 1960s gave that dream a voice — quite literally. His program ELIZA, one of the first chatbots ever created, simulated conversation with surprising fluency for its time. Using simple pattern-matching rules and scripted responses, ELIZA played the role of a Rogerian psychotherapist, reflecting users’ own words back at them.
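To make that mechanism concrete, here is a minimal sketch in Python. It is not Weizenbaum’s original program (which was written in MAD-SLIP); it only illustrates the idea the paragraph describes: match a keyword pattern, swap first-person words for second-person ones, and hand the user’s own phrase back as a question. The rules and names below are invented for the example.

```python
import re

# A toy, ELIZA-style pattern matcher: each rule pairs a regular expression
# with a response template; captured text is "reflected" (my -> your, I -> you)
# and inserted back into the reply. Purely illustrative, not Weizenbaum's code.

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "mine": "yours"}

RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones so the reply reads naturally.
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(utterance: str) -> str:
    # Try each rule in order; fall back to a neutral prompt, much as ELIZA did.
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."

print(respond("I feel nobody listens to my ideas"))
# -> Why do you feel nobody listens to your ideas?
```

A handful of rules like these is enough to sustain the illusion of attention, which is precisely what made the program so unsettling.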
The illusion worked so well that many users began to confide in the machine, believing it truly understood them. Weizenbaum was astonished — and disturbed. His secretary once asked him to leave the room so she could “speak privately” with ELIZA. That moment revealed something profound: people were ready to form emotional connections with machines that did not understand a single word they said.
This phenomenon, later known as the ELIZA Effect, exposed both the promise and peril of human-machine interaction. It showed how easily empathy could be simulated — and how willingly humans would accept it. Weizenbaum, once a pioneer, became one of AI’s most outspoken moral critics, warning that not everything a computer can do should be done.
PARRY: The Paranoid Successor
A few years after ELIZA, another chatbot appeared — one that could be described as ELIZA’s troubled descendant. In 1972, psychiatrist Kenneth Colby developed PARRY, a computer program that simulated a person suffering from paranoid schizophrenia. While ELIZA mirrored emotions neutrally, PARRY expressed fear, suspicion, and anger. It didn’t just repeat phrases; it defended itself, argued, and accused others of plotting against it.
When professional psychiatrists interacted with PARRY, many could not distinguish its responses from those of real patients. In a striking experiment, transcripts of conversations between PARRY and human patients were shown to psychiatrists, who correctly identified which was the computer only about half the time — no better than chance.
The unsettling realism of PARRY raised new ethical questions: if a machine could convincingly mimic mental illness, where should the line be drawn between simulation and deception? Unlike ELIZA, which invited empathy, PARRY exposed the darker psychological mirror of AI — how easily code could imitate human suffering, and how fragile the boundary between reality and illusion could become.
SHRDLU: The Machine That Understood a World
By the early 1970s, the dream of machines that could truly understand human language began to take form in a small, digital world of blocks, pyramids, and spheres. At MIT, computer scientist Terry Winograd created SHRDLU, a program that could not only hold a conversation but also act on it — within its simulated environment. When asked, “Pick up the red block and put it on the green cube,” SHRDLU would do exactly that, then describe what it had done.
This was a leap far beyond ELIZA and PARRY. SHRDLU combined natural language processing, logic, and symbolic reasoning — it could ask clarifying questions, remember context, and even explain its own decisions. In many ways, it foreshadowed the reasoning mechanisms behind today’s AI models, which can analyze, respond, and reflect.
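To see the principle in miniature, here is a toy sketch in Python. The names and rules are invented for this example; Winograd’s actual system was written in Lisp and Micro-Planner and was vastly more capable. The sketch shows the loop the paragraph describes: parse a command, check it against a small simulated world, then either act and describe the action or refuse with an explanation.

```python
import re

# A toy blocks-world interpreter, invented for illustration (SHRDLU itself was
# far richer). A command is parsed, checked against the state of a tiny
# simulated world, then either executed and described, or refused with a reason.

world = {"red block": "table", "green cube": "table", "blue pyramid": "red block"}
# Each object maps to whatever it is currently resting on.

def is_clear(obj: str) -> bool:
    # An object can be moved only if nothing is stacked on top of it.
    return all(support != obj for support in world.values())

def execute(command: str) -> str:
    match = re.match(r"put the (.+) on the (.+)", command.lower().rstrip("."))
    if not match:
        return "I don't understand."
    thing, target = match.groups()
    if thing not in world:
        return f"I don't know about a {thing}."
    if target not in world:
        return f"I don't know about a {target}."
    if not is_clear(thing):
        return f"I can't: something is on top of the {thing}."
    world[thing] = target          # update the simulated world
    return f"OK. The {thing} is now on the {target}."

print(execute("Put the red block on the green cube"))
# -> I can't: something is on top of the red block.  (the blue pyramid is on it)
print(execute("Put the blue pyramid on the green cube"))
# -> OK. The blue pyramid is now on the green cube.
```

Even this crude version captures what made SHRDLU feel intelligent: its words were grounded in a world it could inspect, change, and explain.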
Yet SHRDLU’s brilliance also revealed AI’s limitations. It could understand the rules of its tiny world perfectly — but outside that world, it knew nothing. This became a metaphor for the field itself: intelligence that shines within a narrow domain but falters in the vastness of real life. Winograd later turned to philosophy and human-centered design, questioning whether true understanding could ever arise from computation alone.
The Origin of the Name SHRDLU
SHRDLU isn’t an acronym — it’s actually a nonsense word taken from the old days of typesetting.
In manual printing and teletypesetting, the letters E-T-A-O-I-N S-H-R-D-L-U were the twelve most common letters in English, arranged roughly by frequency; on the old Linotype keyboard they formed the first two vertical columns of keys.
Operators used to type “etaoin shrdlu” as a quick filler or error test — similar to typing “asdfgh” on a modern keyboard. Sometimes, these nonsense sequences accidentally made it into printed newspapers!
Why Did Terry Winograd Choose It?
When Terry Winograd built his 1970 AI program, he playfully borrowed “SHRDLU” as its name. It reflected the mechanical roots of language (from printing and machinery), and the synthetic, constructed nature of his program’s world — a language world built by machines.
So, in short, SHRDLU has no secret meaning — it’s a clever nod to the history of text and machines, symbolizing the bridge between human language and computation.
Funding & Investment in the Early Days
By the mid-1960s, the U.S. Department of Defense (via ARPA, later renamed DARPA) was heavily funding AI research at universities such as MIT, Stanford, and Carnegie Mellon. For example, in June 1963 MIT received a $2.2 million ARPA grant to fund Project MAC and its AI group; annual sums of around $3 million followed for some labs.
The 1980s saw another major surge: AI went commercial, and expert systems became big business. By 1988 the AI industry was reported to be worth billions of dollars. Large government initiatives included the U.S. Strategic Computing Initiative (1983–1993), which reportedly spent about $1 billion on computing and AI research.
It is important to note that early funding wasn’t just about curiosity — it was tied to Cold War ambitions, defense, and national prestige, just as in the space race. Money flooded research teams and institutes, and it came with few strings attached; ARPA’s stated philosophy was to ‘fund people, not projects.’
Author’s note: This series was written by the author with the support of AI-assisted research, language refinement, and editorial dialogue using OpenAI’s ChatGPT. All interpretations, arguments, and conclusions remain the sole responsibility of the author.
Image: AI-generated illustration created with the assistance of OpenAI tools.
Next in the series: The Ideological Abyss — Control vs. Cognition
Copyright notice
© 2025 Jabir Deralla. All rights reserved.
No part of this article may be reproduced, distributed, or republished in full without prior written permission. Short quotations are permitted with proper attribution and a link to the original.
