By Jabir Deralla
Continued from: The Mirror We Built: When Machines Learned to Dream
CIVIL Today, December 20, 2025
If the first chapter of AI was written in curiosity and wonder, the second unfolded in geopolitical tension and military logic in a sharply divided world. As superpowers turned science into strategy, artificial intelligence became a reflection of ideology — a symbol of freedom in the West and of order in the East. The Cold War divided not only nations, but the very definitions of intelligence itself. To the Americans, thinking machines embodied exploration; to the Soviets and Chinese, they promised command. Between these two poles, the dream of AI became both a battlefield and a mirror of humanity’s political soul.
Public perception & hype: Dreams, dollars and disillusionment
During what is often called the “Golden Era” of AI (1956–1973), researchers, governments, and the media were seized by optimism. Predictions abounded. Phrases like “within a generation we’ll have machines with the general intelligence of an average human being” circulated freely, blurring the line between scientific ambition and speculative promise. Public enthusiasm ballooned. AI appeared on magazine covers, in political speeches, and in defence planning — not only as a scientific frontier, but as a strategic asset.
By the early 1970s, however, expectations had begun to outpace reality. Early laboratories had received generous funding, and AI had been framed as a national priority — especially in areas such as machine translation and automated reasoning. The turning point came in 1966, when the Automatic Language Processing Advisory Committee (ALPAC) report sharply criticised U.S. machine-translation efforts after roughly $20 million had been spent with little practical return. Funding was cut, programmes were closed, and enthusiasm cooled.
Over the following years, the limitations of early AI systems became increasingly visible. The public mood shifted, and so did that of funders and policymakers. This marked the onset of the first “AI winter” (roughly 1974–1980): a period defined by shrinking budgets, declining institutional interest, and a growing scepticism toward grand technological promises. What had once been framed as imminent and almost magical now appeared distant and uncertain.
The contrast was stark. In 1970, Marvin Minsky — often called the “father of AI” — famously told Life magazine that “within three to eight years we will have a machine with the general intelligence of an average human being.” A decade later, that confidence had given way to caution and retreat. In retrospect, it became clear that the boldness of early public predictions had not only raised hopes — it had also helped create the conditions for disappointment, backlash, and the very funding cuts that froze the field.
Yet this retreat was not meaningless. The first AI winter left a lasting imprint on how we think about artificial intelligence today: the need to manage expectations, to distinguish research from hype, to build patient infrastructure rather than chase spectacle, and to acknowledge limits as part of progress rather than signs of failure.
Soviet and Chinese computing: Control without cognition
Just as the United States and the Soviet Union raced to the Moon, they also raced — more quietly — to build thinking machines. This competition unfolded across several domains: missile defence, surveillance, early-warning systems, cryptography, and, increasingly, computing. Artificial intelligence, however, was not pursued everywhere in the same way, or for the same reasons.
In the Soviet Union, early computing and cybernetics developed primarily as tools of state administration, planning, and situational management (see: Computers in the Soviet Economy, CIA, 1966). While mathematical modelling, control theory, and systems analysis were strongly supported, AI in the Western sense — as an attempt to simulate human cognition — was viewed with suspicion, both philosophically and politically. It remained largely institutional, embedded within ministries and research institutes rather than imagined as a transformative technology in its own right. As the Soviet scientist Germogen Pospelov once remarked, “artificial intelligence in the literal sense … does not exist,” even as research into automation and control systems continued.
China’s trajectory initially followed the Soviet model. In the 1950s and 1960s, Chinese computing developed through technology transfers and institutional guidance from Moscow, with early computers built for state and military purposes. Over time, however, China gradually forged its own path, transforming computing from a technical instrument of governance into a strategic pillar of national development — a process that would much later culminate in its explicitly state-driven AI strategy.
What emerged from this period was not simply a technological divergence, but an ideological one. In the West, AI was considered a quest for general intelligence — an attempt to replicate or rival human cognition. In the Soviet and Chinese worlds, computing was designed to be a means of control: planning, optimisation, prediction, and system management. The contrast was not between progress and stagnation, but between cognition and coordination, between the simulation of mind and the automation of order.
These Cold War approaches did more than shape research agendas; they shaped institutional cultures, expectations, and the socio-political strategies through which technology was governed. In that sense, the current global competition over AI is not a new race, but a continuation — with new tools — of an old divide over what intelligence is for, and whom it should ultimately serve.
The Soviet model: Developing machines for state control
From the 1970s through the 1980s, the Soviet Academy of Sciences maintained formal research programmes in artificial intelligence, closely tied to cybernetics and systems analysis. Yet these programmes were designed very differently from their Western counterparts. As Olessia Kirtchik has shown in her study “The Soviet Scientific Programme on AI: If a Machine Cannot ‘Think’, Can It ‘Control’?”, Soviet researchers were less interested in replicating human cognition than in developing tools for “situational management” — the optimisation and control of complex social, economic, and technical systems.
The USSR developed early computing machines such as the MESM and BESM series, which fed into defence-, space-, and missile-related calculations — evidence of a genuine domestic computing programme rather than one built purely on espionage. The Soviets invested heavily in computing and cybernetics as one front of the Cold War rivalry. This sustained investment in computing as a strategic domain has been analysed in detail in historical and military literature, including Quantum Zeitgeist’s article “The Soviet Union’s Early Computers: A Cold War Rivalry in Computing” and Captain Bryan Leese’s “The Cold War Computer Arms Race” (USMCU).
China initially followed a similar path. Beginning in the mid-1950s, Chinese scientists acquired documentation and technical guidance from the Soviet Union — including designs related to machines such as the M-3, M-20, and BESM — and developed their first electronic computer by 1958. This early phase of Chinese computing, as documented in “AI in China: Sketchy Prehistories” published by the Chinese University of Hong Kong Library, laid the foundations for a long-term, state-directed approach to digital technology that would later evolve into China’s contemporary AI strategy.
Although some Western accounts portray Soviet and Chinese technological development as mere theft or imitation, this was not simply a story of copying, but of distinct state-driven scientific trajectories.
There was, however, a crucial difference between the West and the East. The Soviet approach differed significantly from the Western idea of “artificial intelligence” as autonomous reasoning or general intelligence. Its emphasis lay instead on the management of large systems: cybernetics, planning, optimisation, and control.
This divergence was reinforced by material and institutional constraints. Technological gaps, component shortages, restrictions on academic freedom, and isolation from international research networks all shaped Soviet and Chinese efforts, making them more constrained and different in character — not merely technically, but culturally and politically as well.
Correcting the record: Ukraine’s underrepresented role
One crucial distortion persists in how the history of Soviet computing and early AI is remembered — the systematic underrepresentation of Ukrainian scientists, engineers, and research institutions. Too often, breakthroughs developed in Kyiv and other Ukrainian centres are retrospectively absorbed into a vague, monolithic notion of “Soviet science,” erasing their geographic, institutional, and human origins. As Benjamin Peters has documented in “How Not to Network a Nation” (MIT Press, 2016), this process was not accidental: Soviet cybernetic and computing achievements were deliberately framed through centralised narratives that obscured their local and national roots.
The MESM computer, created near Kyiv, the cybernetic school led by Viktor Glushkov, and the intellectual groundwork for large-scale computational governance all emerged from Ukrainian laboratories and minds operating at the very forefront of early computing. This was not a peripheral contribution, but a foundational one. Slava Gerovitch’s “From Newspeak to Cyberspeak” (MIT Press, 2002) shows how cybernetics in the Soviet Union evolved through a complex negotiation between ideology, language, and science — a process in which Ukrainian researchers played a central, though often later effaced, role. Correcting this historical imbalance is therefore not an act of national rebranding, but of intellectual honesty.
As the world reassesses the legacies of the Cold War — and as Ukraine fights today for its sovereignty, identity, and place in history — it becomes imperative to restore memory alongside territory. The story of artificial intelligence, like the story of computing itself, cannot be told truthfully without acknowledging Kyiv as one of its earliest and most important points of origin.
Historical examples of Soviet and Chinese computing/AI efforts
In the Soviet Union and China, computing and early artificial intelligence emerged not as independent scientific fields, but as components of broader state projects focused on administration, planning, defence, and social management.
MESM (Soviet Union, ~1950)
The history of computing in the Soviet Union begins with Sergei Aleksandrovich Lebedev and his team near Kyiv, who created MESM — the Small Electronic Calculating Machine (Russian: МЭСМ; Малая Электронно-Счётная Машина). MESM became operational in 1950 and is often cited as the first universal electronic digital computer in the Soviet Union. It marked the USSR’s early entry into computing amid the Cold War, showing how computing was tied to defence, scientific, and industrial ambitions. The project is also evidence that the Soviet Union was indeed building foundational computing infrastructure, contrary to the idea that it only “stole technology”.
OGAS (Soviet Union, 1962–1970s)
OGAS (Russian: Общегосударственная автоматизированная система учёта и обработки информации, “ОГАС”, “National Automated System for Computation and Information Processing”) vividly illustrates how totalitarian ideology shaped the direction of development in this field. The Soviet regime envisioned OGAS as a large-scale cybernetic infrastructure: a nationwide information network, begun in 1962, intended to manage the country’s planned economy. The project, however, never received the funding or political backing needed for full implementation.
Soviet AI research programme (Soviet Union, 1970s-1980s)
Within the Soviet Academy of Sciences, a formal “AI” programme emerged, focused on “situational management in large complex systems” rather than on the Western “thinking machines” model. This demonstrates both the existence of genuine AI programmes in the USSR and the ideological and contextual difference from Western AI concepts: control versus cognition.
The chapter “Artificial Intelligence with a National Face: American and Soviet Cultural Metaphors for Thought” by Gerovitch (Rodopi, 2011) provides a direct and in-depth look at the cultural and ideological differences that shaped AI research during the Cold War.
In his work “Cybernetic in Form, Conservative in Content: Technocracy and Techno-Solutionism in the Soviet Union” (Stanford University, 2025), Kenneth Bui argues that cybernetics in the Soviet Union became “a vehicle for technocratic solutionism,” and that its adoption during the Leonid Brezhnev era — under the so-called Scientific-Technical Revolution (Научно-техническая революция) — “served to preserve the Communist Party’s political-economic dominance through technical fixes rather than structural reform.”
Computing and AI in China (1950s–1980s)
In 1958, the Institute of Computing Technology (ICT) of the Chinese Academy of Sciences built China’s first electronic computer, the Model 103, based on Soviet documentation of the M-3, M-20, and BESM series machines. However, China’s research into artificial intelligence — including intelligent simulation, intelligent computer systems, robotics, and information processing — began only in the late 1970s and 1980s, supported by renewed government backing after years of stagnation.
With regard to AI research, China experienced what has been described as a “Silent Stage” (1950s–1970s). During this period, Chinese institutions maintained a critical or negative stance toward AI, likely influenced by the Soviet Union’s scepticism toward cybernetics in the early decades of the Cold War. As the Chinese scholar Longjun Zhou notes in “A Historical Overview of Artificial Intelligence in China,” AI was often dismissed as a form of “pseudoscience” or “revisionism,” which significantly curtailed research in the field.
Still, AI-related inquiry was not entirely suppressed. A few sporadic projects appeared during this period, laying modest groundwork for later developments. From the late 1980s onward, China’s AI research began to accelerate, entering what Zhou identifies as the Development Stage (2000s–2010s), followed by the Flourishing Stage (2010s to present), when AI research expanded with remarkable speed and strong state support.
While China’s early progress relied heavily on imported Soviet models and expertise, it gradually developed its own trajectory — one characterized by robust institutional frameworks, strategic state investment, and a vision of AI often aligned with state control, defense, and large-scale governance objectives, mirroring certain features of the Soviet/Russian model.
AI has never been neutral
It is now clear that artificial intelligence has never been a neutral or purely technical project. From its earliest incarnations, it has been shaped by political systems, institutional priorities, and competing visions of order and freedom — not only by what was scientifically possible, but by what was politically desirable.
What we now call the “AI race” is therefore not a sudden phenomenon of the 21st century, but the continuation of a much older struggle over how knowledge is organised, who controls it, and to what ends it is put. Competing models of intelligence reflect competing models of society: openness versus control, autonomy versus coordination, human agency versus systemic management.
In that sense, the ideological abyss of the Cold War has not disappeared — it has merely been digitised.
Author’s note
This series was written by the author with the support of AI-assisted research, language refinement, and editorial dialogue using OpenAI’s ChatGPT. All interpretations, arguments, and conclusions are solely the responsibility of the author.
Image credit
AI-generated illustrations created with the assistance of OpenAI tools.
Next in the series
After the Divide: Intelligence Beyond Ideology
Copyright notice
© 2025 Xhabir M. Deralla (publishing as Jabir Deralla). All rights reserved.
No part of this article may be reproduced, distributed, or republished in full without the prior written consent of the author. Short quotations are permitted with proper attribution and a link to the original source.

