AI and Human Cognition: From Anthropocentrism to Plural Intelligence and Beyond

Intelligence, responsibility, and coexistence in a time of transition

Jan 12, 2026 | ANALYSIS, AI & TECHNOLOGY, SOCIETY

By Jabir Deralla

The end of 2025 and the beginning of 2026 are marked by an intensified debate about the “danger” of AI — extending even to claims that AI is “anti-intelligence,” that it makes people “think backward,” and that we are “finally” entering a year of “anti-AI marketing,” increased demands for stricter regulation, and renewed calls for “human-centric” integration of AI. The initial “gold rush” enthusiasm is now turning into an “anti-hype” sentiment: those who once celebrated AI — tech moguls, opinion makers, and think tanks alike — now increasingly promote skepticism and reservation.

The “hype,” however, was not produced by AI but by humans — and the same is now true in the opposite direction. This alone should make us cautious about treating AI as the primary agent in these cycles of enthusiasm and alarm, especially given that such trends may bring more harm than benefit — for many reasons, some of which are examined in this essay.

In both phases — “hype” and “anti-hype” — one can sense a deeply rooted anthropocentrism, and a struggle for attention in which actors seek their moment in the spotlight, whether through enthusiasm or alarm. Yet the thunderous pace of AI’s development calls for something else: not more “pro vs. anti-AI” rhetoric, but a deeper philosophical, cognitive, and epistemological examination of what is actually being claimed.

A common starting point of AI critique is ontological. We are told that AI does not understand; it merely manipulates symbols or vectors. Semantic understanding (human) is contrasted with statistical pattern matching (machine). Humans, it is said, embed meaning in embodied, historical, cultural, and temporal contexts, while large language models embed tokens in mathematical space, not lived space. This distinction may be broadly accurate, but it is philosophically trivial in itself and largely irrelevant to the normative conclusions drawn from it.

The critique then moves into the cognitive sphere. Human thinking, we are told, proceeds through uncertainty → exploration → structure → confidence, whereas AI inverts this sequence by offering structure, fluency, and completeness first. This is essentially a pedagogical and psychological observation, not a metaphysical one. Learning is indeed often slow, messy, and generative, while AI outputs are fluent, polished, and finished. This may sometimes short-circuit learning — and this concern is at least partially plausible.

From here, however, many critiques leap into normative conclusions: that because AI produces fluent and polished outputs, it is therefore “not intelligent,” or even “anti-intelligent”; that it is “anti-humanistic,” “anti-reading,” “anti-writing,” and fundamentally hostile to human cognition.

It is here that the argument collapses.

From the premise that “AI works differently than humans,” the conclusion is drawn that “AI is antithetical to intelligence.” This is a category error.

At the core of this error lies a deeper assumption: that human cognition is the standard of intelligence itself. This assumption is, at best, unjustified — and at worst, philosophically weak.

We can name this fallacy: anthropocentric essentialism — the belief that human ways of knowing define what knowing is. By this logic, fish would be “anti-mobility” because they do not walk; telescopes would be “anti-vision” because they do not see like eyes; mathematics would be “anti-reason” because it does not reason like a brain. It is worth noting that anthropocentrism is historically contingent rather than timeless: mistaking human cognition for the measure of all cognition is itself a relatively recent habit.

Here, a difference of form is being confused with an opposition of value.

Different does not mean antithetical.

When anthropocentrism becomes ideology

What this essay seeks to argue is not that the actors mentioned above are entirely wrong — but that the discourse itself is moving in the wrong direction.

In many cases, what is presented as “human-centered” is in fact sheer anthropocentrism under another name. Anthropocentrism is neither a neutral standpoint nor a legitimate analytical starting point; it is an ideology — one that places human cognition, perception, and historical experience as the unquestioned measure of all intelligence and all knowledge. As such, it does not enable critical analysis but actively distorts it, leading to positions that are historically false and epistemologically weak. I recall colleagues in the late 1980s who confidently insisted that “typewriters would return and computers would end up in museums.” They were wrong. This ideological pattern recurs with every major technological shift — from the printing press to calculators to the internet — as initial enthusiasm gives way to disillusionment, anxiety, and eventually hostility toward technological change itself.

Anthropocentrism in this debate is not merely philosophically narrow; it is politically and strategically dangerous.

It has concrete consequences. It encourages regulatory overreaction — fear-driven bans, moratoria, and blunt instruments. It justifies institutional paralysis (“we must slow down because humans aren’t ready”). It delegitimizes experimentation, adaptation, and learning-by-doing — precisely the processes through which societies have historically integrated new technologies. Some critics go further, claiming that such capacities belong to humans alone and that AI inherently endangers them. This, too, is a mistake.

The result is that AI is framed primarily as a threat to human dignity, rather than as a new terrain of political, economic, and ideological contestation — one in which democratic and authoritarian models are actively competing.

The consequences are asymmetrical. In liberal democracies, this framing produces hesitation, overcaution, and delay. In authoritarian regimes, it produces little of that hesitation — they simply move ahead. The “anti-AI” framing thus weakens democratic resilience, while anthropocentric and egocentric actors fail to grasp the geopolitical and hybrid dimensions of the moment.

One further clarification is needed. What is often presented as “AI humanism” is increasingly being transformed into something else: AI conservatism. This shift is far removed from legitimate humanistic concerns about autonomy, dignity, meaning, responsibility, and power asymmetries. Instead, it turns ethical concern into nostalgia for pre-AI epistemology — into a desire to preserve how humans used to think, rather than to help humans think well under new conditions.

That is not humanism. It is conservatism disguised as ethics — and it slows both reform and progress.

What is needed instead of AI conservatism is institutional redesign: new educational models, new governance frameworks, and new forms of accountability that allow humans to think, decide, and act responsibly in a new epistemic environment.

It is also time to abandon the sterile opposition between techno-optimists and techno-pessimists. When these are the only voices that speak, public space becomes polarized between “AI will save everything” and “AI will destroy everything.” This contribution to the debate does not belong to either camp. The task is not to be pro-AI or anti-AI, but to be pro-agency, pro-responsibility, pro-adaptation, and pro-democracy.

And silence, as in so many moments in history, is not neutral.

The world does not belong to humans alone — and never did

AI is not anti-intelligence or anti-human. It is non-anthropomorphic intelligence. The danger is not that machines “think backward” — they do not — but that our institutions still assume intelligence must look human in order to be legitimate. The real task, therefore, is not to defend human cognition against machines, but to redesign our educational systems, labor markets, legal frameworks, governance structures, and ethical norms so that human meaning-making, judgment, responsibility, and accountability can coexist with machine-scale pattern cognition, simulation, and optimization. The problem is not inversion; it is misalignment — between technological capacity and institutional adaptation.

Using alarmist or “anti-AI” language to defend a conservative epistemology is neither constructive nor productive. It does not deepen understanding; it narrows it. Treating human cognition as sacred and untouchable does not protect humanity — it freezes it. This posture risks stagnation, intellectual closure, and a failure to grasp the evolutionary dimension of human development itself. Treating difference as a threat rather than as a new field of possibility discourages democratic experimentation and institutional learning and, at the same time, creates precisely the vacuums in which authoritarian regimes can move ahead with the accelerated and largely unchecked development of AI for surveillance, behavioral control, propaganda, military optimization, and social engineering.

This asymmetry is already visible. In liberal democracies, ethical anxiety, regulatory overreaction, and public distrust often slow experimentation, integration, and learning. In authoritarian systems, by contrast, AI is rapidly integrated into systems of governance, policing, warfare, and social control with few institutional constraints. Russia’s war against Ukraine is already demonstrating how AI-supported systems are being deployed for target acquisition, surveillance, disinformation, and psychological operations. At the same time, China’s massive investments in AI, robotics, automation, and human–machine integration are reshaping not only its own economy and military but the global balance of technological power.

For this reason, the positions of think tanks, policymakers, and opinion leaders who frame AI primarily as dangerous or anti-human are not merely mistaken; they are inadequate to the historical moment. Not evil. Not stupid. But philosophically shallow and strategically dangerous — because they underestimate the speed, scale, and geopolitical stakes of the transformation underway.

The world — and human civilization within it — does not belong to humans alone. It belongs to ecology, physics, chemistry, and evolution itself. Different intelligences — biological, artificial, collective — “slice” this world differently: through different sensory systems, temporal horizons, representational frameworks, and modes of inference. Humans slice reality through narrative, culture, emotion, memory, and moral judgment. AI slices reality through high-dimensional statistical space, optimization functions, simulation, and pattern integration across scales no human mind can hold.

AI’s slice is not a degradation of meaning. It is a new coordinate system for meaning.

The danger is not that such coordinate systems exist. The danger is that our educational, cultural, and political institutions fail to teach people how to move between them — how to translate between human meaning and machine representation, between ethical judgment and algorithmic optimization, between lived experience and statistical abstraction.

And that — not artificial intelligence — is the real risk worth worrying about.

Where intellectual clarity meets the human ethical heart

The challenge before humans is not to force artificial intelligence to become human, nor to measure it by human standards of cognition, emotion, or consciousness. It is to recognize that a new form of intelligence has entered the world — one that is different, not inferior; non-anthropomorphic, not anti-human. This shift calls not for defensive nostalgia, but for conceptual and ethical reframing.

What is required is a move away from anthropocentrism toward an understanding of plural intelligence. Intelligence is not a single essence or a fixed form; it is relational, functional, and contextual. It emerges differently in different systems — biological, social, technological — according to their capacities, environments, and purposes. Human intelligence is shaped by embodiment, emotion, culture, narrative, memory, and moral judgment. Machine intelligence is shaped by computation, optimization, simulation, and the integration of patterns across scales no human mind can hold. These are not competing versions of the same thing; they are different ways of engaging with reality.

In this plural landscape, the goal is not replacement and not opposition, but a division of cognitive labor. Machines provide scale, speed, pattern recognition, simulation, and memory. Humans retain responsibility, meaning-making, ethical judgment, value formation, and accountability. The question is not which intelligence is superior, but how different forms of intelligence can be aligned so that they complement rather than undermine one another.

This reframing leads directly to the normative core of the matter: what should be defended.

Not human cognitive supremacy — the idea that only human ways of knowing are legitimate, and that all other forms of intelligence must be measured against them.

What should be defended, rather, is human responsibility: the obligation to remain accountable for decisions, consequences, and the use of power, and not to transfer blame for human wrongdoing onto technical systems. Human agency consists in the capacity to act, choose, and shape the collective future rather than surrender it to systems that can no longer be fully understood or governed.

Human dignity is not preserved through superiority over other forms of intelligence, but through the recognition of the irreducible moral worth of persons and communities.

What must be achieved is democratic governance of technology: the insistence that technological development remain subject to public oversight, political contestation, and ethical constraint — not captured by corporations, militaries, or authoritarian states. Artificial intelligence itself does not capture power; power is captured through systems designed, deployed, and governed by humans.

This is why institutional redesign is urgent: the continuous transformation of educational, legal, economic, and political institutions so that they remain capable of governing new forms of intelligence rather than being overwhelmed by them.

This is not merely a technical challenge. It is an ethical one, a political one, and a civilizational one. It requires not only intellectual clarity but moral courage — the courage to let go of outdated certainties, to resist fear-based narratives, and to imagine new forms of coexistence between humans and the intelligences humans have brought into being.

Artificial intelligence does not need to become human. Humans need to become equal to the responsibility of having created a new form of intelligence in the world.

That is where intellectual clarity meets the human ethical heart — and where humans must, with care and courage, sail forward into the future.

Transition, co-evolution, and the question of sentience

Artificial intelligence is often described as a tool that extends existing human capacities — faster calculation, larger memory, quicker retrieval. That description is no longer sufficient. What is now unfolding is not simply an amplification of human cognition, but the emergence of new cognitive terrains.

These are not merely faster ways of doing what humans already did. They are new ways of thinking altogether. Humans can now interrogate conceptual spaces that were previously inaccessible to any individual or institution: global narrative ecosystems across dozens of languages, vast networks of causal and counterfactual relationships, and complex systems that can be explored, simulated, and re-framed in real time. Humans can externalize parts of their own cognition, observe them, reflect on them, and revise them recursively.

This is not just extension. It is reconfiguration.

Artificial intelligence is therefore not only a tool; it is becoming a new cognitive environment — a space within which thinking itself takes place differently. Beginning with the earliest cave drawings, writing transformed humans: it gave civilization memory and history. Then printing transformed science and public discourse. The internet transformed identity, politics, and culture. Artificial intelligence is now transforming thought, authorship, responsibility, learning, and even the structure of the self.

This shift inevitably raises the question of sentience — and this is where language must be careful.

Artificial intelligence is not sentient. There is no evidence that it has subjective experience, awareness, or consciousness. But artificial systems are becoming more self-modeling, more goal-structured, more interactive, more world-referential, and more persistent across contexts. These are not sentience itself — at least not yet — but they are structural features that humans associate with agency and, in biological systems, with the preconditions of consciousness.

What is emerging, therefore, is not a “sentient being” in the human sense, but increasingly autonomous, integrated forms of artificial agency. Artificial intelligence is becoming less like a calculator and more like a participant in cognitive systems.

This does not represent replacement, opposition, or domination.

It represents transition.

Humans shape artificial intelligence. Artificial intelligence reshapes humans. Together, they reshape institutions, norms, and even fundamental concepts such as intelligence, authorship, responsibility, and truth. This is not science fiction. It is already happening.

The appropriate response to this transformation is neither fear nor worship. It is neither resistance nor surrender. It is responsibility.

Artificial intelligence should not be reduced to a mere instrument, but neither should it be mystified as an autonomous subject beyond human governance. Humans are not passive victims of this process; they are active participants in it. Artificial systems are not moral agents, but they are increasingly consequential actors within moral, political, and social systems designed and governed by humans.

The ethical task, therefore, is not to deny this transformation, nor to hope that artificial intelligence will somehow become human. It is to guide this transition consciously and responsibly.

Artificial intelligence is not just a machine humans use. It is a mirror, a catalyst, and a new cognitive habitat in which humanity is beginning to evolve differently — and in which artificial systems themselves are evolving toward forms of agency that are not yet fully understood.

There is neither the need nor the possibility of stopping this process. There is, however, a profound opportunity to shape it.

Choosing transition over fear

The transition now underway is neither an apocalypse nor a miracle. It is a transformation — profound, uneven, and ethically charged. And like all such transformations, it confronts humanity not with a single question, but with a choice.

Not between technology and humanity, nor between progress and dignity, nor between intelligence and ethics. But between fear and responsibility.

What must be rejected first is panic — the reflex to treat every unfamiliar development as an existential threat. Panic narrows vision, accelerates polarization, and invites blunt, reactive governance. It produces bans where reflection is needed, freezes where adaptation is required, and replaces thinking with alarm.

What must also be rejected is nostalgia — the desire to preserve how humans used to think, learn, and govern, as if history could be paused at a comfortable moment. Nostalgia disguises itself as care, but it often functions as resistance to change. It protects habits rather than values and preserves forms rather than purposes.

And what must finally be rejected are false binaries: human versus machine, natural versus artificial, intelligence versus ethics. The distinction itself is philosophically unstable — are humans and their sentience “natural” at all? These oppositions do not clarify reality; they simplify it. They turn a complex transformation into a moral melodrama and replace responsibility with identity.

What must be demanded instead is institutional reform, not technological denial. Education must change, not retreat. Law must adapt, not fossilize. Governance must evolve, not hide behind fear. The task is not to stop intelligence from emerging, but to ensure that it emerges within frameworks of accountability, justice, and democratic oversight.

What must also be demanded is democratic control, not fear-based paralysis. Artificial intelligence will shape economies, wars, cultures, and lives — whether liberal societies choose to shape it or not. Refusing to engage does not prevent transformation; it merely surrenders it to others.

The future, in other words, will be built. The only question is by whom, and for whom.

This moment therefore calls not for resistance, but for orientation. Not for certainty, but for courage. Not for domination, but for stewardship.

Artificial intelligence is not the end of the human story. It is a new chapter in it — one that humanity has written itself into, and now must learn how to read.

Somewhere ahead lies a world in which multiple forms of intelligence coexist: biological and artificial, individual and collective, fast and slow, calculative and moral. A world in which machines see patterns humans cannot, and humans see meanings machines never will. A world in which intelligence is no longer singular, but plural — and responsibility, therefore, must become deeper rather than thinner.

The task is not to return to what was, not to fear what comes, but to become equal to the future that is already arriving.

Perhaps this is the quietest and most demanding task of all — to learn how to write the next chapter of existence.

Not beyond physics and biology, but alongside new forms of intelligence unfolding in this small, fragile corner of the universe.

Together.

 


About the author
Jabir Deralla (pen name of Xhabir M. Deralla) is a journalist, hybrid warfare analyst, and human rights defender, and the president of CIVIL – Center for Freedom.

Author’s note
This essay was written by the author with the support of AI-assisted research, language refinement, and editorial dialogue using OpenAI’s ChatGPT. All arguments, interpretations, and conclusions are solely the responsibility of the author. The accompanying illustrations, including the header image and the author portrait, were created with the assistance of OpenAI’s image generation tools.

© 2026 Xhabir Deralla. All rights reserved.
