Call for Papers, 4th issue: Human Intelligence and Artificial Intelligence
Aion. Journal of Philosophy & Science invites submissions for an issue dedicated to the philosophical exploration of the theme Human Intelligence and Artificial Intelligence (AI).
The emergence of AI is not merely a technological event; it is a transformation of the conditions under which intelligence appears and operates. For the first time, we are facing artifacts that seem to be intelligent, capable of solving problems, learning, deciding, creating, reasoning, thinking. For the first time, we face artifacts that do not simply extend human capacities but seem to rival, displace, or reconfigure them.
Intelligence — long considered a crucial trait of the human — now becomes a property of systems that are neither biological nor conscious, neither embodied nor mortal. This raises questions that cut across metaphysics, ethics, epistemology, political philosophy, and the philosophy of mind. Questions that challenge the inherited distinctions between the natural and the artificial, the human and the technical, the organic and the computational.
This issue aims to bring together diverse perspectives that illuminate the conceptual, epistemological, ethical, political, artistic and metaphysical questions raised by the problematic coexistence — and possible conflict or convergence — of human and artificial forms of intelligence.
These questions do not belong to any single tradition. They resonate across the history of philosophy and its various lines — from continental to Anglo-American thought — yet they demand the invention of new conceptual tools capable of addressing the unprecedented transformations introduced by AI into the human–technical relation. We believe it is therefore intellectually fruitful to turn to thinkers who are not traditionally invoked in debates on human and artificial intelligence, but whose thinking can open unexpected routes.
This is the case of Gilbert Simondon, who, in the closing pages of his major work On the Mode of Existence of Technical Objects, proposes to rethink the relation between technology and labour in a way that directly addresses one of the major problems raised by AI: work. He observed that technology has long been understood through the lens of the human need for labour. Condemned to work in order to live, humans tended to judge technical objects as mere instruments or products of labour. Yet Simondon urged us to invert this schema: rather than deriving technical objects from the category of work, we should recognize work as merely one phase within the more fundamental process of technicity.
With the rise of AI — whose accelerating development seems to unsettle the role of work in human destiny — Simondon’s insight may demand renewed attention. AI does not merely substitute for much of human work. It does not simply automate our traditional working practices. It destabilizes the economic coordinates through which we have historically defined labour, understood its political meaning and even measured its anthropological role in human life. In this sense, Simondon’s insight may even become prophetic. If work ceases to be the axis of human existence, what takes its place? And what becomes of the human when technicity individuates itself beyond us?
Giorgio Agamben touches on another central problem of AI. In a recent essay, On Artificial Intelligence and Natural Stupidity, he notes that the very expression “Artificial Intelligence” conceals, through the opacity of its name, a fundamental metaphysical problem. The issue is not that intelligence might be, in itself, “artificial” — for intelligence, inseparable as it is from language, has always involved techne, an element of artifice. The disturbing novelty lies elsewhere. AI appears as an intelligence situated outside the mind of the subject who thinks, or who ought to think. Agamben finds here an unexpected resonance with the notion of the “separate intellect” proposed in the twelfth century by the Andalusian philosopher Averroes. For Averroes, the question was how individual human beings could relate to an intellect that is, by definition, separate from them. His answer was that individuals connect to thought through imagination, which remains irreducibly singular.
Transposed to our present, the question becomes: how can someone relate to an intelligence that claims to think outside them? Averroes still had at his disposal the bridge of imagination. But today, when intelligence is externalized, automated, algorithmic and increasingly opaque, what bridges remain? How does a subject who increasingly delegates or abandons the task of thinking relate to an intelligence that claims to think in their place? And if such a relation can be established, how might a subject sustain it without abdicating the very exercise of thought? Finally, what would it mean for humanity to be confronted with its own no-longer-thinking?
This displacement is not merely a technical phenomenon; it is an anthropological and metaphysical rupture. Indeed, it could also be asked: are we witnessing the emergence of a new figure of the human — one whose capacities are extended, displaced, or overshadowed by machinic intelligence? Or, instead, are we confronting a crisis of the human, in which the very faculties that once defined us are being outsourced to systems that we neither fully understand nor fully control?
The question is not whether AI will produce a “superhuman” intelligence, but whether humans themselves will remain capable of thinking in the presence of such systems. What forms of subjectivity, community, and identity become possible — or impossible — when intelligence is no longer anchored in the living being who thinks?
Bernard Stiegler’s position on this matter is both peculiar and nuanced. He argues that the externalization of intelligence, memory and cognition — now accelerated by AI — risks producing a “proletarianization of the mind,” a mutilation of human intellectual capacities. However, because he claims that technology is the condition of the possibility of human becoming, his major question is not whether AI threatens the human, but what new forms of individuation might emerge when intelligence is algorithmically externalized.
Donna Haraway, approaching the question from yet another angle, invites us to abandon the fantasy of a pure, autonomous human subject. Her cyborg is not a prophecy of machines replacing humans, but a reminder that we have always been hybrid beings, constituted through networks of biological, technical, and symbolic relations. AI merely intensifies this hybridity, forcing us to reconsider the boundaries of embodiment, action, and responsibility.
In a very different but related field of inquiry — one he effectively inaugurated under the name of philosophy of information — Luciano Floridi follows a similar direction, arguing that we already inhabit an infosphere in which humans and artificial agents coexist as informational organisms. By contrast, Daniel Andler, working at the intersection of cognitive science and philosophy in his recent book Intelligence Artificielle, Intelligence Humaine: la Double Énigme (2023), acknowledges that computational models have significantly advanced our understanding of certain cognitive processes, yet ultimately argues that, unlike AI, human intelligence cannot be reduced to mere computation. It is situated, normative, and historically formed. In his view, the real danger is not that machines will become too intelligent, but that humans may come to misunderstand their own intelligence by modelling it too closely on machines.
We thus welcome original contributions addressing, but not limited to, the following themes:
- What is truly at stake when intelligence becomes externalized, automated, and autonomous? What becomes of intelligence when cognitive functions are delegated to, or surpassed by, machines?
- Can we still speak of intelligence in the singular, or must we acknowledge a plurality of intelligences—organic, artificial, distributed, collective?
- How are we to conceptualize work, action, and responsibility in a world where thinking is distributed across human and artificial systems?
- If intelligence is distributed across human and non-human actors, how might we rethink the political framework of a community?
- What does it mean for us to think in the presence of AI systems that increasingly think in our place?
- Does AI expose the fragility of the human, or does it announce a new figure of the human: augmented, hybrid, post-biological?
The debate on human and artificial intelligence has also developed along several other directions. Other important questions may be, and indeed are being, addressed, such as:
Our conception of what it means to think has always assumed that this task was the privilege of a biological body; that only a body that lives and dies is capable of thinking. But what does it mean to think without a body? The human body is porous, desiring, alive; inhabited by other beings, crossed by rhythms, bacteria and cosmic dust. It grows and decays, it transforms, it becomes plant, animal, mineral. By contrast, the body of the machine is rigid, metallic, finite, with no metabolism, no eros, no death. Sloterdijk would say it lacks a “sphere”: the capacity to co-inhabit, to breathe with others, to form shared worlds. How does the absence of such an embodiment limit the machine? What kind of thinking is possible in the absence of this kind of body? How can a machine think if it does not metabolize the world?
Do machines really think, or do they merely calculate? Within the Anglo-American philosophical tradition, a central debate has unfolded between functionalists and their critics. Functionalists such as Hilary Putnam and Daniel Dennett argued that mental states may be characterized by their functional roles rather than by their physical substrate, suggesting that artificial systems could, in principle, instantiate genuine intelligence. Critics like John Searle contended that computation alone cannot generate meaning or intentionality. In these circumstances, how can a machine that only manipulates symbols ever grasp meaning? How can it be said to understand?
Yet, though coming from a very different intellectual lineage, Hannah Arendt would find herself unexpectedly close to Searle when she reminds us that thinking is not the execution of rules but their interruption, their suspension, their fracture: a pause, an indecision, a silent dialogue with oneself. Can a machine interrupt itself? Can it hesitate? Can it question whether it should obey or refuse the very rule it is designed to follow? And if it cannot, what kind of “thinking” is this that operates without doubt, without silence, without the inner fracture that makes thought possible?
These debates extend into epistemology and the philosophy of mind. Can machines know? If knowledge requires reasons, can a neural network provide them? David Chalmers has recently explored whether large-scale language models might instantiate forms of “proto-understanding,” while others argue that machine learning involves no reasoning at all, only pattern extraction.
Further questions concern metaphysics and ethics: personhood, identity, self-awareness, autonomy, the future of human–machine relations. Could an artificial system ever count as a person? If not, why not? If so, what ethical and political implications would follow? Can responsibility be distributed across human and non-human agents? These questions shape contemporary discussions about the future of human–machine relations and the conceptual frameworks required to understand them.
Creation raises yet another crucial issue. Do machines create, or do they only recombine? They do, indeed, generate images, texts, and melodies, but their creation is merely combinatorial, indifferent to time, memory and death. They can imitate style with astonishing precision, but can they inherit a tradition? As Deleuze would say, can they betray that inheritance, push it beyond itself, make it become other? Can they force a tradition to mutate? Can they create something that is not already contained, implicitly or explicitly, in the data? Can they produce differences that did not pre-exist? Or do they merely traverse a space of possibilities already mapped by their data, however vast that space may be?
***
This issue invites contributions that engage these — and other — questions with philosophical rigor and imaginative depth. The aim is to revisit classical questions — about intelligence, mind, knowledge, embodiment, technicity, ethics, politics, the future of human existence — under new conditions. Above all, it seeks to rediscover what human intelligence might mean in an age of artificial intelligences.
Perhaps AI challenges us to clarify both the kind of intelligence we attribute to machines and the kind of intelligence we wish to cultivate in ourselves.
The editors