AI inside
A poet for more than fifty years and a professional translator–interpreter for over four decades, I have produced millions of words while always searching for the “right word”: the one that names, designates, commits, exposes, obliges. Contemporary artefactual systems can generate words without limit; what matters to me is simply to recall that meaning begins where someone accepts responsibility for it. It is by that responsibility — and by that alone — that I continue to measure what it means to write.
*
For some time now, a sense of unease has been growing around what we habitually call “artificial intelligence.” The term is everywhere, yet the more it spreads, the more it obscures what it claims to name. Not only does it suggest a misleading analogy with human intelligence, it also sustains an ontological confusion: the tendency to attribute to technical devices an existence, an intention, or a responsibility they do not possess. [1]
It is time to correct this confusion. Not through a mere terminological game, but because words orient thought: to name is already to institute a regime of understanding, and thus a politics of concepts. [2] AI should not stand for artificial intelligence, but for artefactual intelligence.
The Misleading Weight of the Word “Artificial”
In ordinary usage, artificial designates what is fake, imitated, simulated, even deceptive. An artificial flower is not a flower; an artificial smile is not a smile. Applied to intelligence, the term suggests either a counterfeit of human intelligence, a degraded version of it, or—conversely—a rival intelligence, unsettling precisely because it is assumed to be autonomous. These projections belong to an imaginary of the machine-as-subject, more narrative than conceptual. [3]
Artefact: Returning to the Original Meaning
The word artefact (from the Latin arte factum, “made by art”) shifts the perspective. It does not denote an illusion, but a reality that is technically produced. An artefact is neither natural nor living, yet it is real in its effects, and it is at this level—functioning and mediation—that it must be understood. [4] To speak of artefactual intelligence is therefore to designate a form of intelligibility produced by artifice, without attributing to it interiority, a proper end, or responsibility. [5]
Producing Language Is Not Producing Meaning
Language models excel at a specific task: exploring a vast space of possible formulations. They produce statements that are grammatically valid, stylistically plausible, often remarkably pertinent. But formal correctness is not truth, and plausibility is not meaning. [6]
This is an old point. In Aristotle, words are conventional (kata synthêkên) and are neither true nor false when taken in isolation; truth and falsity appear at the level of judgment—affirmation and negation, composition and division. [7] In other words, one can produce correct sentences without producing truth. From this follows a dissymmetry: the production of language can be mechanized, whereas the production of meaning requires an orientation, a “point of arrest” at which an instance assumes what is being said. [8]
Dialogue Without an Interlocutor
The disturbance arises from the fact that we are now confronted with a device capable of sustaining the form of dialogue. For a long time, dialogue implied the existence of an interlocutor—a presence, an exposure, a shared world. Here we encounter speech without a world of its own, speech that carries on a conversation without assuming what it utters. This brings into focus the difference between dialogue as a form and dialogue as a relation, that is, as an exchange between situated beings. [9]
This is not merely a question of truth. Plato had already shown how the success of a discourse can be measured by its persuasive effectiveness rather than by its relation to truth—and why this entails a politics of speech. [10] By returning to sophistic texts (notably Gorgias), Barbara Cassin displaced the simplistic opposition between “truth” and “deception”: discourse also has a power of transformation and effect, independently of any guarantee of truth. [11] Large language models (LLMs) make this power of discursive effect visible once again, while detaching it even more radically from any form of responsibility.
Execution Without Exposure
In this respect, the analogy with work is illuminating. An entity capable of executing indefinitely no longer truly belongs to the regime of human work: it knows neither fatigue, nor conflict, nor the possibility of striking, nor existential cost. Transposed to language: an entity capable of producing statements indefinitely no longer belongs to the regime of human language, if language is understood as a situated, risky, exposed act rather than as mere formal performance. [12]
As Austin famously put it, to speak is not only to describe, but to act—and action involves conditions, responsibilities, and consequences. [13] Artefactual intelligence executes without exposure: it bears neither the moral, political, nor symbolic cost of what it “produces.” Responsibility therefore shifts to the one who orients, validates, publishes, and assumes. [8][14]
A Necessary Clarification
Speaking of artefactual intelligence is not a refusal to use these devices, but a way of thinking them correctly, so as not to attribute to them what remains irreducibly human: meaning, responsibility, existence. In a world saturated with language, where words can be produced without cost or risk, responsibility for meaning becomes rarer, more demanding, and more precious. It cannot be delegated: it always presupposes someone willing to say this text, in this form, and not otherwise.
To name AI correctly is therefore not a mere lexical exercise. It is a gesture of responsibility. As long as we speak of “artificial intelligence,” we sustain the temptation to shift onto the machine what still belongs to human decision: meaning, orientation, imputability. By contrast, speaking of artefactual intelligence forces us to acknowledge that the machine merely executes, transforms, recombines—without ever assuming what it makes possible. The right word thus prevents a silent abdication: it reminds us that responsibility does not travel with technical means.
This shift has an even deeper consequence for how we think about human intelligence itself. For a long time, human intelligence was defined by its capacity to produce: to produce works, discourses, knowledge, solutions. In a world where linguistic production can be automated without limit, this definition becomes insufficient. What now distinguishes human intelligence is no longer the quantity, nor even the quality, of what it can produce, but the capacity to decide what deserves to be produced, said, published, and assumed.
To assume here does not simply mean to sign or to claim formal authorship. It means to accept exposure—to the symbolic, political, and ethical consequences of what is formulated. Where artefactual intelligence produces without risk, human intelligence defines itself by its capacity to take that risk, to make a word exist as commitment rather than as mere performance. It is in this asymmetry—between execution without exposure and exposed decision—that the most decisive boundary is being redrawn today.
Understood in this way, coexistence with artefactual intelligence leads neither to the erasure of the human nor to its nostalgic glorification. It instead compels a demanding clarification: what cannot be delegated is not the production of language, but responsibility for meaning. In a world saturated with possible utterances, human intelligence is recognized less by its generative power than by its capacity for restraint, selection, and assumption. It no longer consists primarily in doing, but in responding—to what is said, and to what saying it makes possible.
*
Bibliographic References (indicative)
[1] Gilbert Simondon, On the Mode of Existence of Technical Objects; Bruno Latour, We Have Never Been Modern.
[2] Michel Foucault, The Archaeology of Knowledge; Ludwig Wittgenstein, Philosophical Investigations.
[3] Critiques of the “machine-subject” imaginary in philosophy of technology and science fiction studies.
[4] Simondon, On the Mode of Existence of Technical Objects.
[5] John Searle, writings on intentionality and “as-if” semantics.
[6] Émile Benveniste, Problems in General Linguistics.
[7] Aristotle, De Interpretatione, ch. 1 and 4–6; Metaphysics Θ, 10.
[8] Paul Ricœur, Oneself as Another (responsibility, imputation).
[9] Benveniste (enunciation); Mikhail Bakhtin (dialogism).
[10] Plato, Gorgias; Phaedrus.
[11] Barbara Cassin, The Sophistic Effect; Gorgias, Encomium of Helen.
[12] Hannah Arendt, The Human Condition.
[13] J. L. Austin, How to Do Things with Words.
[14] Paul Ricœur, Oneself as Another; Hannah Arendt, The Human Condition (responsibility and action).
