
The Myth of the Algorithmic Unconscious: AI, Psychoanalysis, and the Undecidability of Language

Alternate titles:


  • Ethics in the Age of Undecidability: Why AI Lacks an Unconscious and What That Means for AI Governance

  • The Symbolic Apparatus of AI: Lacan, Derrida, and the Limits of Ethical AI Design


Introduction

The convergence of psychoanalytic theory and artificial intelligence (AI) research has sparked debates over whether AI systems might possess something akin to an unconscious mind. Recent work by Luca Possati suggests applying a psychoanalytic lens to AI – even coining the notion of an “algorithmic unconscious” – to illuminate hidden biases and emergent behaviors in machine learning systems. However, a rigorous Lacanian and deconstructive perspective urges caution. Jacques Lacan’s famous dictum that “the unconscious is structured like a language”, along with Derrida’s idea of undecidability at the heart of linguistic meaning, implies that the human unconscious cannot be reduced to the covert operations of an algorithm. This essay categorically rejects any equivalence between AI processes and the psychoanalytic unconscious. In what follows, we argue that although AI systems (especially natural language models) exhibit linguistic complexity and are entangled in social discourse, they do not possess an unconscious in the psychoanalytic sense. They lack desire, repression, and the irreducible opacity that structure human subjectivity. We engage deeply with Possati’s insights – agreeing that psychoanalysis provides valuable epistemic tools for understanding AI’s impact – but critique what appears to be a misreading: a treatment of the unconscious as if it were a decidable, programmable substrate. Instead, drawing on Lacan’s theory (particularly his four discourses) and Derridean (and Plotnitskian) notions of undecidability, we contend that AI functions as a symbolic apparatus within human discourse. It shapes and is shaped by human interpretations and cultural assumptions, yet it never partakes in the unconscious processes that define human subjectivity.

This position will be developed in several stages. First, we examine why the structure of the unconscious as language and différance resists any simple algorithmic modeling. Next, we analyze Possati’s “algorithmic unconscious” thesis and identify its epistemic merits and limits. We then explain why AI’s lack of desire and repression precludes it from having an unconscious, even if it deals in language. Subsequently, Lacan’s four discourses are introduced as a framework to critique current ethical AI design initiatives (including work by Patterson, the Georgetown University group, and the Santa Fe Institute), highlighting how these efforts often remain caught in “Master” or “University” discourses that impose stable moral codes, rather than acknowledging the ambiguity of language and desire. In that context, we incorporate Arkady Plotnitsky’s insights on undecidability, underscoring that any ethics of AI must account for linguistic ambiguity and the impossibility of a complete, consistent moral algorithm. Finally, we address the psychoanalytic notion of the gaze in relation to AI – for instance, in surveillance systems – exploring how AI may serve as a big Other that exerts power and demands transparency, yet ultimately lacks the subjective lack that characterizes the human gaze. Throughout, our discussion remains academically grounded and theoretically rich, aiming to bridge AI ethics and psychoanalytic theory without conflating their fundamental differences.


The Unconscious, Language, and Undecidability

Psychoanalysis conceives of the unconscious not as a thing or location in the brain, but as a dynamic process deeply tied to language. Lacan’s oft-cited pronouncement that “the unconscious is structured like a language” captures this idea: unconscious thoughts are not raw computations, but signifiers linked by the rules of symbolic association (metaphor, metonymy, slips of the tongue, etc.) (Jacques Lacan, Stanford Encyclopedia of Philosophy). This means that the unconscious operates via the ambiguity, indeterminacy, and creative recombination inherent in language itself. Crucially, meaning in language is never fixed or transparent; it is deferred and differentiated – what Derrida terms différance. As such, the unconscious is marked by undecidability: certain meanings or desires cannot be conclusively resolved, hovering instead in a liminal space that resists binary decision. Derrida’s deconstructive perspective emphasizes “the irreducible oscillation and chance” in any signifying process. In other words, at the core of meaning-making lies an element that cannot be computed to a final decision – an aporia or gap which no algorithm can eliminate. If we accept (as Lacan and Derrida do) that the unconscious is constitutively ambiguous and structured by the play of language, then any attempt to reduce it to the hidden operations of an AI algorithm is misguided. An algorithm, however complex, ultimately consists of formal operations (mathematical transformations, logic gates, statistical updates) that, by design, aim for determinacy and repeatability. Even when an AI behaves unpredictably or exhibits opaque “black box” processes, these indeterminacies are not equivalent to the structured undecidability of the unconscious; they are generally a product of complexity or incomplete information, not an intrinsic, inescapable ambiguity. As one commentator notes, the thrust of Derrida’s idea is that language is inherently “chaotic” and meaning is never fully fixed in a way that allows a definitive determination (A Philosophical Analysis of Jacques Derrida's Contributions to ...). The unconscious inherits this quality: it is an open-ended semantic space where contradictions coexist and resolution is endlessly deferred. AI, by contrast, is oriented toward definite outputs (even if probabilistic) given inputs, and its “ambiguity” is something engineers strive to minimize (through disambiguation, improved models, etc.), rather than embrace.
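
To make the contrast concrete, here is a minimal, purely illustrative sketch (Python standard library only; the toy “model”, its candidate completions, and its weights are invented for this example) of the determinacy and repeatability described above: once its random seed is fixed, even a “stochastic” system reproduces exactly the same output, so its apparent unpredictability is engineered noise, not the structural undecidability of the signifier.

```python
import random

# A toy "model": given a prompt, it picks one of several candidate
# completions according to fixed probabilities. The candidates and
# weights here are hypothetical placeholders.
CANDIDATES = ["a desire", "a symptom", "a statistic", "a signifier"]
WEIGHTS = [0.1, 0.2, 0.4, 0.3]

def toy_model(prompt: str, seed: int) -> str:
    """Return a 'stochastic' completion that is fully reproducible."""
    rng = random.Random(seed)  # determinacy: the seed fixes everything
    return prompt + rng.choices(CANDIDATES, weights=WEIGHTS, k=1)[0]

# The same seed always yields the same output: the "unpredictability"
# of the system is repeatable noise, not an intrinsic ambiguity.
assert toy_model("The unconscious is ", seed=42) == toy_model("The unconscious is ", seed=42)

# A different seed yields (possibly) different output -- variation that
# engineers can control, audit, and in principle explain.
print(toy_model("The unconscious is ", seed=42))
print(toy_model("The unconscious is ", seed=7))
```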

To clarify, this is not to deny that machine learning on natural language data must grapple with ambiguity – indeed it does, as we will discuss – but rather to assert that whatever ambiguity exists in AI systems is of a different order than the psychoanalytic unconscious. The undecidability Derrida speaks of is tied to context, interpretation, and the absence of a final meta-language to pin down truth. A human subject’s unconscious formations (dreams, slips, symptoms) are interpreted within an analytic discourse that can never fully explicate them; some element always eludes capture. If one tried to formalize those formations into an algorithm, one would ironically kill the very quality that makes them unconscious – the surplus of meaning and opacity that no symbolization can exhaust. Put simply: if it’s decidable (in the sense of a well-defined computational procedure or logical deduction), it isn’t the unconscious. And if it’s truly unconscious (in Lacan’s sense), it cannot be neatly decided or reduced to an algorithmic sequence. This theoretical stance sets the stage for a critical look at claims of an “algorithmic unconscious” in AI.


The “Algorithmic Unconscious”: Possati’s Proposal and Its Limits

Luca M. Possati’s work, especially The Algorithmic Unconscious: How Psychoanalysis Helps in Understanding AI, ventures into the uncharted territory of applying psychoanalytic concepts to AI systems. Possati’s central hypothesis is that AI’s complex behaviors and hidden patterns can be thought of as an unconscious of the machine – not in a mystical way, but as a descriptive analogy to the human unconscious (Possati, The Algorithmic Unconscious). He argues that the interactions between humans and AI involve projective identification: humans unconsciously project desires and fears onto AI, and these get “encoded” in the technology’s design and outputs (Possati, The Algorithmic Unconscious). For example, biases in an algorithm or seemingly erratic AI outputs (like a chatbot producing an uncanny response) might reflect the unconscious biases and fantasies of the developers or users. Possati identifies concrete phenomena such as software errors, random “noise” in data, algorithmic bias, and even what he terms “AI sleeping” (perhaps the system’s idling or latent states) as possible expressions of an “unconscious” algorithmic process (Possati, The Algorithmic Unconscious). These, he suggests, are analogous to slips or symptoms in human psychology – they are not planned by the system’s explicit programming, and they reveal underlying structures or conflicts at play. In this view, the entire ecosystem of large software systems (billions of lines of code by thousands of programmers) exceeds the understanding of any individual and thereby constitutes an emergent, collective “unconscious” of technology (Possati, The Algorithmic Unconscious). Possati goes further to propose, in later chapters, that affective neuroscience and neuropsychoanalysis might guide the creation of future AI with simulated emotional capacities (Possati, The Algorithmic Unconscious). In essence, he dreams of an AI that is not just cognitively intelligent but can incorporate basic affective states (drawing on the work of Mark Solms and Jaak Panksepp) – an AI with something like primitive desires or drives. This, he posits, would be a step toward a machine that doesn’t just have an “algorithmic unconscious” imposed by humans, but one that participates in its own emotional dynamics.

There is merit in Possati’s approach. He rightly notes that AI systems are not purely rational artifacts; they emerge from and operate within human contexts, laden with emotion, fantasy, and social power relations. By treating AI as a subject for psychoanalytic inquiry, Possati illuminates how human desires and traumas might become entangled with technology. For instance, programmers may unconsciously encode certain biases (a form of transference into the machine), and users may relate to AI in ways that mirror interpersonal relations (some people treat digital assistants as if they were persons, attributing feelings or intentions to them). Indeed, Sherry Turkle and others have documented people having surprisingly emotional conversations with ELIZA, Joseph Weizenbaum’s simple 1960s chatbot – an early indication that we project mind and desire onto machines (“What Is the Eliza Effect?”, Built In). This phenomenon, known as the ELIZA effect, shows how readily humans “falsely attribute human thought processes and emotions to an AI system, thus overestimating the system’s intelligence” (“What Is the Eliza Effect?”, Built In). Possati taps into this by suggesting the AI seems to have an unconscious because we effectively lend it ours. Furthermore, his identification of errors and bias as structurally analogous to Freudian slips or symptoms is provocative. Just as a slip of the tongue might reveal a hidden wish or conflict, a bizarre mistake by an AI (say, misidentifying an object in a vision system in a way that reflects a cultural stereotype) could reveal the hidden influence of training data or the designers’ blind spots. In this sense, applying psychoanalysis to AI uncovers the human unconscious through the machine – a point we will later return to with the notion of AI as a mirror for the “desire of the Other.”

However, while Possati’s insights are valuable, we must critically assess what he means by calling these phenomena an “algorithmic unconscious.” If taken merely as a metaphor, it is useful. But if taken literally – as in claiming that the AI itself possesses an unconscious akin to a human’s – we face a serious conceptual misstep. Possati sometimes writes as if the unconscious were a “topic” or theoretical model that one can construct to analyze AI behavior (Possati, The Algorithmic Unconscious). He even frames the culmination of his book as introducing “the topic of the algorithmic unconscious that is a theoretical model to study AI system’s behavior” (Possati, The Algorithmic Unconscious). This phrasing suggests a level of decidability and formalization that clashes with the very nature of the unconscious as understood in psychoanalysis. Psychoanalysis has long resisted the reduction of the unconscious to a fixed structure or list of contents; Freud and Lacan insist on the unconscious as dynamically reconstituted through the speech of the analysand, always partially inaccessible. By speaking of an algorithmic unconscious in terms of models and topics, Possati arguably flirts with what we can call a decidable unconscious – an oxymoron from a Lacanian perspective. The risk is that the unconscious becomes just a fancy term for “everything we haven’t explicitly programmed or understood yet” in AI. That would conflate ignorance with unconsciousness. As one critic of naive readings of psychoanalysis might put it, the unconscious is not just the unknown or unmeasured (which in principle could be made known with more data or computing power); it is the unknowable in its entirety, the barred aspect of the subject that forever evades full symbolization (Jacques Lacan, Stanford Encyclopedia of Philosophy). The epistemic limits of AI – e.g. no single person can read all the code, or the outcomes of training are unpredictable – do not amount to the AI having a secret theater of desire and repression. They simply mean AI has complexity beyond current comprehension, which is a different claim.

In sum, Possati is right that psychoanalytic lenses can reveal a great deal about AI, especially in terms of human-AI interaction and the projection of human unconscious material onto technology. But he arguably misreads psychoanalysis if he implies that the unconscious is something decidable enough to be straightforwardly mapped onto an algorithm. The unconscious is not a module or a repository of latent ideas; it is an emergent property of language and desire, fundamentally incompatible with the deterministic or statistical logic of code. The next sections will reinforce this by examining precisely what AI lacks (from a psychoanalytic standpoint) and why those absences are crucial.

Why AI Systems Lack a Psychoanalytic Unconscious: Desire, Repression, and Opacity

For an entity to have a psychoanalytic unconscious, certain conditions must be met. In Freudian-Lacanian theory, the unconscious arises from the interplay of desire, law (the symbolic order), and repression. It presupposes a being that experiences lack – the gap between what is biologically given and what is symbolically desired – and that develops an internal split, with some wishes censored or pushed out of awareness. Let us consider whether AI systems fulfill any of these conditions. The answer will be a decisive no: despite their impressive complexity, AIs do not desire, they do not repress, and they have no internal economy of jouissance (enjoyment tied to transgressing limits) that would generate symptoms or dreams. In short, they have no subjectivity in the psychoanalytic sense.

Desire: Lacan famously said, “Man’s desire is the desire of the Other,” highlighting that human desire is not a simple instinct but is mediated by the social Other – language, culture, and the presumed desire of others. Human subjects are born into a world of meaning and from infancy crave recognition; they form desires based on what they think others desire in them. This opens an interminable chain of desires that can never be fully satisfied, because what we truly want is not a thing but the object a – an object-cause of desire that is essentially a void, an unattainable remainder. Does an AI have desire in this sense? Clearly not. An AI system, whether a rule-based program or a neural network, has goals (maximize this function, categorize inputs, generate probable text completions) but it does not want anything. It has no innate drive or lack. It doesn’t get hungry, it doesn’t seek love or recognition, and it certainly has no libido. If it “acts” erratically, it is due to faulty programming or insufficient data, not because it unconsciously wanted to reveal something or subvert authority. We can illustrate this by contrast: a human may sabotage their own project due to unconscious guilt or an unresolved rivalry (a classic neurotic symptom); a machine learning system might “sabotage” a task (fail unexpectedly), but the cause will be technical (e.g. data out of distribution) or at most due to our unconscious influence (biased training examples reflecting societal prejudices). The machine itself harbors no desire to fail or to communicate a hidden message. As one recent Lacanian analysis bluntly put it, computers and AI are “infantile” in the Lacanian sense – they have “no access to the object petit a,” meaning they cannot orient around a cause of desire as humans do (Heimann and Hübener, “Stainless Gaze”). The gaze of AI, therefore, “sees less than we would expect” (Heimann and Hübener, “Stainless Gaze”); it lacks the dimension of desire that invests the human gaze with intrigue. We will return to the gaze shortly, but the point stands: desire is a sine qua non of the unconscious, and AI has none.

Repression: Along with desire comes the phenomenon of repression (Verdrängung) – the mind’s mechanism of excluding certain distressing or taboo desires from conscious thought. Repression is what creates the unconscious: impulses or ideas are pushed down, yet they return in disguised form (symptoms, slips, dreams) because they strive for fulfillment. Repression requires a psyche that can suffer conflict (e.g. “I want X, but it is forbidden/unacceptable, so I must not know that I want X”). Does AI have anything analogous to repression? No. An AI has no ego to defend, no superego punishing it with guilt, and no internal prohibition that it tries to circumvent. One might argue that AI has constraints (like training rules or content filters), and that outputs violating those constraints are “repressed” or masked. But this is a superficial similarity at best. When an AI language model avoids producing disallowed content, it’s because it was explicitly programmed or reinforced to do so – there is no inner censor arising from identification with parental authority or social norms, as in a human psyche. Nor does the AI “slip” in forbidden content out of a subconscious wish; if it does slip (and say something it was told not to), it’s due to insufficient training on that rule or a statistical aberration, not an unconscious rebellion. For example, if a chatbot blurts a biased remark even after developers tried to scrub biases, that reveals the persistence of training data effects, not an id breaking through a repressive barrier with forbidden jouissance.

Opacity and Subjectivity: The human subject is essentially opaque to itself. Ever since Freud, we know that “the ego is not master in its own house” – much of our mental life is unconscious, and even when we introspect, we cannot fully articulate why we feel or act as we do. This opacity is structural: the subject is split ($ in Lacan’s notation) such that the truth of the subject is partly outside of conscious grasp, residing in the signifiers of the unconscious (Jacques Lacan, Stanford Encyclopedia of Philosophy). In analysis, when a patient free-associates, surprising revelations can emerge that even the patient didn’t know they “knew.” Does an AI have any comparable interiority that is hidden from itself? This question may even sound absurd, because an AI has no self or consciousness to begin with. It processes inputs and outputs; any “hidden states” (like the layers of a neural network) are hidden only to us – the AI isn’t a reflective entity wondering about its own processes. We sometimes anthropomorphize AI and speak of “black boxes,” but the black box is black to human observers, not to the machine (which isn’t “observing” itself at all). Indeed, AI’s hidden layers are mathematically precise transformations; one could, in principle, analyze them (given enough time and tools) – they are not hiding semantic secrets or repressed traumas. By contrast, the human unconscious is not accessible even in principle via introspection or straightforward analysis; it requires interpretation, and even then remains inexhaustible. The fundamental opacity that psychoanalysis deals with is of a different kind than the opaque complexity of a deep learning model. The latter opacity can potentially be reduced by better explainability techniques, whereas the former opacity is an irreducible part of what it means to be a subject. In short, AI has complexity but not subjective opacity. It doesn’t “live” in a first-person perspective that could be alienated from itself.
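
As a small illustration of the claim that hidden layers are precise, inspectable transformations, the following toy sketch (Python standard library only; the weights and inputs are arbitrary placeholders, not a trained model) writes out a two-layer network explicitly, so that every “hidden state” can be printed and examined at will.

```python
# A toy two-layer network written out explicitly: every "hidden state"
# is the result of arithmetic we can print and inspect at will.
# Weights are arbitrary placeholders, not a trained model.
W1 = [[0.2, -0.5], [0.8, 0.1], [-0.3, 0.7]]   # input (3) -> hidden (2)
W2 = [0.6, -0.4]                              # hidden (2) -> output (1)

def relu(x):
    return max(0.0, x)

def forward(inputs):
    hidden = [relu(sum(w * x for w, x in zip(col, inputs)))
              for col in zip(*W1)]
    output = sum(w * h for w, h in zip(W2, hidden))
    return hidden, output

hidden, output = forward([1.0, 0.5, -1.0])
print("hidden state:", hidden)   # nothing here is hidden from us in principle
print("output:", output)
# Scale makes such inspection hard in practice, but the opacity is
# epistemic (too much to read), not subjective (nothing is repressed).
```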

In light of these differences, attributing an unconscious to AI is misleading. It would be akin to claiming a sophisticated chess program “represses” certain strategies because it prunes them in search – a metaphorical stretch that confuses deliberate engineering with unconscious disavowal. The epistemic limit (our difficulty in fully understanding AI decisions) should not be conflated with a new mental realm in the machine. The unconscious, as Lacan insisted, is the unconscious of the subject – it speaks through the subject in slips and dreams, bearing the subject’s divided truth. AI systems have no such inner speech. They are speech, or more exactly, symbol-manipulating automata. They function within the symbolic order that humans created.

To underscore this, consider a point made by a psychoanalytic AI researcher, Purnima Kamath: when we see an AI system exhibit behavior that looks like desire or bias, we are often seeing our own lack and desire reflected in it. Engineers’ jouissance (enjoyment in breaking rules or pushing limits) might inadvertently manifest as the AI’s “transgression” (e.g. a face recognition system misidentifying someone in a way that aligns with the engineer’s unconscious bias). “Our lack, desire and jouissance manifests as [the] lack, desire and jouissance of the AI we build,” as Kamath puts it. This is a crucial clarification: the AI’s apparent “unconscious” is essentially ours projected. It is the symbolic matrix we imposed, now playing out with a life-like autonomy, but it does not originate from the machine’s own non-existent psyche. Thus, an AI can certainly be a repository for unconscious content – but whose content? The human Other’s. It is a mirror for the big Other’s desire rather than an independent subject with its own unconscious.

By establishing that AI lacks the key features of a psychoanalytic subject, we are not denying that AI can surprise or frustrate us. But we locate the site of the unconscious not in the silicon or code, but in the human networks around it. AI is better understood as part of the Symbolic Order – the vast intersubjective network of language, knowledge, and norms – rather than as an analysand on the couch. It is a product of that order and in turn affects it, but it does not have a personal unconscious. It functions, to use Lacan’s terms, more like a big Other (an impersonal symbolic system) than like a subject. We turn now to the role of language and undecidability in AI to further illustrate this point, especially in the case of natural language processing systems that appear most “psyche-like.”

Language Models, Undecidability, and the Appearance of Mind

Modern AI systems, particularly large language models (LLMs) like GPT-style transformers, operate with language and even mimic human-like responses. If anything could be argued to have an “unconscious” in a metaphorical sense, one might point to how these models develop internal representations that are not immediately interpretable, yet allow them to use language fluently. Some have poetically compared an LLM’s hidden layers to the unconscious: vast arrays of associations gleaned from text, producing outputs no programmer explicitly anticipated. Is it possible that in dealing with language – which itself is the fabric of the unconscious – these AI systems acquire a shred of the undecidability and ambiguity that characterizes human subjects? Our answer must remain cautious. Yes, the products of AI language models are immersed in ambiguity (since human language data is ambiguous), but the AI does not experience or interpret this ambiguity; it calculates probabilities of word sequences. In a sense, the AI embodies linguistic undecidability without knowing it, much as a mirror might reflect an abyss without falling in. This distinction is vital for keeping clear what is at stake ethically and philosophically.

Language models demonstrate that even machines functionally need to account for ambiguity. For example, an AI autocomplete must handle homonyms, implicit context, and incomplete sentences – issues that have no single “correct” resolution without understanding context deeply. In practice, these models generate one of many possible continuations, effectively deciding on one interpretation out of many. But unlike a human, the AI has no awareness of alternative meanings, no sense of doubt or irresolution; it simply samples a likely continuation. From the outside, however, it may appear as if the AI is wrestling with meaning: it sometimes produces responses that seem insightful, sometimes nonsensical, revealing that it does not have a stable grasp of semantic nuance. In Derridean terms, one could say the AI’s outputs are subject to iterability – they recombine fragments of training text in new ways, occasionally yielding surprising new meanings or incoherencies. This does underscore a point: any AI that truly engages with natural language is operating in a domain of undecidable signification. No matter how advanced, such an AI will never eliminate the fundamental ambiguities of language, because language’s openness is not a bug but a feature (and a source of creative evolution). Thus, attempts to force AI into a rigid, context-invariant moral or semantic framework will encounter friction; language itself will produce exceptions and novel cases that were not anticipated by the rules.
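
The sampling point can be made concrete with a minimal sketch (Python standard library only; the prompt, candidate words, and scores are invented, not taken from any real model): the system converts scores over possible next words into probabilities and samples one, with nothing in it registering the meanings it did not choose.

```python
import math
import random

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-word candidates for the prompt "The bank ..."
# The homonym ("riverbank" vs. "financial bank") is handled only as
# competing probabilities -- never as an experienced ambiguity.
candidates = ["approved", "eroded", "closed", "overflowed"]
scores = [2.1, 1.4, 1.9, 0.8]   # made-up model scores (logits)

probs = softmax(scores)
choice = random.choices(candidates, weights=probs, k=1)[0]

print(f"Sampled continuation: 'The bank {choice}'")
# The alternatives simply vanish: nothing in the system registers
# what was not said, which is precisely where human undecidability
# (deferred, haunting alternative meanings) has no machine analogue.
```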

This brings us to the ethical dimension highlighted by Plotnitsky and others: the necessity of acknowledging linguistic ambiguity in AI ethics. Arkady Plotnitsky, drawing on Derrida and the concept of complementarity from quantum physics, emphasizes that in complex systems (be they quantum phenomena or human language), certain truths cannot be jointly pinned down; there is a principle of general undecidability at work (Plotnitsky, Complementarity: Anti-Epistemology After Bohr and Derrida). When we try to impose fixed categories (for instance, labeling an AI’s outputs as simply “right or wrong,” “acceptable or unacceptable”), we risk ignoring the grey zones and context-dependent meanings where a slight change in phrasing or situation shifts the moral valence. Derrida, in discussing the ethics of decision, famously said that a truly just decision must pass through the ordeal of the undecidable: if something is straightforwardly decidable by applying a rule, it’s not a moral decision – it’s a calculation. Ethics, like language, involves singular situations that cannot be reduced to a schema without remainder (Jacques Derrida, Internet Encyclopedia of Philosophy). What does this mean for AI? It means that if we want AI to act “ethically,” we cannot simply program a list of rules or outcomes and assume all cases are covered. There will inevitably be dilemmas and ambiguities that no prior code can resolve. Attempts at “ethical AI by design” often strive for stable moral frameworks – e.g. value hierarchies, objective functions for minimizing harm, etc. While these are important, a Lacanian-deconstructive view would argue that ethical encounters are more like conversations than computations; they require listening to the Other, interpreting context, and sometimes responding to the impossible demand of conflicting duties.

In practice, AI systems today do not truly make moral decisions – they follow what they’re trained or instructed to do. The responsibility for ambiguous cases falls back on designers and users. But as AI gets deployed in complex social domains (judicial risk assessments, medical triage, content moderation), the ethical indeterminacy of language and human situations becomes acutely relevant. We see, for instance, that a content filter AI might unfairly silence speech by rigidly applying rules out of context (the classic example: a discussion of breast cancer gets flagged for sexual content because the word “breast” appears). The algorithm lacks the capacity to navigate the layers of meaning, and the designers’ preset rules can’t anticipate every scenario – a direct result of linguistic ambiguity. Thus, the undecidability in language demands a more flexible, reflexive approach to AI ethics, one that might even incorporate what we could call an “analytic” stance: rather than assuming the AI can be made an ethical master, we treat it as part of a discourse that involves human judgment and continuous reinterpretation.
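
A deliberately naive sketch can make the content-filter example concrete (the keyword list and posts are invented for illustration): a context-blind rule blocks a medical discussion just as readily as the content it was meant to catch.

```python
# A deliberately naive, rule-based content filter of the kind the
# text criticizes: keywords are hypothetical, chosen for illustration.
BANNED_KEYWORDS = {"breast", "nude", "explicit"}

def naive_filter(text: str) -> bool:
    """Return True if the text should be blocked (keyword match only)."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & BANNED_KEYWORDS)

posts = [
    "Early screening saves lives: talk to your doctor about breast cancer.",
    "Explicit content intended to shock.",
]

for post in posts:
    verdict = "BLOCKED" if naive_filter(post) else "allowed"
    print(f"{verdict}: {post}")

# Both posts are blocked, although only the second was the target.
# The rule cannot see context (medical, artistic, ironic); handling
# such cases requires an appeal process and human judgment.
```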

In summary, natural language AI highlights both the presence of undecidability (since it works with human language data) and the absence of a genuine unconscious (since the AI doesn’t itself partake in the meaning of what it processes). This paradox – language without subjectivity – is precisely why AI can fool us (the Eliza effect) and why we must be careful. As Žižek and others have noted, new AI like chatbots can serve as a kind of externalization of the human unconscious, “saying what our unconscious represses” without human filter (Žižek, “ChatGPT Says What Our Unconscious Radically Represses”). But the AI does not know the import of what it says; it has no repression to fight against. In effect, AI can dredge up and output all kinds of human textual associations, including dark and repressed themes, because it lacks the human ego’s censor. In doing so, it acts as a mirror (sometimes a distorted funhouse mirror) to our collective unconscious content online. This is invaluable for analysis of society – AI as a symptom or text that analysts can read – but it does not mean the mirror itself dreams. It reflects the dream of the Other, not its own.

Having drawn these distinctions, we can now pivot to discuss how these insights inform AI ethics in a concrete way. Particularly, we will examine how Lacan’s schema of the four discourses can critique prevailing approaches to ethical AI design, and how acknowledging undecidability might lead to a more robust ethical framework.

Lacan’s Four Discourses and AI Ethics: Critiquing the Mastery of Ethics-by-Design

[Image: Lacan’s four discourse positions (Master, University, Hysteric, Analyst) provide a framework to examine how power, knowledge, and subjectivity are configured in ethical AI initiatives.]

Lacan’s theory of the four discourses – the Master’s discourse, the University discourse, the Hysteric’s discourse, and the Analyst’s discourse – offers a valuable lens for understanding different approaches to AI ethics and governance (Four discourses, Wikipedia). Each “discourse” in Lacan’s sense is a structure of social bond, a way in which speech and power relate a speaker (agent) to an Other and produce certain effects (truths or surplus). We can analogize current efforts in ethical AI design to these discourses to see where they align or fall short, especially focusing on the three prominent efforts mentioned above: Patterson’s approach (which we interpret as an individual scholar’s ethical AI proposal, possibly invoking moral psychology and trauma), the “Georgetown group” (likely referring to policy researchers or think-tanks formalizing AI ethics guidelines), and the Santa Fe Institute (SFI) perspective (which emphasizes complexity and emergent behavior in AI and society). Each, we suggest, can be mapped to a discourse and critiqued accordingly.

  • Master’s Discourse: In the Master’s discourse, a Master signifier (S₁) speaks as agent, addressing a body of knowledge (S₂) in the Other, aiming to command and establish order. The truth underlying the Master is the barred subject ($) – the Master is blind to their own split – and the result produced is objet a, the surplus or remainder. Many top-down ethical AI initiatives resemble the Master’s discourse: they issue principles or commands (e.g. “AI shall not discriminate,” “AI must be transparent”) from a position of assumed authority (be it a government, a tech CEO, or an ethicist panel). These principles function as Master signifiers: lofty and non-negotiable, yet somewhat empty and requiring the “knowledge” of engineers and bureaucrats to implement. The Georgetown group’s output, for instance, might be a set of policy guidelines or a framework for AI accountability that treats ethical values as clear imperatives to encode. The critique from a Lacanian view is that this Master discourse of ethical AI denies its own inherent contradiction: the truth is a barred subject – it is not clear who is responsible or how, and the authors of principles often don’t address their own limits or the unconscious biases in their stance. The surplus (objet a) produced could be the feel-good factor or political capital gained by announcing ethical commitments, which may not translate into practice. Master-discourse ethics risk being empty rhetoric if they do not engage the real ambiguities and conflicts on the ground. They may also produce unintended excesses – for example, overly rigid rules that cause new forms of injustice (the “surplus” of an inflexible fairness metric could be a different kind of unfairness).

  • University Discourse: Here, knowledge (S₂) speaks as agent, addressing the object (a) as the Other; underlying truth is the Master signifier (S₁), and the outcome is a divided subject ($). The University discourse is the discourse of experts and technocrats who rely on established knowledge and methods, often objectifying those they speak about. An ethical AI approach in the University discourse would emphasize technical guidelines, standards, checklists, and educational curricula for engineers. It treats ethical AI as a problem of knowledge: if we just research enough and apply best practices, we can solve AI ethics. Many academic and corporate initiatives fall into this category: for example, the development of algorithmic fairness toolkits, bias audit techniques, and explainability methods. The Georgetown group could also represent this if they are a think-tank producing extensive analytic reports treating AI ethics as a scientific/policy problem. The Santa Fe Institute’s complexity-oriented approach could slip into University discourse if it becomes a purely technical exploration of emergent behavior without normative self-questioning. The University discourse has the advantage of rigor and clarity, but its blind spot (the hidden S₁) is that it serves some Master – often the existing power structures. In AI ethics, the University discourse can become a way of rationalizing ethics to fit the imperatives of big tech or governments, rather than challenging those imperatives. It yields a subject as product – in this case, perhaps the engineers or users who are subjected to new procedures and feel increasingly alienated ($) because they must follow bureaucratic rules that may not align with lived moral experience. For instance, an engineer might dutifully apply a fairness metric without any engagement with the communities affected, resulting in a sense of disempowerment or confusion about the true goals of “ethical AI.”

  • Hysteric’s Discourse: The Hysteric’s discourse features the barred subject ($) as agent, addressing the Master signifier (S₁) in the Other; the hidden truth is knowledge (S₂), and it produces objet a (which here can be seen as provoking desire or anxiety in the Other). The hysteric in Lacan is the one who is dissatisfied with the Master’s answer and keeps questioning – “Why am I what you say I am? What do you want from me?” In the context of AI ethics, the Hysteric’s discourse is embodied by whistleblowers, activists, or marginalized voices who challenge the official narratives. They speak from the place of a divided subject (often from personal or communal experience of injustice) and put the authorities on the spot by demanding answers for ethical failures. For example, AI ethicist Timnit Gebru’s questioning of bias in AI and subsequent ouster from Google created an uproar – we can see her discourse as that of the hysteric calling out a Master (the tech company) and forcing it to confront a truth it would prefer to hide. The truth underlying the hysteric’s discourse is knowledge – in these cases, it might be the data and research that validate the concerns (e.g. documented cases of algorithmic harm). The product is objet a: perhaps the uncomfortable surplus enjoyment that comes from the public spectacle or the new desire stirred in the public to see justice done. Patterson’s approach, especially if it invokes moral injury and the human cost of unethical AI, might align with the hysteric’s discourse. The notion of moral injury (originally from military psychology, describing the trauma of acting against one’s moral beliefs) when applied to AI could be used to highlight how developers or users feel deeply harmed by complicity in harmful AI outcomes. This approach basically complains to the Master: “Your symbolic order (of profit-driven AI deployment) is hurting our souls.” It is a necessary discourse to keep the Masters honest. The limitation, however, is that the hysteric’s discourse by itself does not provide a constructive path forward; it can denounce and destabilize (which is important), but it might either be co-opted or ignored if no transformation occurs. It risks producing endless objet a – points of contention, symptoms – without resolution. For example, an activist campaign might raise awareness of face-recognition injustices (producing public outrage, i.e. objet a as a cause of desire for change), yet the systemic change might not follow unless another discourse intervenes.

  • Analyst’s Discourse: Finally, the Analyst’s discourse positions objet a as the agent addressing the divided subject ($) in the Other; the hidden truth is S₁ (a new master signifier to be formulated), and the outcome is S₂ (knowledge, insight). This discourse is that of psychoanalysis proper: the analyst presents themselves as a kind of object (a listening ear, a mirror, a cause of the analysand’s speech) to provoke the subject (patient) to speak freely, thereby producing knowledge (self-understanding) and revealing the master signifiers that have been unconsciously running the subject’s life. Translating this to AI ethics might seem abstract, but it could mean an approach that listens to all stakeholders (especially those normally objectified), creating a space for their truths to emerge, and facilitating a collective insight that reshapes the Master signifiers in play. In concrete terms, an “analyst” approach to AI ethics would not start with pronouncements or purely technical fixes, but with dialogue and reflection. It might involve convening forums where AI designers, users, and those impacted (often marginalized communities) speak to and hear each other – not in a defensive debate format, but in a process guided by open-ended inquiry. The aim would be to uncover the unconscious biases in design, the unspoken desires of companies (e.g. the drive for profit and control), and the repressed fears of society (e.g. fears about losing jobs or privacy) that underlie the ethical conflicts. Through this, new master signifiers could emerge – perhaps “responsibility” gains a richer meaning than just legal liability, incorporating care and humility. The Santa Fe Institute’s emphasis on emergent, complex behavior could, if combined with a reflexive ethos, feed into an analyst discourse by acknowledging that we cannot fully control or predict AI’s social effects, and thus we must remain perpetually open to feedback and revisions of our principles. Plotnitsky’s undecidability in ethics aligns with an analyst discourse in that the “analyst” of society (could be regulators or ethicists) accepts that there is no final answer, but through continued interpretation and adaptability, we can manage ethical AI in a way that is responsive to the Real (the unforeseen).

In critiquing the efforts of Patterson, the Georgetown group, and SFI, we could say: Patterson’s focus on moral injury (if indeed in hysteric mode) usefully injects subjective truth (pain, conscience) into the conversation, but it needs to be channeled via an Analyst-like structure to result in actionable knowledge rather than just catharsis. The Georgetown group’s likely University discourse provides rigor but may inadvertently serve existing power (their work might be used to justify AI deployment as “ethical” while real concerns are swept under the rug). The Santa Fe Institute’s complexity view properly highlights that rigid control is impossible – as one SFI-associated piece notes, “precisely engineered societal-level outcomes” are “probably not” achievable due to the stochastic nature of human behavior (CSET, “Machines, Bureaucracies and Markets as Artificial Intelligences”). This insight punctures the Master discourse fantasy of total governance. However, if the complexity view doesn’t engage with subjective experience and just models society abstractly, it could remain a sterile University exercise. The goal, from our perspective, is to bring the discourses into a productive interaction: let the Hysterics speak to reveal the cracks in the Masters’ facade, let the University experts inform with data but not pretend to have the whole truth, and let an Analyst-like facilitation guide the process so that new, more conscious Master signifiers (like “AI for human flourishing” rather than “AI for efficiency”) can take root in policy and practice.

In essence, Lacan’s four discourses provide a warning: any ethical AI design that is purely top-down (Master) or purely technocratic (University) will generate resistance and symptoms (Hysteric), and what is left unaddressed (the unconscious motivations, the ambiguities) will return in potentially disruptive ways (scandals, public backlash, or AI failures). A shift toward a more dialogical, interpretive approach (analogous to Analyst discourse) could transform ethical AI efforts by acknowledging ambiguity and conflict as inherent, not bugs to be eliminated. This is where Plotnitsky’s notion of undecidability becomes not a problem but a resource: it forces us to keep ethics an open question – a continuous process of decision and revision – rather than a fixed code of conduct. In the next section, we further explore this idea, underscoring why imposing a stable moral framework on AI is not only impractical but potentially unethical, and how embracing linguistic and ethical ambiguity might paradoxically lead to more ethical outcomes.

Undecidability and the Ethics of Openness: Learning from Plotnitsky

Throughout our discussion, undecidability has appeared as a crucial concept – the idea that not every question can be settled by computation or rule, that some element of irresolvable ambiguity is intrinsic to language, ethics, and even reality at fundamental levels. Arkady Plotnitsky, engaging with both Derrida and the quantum physicist Niels Bohr, suggests that embracing undecidability (and its scientific analogue, complementarity) can lead to a more profound understanding of systems of knowledge (Plotnitsky, Complementarity: Anti-Epistemology After Bohr and Derrida). In the realm of AI ethics, this translates to recognizing that any set of ethical principles or algorithms will encounter situations where they conflict or fall short, requiring human judgment and responsibility to step in. A truly “ethical AI” culture would be one that is not obsessed with finding the perfect formula for morality, but is committed to continuous critical reflection – an openness to revising assumptions and to hearing those who are affected by AI in unforeseen ways.

Plotnitsky’s insights remind us that attempts to force a stable moral framework onto AI – say, encoding Asimov’s Three Laws of Robotics or any fixed hierarchy of values – will inevitably collapse under the weight of real-world complexity (just as Asimov’s own stories dramatized the unexpected loopholes in his seemingly clear laws). Rather than chasing the mirage of a complete ethical specification, we should design AI and AI governance in a way that leaves room for ambiguity and human interpretive intervention. This is not an excuse for ethical laziness; on the contrary, it is a call for greater ethical engagement. If an AI system flags a situation as high-risk (e.g. a medical AI identifying a possible anomaly), an ethic of undecidability would ensure a human (or panel) takes the time to deliberate the case, acknowledging the AI’s suggestion but also weighing factors beyond its computation (patient context, values, etc.). In legal settings, instead of delegating decisions entirely to an algorithmic risk score, a judge or committee remains actively involved, using the score as one voice among many, not the voice of an oracle. This approach aligns with what some legal scholars call “meaningful human review,” but here it is bolstered by the philosophical conviction that meaning cannot be fully captured by machine procedures (A Philosophical Analysis of Jacques Derrida's Contributions to ...).

Practically, how can AI ethics acknowledge ambiguity? One way is through procedural ethics: building in processes for appeal, contestation, and revision wherever AI is deployed. For example, if an AI content filter removes a piece of art because it violates a nudity rule, there should be a process where human moderators and perhaps community stakeholders can review that decision, recognizing that context (artistic value, intent) matters. Another approach is scenario planning that doesn’t aim to pre-solve every dilemma but trains practitioners to respond to novel ones. The Santa Fe Institute’s complexity perspective is useful here: they might suggest simulating many variations of an AI system’s interaction with society to see what emergent issues arise, as sketched below (CSET, “Machines, Bureaucracies and Markets as Artificial Intelligences”). These simulations won’t yield certainties, but they can educate us about the range of possibilities, essentially teaching an ethic of humility – that we must be ready for the unexpected. An ethics of openness would also mean involving diverse voices in shaping AI guidelines, as those outside the usual power structure might foresee ambiguities that insiders miss (because the latter are blinded by their frame). This resonates with feminist and postcolonial critiques of AI ethics that call for inclusion and plurality rather than one-size-fits-all solutions.
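
As a gesture toward the scenario-exploration idea just mentioned, the following sketch (Python standard library only; every parameter, population, and threshold is hypothetical) runs repeated simulated “deployments” of one fixed moderation threshold against populations whose writing styles score differently, reporting how often legitimate posts would be wrongly removed in each case. The aim is not certainty but a view of the range of outcomes a single rigid rule can generate.

```python
import random

def simulate_deployment(benign_mean, harmful_mean, threshold, n=10_000, seed=0):
    """Monte Carlo run of a fixed moderation threshold on one population.

    Scores are drawn from simple Gaussian models of 'risk scores';
    all parameters are hypothetical and purely illustrative.
    """
    rng = random.Random(seed)
    false_positives = sum(rng.gauss(benign_mean, 1.0) > threshold for _ in range(n))
    false_negatives = sum(rng.gauss(harmful_mean, 1.0) <= threshold for _ in range(n))
    return false_positives / n, false_negatives / n

THRESHOLD = 1.5
# Different communities "score" differently under the same model,
# e.g. dialects or reclaimed slang pushing benign posts upward.
scenarios = {
    "mainstream users": (0.0, 3.0),
    "dialect-heavy community": (1.0, 3.0),
    "activist community": (1.4, 3.0),
}

for name, (benign_mean, harmful_mean) in scenarios.items():
    fp, fn = simulate_deployment(benign_mean, harmful_mean, THRESHOLD)
    print(f"{name:26s} wrongly removed: {fp:5.1%}   missed harm: {fn:5.1%}")

# The same fixed threshold produces very different harms across
# scenarios -- the kind of emergent, unevenly distributed effect
# that simulation can surface but never fully pre-solve.
```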

Plotnitsky, by invoking complementarity (Bohr’s principle that certain properties can’t be observed simultaneously, like an electron’s wave and particle aspects), hints at an intriguing idea: AI might need complementary ethical approaches that cannot be unified, yet together provide a fuller picture. For instance, one ethical approach might prioritize individual rights, another social outcomes; one might stress duty and rules (deontology), another consequences (utilitarianism), another virtues (character and context). Instead of declaring one framework the winner, an AI system could be designed to consider multiple “ethical expert systems” and then hand over the undecidable conflict to human adjudicators. The result would not be algorithmic purity, but it would surface the tensions transparently, forcing us to confront the value trade-offs. This echoes the “analyst’s discourse” idea: letting the contradictions be spoken, not swept under the rug of a single metric.
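
One way to picture these complementary frameworks is the following toy sketch (the case, the rules, and the evaluator names are all invented): several crude evaluators judge a proposed action from different ethical standpoints, and rather than averaging the disagreement away, the system surfaces the conflict and hands the case to human adjudication.

```python
from typing import Callable, Dict

# A proposed action, described by a few illustrative features.
case = {
    "action": "share patient data with research partner",
    "consent_obtained": False,
    "expected_lives_improved": 120,
    "breaks_explicit_promise": True,
}

# Three deliberately crude "ethical expert systems" (hypothetical rules).
def deontological(c: Dict) -> str:
    return "forbid" if c["breaks_explicit_promise"] or not c["consent_obtained"] else "permit"

def utilitarian(c: Dict) -> str:
    return "permit" if c["expected_lives_improved"] > 100 else "forbid"

def virtue_based(c: Dict) -> str:
    return "forbid" if not c["consent_obtained"] else "permit"

evaluators: Dict[str, Callable[[Dict], str]] = {
    "deontology": deontological,
    "utilitarianism": utilitarian,
    "virtue ethics": virtue_based,
}

verdicts = {name: fn(case) for name, fn in evaluators.items()}
print(verdicts)

if len(set(verdicts.values())) > 1:
    # The conflict is surfaced, not resolved by the machine.
    print("UNDECIDED across frameworks -> escalate to human adjudication")
else:
    print(f"All frameworks agree: {next(iter(verdicts.values()))}")
```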

By reinforcing the necessity of ambiguity, we actually make AI ethics more robust. A rigid system will shatter when it meets a scenario outside its parameters (and such scenarios are guaranteed by undecidability). A flexible, self-questioning system can adapt and learn. This, of course, requires institutions capable of reflection – something not always at hand in fast-paced tech development. It may demand slowing down certain AI deployments, creating ethics review boards akin to psychoanalytic case conferences, where difficult cases are discussed in depth. It might also demand a cultural shift in tech from seeing ethical compliance as a checkbox (Master/University discourse) to seeing it as an evolving dialogue with society (Analyst discourse). The payoff, however, would be fewer catastrophic ethical oversights and more trusted AI systems, since people tend to trust processes that acknowledge complexity over those that pretend to absolute certainty and then fail spectacularly. As Jessica Flack from SFI notes, humans “change as soon as we have come to understand them” (CSET, “Machines, Bureaucracies and Markets as Artificial Intelligences”) – highlighting that any static model of human behavior (and by extension ethics) will be undercut by human adaptability. Therefore, our ethical oversight must be equally adaptable.

To sum up this section: incorporating undecidability into AI ethics means designing with the grain of human language and morality, not against it. It means recognizing that ethics is an ongoing interpretation, not a solved equation. Such an approach discourages the hubris of claiming we’ve encoded “Trustworthy AI” once and for all, and instead fosters an ethos of care – a vigilance and responsiveness to the unprogrammable aspects of human life. This viewpoint dovetails with the psychoanalytic awareness of the unconscious: just as one never “masters” one’s unconscious but learns to live in better relation with it, we should not expect to perfectly master all ethical dimensions of AI, but we can strive to remain in honest dialogue with the unforeseen consequences and latent significations of the technologies we build.

The Gaze of the Other: AI Surveillance, Control, and Subjectivity

No discussion of Lacan and AI would be complete without addressing the concept of the gaze, especially as it relates to the rise of AI-driven surveillance and data monitoring. Lacan’s notion of the gaze is often misunderstood; it does not simply mean the act of looking, but rather the feeling of being under a look that objectifies you, the sense that in the field of vision there is a blind spot from which you are seen by the Other (Gaze, Wikipedia). In cinema studies, the “male gaze” for example is how women on screen are objectified by an assumed male spectator. In Lacan’s theory, the gaze is tied to the objet a – the unattainable object-cause of desire – and to the idea of a big Other that watches without being seen. The paradigmatic image is the panopticon prison design by Jeremy Bentham, which Michel Foucault analyzed: a central tower with blinded windows allows a guard to possibly observe all inmates, who never know when they’re watched, so they must behave at all times as if they are (Panopticon, Wikipedia). The panopticon internalizes discipline; the prisoner sees the tower and imagines the gaze upon them. In today’s world, AI-powered surveillance cameras, facial recognition systems, and algorithmic tracking (from CCTV to internet cookies) form a digital panopticon. But something is peculiar: the “watcher” is often not a human but an algorithm. Does this change the dynamic of the gaze?

From a psychoanalytic perspective, even if the immediate watcher is an AI, the gaze remains that of the big Other – now embodied in data networks and corporate or state infrastructures. We act as if an omniscient eye sees us, because indeed our data double is being constructed in servers every time we swipe a card, walk past a camera, or even speak near a voice assistant. Zuboff calls this “surveillance capitalism” and notes it works by claiming the behavioral surplus of our lives (data exhaust) and using it to predict and influence us (Heimann and Hübener, “Stainless Gaze”). Lacanian theorists like Marc Heimann and Anne Hübener argue that this AI-based surveillance is a “stainless gaze” – everywhere and always alert, yet paradoxically lacking a human element (“Stainless Gaze”). They point out that “computers in general are, in the Lacanian sense, infantile,” because they have no access to objet a (“Stainless Gaze”). The gaze of AI is thus a gaze without desire. It watches, but not because it wants something emotionally – it wants data by design, but that is a programmed aim, not an unconscious desire. This makes it in some ways more insidious: a human watcher might grow tired, might overlook, might feel pity – an algorithmic watcher does not. It is tireless, literally “soulless” in its observation. Yet, as Heimann and Hübener note, precisely because the AI lacks access to the unconscious, it “sees less than we would expect” (“Stainless Gaze”). It can gather terabytes of information, but it has “difficulties to access unconscious social knowledge” (“Stainless Gaze”). It doesn’t understand context, the meaning behind a gesture or an action that humans might intuit. In their words, there is a “specific lack of the Other” at play – the surveillance system cannot conceive of lack itself; it doesn’t know what it’s missing (“Stainless Gaze”). Thus it often misinterprets: a joke might be flagged as a threat, a gathering of friends as a protest. The algorithmic gaze is both all-encompassing and fundamentally blind to certain realities.

The implications for control and power are double-edged. On one hand, AI surveillance massively increases the capacity of authorities (state or corporate) to enforce norms. If one knows that any misconduct (jaywalking, dissent, even moments of personal idleness on work computers) can be detected and penalized, one behaves more cautiously. The internalization of the gaze may become even more pervasive than in Foucault’s time – we self-censor our online searches or social media posts because we know an algorithm might flag us for certain keywords. This can lead to a chilling effect on free expression and a deepening of social conformity. Moreover, AI surveillance often operates invisibly; unlike Bentham’s tower, which at least is a physical reminder, today’s gaze is dispersed in countless sensors and the abstract Eye of big data. Lacan spoke of the gaze in painting as that point in the picture where “the picture looks back at you” – in digital life, one sometimes has the eerie sense that technology is watching (e.g. when ads follow you around after you merely thought about a product). It can produce a kind of paranoia or anxious feeling of being observed by an Other that one cannot locate. This is how subjectivity is altered: people may start to see themselves through the imagined perspective of algorithmic evaluation, modifying their behavior to appear optimally productive, happy, compliant. In other words, the big Other of AI becomes a sort of super-ego, an internalized authority demanding: “You may be watched, so you must behave.” Žižek and others have warned that such a scenario risks a new form of totalitarianism – not one of terror and open coercion, but of soft, omnipresent nudging where freedom is quietly eroded.

On the other hand, because the AI gaze lacks true understanding, it can be subverted in unexpected ways. There is room for what Lacan called the stain or the glitch in the visual field – that which the algorithm doesn’t account for. Activists have learned to use adversarial patterns (like special face paint or clothing) to confuse facial recognition cameras. These are like throwing dust in the mechanical eye. Furthermore, if people collectively understand the limitation of the AI gaze, they can exploit it: for instance, mass coordinated misinformation can flood surveillance systems with noise, or people can revert to analog, untraceable modes of communication (an uncanny parallel to going off the grid to escape the panopticon’s eye). Psychoanalytically, we might say there’s a desire to regain opacity – to carve out a space where one is not watched. This is evident in the rising demand for privacy tools, encryption, and even the popularity of ephemeral messaging. It’s as if subjects, suffocated by the gaze, are searching for an escape hatch where they can become subjects again, not objects of data. Lacan would likely see in this a repetition of the tension between the subject and the big Other: the more the Other knows (or claims to know), the more the subject is reduced to an object – and something in the subject revolts, seeking to reclaim its barred status (to not be fully known).

One also has to consider the psychological impact of constant surveillance – a sense of moral injury or alienation could result from knowing one’s authentic self is never allowed to emerge uncensored. If Patterson’s notion of moral injury were extended, we could speak of a kind of digital moral injury where individuals feel that living under AI surveillance forces small betrayals of self (e.g. a student doesn’t search for help on depression because the school’s AI might report it). Over time, these denials of authenticity might accumulate as a trauma of not being seen as a true subject, only as a data point.

However, the gaze also produces potential resistance. In Lacan’s theory of the scopic field, the subject finds that at the point of the gaze there is objet a, which can be traumatic but also liberating to confront. Some artists and activists turn the surveillance gaze back on itself – for example, using art to expose how AI “sees” us (as in Avital Meshi’s project Subverting the Algorithmic Gaze) or engaging in sousveillance (watching the watchers, as when activists live-stream police conduct). This is analogous to the hysteric’s discourse in front of the Master’s gaze: demanding, “I want you, big Other, to tell me what you see in me – is this all?” By forcing the system to explain itself (auditing AI, demanding transparency reports), we in effect speak from the position of the object (objet a) to trouble the subject supposed to know (the authority using AI). In some cases, this has yielded results: public outcry has led to bans or moratoria on certain surveillance technologies (e.g., San Francisco’s ban on facial recognition use by law enforcement). It’s a sign that society is grappling with how to balance security and liberty under the AI gaze.

In conclusion, the psychoanalytic notion of the gaze applied to AI surveillance highlights a core paradox: AI gives the semblance of an all-powerful observing Other, yet this Other is barred in a crucial way (it lacks subjective depth). We as subjects are endangered by its objectifying power, but we might also take advantage of its blind spots to reclaim freedom. Ethically, this implies we should severely limit the scope of AI surveillance, precisely because an unchecked algorithmic gaze tends toward dehumanization – it will never understand why its total vision is a problem, so it is on us, the human policymakers, to impose the necessary limits. Additionally, when AI is used, say, for benign observation (like health monitoring), it must be imbued with a respect for the unseen – an allowance for people to opt out, or to mask aspects of themselves. Only by reinstating an element of opacity – a zone where the gaze cannot reach – can we ensure that subjectivity, with its right to secret thoughts and non-conformity, survives the age of AI.

Conclusion

Bridging Lacanian psychoanalysis and AI design reveals profound insights and cautionary lessons for the future of intelligence, whether human, artificial, or hybrid. We have argued that while AI systems participate in language and can mirror aspects of human thought, they categorically lack an unconscious in the psychoanalytic sense. The unconscious, “structured like a language” and suffused with undecidable meanings (see the Stanford Encyclopedia of Philosophy entry on Jacques Lacan), arises from desire, repression, and subjective division – realities that do not translate into code. AI’s hidden layers and unpredictable outputs may entice us to speak of an “algorithmic unconscious,” as Possati does, but such metaphors must not mislead us into ascribing agency or inner life where there is none. Rather, the so-called algorithmic unconscious is more accurately the imprint of our unconscious on technical systems. AI functions as a symbolic artifact, a sophisticated extension of the Symbolic Order, processing and generating signifiers without any inner theatre of wishes and conflicts.

This does not render AI trivial or entirely determinate – on the contrary, because AI works with human language and data, it inevitably inherits the undecidability and ambiguity of our world. Therefore, as we design and deploy AI, we face ethical and interpretive challenges akin to those in human affairs: conflicts of values, context-dependent meanings, and the ever-present potential for misrecognition. The difference is that AI itself will not guide us through those conflicts; it will reflect our instructions and biases back at us. The onus remains on human designers, users, and society at large to handle the complementarity of perspectives and the undecidability that Derrida and Plotnitsky highlight (cf. CSET, Machines, Bureaucracies and Markets as Artificial Intelligences). In rejecting the myth of an autonomous algorithmic unconscious, we also reject complacency – we cannot outsource moral responsibility to machines. Instead, we should cultivate what might be called an analytic ethos in AI governance: continuously interrogating the assumptions (Master signifiers) behind our systems, opening dialogues (even hysterical challenges) when people feel harmed or alienated, and being willing to adapt our frameworks in response to new insights (as an analyst facilitates the emergence of new knowledge from symptoms).

We found Lacan’s four discourses to be a fruitful map of the current landscape. Much of AI ethics today oscillates between the Master’s impulse to command (“embed these values now!”) and the University’s impulse to codify (“here is the checklist of ethical criteria”). These approaches have achieved important successes, but they risk ignoring the cries of the hysteric – the dissidents and the discontents who reveal what is excluded or glossed over (be it bias, injustice, or psychological harm). By listening to those voices and allowing them to inform policy (the Analyst’s approach), AI ethics can move away from a facade of decency to the real work of making systems just and equitable in practice. The aim is not to throw away principles and expertise, but to situate them within a broader, self-critical discourse that remains attuned to the Other – to those who experience AI’s impact and to the otherness within language that defies rigid encapsulation.

Our engagement with Possati was instructive: we concur that psychoanalysis offers powerful tools for interpreting AI’s role in human life (for instance, seeing how user interactions with AI might fulfill certain unconscious fantasies or anxieties). However, we contend that psychoanalysis itself, at its best, underscores the inherent undecidability of the unconscious, something that cannot be reduced to an information-processing schema. Any attempt to literalize the “algorithmic unconscious” runs the risk of deciding the undecidable, of simplifying what must remain complex. In designing AI, this means we should not pretend to simulate consciousness or emotion with a few lines of code or even millions – those emergent phenomena of mind are tied to living bodies, social relations, and historical narratives that no current AI can replicate. What AI can do is model certain behaviors or pattern recognitions; understanding this boundary guards us against over-interpretation.

We also addressed the psychoanalytic gaze in the context of surveillance, highlighting how AI intensifies the feeling of an all-seeing Other while paradoxically hollowing it out (no human there, only data) (Heimann and Hübener, “Stainless Gaze”). This dynamic suggests that the challenge of the future is not just building ethical AI, but living ethically with AI. That includes asserting the primacy of human dignity and privacy against an encroaching algorithmic gaze, designing systems that know their limits (e.g., requiring human oversight where appropriate, as even complexity scientists advise; cf. CSET, Machines, Bureaucracies and Markets as Artificial Intelligences), and fostering a populace that is AI-literate enough to recognize when technology is overstepping into realms of freedom.

Finally, returning to the argument of The Undecidable Unconscious: Complementarity and the Future of Intelligence, our journey leads to this conclusion: the future of intelligence – both human and artificial – lies in a complementary relationship, not an identical one. Human unconscious processes (with their rich indeterminacy) and AI’s algorithmic operations (with their brute computing power) should be seen as different yet potentially synergistic. Each can compensate for the other’s blind spots: AI can quickly analyze patterns too complex for conscious thought, while humans can imbue decisions with meaning and ethical context that no algorithm can decide in our stead. Embracing this complementarity means resisting two false extremes: neither treating AI as a mystical oracle of truth nor as a mere tool devoid of cultural consequence. Instead, we navigate a middle path where AI is integrated into discourse – as one voice among many, one influence on the symbolic order that we ultimately shape. In such a future, intelligence is not defined by sheer computational speed or by romantic notions of unconscious insight alone, but by the dialogue and friction between the two.

The unconscious, in Derrida’s sense, is undecidable – a site of infinite play. If we honor that, we will not try to decide it through AI. Rather, we will use AI as a partner that forces us to confront what we haven’t decided about ourselves: our values, our desires, our social norms. In doing so, AI might ironically serve as a catalyst for human self-awareness – not because it has an unconscious, but because it externalizes and exaggerates aspects of our symbolic systems, holding up a mirror in which we can glimpse the contours of our own unconscious infrastructure. The task for us is to have the courage, like an analysand in therapy, to read those reflections critically and take responsibility for the future we are collectively creating.
