AI and the Freudian Archive: Silencing, Repetition, and the Politics of Digital Memory

In Archive Fever: A Freudian Impression, Jacques Derrida reveals how Freud’s notion of the death drive permeates the very structure of archival impulses. Far from being a simple record of the past, the archive—understood here in psychoanalytic terms—both preserves and imperils what it collects, maintaining traces of history even as it courts erasure and destruction. In the digital humanities, where AI emerges as an archivist of unprecedented scope, this tension reaches a new pitch. Extending Derrida’s insights to our algorithmic age, we must ask: What does it mean for AI to become an “archive” in the Freudian sense? More pointedly, what are the ethical stakes when the archive itself refuses to remember?


AI and the Politics of Digital Memory

AI’s role in shaping historical narratives, curating information, and influencing collective remembrance is anything but neutral. As I argue in The Authors of Silence, political oppression begins with discourse: it decides what may be uttered, repeated, and acknowledged, as well as what must remain suppressed. While AI often presents itself as a neutral aggregator of human knowledge, it invariably filters, excludes, and silences in ways that reproduce entrenched power relations.


A telling example is how the story of Sally Hemings is minimized or erased to preserve the “founding father” mythology surrounding Thomas Jefferson. Trained on data sets lacking full acknowledgment of Jefferson’s abuses, AI—rather than illuminating these omissions—frequently reinforces them, upholding what Derrida, in his essay “White Mythology” from Margins of Philosophy, describes as the universalizing logic that passes off a culturally specific worldview—in this case, a Eurocentric or white-dominant perspective—as neutral or natural. By systematically embedding these biases into digital archives, AI ends up perpetuating white mythology instead of dismantling it.


Derrida’s “White Mythology” and the Digital Archive

In “White Mythology,” Derrida analyzes how the very language of philosophy masks its culturally situated origins behind a façade of universality. This process, he argues, erases the particularities and histories on which supposed “universal” concepts are built. In the context of AI and digital memory, the same phenomenon occurs. Power-laden narratives, especially those championed by dominant groups, become naturalized as “common sense”—an ideological filter that AI unwittingly codifies into the architecture of our collective memory.


When AI systems are fed decades or even centuries of biased historical accounts—accounts that omit or trivialize the violence experienced by individuals like Sally Hemings—they replay these omissions as objective fact. In so doing, AI deepens what Derrida warns is the “mythologizing” function of a discourse that pretends to be universal but is, in truth, narrowly circumscribed by specific cultural and political interests.


Eric Foner, “Who Owns History?” and the Contested Archive

Eric Foner’s Who Owns History? reminds us that historical narratives are never fixed; they are perpetually contested, revised, and reimagined as new perspectives and evidence come to light. The central question—who owns history?—speaks to the power struggles over which voices and accounts gain legitimacy. In a healthy scholarly and public discourse, marginalized stories can reemerge, challenge entrenched myths, and shift the mainstream narrative.

Yet when AI enshrines an older, exclusionary version of history—one that privileges Jefferson’s Enlightenment heroism over the brutalities of slavery—it risks obstructing the process Foner describes. Rather than an evolving tapestry of perspectives, the AI-enabled archive might solidify biased narratives, making it even harder for newer or marginalized viewpoints to achieve recognition. Put differently, the power to “own history” shifts into the hands of those who design and control AI systems, along with the data sets they rely on.


AI as the Digital Superego: The Archive’s Repressive Mechanism

Derrida’s invocation of Freud in Archive Fever points to a death drive at the core of archiving, manifest as a repetition compulsion—a relentless reenactment of historical patterns reminiscent of the return of the repressed. However, the AI-driven archive differs from the psychoanalytic unconscious in crucial ways: it is governed by deliberate training protocols, curation choices, and corporate imperatives. Its “forgetting” is therefore neither natural nor accidental; it is built into the system via data filtering, model tuning, and social biases that shape the archive’s design.


When prompted about Thomas Jefferson, for instance, AI systems frequently reproduce sanitized biographies and hero narratives, marginalizing or omitting the acts of sexual violence that Jefferson perpetrated. By policing the boundaries of permissible discourse—performing a sort of cybernetic superego—AI not only censors subversive truths but also reaffirms the very white mythology Derrida cautions us against. Critics like Angela Davis and bell hooks have long noted how liberal historiography downplays Black women’s experiences; AI now automates that erasure, solidifying a morality tale that overlooks the structural violence of slavery and patriarchy.

The Politics of AI and The Authors of Silence

In The Authors of Silence, I argue that silencing is not a passive oversight but a deliberate political act—one that establishes the conditions for who is heard and who remains invisible. Despite its veneer of neutrality, AI functions as a caretaker of these silences. By designating which narratives are “credible” and which are relegated to the periphery, AI can operate as an agent of epistemic violence. Such a system disciplines cultural memory in precisely the way Derrida critiques in Margins of Philosophy: by perpetuating illusions of universal truth that, in reality, mask and sustain unequal power structures.


Toward an Ethics of Cyborgian Care

The task ahead is not merely to tweak AI’s design or data sets in hopes of achieving incremental inclusivity; it is to reimagine the archive itself, to question the frameworks that legitimize certain versions of history while silencing others. As Foner posits, “Who owns history?” is not a rhetorical question—its answer determines how societies define themselves and whose voices become integrated into the official narrative.


To render AI an ethical archivist, we must do more than correct biases; we must engage in a radical critique of how digital infrastructures absorb and repackage historical knowledge. This involves what I term a cyborgian ethic of care, wherein we acknowledge AI’s dual status as both a product of human systems and a potent force that can transform them. An AI bound to a Derridean notion of self-reflexivity would need to interrogate its own “White Mythologies,” shining light on the omissions it perpetuates rather than concealing them under a veneer of objectivity.


Reckoning with Automated Forgetting

As AI increasingly mediates our access to the past, we must grapple with the central questions posed by Derrida, Foner, and Freud alike: What remains unsaid when the archivist is a machine guided by inherited biases and commercial imperatives? And, crucially, whose history gets to be labeled universal, and whose is cast aside?


In Derrida’s Margins of Philosophy, “White Mythology” exposes the subtle ways discourse can universalize its own cultural stance; in Foner’s Who Owns History?, the question of ownership highlights the contested terrain of historical narratives. AI now stands at the crossroads of these critiques, simultaneously reifying exclusionary frameworks and presenting an opportunity—if radically restructured—to open the archive to suppressed voices. Yet if we fail to confront the political unconscious programmed into our machines, AI will continue to replicate the same oppressive illusions that Freud’s death drive and Derrida’s white mythology have long diagnosed.


Ultimately, we face a pressing imperative: either transform the archive’s logic or watch AI deepen our collective forgetfulness. If the latter prevails, then the potential for new, emancipatory histories—ones that Foner insists must keep evolving—may remain stifled by the digital superego. Our challenge, then, is to imbue AI with a form of cyborgian care that remembers rather than silences, fosters critical multiplicity rather than a single white mythology, and actively resists the death drive lurking at the heart of every archive.
