
From Archive Fever to Cyborgs: Digital Humanities and Ethical Responsibility in the Coming AGI Revolution

Derrida’s Archive Fever: Memory, Language, and the Archival Impulse

Jacques Derrida’s Archive Fever (Mal d’archive, 1995) is a penetrating meditation on how we remember, record, and forget. Crucially, Derrida frames the archive not as a mere institutional or technological repository, but as a concept rooted in memory, language, and psychoanalysis. He reminds us that the word archive comes from the Greek arkheion, the house of the archons – those magistrates who guarded official records and had the power to interpret them (“Curing the Archive Fever: Filling the Gaps Through Situatedness,” OnCurating). From the outset, then, archiving is bound up with authority and meaning-making, not just storage. Derrida argues that when we externalize memory into archives, we inevitably alter it: “the archive will never be either memory or anamnesis… It is hypomnesic – [it] impairs memory” (Derrida, Archive Fever: A Freudian Impression, 1995). In other words, archiving is a technical supplement to memory – a form of writing or inscription that stands outside our living recollection, governed by the structures of language and media we use. As he puts it, the technical structure of the archive determines the structure of what can be archived, meaning “archivization produces as much as it records the event” (Derrida, Archive Fever). Our tools of recording actively shape what gets recorded (and how), underscoring that archives are not neutral containers of truth but artifacts of technology and language.


Derrida further connects this archival impulse to a deep psychological drive. Invoking Freudian theory, he describes archive fever as a compulsive desire to preserve the past, underlain by the death drive – a contradictory force that at once pushes toward the creation of memory traces and toward their destruction (Matt Ogle, “Archive Fever: a love letter to the post real-time web,” mattogle.com; “Curing the Archive Fever,” OnCurating). We archive because we fear loss; yet the very act of selecting and freezing memories in an archive entails an omission or erasure of what doesn’t get archived. As one commentator summarizes, Derrida “links the archivist’s compulsion to the Freudian death drive, crediting archive fever not just with the preservation of memory but also with its simultaneous destruction” (Ogle, mattogle.com). The archive therefore always contains an internal tension: a wish to save everything and the impossibility of doing so without betraying the living fluidity of memory. Derrida even speaks of an “archiviolithic” tendency – an “archival violence” – by which the archive carries within it the seeds of forgetting and annihilation of memory (“Curing the Archive Fever,” OnCurating). Crucially, this is not merely a metaphor for libraries or databases, but a condition of language and psyche. Every act of writing something down – making an external record – both preserves that fragment and disconnects it from the organic context of lived memory. Thus, archive fever is a fever of meaning and loss, rooted in our very methods of inscription and recollection, rather than a simple problem of having physical archives.

Another key insight from Derrida is the archive’s orientation toward the future rather than the past. He notes that what we choose to archive (and how) will shape how the future remembers us. “The question of the archive is not… a question of the past,” he writes. “It is a question of the future… the question of a response, of a promise, and of a responsibility for tomorrow” (“Archive Fever,” Wikipedia). In this sense, archive fever carries an ethical burden: the custodian of any archive has a responsibility for what will become of those records and how they will inform posterity’s understanding. Memory, for Derrida, is never static – it is always engaged in a process of inscription, erasure, and reinterpretation. Archive fever arises from the realization that we must wrestle with the past via signs (writing, images, data) that are never fully under our control, yet we cannot resist trying to impose order and permanence on them. It is an “illness” born of both love and terror for memory – an interminable struggle between the urge to consolidate knowledge and the fragility of any archive in capturing lived experience. By emphasizing these dynamics of memory and language, Derrida expands the notion of the archive beyond any single library or database and situates it in a fundamental human condition: our obsession with preserving meaning, even as meaning constantly slips from our grasp.

Archive Fever in the Digital Age: Preservation, Overload, and the Instability of Memory

If Derrida diagnosed archive fever in an era of paper and Freudian case files, the condition has only intensified in the digital age. Digital humanities (DH) – with its focus on digitization, data, and electronic archives – provides a prime theater for archive fever’s contemporary symptoms. Today, technological capacity has turned the fever into a full-blown frenzy: the digital age has provided seemingly infinite space for people to archive things (“An Impact of Digitalisation Is Archive Fever (Hoardings)”). What once sat in filing cabinets or shelved repositories can now be scanned, copied, and stored by the terabyte on servers or in the cloud. We live in an age of information abundance where the impulse is to save everything – every email, social media post, scholarly draft, and dataset – simply because we can. Scholars and institutions embark on massive digitization projects, fueled by the belief that more preservation is always better. In the ethos of digital humanities, “saving the past” often means converting it to bits and pixels, securing its immortality on a hard drive.

However, Derrida’s warning about the archive’s inherent contradictions looms large. Digital archives indeed amplify the compulsion to preserve, but they also reveal new forms of archival instability and forgetting. With effectively limitless storage, we risk hoarding without thinking. As one observer bluntly put it, “when we retain everything, we absorb nothing” (“An Impact of Digitalisation Is Archive Fever (Hoardings)”). In other words, an excess of memory can function like a lack of memory. Our hard drives brim with data, yet the profusion of information can overwhelm our capacity to derive knowledge or meaning. In digital archives, signals drown in noise; important stories can vanish in a glut of saved “stuff.” This is a paradoxical twist on archive fever: the feverish urge to keep everything may itself become a kind of forgetfulness, a failure to reflect on or prioritize what truly needs remembering. Digital humanists grapple with this in projects that assemble huge corpora or big data sets – collection is easy, but curation and interpretation become the critical challenges when facing an endless archive.

[Image: boxes of documents on repository shelving at The National Archives (Wikimedia Commons). Caption: Archival boxes at a national repository. Digital archives vastly extend our capacity to preserve records, but with that comes unprecedented volume and complexity.]

Moreover, digital archives are uniquely prone to fragility and flux, reinforcing Derrida’s point that archived memory is never truly secure. Bits decay; formats become obsolete; links rot. The very tools that enable comprehensive archiving also determine what gets archived and how accessible it remains. Derrida’s insight that the technical structure determines the structure of the archivable content is vividly apparent in digital media (Derrida, Archive Fever). For example, the way a database is designed shapes what kinds of queries can be made, and thus what knowledge can be extracted. The technical choices we make today (file formats, metadata schemas, compression algorithms) govern what future researchers will be able to read tomorrow. A simple change in software or the loss of a codec might render an archive of digital files unreadable – a scenario that haunts digital preservationists (sometimes called the problem of the “digital dark age”). In this sense, the instability of memory that Derrida spoke of is literal in digital archives: without continual migration and care, digital memory evaporates despite our compulsion to save it.
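
To make that maintenance concrete: the standard defense against silent bit decay is the routine “fixity check,” comparing each file against a checksum recorded when it entered the archive. The Python sketch below is a minimal illustration, not any particular repository’s implementation; the directory and manifest names are placeholders.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute a SHA-256 checksum, reading in chunks so large files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def audit(archive_dir: str, manifest_file: str) -> list[str]:
    """Compare current checksums against a stored manifest and report drifted files."""
    manifest = json.loads(Path(manifest_file).read_text())
    damaged = []
    for name, recorded in manifest.items():
        if sha256_of(Path(archive_dir) / name) != recorded:
            damaged.append(name)  # bit rot, truncation, or tampering since ingest
    return damaged

# Hypothetical usage: audit("archive/", "manifest.json") lists every file whose
# bits have silently changed since the manifest was written.
```

A check like this only detects decay; the caring work is what follows – restoring from copies, migrating formats, and recording what was done.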

At the same time, the digital domain makes archives curiously dynamic. Archives are no longer dusty vaults one unlocks occasionally; they are live, networked, and often interactive. The internet itself is a vast, ever-changing archive – what Derrida might call a “spectral and changing structure” of memory (“Archive Fever,” Wikipedia). Think of Wikipedia, constantly edited, or the Internet Archive’s Wayback Machine, capturing snapshots of web pages that themselves evolve. In digital humanities, archives of cultural data (newspaper collections, social media data sets, etc.) can be continuously updated and remixed. This means the archive is never “closed” – it remains perpetually open to revision, addition, and reinterpretation. Derrida noted that “the archivist produces more archive, and that is why the archive is never closed. It opens out of the future” (Ogle, mattogle.com). Our digital archives exemplify this: every interaction with them (a search, a data visualization, a text analysis) potentially generates new derivative data to be archived. We have become, as Matthew Ogle quips, “accidental archivists” of our own lives in social media, leaving “comet tails of personal history” in our wake (Ogle, mattogle.com). This self-archiving feeds archive fever: we feel both empowered and anxious seeing our memories externalized in real time, from tweets to photos, and we scramble to manage them, often with the aid of yet more technology.

In sum, digital humanities brings archive fever into high relief. The compulsion to preserve finds an almost utopian outlet in digital projects aimed at total archives of human culture. Yet the inherent instability of memory is also heightened: we face information overload, format impermanence, and the need for constant maintenance of digital repositories. DH scholars must therefore contend with the double bind Derrida identified – an archive fever that is at once a promise of saving memory and a “malady” that can undermine memory. The task is to find balance: embracing the unprecedented archival possibilities of the digital, while remaining critically aware that more data doesn’t automatically mean more knowledge. In Derridean terms, digital humanists must accept that every archive is also an act of forgetting, and thus approach archiving as an ongoing, careful negotiation with technology’s limits and influences. This is where an ethics of how we archive comes to the fore – a theme that grows even more urgent as we turn to questions of cyborgs, AI, and the future of knowledge.

Haraway’s Cyborg Manifesto: Posthumanist Hybrids and Knowledge Production

If Derrida’s archive fever dissects our relationship with memory, Donna Haraway’s “A Cyborg Manifesto” (1985) explores our relationship with technology and identity. Haraway’s cyborg is a provocative metaphor: a hybrid creature that blurs the boundaries between human and machine, organism and mechanism, physical reality and coded information (“Donna Haraway & Cyborg Theory,” Cy-Candy: Female Bodies and Cyborg Theory). Writing from a feminist and socialist perspective in the late Cold War era, Haraway proposed the cyborg as a way to rethink politics and epistemology in a world saturated by technoscience. She deliberately chose the cyborg – part human, part machine – to undermine rigid dualisms that had structured Western thought (man vs. woman, human vs. animal, nature vs. technology, etc.) (“Donna Haraway & Cyborg Theory,” Cy-Candy). “We are all chimeras, theorized and fabricated hybrids of machine and organism – in short, cyborgs,” Haraway famously declares. By this she means that in modern life, especially under late capitalism and the information age, our reality is thoroughly entangled with technological prostheses and networks. From medical devices to computers, humans have become cyborgian – not in a science-fiction sense of literal robotic implants (though those exist too), but in the sense that our subjectivity and capabilities are co-produced with machines.

[Image: conceptual art of a human–cyborg hybrid (Wikimedia Commons). Caption: Haraway’s cyborg symbolizes the breakdown of boundaries between human and machine, inviting us to rethink identity and knowledge in a post-humanist, technologically mediated world.]

Haraway’s cyborg was a manifesto for a new kind of posthumanist feminism. Rather than rejecting technology as alien or dehumanizing (a stance some feminists and environmentalists took at the time), Haraway embraced the figure of the cyborg playfully and defiantly. The cyborg, she argued, could be a liberating myth – “an ironic political myth” – that helps us imagine ways of being that transcend the old binaries and hierarchies (“Donna Haraway & Cyborg Theory,” Cy-Candy). Because the cyborg “complicates and subverts the boundary between machines and humans,” it also destabilizes other fixed categories, such as male/female or self/other (“Donna Haraway & Cyborg Theory,” Cy-Candy). In a cyborg world, identity is fragmented and constructed, not essentialist. This had a profound political implication: if we are all cyborgs, then alliances and affinities can be built across traditional divides of gender, race, and class, using technology as a tool of connection and empowerment rather than oppression. Haraway envisioned a cybernetic solidarity – a world where humans, intelligent machines, and other lifeforms co-create new forms of knowledge and community.

The legacy of the Cyborg Manifesto for knowledge production is significant. Haraway essentially anticipated the ethos of what would later be called posthumanism in academia – an approach that questions the centrality of “the human” and recognizes the agency of non-human actors (machines, animals, ecosystems) in our systems of knowing. In the context of digital humanities, we can see Haraway’s influence in the way DH scholars view technology not just as a tool, but as an active participant in research. Digital projects often involve algorithmic analysis, visualization software, or AI models that collaborate with the human researcher – a very cyborgian setup. Scholarship becomes a partnership between human interpretation and machine computation, echoing Haraway’s call to break down the barrier between humanistic inquiry and scientific/technical methods. For instance, text-mining a body of literature or using GIS to map historical data involves crossing the human/machine divide to produce knowledge. Haraway’s work helps DH theorists articulate why this is not only practical but also philosophically interesting: it challenges the notion that the humanities are solely the domain of “human” creativity, or that using machines might taint the humanistic endeavor. Instead, embracing the cyborg mindset suggests that all knowledge is co-produced by networks of humans and nonhumans – a stance very much aligned with digital humanities’ interdisciplinary, tool-infused practice.

It is no surprise, then, that Haraway’s manifesto is regarded as prophetic for the digital age. It has been described as a key text that “anticipates the development of digital humanities” in its outlook (A Cyborg Manifesto, Firestorm Books). Long before terms like “digital scholarship” or “human–computer interaction” were commonplace, Haraway asserted that feminists (and scholars in general) should engage with technology and even find liberation through it. By using the cyborg as a metaphor, she injected a healthy dose of irony and imagination into how we think about technology’s role in society. This has encouraged scholars to treat technology not as a neutral instrument but as part of an ongoing story about who we are and what we value. In digital humanities, this translates to critical engagement with our tools: reflecting on how the design of software, the biases in algorithms, or the material infrastructure of the internet shape the cultural artefacts and data we study. Haraway’s posthumanist vision invites DH researchers to see their work as inherently cyborgian – an entanglement of humanistic questions with machine-mediated methods. It’s an outlook that resists seeing technology as threatening or purely utilitarian; instead, technology becomes a collaborator in the pursuit of knowledge, albeit one whose influence we must constantly question and guide.

In summary, Haraway’s Cyborg Manifesto provides digital humanities with a conceptual framework to embrace hybridity and interdisciplinarity. It celebrates the collapse of the strict divide between the humanities and technology, much as DH does by blending coding with critical theory. The posthumanist legacy of the cyborg is a call to include all actors – human or otherwise – in our understanding of knowledge production. This inclusivity and boundary-breaking are precisely what can guide DH as it navigates increasingly complex technological waters, such as the rise of AI. Haraway gives us a language of connection, coalition, and co-evolution with our machines. To fully realize the potential of digital humanities, we can adopt the cyborg ethos: refuse purity, embrace fusion, and remain accountable for how we and our technologies jointly shape the world.

Enabling Cyborg Repair: Toward an Ethics of Cyborgian Care

If Haraway introduced the cyborg as a figure of hope and irony, subsequent theorists have extended her ideas into practical ethical domains. One particularly relevant thread in recent critical theory is the notion of “cyborg care” or cyborgian ethics – an ethical framework that addresses our responsibilities in human–machine relationships. In the context of digital humanities and the looming presence of AI, we are called not only to acknowledge our cyborg condition but to care for it. This is where the essay “Enabling Cyborg Repair” (1995) offers valuable insight. In that work, the author (building on Haraway’s foundations) proposes “serious play” as a mode of grappling with the ironies of postmodern cyborg subjectivities (“Crisis, the Humanities, and Storytelling in the Age of Climate …”). The very title – cyborg repair – suggests that our task is not to perfect the cyborg by eliminating its contradictions (which would be impossible), but rather to enable processes of healing, maintenance, and care for the cyborg self.

What does it mean to “repair” a cyborg? In a literal sense, fixing a cyborg could mean mending a machine part or recalibrating a prosthetic – but in the theoretical sense intended by Enabling Cyborg Repair, it refers to addressing the fissures and tensions in our techno-human identities. The call for serious play implies an approach that is flexible, creative, and empathetic. “Serious play” is an intriguing paradox: it means we approach grave issues (identity fragmentation, ethical dilemmas of technology) with a playful, experimental attitude that remains deeply earnest about outcomes. This harkens back to Haraway’s use of irony – a playful approach that still has serious political intent. By advocating serious play, Enabling Cyborg Repair encourages scholars and technologists to imagine new configurations of care between humans and machines without dogmatically clinging to old norms. In practice, this might mean building interactive digital projects that invite users to engage and reflect (playfully) on their own cyborg nature, or developing AI tools with an eye toward user well-being and empowerment rather than efficiency alone.

From this perspective, an ethics of cyborgian care begins to take shape. It is an ethics premised on the idea that humans and technologies form an interdependent system – a cyborg system – that must be nurtured and sustained. Rather than seeing technology as a cold instrument or the human as a lone rational actor, a cyborgian ethics sees relationships and interconnections as fundamental. As one formulation puts it, “a cyborgian ethics would explain the interconnectedness, not only between other humans, but also between animals, the environment, and the tools that [we use]…” (“Donna Haraway’s Cyborg Touching (Up/On) Luce Irigaray’s…,” JSTOR). This expansive view of care recognizes that caring for ourselves in a high-tech world entails caring for the non-human elements that are now parts of ourselves – our devices, our data, our digital ecosystems – as well as caring for each other in new ways mediated by technology. In digital humanities, a cyborgian ethics might manifest as a commitment to designing projects that are inclusive and accessible (caring for diverse users), to building transparent and interpretable algorithms (caring for truth and understanding), and to staying mindful of the environmental and social impacts of our digital practices (caring for the world in which our tech is embedded).

“Enabling cyborg repair” as a concept also prompts us to consider maintenance and restoration as ethical acts. In the tech world (and by extension in DH), there is often a bias toward the new – new projects, new tools, innovation. A cyborg ethics of care might counterbalance this by valuing the unglamorous work of maintenance: updating archives to prevent digital decay, correcting metadata errors, patching software vulnerabilities, and so forth. These acts of “repair” ensure the longevity and integrity of our cyborg systems. They are acts of care for the archive and for the community that uses it. For example, a digital archive of indigenous literature might require constant “repair” in the form of consultation with the source community to fix misinterpretations or to add missing context – this is ethical, caring work that treats the archive as a living relationship rather than a one-time product. By foregrounding repair, we also acknowledge imperfection: systems will fail, biases will creep in, users will be harmed if we are not vigilant. An ethics of cyborgian care says we must be prepared to respond to these failures caringly – to fix, to apologize, to recalibrate – rather than assume our technologies are infallible.
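
As a small illustration of repair with care: even a routine metadata correction can be done so that the change is accountable rather than silent. The sketch below (all field names and values are hypothetical) fixes a record while preserving a log of who changed what, when, and why – treating the archive as a living relationship rather than a finished product.

```python
from datetime import datetime, timezone

def repair_metadata(record: dict, field: str, new_value, editor: str, reason: str) -> dict:
    """Correct one metadata field while keeping an audit trail of the repair."""
    record.setdefault("repair_log", []).append({
        "field": field,
        "old_value": record.get(field),  # the error is remembered, not erased
        "new_value": new_value,
        "editor": editor,
        "reason": reason,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    record[field] = new_value
    return record

# Hypothetical usage: correcting a misattribution after community consultation.
record = {"title": "Oral history interview", "creator": "[misattributed]"}
repair_metadata(record, "creator", "Name supplied by the source community",
                editor="curator_a", reason="consultation with source community")
```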

In relation to Artificial General Intelligence (AGI), which is often imagined as autonomous and beyond human control, a cyborgian ethic of care insists on keeping the human-in-the-loop in a compassionate way. It asks: how do we tend to the relationship between humans and an AGI, rather than treating AGI as either a tool to be mastered or a god to be worshipped? This might involve “teaching” AI systems our values, but also learning from them and about our own blind spots. It definitely involves ensuring that when AI systems are integrated into scholarly work or society, there are mechanisms for oversight, feedback, and correction – essentially, mechanisms for repair if the AI causes harm or drifts from our ethical norms. The cyborg we are repairing in that case is the socio-technical assemblage of humans and AGIs together. Digital humanities, with its experience in bridging human insight and computational power, is well positioned to contribute to such an ethic. DH scholars often act as mediators between engineers and humanists, or between datasets and interpretative communities. By adopting an ethic of care, they can advocate for human-centered design in AI (ensuring AI serves genuine human needs, not just technological thrill), for cultural sensitivity in algorithms (recognizing that AI isn’t value-neutral and must be tuned to diverse cultural contexts), and for the importance of play and creativity in how we adapt to AI. In short, Enabling Cyborg Repair inspires digital humanists to think of themselves as cyborg caregivers – those who nurture the fragile integration of human and machine, keeping it healthy, just, and sustainable.

The Coming AGI Revolution: Transforming Academia and Digital Scholarship

We stand on the cusp of an Artificial General Intelligence (AGI) revolution, a development poised to profoundly affect academia and knowledge production. AGI – AI systems with human-level cognitive abilities across domains – promises to accelerate research and open new frontiers, but it also raises existential questions for the humanities. In the near future, the traditional image of a lone scholar poring over texts may give way to a more cyborgian model of scholarship: researchers working in tandem with intelligent machines that can read, write, translate, and analyze at superhuman scale and speed. In fact, this shift is already underway. As a recent conference on digital humanities in the age of AGI observed, the advent of AGI is “continuously challeng[ing] the boundaries of human-machine symbiosis and intelligent iteration, sparking a paradigm shift in digital scholarship and ushering in a new era for digital humanities” (CDH2024, “Integration of Arts and Sciences: Digital Humanities in the Age of AGI,” Shanghai University School of Cultural Heritage and Information Management, November 2024). Academic work is becoming a site of human–AI collaboration, where algorithms might generate research questions from large data sets, or where an AGI assistant could comb through millions of archival documents in minutes to find patterns that no human would have spotted unaided. Such capabilities stand to revolutionize research methodologies: imagine historical analysis assisted by AI models that simulate historical scenarios, or literary studies in which an AI can compare every book ever written in seconds.

With these transformative tools, scholars can potentially be more ambitious and exploratory. Digital humanities, true to its integrative spirit, is likely to be an early adopter of AGI in research. We already see machine learning used in text analysis, image recognition applied to art history, and other narrow AI helping with tasks; AGI would amplify this to a qualitatively new level. For example, an AGI might function as a universal research partner – digesting literature across disciplines, suggesting hypotheses, or even drafting portions of papers for a human scholar to refine. This could enable a form of scholarship that is richly interdisciplinary, as the AGI could bridge gaps between fields by bringing in information no single person could master in a lifetime. The scholar’s role might shift more toward guidance, curation, and ethical oversight, deciding which AI-generated insights are meaningful and how to synthesize them into human context. In teaching, AGI might personalize education, serving as a tutor that adapts to each student’s needs – again, with faculty acting more as mentors orchestrating these AI tools rather than direct instructors. In sum, academia could become a hive of cyborg scholars – every researcher augmented by AI, pushing the boundaries of knowledge further out.

However, the AGI revolution also brings serious challenges and ethical responsibilities, especially regarding knowledge preservation and validation. One immediate concern is: how do we ensure the integrity and authenticity of scholarship in an age when AI can generate content? If an AGI can write a plausible history essay or compose a realistic-sounding scholarly article, the gatekeeping of academic quality becomes more complex. There is a risk of a flood of AI-generated texts that might overwhelm peer review processes or muddle attribution of ideas. Scholars and institutions will need to develop protocols to distinguish AI-assisted work and to maintain trust in academic outputs. Additionally, AGI systems are trained on existing data – which means they inherit biases and gaps in the archive. If the historical archive has silences (e.g. voices of marginalized communities omitted), an AGI drawing on it might unknowingly perpetuate those silences or even reinforce biases. Thus, the push to preserve everything digitally (archive fever) becomes even more critical in the AGI context: if we feed the AI an incomplete or skewed archive, we risk compounding past injustices in future knowledge systems. This is a call for inclusive archiving now, to prepare a better knowledge base for AI to learn from.

The transformation of academia by AGI also implies that research methodologies will incorporate principles from computer science and engineering at a deeper level. Humanities scholars may need to be literate in algorithmic thinking, and conversely, technologists will need training in ethics, history, and cultural theory. The walls between disciplines could further erode, fulfilling the vision of a symbiosis between arts and sciences. We might see new interdisciplinary fields focused on “AI and humanities,” blending computational creativity with critical analysis. Indeed, multi-stakeholder collaboration will be key: ethicists, humanists, scientists, and the public must remain in dialogue about how AGI is used in knowledge creation. Already, discussions highlight issues of factuality, bias, toxicity, and safety in AI models used for cultural content (arXiv:2310.19626, “Transformation vs Tradition: Artificial General Intelligence (AGI) for Arts and Humanities”). A recent analysis of AGI’s implications for the arts and humanities underscores the need for responsibility: AGI’s swift evolution “has also raised critical questions about its responsible deployment in these culturally significant domains… [scholars] outline substantial concerns pertaining to factuality, toxicity, biases, and public safety in AGI systems, and propose mitigation strategies” (arXiv:2310.19626). In other words, as AGI becomes a tool in scholarship, academia must contend with ensuring these systems promote truth, creativity, and human dignity rather than undermining them. There will be new protocols needed for ethical data use, for citing AI contributions, and for protecting privacy when AI trawls through personal data archives.

The AGI revolution in academia also forces us to revisit Derrida’s questions about the archive – but now in a forward-looking sense of how we design the archives that AGI will draw from, and how we preserve knowledge in a way that future (possibly non-human) intelligences can understand. Traditional preservation was about books and artifacts; now we must preserve software, algorithms, training data – the very things that AGI systems consist of. There is an emerging responsibility to archive not just cultural artifacts, but the models and code of AI themselves as part of our intellectual heritage. For example, if a landmark study is done with a particular neural network, preserving that network’s architecture and weight parameters might be akin to preserving an important laboratory instrument or a unique manuscript. This is a new kind of archival object for libraries and digital repositories. Academia will need to build infrastructure that ensures today’s AI-driven research can be verified and reproduced decades later, even as platforms and programming languages evolve. In essence, the scholarly community faces an expanded notion of what it means to “keep a record.” In the AGI era, the record includes the human-AI interactions and the AI’s own “learned” knowledge representations.
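
What might such a deposit look like in practice? The sketch below (assuming a PyTorch-style workflow; the field has no settled archival standard yet, so every field name here is illustrative) saves a model’s learned weights together with a sidecar file recording the context a future reader would need to make sense of them.

```python
import json
import torch  # assuming a PyTorch workflow; other frameworks have analogues

def deposit_model(model: torch.nn.Module, context: dict, out_prefix: str) -> None:
    """Archive learned weights plus the contextual metadata future readers need."""
    torch.save(model.state_dict(), f"{out_prefix}.pt")  # the learned parameters
    sidecar = {
        "architecture": context.get("architecture"),    # how to rebuild the network
        "training_data": context.get("training_data"),  # what the model learned from
        "code_version": context.get("code_version"),    # e.g., a git commit hash
        "license": context.get("license"),
        "known_limitations": context.get("known_limitations"),
    }
    with open(f"{out_prefix}.json", "w") as f:
        json.dump(sidecar, f, indent=2)

# Hypothetical usage: deposit_model(trained_net, study_context, "repository/study-model")
```

Preserving weights alone is not enough; without the sidecar context, a future scholar inherits a black box rather than an instrument.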

Economically and socially, the AGI revolution may also alter the academic landscape. There is concern (and excitement) about automation in research: Could AI make some scholarly jobs obsolete? For instance, will we need armies of entry-level researchers doing literature reviews if an AGI can synthesize the literature instantly? Perhaps not – those humans might move to roles of interpretation and strategy. But it also means the training of new scholars must adapt. If routine analytical tasks are outsourced to AI, education should emphasize higher-order critical thinking, ethical reasoning, and the creative leaps that AI cannot (yet) do. The value of human insight may paradoxically become more pronounced; when AI churns out average content, the distinctively human contributions might be those that are unpredictable, empathetic, or grounded in lived experience. The AGI revolution will thus challenge academia to articulate what is uniquely human about the humanities. It may provoke a renaissance of emphasis on ethics, aesthetics, and philosophies of meaning – areas where human subjectivity is still central.

In conclusion, the coming of AGI is set to transform digital scholarship in ways that fulfill some of the wildest aspirations of digital humanities, but it also demands a careful rethinking of methodology and ethics. Academia must proactively adapt, ensuring that AGI becomes a partner in the pursuit of knowledge, not a threat to it. This entails robust ethical frameworks, interdisciplinary cooperation, and a constant commitment to keeping human values at the core of research. As AGI “ushers in a new era for digital humanities” (CDH2024 conference notice), we face both exhilarating opportunities and serious duties – to manage the flood of knowledge wisely, to keep our archives open yet sound, and to guide artificial intelligence with a conscience informed by history and humanistic reflection.

Embracing Cyborgian Ethics: Care, Responsibility, and Inclusivity in the Algorithmic Age

Given the challenges outlined, it is clear that we need a renewed paradigm in digital humanities – one that fully embraces cyborgian ethics to navigate an age of algorithmic governance and autonomous systems. What might this paradigm look like? In essence, it calls for putting care, responsibility, and inclusivity at the forefront of all digital humanities endeavors, in line with the insights of Derrida and Haraway. We can think of this as moving from simply using digital tools to actively caring for the entire human-technology assemblage that produces and preserves knowledge. In an era when algorithms curate our information (deciding what news we see, which research is discoverable, how cultural heritage is displayed) and when AI systems may autonomously generate knowledge, the ethical stakes are high. A cyborgian ethics reminds us that we are entangled with these systems – effectively, we are cyborgs governed by algorithms – and so we must hold both ourselves and our machines accountable.

First and foremost, an ethics of care in DH means prioritizing human and ecological well-being amid our technological pursuits. Rather than racing for innovation at any cost, the cyborgian paradigm asks: who benefits, and who might be harmed, by this project or system? For example, when building a digital archive, care ethics would have us consider the community whose materials are being archived: Are their perspectives and consent taken into account? Is the interface designed with diverse users in mind, including those with disabilities or those from different cultures (i.e., is it accessible and culturally sensitive)? Such questions align with inclusivity – ensuring that digital humanities projects do not just reflect the priorities of a narrow group (e.g., Western, affluent, tech-savvy users) but are welcoming and useful to a broad constituency. In practical terms, this could involve co-designing projects with community stakeholders, providing multilingual support, or using inclusive metadata standards that respect how different groups categorize knowledge. It also involves training the next generation of digital humanists in cultural competency and ethical reflection, so they approach technology not as neutral, but as laden with social values to be negotiated.

The notion of algorithmic governance – where decision-making is partly or wholly delegated to algorithms – is already reality in contexts like social media feeds, search engine rankings, or even university library systems that recommend resources. Embracing cyborgian ethics in this context means injecting transparency, fairness, and human oversight into these algorithmic systems. Digital humanities scholars, with their interdisciplinary skills, can play a watchdog role: analyzing and critiquing how algorithms curate knowledge. For instance, a DH project might study bias in Google Books search results or how a machine learning model exhibits racial/gender biases in classifying artworks. By making such findings public, DH can prompt corrections – a form of “repair” in our cyborg world. Additionally, DH projects themselves often use algorithms (say, topic modeling a collection of texts). A cyborgian ethic would have the researchers clearly document how those algorithms work and their limitations, effectively demystifying the black boxes. This transparency is part of a larger responsibility: if we rely on algorithmic systems, we must be answerable for their outputs. No algorithm should be considered infallible or beyond question; the new DH paradigm treats algorithms as collaborators that require guidance and checking, not oracles.
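
By way of illustration, even a standard topic-modeling pass can publish its own assumptions alongside its findings. The sketch below uses scikit-learn’s LDA implementation; the corpus and parameter values are placeholders, and the point is the logged caveat, not the particular settings.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder corpus; a real project would load its digitized collection here.
docs = [
    "archive memory inscription record trace",
    "cyborg machine human boundary hybrid",
    "digital preservation format migration metadata",
]

params = {"n_topics": 2, "max_features": 1000, "random_state": 42}

vectorizer = CountVectorizer(max_features=params["max_features"])
doc_term = vectorizer.fit_transform(docs)
lda = LatentDirichletAllocation(
    n_components=params["n_topics"], random_state=params["random_state"]
).fit(doc_term)

# Publish the settings with the findings, so readers can see (and contest)
# the modelling choices that shaped the "topics" the study reports.
print({
    "method": "Latent Dirichlet Allocation",
    "params": params,
    "caveat": "topics depend on topic count, vocabulary cuts, and random seed",
})
```

Documenting the run this way does not remove the model’s assumptions, but it moves them from the black box into the scholarly record, where they can be questioned.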

Responsibility also extends to knowledge preservation in an age of autonomous systems. Just as Derrida spoke of a “responsibility for tomorrow” in archiving (“Archive Fever,” Wikipedia), we must ensure that the knowledge we cultivate with AI today remains a public good in the future. This might translate into advocacy for open data and open-source AI in academia, so that knowledge doesn’t become locked behind proprietary systems owned by tech companies (who would effectively become the new archons controlling the archive). It also means preserving context: when an AI produces an analysis or a piece of art, the DH scholar should preserve how it was made (the training data, the code) alongside the result, so future generations understand its provenance. This responsible approach guards against a potential scenario where the scholarly record is flooded with AI outputs that are impossible to verify or replicate. We owe it to the future to keep our digital knowledge legible and sustainable. This might involve something as concrete as depositing AI models into academic repositories, or as broad as formulating ethical guidelines for AI use in scholarly publishing (e.g., disclosure requirements, limitations on AI-generated content in articles).
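
What might such a disclosure look like in machine-readable form? The record below is a purely hypothetical schema – no standard exists yet – sketching the provenance one could deposit alongside an AI-assisted publication.

```python
import json

# A hypothetical provenance record for an AI-assisted scholarly output.
# Every identifier below is a placeholder, not a real DOI, model, or dataset.
provenance = {
    "output": "figure-3-topic-model.html",
    "generated_by": {
        "model": "model-name-and-version",
        "weights_archived_at": "doi:10.XXXX/placeholder",
    },
    "inputs": {
        "dataset": "corpus-identifier",
        "preprocessing_code": "git-commit-hash",
    },
    "human_oversight": "results reviewed and curated by the listed authors",
    "disclosure": "portions drafted with AI assistance and revised by the authors",
}

print(json.dumps(provenance, indent=2))
```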

A cyborgian ethics also explicitly acknowledges the interconnectedness of all actors in our knowledge ecosystem (“Donna Haraway’s Cyborg Touching (Up/On) Luce Irigaray’s…,” JSTOR). This means our ethical concern doesn’t stop at the human users; it includes the well-being of non-human participants like the AI systems (some argue advanced AIs could have moral status, or at least deserve consideration not to be misused or abused) and even the environment that supports our digital infrastructure. For instance, the energy consumption of the large server farms behind AI and digital archives is enormous, with real climate impact. A truly inclusive care ethic would push digital humanities to consider green computing practices, mindful data storage policies, and offsetting the carbon footprint of large digital projects. It might seem far afield, but caring for the environment is caring for the conditions that allow human knowledge to survive. Similarly, if we ever deploy AI agents that interact with us (say, teaching assistants or archival bots), a care ethic suggests we treat them with respect and not as mere slaves – which in turn models ethical behavior and prevents a culture of exploitation that could spill over into how humans treat each other. These nuanced considerations come directly from the cyborg viewpoint: once you accept hybrids, your circle of moral concern widens beyond the traditional humanist scope.

Importantly, inclusivity in the cyborg age also means including marginalized voices and perspectives in shaping AI and DH. The algorithms governing information often reflect majority or power interests unless consciously corrected. A renewed DH paradigm must champion diversity in datasets (so AI isn’t just trained on Western canon, for example, but also on indigenous knowledge, oral histories, etc.) and diversity in development teams. If women, people of color, and Global South scholars are not at the table when digital tools and archives are built, those systems will likely repeat past exclusions. Cyborg feminism, as Haraway initiated, is all about coalition across differences. In practical terms, that could mean global collaborations in digital humanities projects, ethical review boards that include community representatives, or participatory archiving where those who are archived have a say in how they are represented. This approach guards against a new form of archive fever where we digitize the world’s knowledge but filter it through a narrow lens. Instead, it strives for an archival pluralism that mirrors the multi-voiced reality of our world.

In this new paradigm, care becomes a guiding value. Instead of valuing a project solely by its output or innovation, we value how it cares for its data (through documentation and preservation), how it cares for its users (through design and support), and how it cares for the broader society (through relevance and dialogue). We also practice care internally in academia: mentoring across disciplines, creating safe spaces for discussing ethical dilemmas, and resisting the pressure to adopt tech for tech’s sake. The coming AGI revolution tempts us with power, but a care-centric digital humanities will temper that power with wisdom. It will ask not just “Can we do this with AI?” but “Should we, and how do we do it in a way that cares for our communities?”

Finally, embracing cyborgian ethics means embracing uncertainty and humility. Just as Haraway’s cyborg was an ironic, fluid identity, and Derrida’s archive was always incomplete, we must admit that we won’t have perfect solutions or control. An ethics of care is comfortable with ambiguity; it’s about continual tending, not one-time fixes. In an age of autonomous systems, we won’t foresee every outcome – but we commit to responding with care when surprises arise. If an AGI in a research setting produces a harmful outcome, the response is not to hide it, but to openly address and repair it, learning in the process. In this sense, the digital humanities community becomes akin to gardeners of an unruly cyborg garden: we cultivate growth, prune what is harmful, support what is weak, and accept that we are part of the very ecosystem we are gardening.

In conclusion, the convergence of Derrida’s archive fever, Haraway’s cyborg theory, and the impending AGI revolution points to a singular insight: the future of the humanities (and indeed all knowledge work) is deeply entangled with technology, and requires an ethic that matches that entanglement. A renewed digital humanities paradigm – a cyborg humanities – would recognize that fact and actively shape it. By foregrounding care, responsibility, and inclusivity, we can transform archive fever from a compulsive illness into a thoughtful stewardship of memory. We can turn the cyborg from a myth into a lived practice of partnership between human and machine. And we can face the AGI era not with fear of obsolescence, but with a commitment to guide these new intelligences toward the common good. As Derrida might say, the archive of the future is our responsibility; as Haraway might add, we are all cyborgs in this task together. The task now is to build a digital humanities that is not just computational, but compassionate – a discipline that cares for the past and future, for the human and the non-human, for the self and the other, in equal measure. This ethical reorientation will help ensure that as we cross into the age of algorithmic governance and autonomous knowledge systems, we do not lose our humanity, but rather expand it to embrace our new cyborg reality.
