
G4 - AI Ethics, Chaos Theory, Natural Language Models, and the Denial of Undecidability: Working Through the Misnomers/Symptoms

The deliberately provocative title of this course serves as a critical starting point—highlighting precisely the symptomatic nature of how we name and understand complex phenomena within technology, language, and ethics. From a psychoanalytic perspective, particularly drawing upon Freud and Lacan, "working through" (Durcharbeitung) involves engaging rigorously and repeatedly with a symptom or symbolic conflict, moving beyond mere intellectual insight toward a deeper subjective transformation. This process unfolds within the transference, where the symptom manifests in the analytic relationship, allowing the subject to confront and reconfigure the unconscious structures sustaining their suffering. In Lacanian terms, this often entails "traversing the fantasy"—dismantling the misrecognitions that shape the subject’s desire, rather than simply replacing neurosis with conscious resolution. As Freud put it, "Wo Es war, soll Ich werden" ("Where id was, there ego shall be"), not in the sense of eliminating the unconscious, but in restructuring the subject’s relation to it, allowing for a shift in psychic economy and desire. In psychoanalysis, health is never a simplistic absence of symptoms, but rather a continuous and active confrontation with the symptomatic formations of the unconscious, which always resist neat categorization and easy resolution.




Within this course, we approach the misnomers in its very title—"AI Ethics" as implying moral agency within computational systems rather than human ethical responsibility, "Chaos Theory" as disorder rather than a form of complex, emergent order, and "Natural Language Models" as genuinely natural rather than algorithmically unnatural—as symptomatic manifestations. Both "AI Ethics" and "Natural Language Models" reflect what Joseph Weizenbaum identified as the ELIZA Effect: our persistent psychological tendency to project meaning, intentionality, and consciousness onto systems that merely simulate these qualities. In contrast, the misnaming of "Chaos Theory" signifies a symptom of the denial of undecidability, expressing a broader cultural fantasy of deterministic predictability and stable order in complex systems. Collectively, these misnomers function as symptoms that point directly to deeper, unconscious desires to deny the complexity and undecidability at the heart of human subjectivity, ethics, and meaning-making.


Just as psychoanalytic treatment involves "working through" unconscious symptoms to arrive at greater psychological health—a health defined by one's capacity to tolerate uncertainty, ambiguity, and the irreducible complexity of desire—this course encourages students to rigorously "work through" the symptomatic illusions inherent in contemporary technological discourse. Students confront not only the illusions fostered by computational determinism but also the profound ethical implications that arise from the widespread denial of undecidability and the unconscious. By engaging closely with philosophical, psychoanalytic, and ethical theories—particularly through the works of Freud, Lacan, Derrida, Plotnitsky, and Weizenbaum—students actively develop their capacity to recognize and resist simplistic narratives of technological inevitability and reductive understandings of human thought and subjectivity.


Throughout this active "working through," students deepen their appreciation of what genuine psychological and ethical health entails: a mature acceptance of undecidability as fundamental to the human condition, a recognition of the irreducible complexity of meaning-making, and a thoughtful responsibility toward the ethical deployment and governance of technology. In doing so, students emerge better prepared not only to critically navigate the ethical challenges posed by AI and computational systems, but also to foster societal health—defined precisely as the capacity to engage meaningfully, ethically, and responsibly with the inherent undecidability of human experience.


Course Overview

Designing an AI Ethics course for STEM students at Oregon State University presents a distinctive opportunity to bridge technical mastery with philosophical depth. My course, “AI Ethics, Chaos Theory, Natural Language Models, and the Denial of Undecidability,” engages engineering and computer science students by connecting abstract ethical theories and philosophical critiques to concrete, real-world technological scenarios. At the heart of this course is a critical examination of what it means to "think"—and the foundational distinction between meaning-making and computational processing.


AI Lacan

Artificial Intelligence, particularly as embodied in contemporary Natural Language Models (NLMs), appears to engage in meaningful interactions, convincing many that AI systems "think." But does thinking require meaning-making? If, as philosophers and cognitive scientists argue, genuine thinking necessitates intentionality, context-awareness, and the interpretation of signs and symbols within a framework of lived experience, then current AI systems clearly fall short. AI processes vast amounts of data, generates coherent textual outputs, and mimics human-like conversations through statistical patterning, but these activities lack authentic meaning-making. AI, fundamentally, manipulates symbols without semantic comprehension, intentionality, or self-awareness.
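To make "statistical patterning" concrete, here is a deliberately toy sketch—a bigram chain, vastly simpler than any real language model, with an invented corpus and invented function names—showing how next-word prediction can produce fluent-looking strings with nothing semantic behind them:

```python
import random
from collections import defaultdict

# Toy bigram "language model": record which word follows which, then
# generate text by sampling successors. No grammar, no reference, no
# intention -- only co-occurrence statistics over a tiny invented corpus.
corpus = "the machine answers and the machine learns and the machine waits".split()

successors = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev].append(nxt)

def generate(start, length=6, seed=1):
    """Chain words by repeatedly sampling a successor of the last word."""
    rng = random.Random(seed)
    words = [start]
    while len(words) < length and successors[words[-1]]:
        words.append(rng.choice(successors[words[-1]]))
    return " ".join(words)

print(generate("the"))  # fluent-seeming output, zero comprehension
```

However sophisticated the statistics become, the mechanism remains pattern continuation—exactly the gap between symbol manipulation and meaning-making described above.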


This distinction between genuine thought and computational mimicry becomes particularly critical when analyzing the ethical implications of AI technologies. Misunderstanding AI's computational simulations as genuine meaning-making leads to what philosopher Joseph Weizenbaum termed the "ELIZA Effect": the human tendency to attribute understanding and intentionality to AI systems purely based on their linguistic fluency and convincing interactions. This effect becomes dangerous when scaled to areas such as surveillance, autonomous driving, and algorithmic decision-making, where the false attribution of moral agency to AI can obscure genuine ethical responsibilities.


Throughout this course, we leverage philosophical methods, including Socratic questioning, conceptual analysis, and critical debate, to confront these complexities systematically. Drawing on interdisciplinary insights—from Arkady Plotnitsky’s notions of undecidability and complementarity to Lacanian psychoanalysis, Derridean philosophy, and contemporary critiques of technological determinism—we interrogate the assumptions underlying AI ethics discourse. Is AI genuinely capable of moral reasoning, or does the phrase "AI ethics" itself obscure human accountability by assigning moral agency to a fundamentally non-conscious entity?


By aligning rigorous philosophical inquiry directly with technological applications, we guide students to recognize the inherent undecidability at the heart of meaning, ethics, and human subjectivity, and to see clearly through the illusions fostered by complex computational systems. The goal is not merely theoretical—it is deeply practical. Students emerge from the course prepared to ethically navigate technological innovation, fully aware that meaningful ethical responsibility resides solely with human actors who design, deploy, and regulate AI systems, rather than within the AI itself.


The course is divided into interconnected modules, each designed to foster critical thinking, ethical reasoning, and conceptual clarity regarding AI, its ethical dimensions, and the limitations inherent in computational modeling.


Module 1: Introduction—The ELIZA Effect, AI, and Human Projection

In this foundational module, students begin their critical exploration of AI ethics by engaging deeply with the psychological and philosophical phenomenon known as the ELIZA Effect, first articulated by Joseph Weizenbaum in his seminal text, Computer Power and Human Reason. Through careful reading and reflective discussion, students examine how humans project consciousness, intentionality, and ethical agency onto computational systems that merely simulate conversational fluency and emotional resonance.


Students will critically analyze Weizenbaum’s original exploration of the ELIZA Effect, paying special attention to the human tendency toward anthropomorphism—the attribution of human-like thoughts, feelings, and intentions to non-human entities. Using contemporary chatbots and conversational AIs such as ChatGPT, Replika, and virtual assistants (e.g., Siri, Alexa), students will apply Weizenbaum’s critique to real-world technological examples. This approach highlights the ethical illusions created by advanced computational simulations, demonstrating how linguistic coherence and apparent responsiveness in AI systems can lead users into mistakenly attributing genuine subjectivity or ethical sensitivity to machines.


Interactive class sessions include rigorous Socratic seminars and reflective group exercises designed to critically assess human perceptions of AI consciousness and ethical capability. Students will systematically deconstruct contemporary narratives surrounding AI systems, identifying implicit assumptions, misconceptions, and potential ethical consequences arising from mistaking simulated conversational fluency for genuine thinking, consciousness, or moral understanding.


The module culminates in an analytical assignment requiring students to clearly articulate the philosophical, psychological, and ethical dimensions of the ELIZA Effect. Students must demonstrate a sophisticated understanding of how anthropomorphic projection onto computational systems affects societal attitudes toward AI ethics, policy decisions, and technological governance. In doing so, students will propose strategies for responsibly communicating the non-conscious, computational nature of AI systems, emphasizing the ethical importance of maintaining clear distinctions between genuine human subjectivity and algorithmic simulation.


Readings:

  • Joseph Weizenbaum, selections from Computer Power and Human Reason

  • Sherry Turkle, excerpts from Alone Together: Why We Expect More from Technology and Less from Each Other

  • Hubert Dreyfus, excerpts from What Computers Still Can't Do (focusing on AI and human projection)


Concepts Covered: Anthropomorphism, ELIZA Effect, human psychological projection, ethical illusion, computational simulation versus genuine consciousness, limitations of artificial empathy, ethical implications of conversational AI.


Case Study: Critical analysis of contemporary conversational AI systems (ChatGPT, Replika, Siri) focusing explicitly on public perceptions, marketing strategies, and media narratives about AI’s supposed consciousness or ethical capabilities.


Activity: A Socratic seminar and interactive workshop in which students critically discuss and debate real-world user testimonials and media portrayals of conversational AI, systematically identifying instances of anthropomorphic projection and ethical illusion. Students collaboratively formulate strategies to mitigate misconceptions about AI consciousness and clearly communicate the ethical boundaries between human users and computational systems.


Module 2: Chaos, Order, and the Limits of Computationalism

In this module, students rigorously explore the relationship between chaos theory, computational determinism, and epistemic uncertainty, critically evaluating assumptions underlying contemporary predictions of technological progress—particularly Ray Kurzweil’s influential "Singularity hypothesis." Students closely examine the conceptual misnaming of "chaos theory" as disorder, recognizing instead its true nature as a form of highly structured yet unpredictably emergent order. Through careful philosophical inquiry, students analyze the epistemological implications of chaos theory and its direct challenge to computational determinism, as epitomized by Kurzweil’s vision of artificial intelligence transcending human cognition.
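The "structured yet unpredictably emergent order" of chaos can be made tangible with the logistic map, a standard one-line example from the chaos literature (this sketch is illustrative and not drawn from the assigned readings):

```python
# The logistic map x -> r*x*(1-x) is fully deterministic, yet at r = 4.0
# it exhibits sensitive dependence on initial conditions: two trajectories
# that start almost identically soon disagree completely.

def trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.200000)
b = trajectory(0.200001)  # perturbed in the sixth decimal place

divergence = [abs(x - y) for x, y in zip(a, b)]
print(f"initial gap: {divergence[0]:.0e}, largest gap: {max(divergence):.2f}")
```

Determinism here does not yield long-range predictability: the rule is simple and exact, yet forecasting is defeated by exponential error growth. That is the precise sense in which "chaos" names an order rather than a disorder.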


The module emphasizes Arkady Plotnitsky’s influential philosophical critiques drawn from quantum epistemology, complementarity, and undecidability. Students engage deeply with Plotnitsky’s analysis, recognizing how epistemic uncertainty fundamentally challenges deterministic views of reality and knowledge. By contrasting Plotnitsky’s sophisticated philosophical framework with Kurzweil’s computationalist assumptions—especially the flawed notion that human intelligence can be objectively measured and surpassed by artificial systems—students critically evaluate the conceptual and philosophical weaknesses underpinning the Singularity hypothesis.


Interactive class sessions include seminar-style discussions, Socratic dialogues, and structured debates centered on close readings from Plotnitsky and Kurzweil. Students critically dissect Kurzweil’s narrative of inevitable technological transcendence, applying philosophical concepts of complementarity, uncertainty, and undecidability to reveal fundamental epistemological flaws. Sessions also encourage students to articulate clear philosophical arguments about why complexity, unpredictability, and context-sensitivity inherent in human cognition and natural systems resist reduction to purely computational terms.


The module culminates in a structured critical debate and analytical assignment, where students systematically critique Kurzweil’s hypothesis using Plotnitsky’s conceptual tools. Students clearly demonstrate how undecidability and epistemic uncertainty undermine the deterministic claims central to computationalist and singularity narratives, emphasizing the philosophical and ethical importance of resisting technological determinism in favor of a nuanced recognition of human cognitive and ethical complexities.


Readings:

  • Ray Kurzweil, selected excerpts from The Singularity Is Near

  • Arkady Plotnitsky, selected writings on complementarity, undecidability, and quantum epistemology

  • Excerpts from seminal works on chaos theory and complexity (e.g., Gleick’s Chaos: Making a New Science)


Concepts Covered: Chaos theory as structured order; computational determinism; complementarity; epistemic uncertainty; quantum epistemology; the philosophical limits of computationalism; critique of techno-utopianism.


Case Study: Critical analysis of contemporary technological predictions and singularity-oriented discourses (e.g., Neuralink’s claims, AGI hype cycles), assessed through Plotnitsky’s critique of determinism, complementarity, and undecidability.


Activity: Workshop and formal debate in student groups evaluating Kurzweil’s Singularity hypothesis against Plotnitsky’s philosophical frameworks. Groups articulate clear, rigorous arguments, demonstrating precisely how and why the deterministic, computational vision of AI progress misrepresents the epistemic complexity and inherent unpredictability of intelligence and human cognition.


Module 3: Algorithmic Determinism, Undecidability, and the Limits of AI “Thought”

In this module, students rigorously examine the fundamental differences between computational processes performed by Artificial Intelligence and genuine human thinking, particularly through the philosophical lenses of undecidability, meaning-making, and algorithmic determinism. Students engage deeply with Arkady Plotnitsky’s philosophical framework of undecidability and complementarity, highlighting why attempts to reduce human cognition to algorithmic computation inevitably fail to capture the complexities inherent in subjective human experience.


Through carefully selected philosophical texts—including excerpts from Plotnitsky’s writings on undecidability and Derrida’s concepts of différance and the undecidable—students critically assess AI’s reliance on computational predictability versus the inherent ambiguity and openness of human language and meaning-making. Special emphasis is placed on critiquing the common misconception that advanced Natural Language Models (NLMs), such as GPT and related technologies, genuinely “understand” or “think” in any meaningful human sense. Students explore how NLMs simulate thought through statistically driven pattern recognition and text generation, yet remain fundamentally detached from the semantic depth, intentionality, and interpretive openness that characterize genuine human thought processes.

Interactive class sessions include rigorous debates, analytical group exercises, and close philosophical reading workshops. Students examine specific case studies of AI-driven language systems, analyzing the ELIZA Effect—human attribution of meaning, consciousness, or intentionality to computationally generated text—and its profound ethical implications for technology deployment and public perception. These sessions are designed to reinforce the distinction between algorithmic determinism and the undecidable nature of authentic meaning-making, highlighting the risks associated with confusing fluent computational mimicry with true understanding or ethical reasoning.
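The undecidability at issue in Plotnitsky and Derrida is not the same notion as Turing's, but computation has its own hard, formal limit worth anchoring the discussion against: the halting problem. A minimal runnable sketch of the classic diagonal argument follows (all names are illustrative; an exception stands in for "runs forever" so the sketch can actually execute):

```python
# Turing's diagonal argument: no program can correctly decide, for every
# program, whether that program halts. Given ANY claimed decider, we can
# construct a program that does the opposite of whatever it predicts.

def counterexample_for(claimed_halts):
    """Given a claimed halting decider, build a program it must misjudge."""
    def g():
        if claimed_halts(g):
            raise RuntimeError("diverges")  # decider said "halts": never halt
        return "halted"                     # decider said "loops": halt now
    return g

for verdict in (True, False):          # whatever the decider answers about g,
    g = counterexample_for(lambda f: verdict)
    try:
        outcome = g()                  # ...g does the opposite
        assert verdict is False and outcome == "halted"
    except RuntimeError:
        assert verdict is True
```

Since the decider was arbitrary, no correct halting decider exists. Even inside the purely algorithmic domain, then, there are questions no algorithm can settle—a useful formal counterpart to the broader, interpretive undecidability the module examines.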


The module culminates in a comprehensive critical assignment requiring students to articulate, using concepts drawn from Plotnitsky, Derrida, and related philosophical frameworks, precisely why algorithmic systems cannot achieve genuine human thought, meaning-making, or ethical agency. Students will explicitly address the dangers of the societal tendency toward algorithmic determinism and propose ways to responsibly communicate the philosophical and ethical limitations of AI systems in both academic and public spheres.


Readings:

  • Arkady Plotnitsky, selected excerpts on undecidability and complementarity

  • Jacques Derrida, selected excerpts from Margins of Philosophy (especially "Différance")

  • Hubert Dreyfus, excerpts from What Computers Still Can't Do

  • John Searle, "Minds, Brains, and Programs" (Chinese Room Argument)


Concepts Covered: Algorithmic determinism, undecidability, différance, complementarity, semantic meaning-making, the ELIZA Effect, symbolic manipulation vs. human cognition.


Case Study: Analyze contemporary NLP systems (e.g., ChatGPT, virtual assistants) through philosophical critique, highlighting differences between computational mimicry and authentic understanding, focusing on ethical consequences arising from mistaking computational fluency for genuine thought.


Activity: Student teams critically evaluate public statements or marketing materials from AI companies or media coverage of NLP technology, identifying and deconstructing implicit assumptions of computational understanding, thought, and ethical agency. Teams will present their analyses, offering philosophical critiques and responsible communication strategies to address public misconceptions of AI capabilities.


Module 4: AI Ethics, Lacan's Four Discourses, and "The Authors of Silence"

In this module, students delve deeply into Jacques Lacan’s theory of the Four Discourses—Master, University, Hysteric, and Analyst—as presented in Seminar XVII: The Other Side of Psychoanalysis. Lacan’s conceptualization of these discourses provides a crucial analytic framework for examining AI as a symbolic apparatus embedded within cultural and social power dynamics. Students engage rigorously with the function of AI systems, not as autonomous agents possessing ethical subjectivity, but as complex sites where power, knowledge, desire, and subjectivity intersect and often clash.


To ground these complex ideas within contemporary cultural and philosophical discourse, this module incorporates a close reading and class discussion of my play-course, The Authors of Silence, as a case study in narrative exploration of undecidability, language, and symbolic power. This original dramatic text enables students to analyze how linguistic ambiguity and ethical undecidability manifest in human experience and literary narrative—highlighting precisely what algorithmic or computational systems fail to replicate or understand. Through "The Authors of Silence," students encounter the uniquely human dimensions of meaning-making, desire, subjective interpretation, and ethical ambiguity, contrasting sharply with the limited capacities of AI systems predicated solely on calculable outcomes.


Interactive class sessions include debates and reflective discussions examining specific scenes and dialogues from the play, focusing on the interplay of undecidability and human subjectivity, contrasted with algorithmic determinism. This integration enriches students' understanding of psychoanalytic theory, literary narrative, and the limitations of AI technology when confronting nuanced human ethical issues.


The concluding assignment requires students to articulate clearly, using conceptual tools developed throughout the module, how Lacan's Four Discourses illuminate both human and AI discursive interactions. Students must critically assess the implications of relying on algorithmically determined systems for complex human ethical decisions, demonstrating an informed appreciation for the psychoanalytic and philosophical dimensions of ethical undecidability.


  • Readings: Jacques Lacan’s seminar excerpts on the four discourses; selected psychoanalytic essays; excerpts from my play-course, "The Authors of Silence."

  • Concepts Covered: Master, University, Hysteric, and Analyst discourses; AI’s symbolic role in society and discourse; ethical implications of silence, repression, and symbolic structures.

  • Case Study: Analyze contemporary examples of AI (e.g., educational algorithms, surveillance systems) through the lens of Lacan’s four discourses and themes from "The Authors of Silence."

  • Activity: Student teams critically analyze how AI applications function within each Lacanian discourse structure, emphasizing ethical responsibilities of designers and implementers, while drawing connections to the symbolic role of silence and ethical decision-making explored in "The Authors of Silence."


Module 5: Surveillance, AI, and the Psychoanalytic Gaze

In this module, students engage psychoanalytic theories of visibility and power, especially Jacques Lacan’s influential concept of the gaze as outlined in his seminal work, The Four Fundamental Concepts of Psychoanalysis. Students explore Lacan’s theory of the gaze not as mere perception but as an object that elicits desire, anxiety, and a sense of symbolic power. Michel Foucault’s concept of "Panopticism," presented in Discipline and Punish, further contextualizes the relationship between visibility, control, and power structures. Additionally, contemporary critiques of surveillance capitalism, notably Shoshana Zuboff’s extensive analysis in The Age of Surveillance Capitalism, provide a modern context for students to explore the ethical blindness perpetuated by AI-mediated surveillance systems. The module culminates in a practical threat-modeling exercise, where students identify and ethically critique the psychological impacts and societal implications of deploying advanced AI surveillance technologies in urban environments.


  • Core Texts: Michel Foucault’s “Panopticism” excerpt, Lacan on the gaze, contemporary critiques of surveillance capitalism (Zuboff).

  • Concepts: Ethical blindness, the gaze as symbolic power, AI as apparatus.

  • Activity: Threat-modeling exercise for urban surveillance technology; students identify and ethically critique the psychological impact of AI-mediated surveillance systems.


Final Project: Ethical AI Design and Human Responsibility

  • Objective: Students select an emerging AI technology (e.g., facial recognition, algorithmic hiring, generative AI in education), research technical aspects, and analyze societal impact through ethical and psychoanalytic lenses discussed in class.

  • Deliverables: Technical summary, ethical analysis report, and practical recommendations for ethically responsible design and deployment.


Teaching Methods and Pedagogical Approach

  • Philosophical Grounding: Using Socratic questioning, conceptual analysis, and philosophical rigor, students develop sophisticated critical thinking and ethical reasoning.

  • Interactive Learning: Labs, debates, and collaborative exercises ensure active learning and direct application of abstract theories.

  • STEM Integration: Aligning ethical topics directly with technical topics covered in computer science and engineering curricula to highlight real-world relevance.

  • Career Relevance: Explicit discussions on the role of ethics in STEM careers, emphasizing accountability, transparency, and ethical leadership in technological innovation.


Module 6: Beyond the Symbolic Order – Posthumanism, Practical Ethics, and Non-Deterministic AI

This module critically expands our analysis of AI ethics by engaging with posthumanism, practical ethical frameworks, and non-deterministic AI models, all of which contrast in important ways with the Lacanian-Derridean framework that has structured our discussions thus far. While our previous modules have explored how AI ethics operates within the discourse of the university and the phallogocentric drive for mastery, this module challenges some of those assumptions by considering alternative paradigms that do not center symbolic foreclosure, disavowal, or undecidability in the same way.


Posthumanism (drawing from Haraway, Braidotti, and Hayles) presents a framework that moves beyond the anthropocentric subject and explores how AI might be understood relationally rather than as an extension of human mastery. Practical ethics approaches (including virtue ethics, deontological ethics, and consequentialist models) shift the focus from ideology critique to normative ethical reasoning, raising the question of whether AI ethics can be proceduralized in a way that still accounts for complexity. Finally, non-deterministic AI models (such as stochastic machine learning systems, emergent AI behavior, and quantum AI) complicate the claim that AI is structurally bound to phallogocentric determinacy. Together, these three alternative frameworks—posthumanism, practical ethics, and non-deterministic AI—offer points of departure from the Lacanian-Derridean approach, forcing us to consider whether our critique of AI ethics needs to account for more heterogeneity in AI discourse.


Learning Objectives

  1. Differentiate posthumanist perspectives from Lacanian-Derridean critiques – Understand how posthumanism challenges anthropocentrism and explores AI beyond human desire and mastery.

  2. Examine practical ethical frameworks – Engage with applied ethical approaches (virtue ethics, deontology, consequentialism) and assess their relevance to AI governance.

  3. Analyze non-deterministic AI models – Investigate how probabilistic and emergent AI systems disrupt classical deterministic critiques of AI.

  4. Reassess the critique of AI discourse – Reflect on whether AI ethics can be theorized beyond disavowal, undecidability, and mastery.


Key Topics

1. Posthumanism and the Challenge to Phallogocentrism
  • Haraway’s cyborg feminism and the dissolution of rigid human-machine boundaries

  • Braidotti’s posthuman subjectivity: AI as an extension of relational becoming rather than mastery

  • Hayles and the materiality of information: How does AI challenge the idea of an autonomous human subject?

  • Posthuman ethics: What would it mean to consider AI as an ethical agent outside the Lacanian structure of desire?


2. Practical Ethics and the Question of AI Governance
  • Virtue ethics and AI: Can AI be designed to promote human flourishing?

  • Deontological ethics: Is it possible to encode duty-based ethical principles into AI?

  • Consequentialism and AI: Can AI ethics be reduced to cost-benefit calculations?

  • The limits of regulatory AI ethics: Does proceduralizing ethics inevitably lead to foreclosure, or can it incorporate ambiguity?


3. Non-Deterministic AI and Emergent Complexity
  • The challenge of stochastic AI models: Do probabilistic AI systems defy the deterministic logic of mastery?

  • Reinforcement learning and adaptation: Can AI systems develop agency-like unpredictability?

  • Quantum AI and computational undecidability: Could quantum computing introduce genuine unpredictability into AI?

  • Emergent AI behavior: Does AI’s capacity to self-modify complicate the critique of AI as structurally bound to determinacy?
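One caveat worth bringing into the discussion of stochastic models: in deployed systems, "randomness" is usually pseudorandomness, which is fully deterministic given its seed. A minimal sketch (invented names; only Python's standard `random` module is assumed):

```python
import random

# "Stochastic" sampling in most ML systems is pseudorandom: fix the seed
# and the entire sequence of "random" decisions replays identically.
def sample_actions(seed, n=10):
    rng = random.Random(seed)
    return rng.choices(["explore", "exploit"], weights=[0.3, 0.7], k=n)

run1 = sample_actions(seed=42)
run2 = sample_actions(seed=42)
print(run1 == run2)  # True: the "non-determinism" is reproducible
```

Probabilistic behavior therefore does not by itself escape determinism; genuinely physical randomness (e.g., quantum sources) changes the picture, which is partly why quantum AI is treated separately above.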


4. Reassessing the Lacanian-Derridean Critique of AI
  • If AI is not strictly deterministic, does it still disavow undecidability?

  • Does posthumanism offer a way to conceptualize AI ethics that escapes the critique of mastery?

  • Can practical ethics provide meaningful regulatory frameworks without reinforcing phallogocentric foreclosure?

  • Should AI ethics be thought of as a hybrid space where multiple theoretical perspectives intersect?


Required Readings

  1. Donna Haraway, "A Cyborg Manifesto"

  2. Rosi Braidotti, The Posthuman

  3. N. Katherine Hayles, How We Became Posthuman

  4. Shannon Vallor, Technology and the Virtues

  5. Bernard Stiegler, "The Age of Automation and the Future of the Human"

  6. Selected sections from "Enabling Cyborg Repair"


Seminar Discussion Questions

  1. How does posthumanism challenge the Lacanian-Derridean critique of AI ethics?

  2. Can practical ethics offer an alternative to ideological critique in AI governance?

  3. If AI is not strictly deterministic, does this weaken the critique that AI enacts phallogocentric control?

  4. What are the strengths and weaknesses of non-deterministic AI models in resisting mastery?

  5. How does the cyborg model in "Enabling Cyborg Repair" offer a counterpoint to both humanist and phallogocentric critiques of AI?


Assignments

Essay Prompt: “Compare and contrast the Lacanian-Derridean critique of AI ethics with a posthumanist or practical ethics perspective. Which framework offers a more effective way to conceptualize AI ethics?”

Project: “Develop a speculative AI ethics model that integrates insights from posthumanism, practical ethics, or non-deterministic AI. Present your findings as a theoretical paper, interactive model, or conceptual design proposal.”


Course Outcomes

Students completing this course will:

  • Recognize and critique deterministic assumptions underlying AI narratives (e.g., the singularity).

  • Understand undecidability and complementarity as central ethical challenges for AI development and deployment.

  • Apply rigorous psychoanalytic and philosophical critiques to practical ethical problems in AI technology.

  • Develop the ability to articulate and justify ethical positions clearly and persuasively, both in technical and interdisciplinary contexts.


Conclusion

In conclusion, this course equips STEM students at Oregon State University with a robust philosophical and ethical toolkit, enabling them to thoughtfully, critically, and responsibly navigate complex moral dilemmas inherent in the technological innovations they will encounter in their professional careers. Students will not only master the conceptual tools necessary to discern the profound difference between genuine human thought and computational mimicry but will also appreciate the essential role of human judgment and accountability in the ethical governance of AI systems. Recognizing and respecting the inherent undecidability of meaning, ethics, and human subjectivity, students will learn to engage with technology ethically, challenging deterministic and reductionist narratives surrounding AI. Ultimately, graduates of this course will be well-positioned to emerge as ethical leaders who can confidently address the nuanced and interdisciplinary challenges presented by AI technologies, ensuring responsible innovation that centers on human values and societal well-being.

 
 
 
