U11 - AI, Ethics, Undecidability, and the Limits of Analytic Philosophy: Working Through the Symptoms
- Eric Anders
- Mar 19
Updated: Mar 20
Course Level: Upper-division undergraduate (no prerequisites required)
Institution: Oregon State University, School of History, Philosophy, and Religion
Term: Fall 2025 (11-week quarter)
Instructor: Eric W. Anders, Ph.D., Psy.D.
Class Times: [Days] [Time] at [Location]
Course Overview
AI Ethics, Chaos Theory, Natural Language Models, and the Denial of Undecidability
The deliberately provocative title of this course serves as a critical starting point—highlighting precisely the symptomatic nature of how we name and understand complex phenomena within technology, language, and ethics. From a psychoanalytic perspective, particularly drawing upon Freud and Lacan, "working through" (Durcharbeitung) involves rigorously and repeatedly engaging with a symptom or symbolic conflict, moving beyond mere intellectual insight toward a deeper subjective transformation. This transformative process unfolds within the analytic transference, where unconscious structures sustaining the subject’s suffering become manifest and can thus be confronted and reconfigured. Lacan describes this critical analytic journey as "traversing the fantasy," dismantling the misrecognitions that shape the subject’s desire rather than merely replacing neurosis with conscious resolution. Freud famously encapsulated this goal with the phrase "Wo Es war, soll Ich werden" ("Where id was, there ego shall be"), not as an elimination of the unconscious but as a profound restructuring of the subject’s relation to it. Thus, health in psychoanalysis is never a simplistic absence of symptoms, but rather a continuous and active confrontation with the symptomatic formations of the unconscious—formations that inherently resist neat categorization, easy resolution, and thus always return.
In this course, we approach the misnomers explicitly embedded in its title—"AI Ethics," implying moral agency within computational systems rather than recognizing human ethical responsibility; "Chaos Theory," misunderstood as mere disorder rather than a theory describing a form of complex, emergent order (and crucial to understanding AI and how it works); and "Natural Language Models," erroneously perceived as genuinely natural rather than inherently algorithmic, and thus unnatural—as symptomatic manifestations revealing deeper unconscious desires and cultural anxieties. Specifically, the terms "AI Ethics" and "Natural Language Models" embody what Joseph Weizenbaum identified as the ELIZA Effect: our persistent psychological tendency to project meaning, intentionality, and consciousness onto computational systems that merely simulate these qualities. In contrast, the misnaming of "Chaos Theory" signals a symptomatic denial of undecidability, reflecting a broader cultural fantasy of deterministic predictability and stable order within inherently complex and unpredictable systems. Collectively, these misnomers serve as symptoms pointing to deeper unconscious desires to deny or disavow complexity and undecidability in human subjectivity, ethics, and meaning-making.
Understanding the ethical dimensions of artificial intelligence thus demands a grasp of how these technologies fundamentally operate. Here, chaos theory and complexity science become critical conceptual frameworks for unraveling the intricate workings of advanced AI systems, particularly sophisticated Natural Language Models. Chaos theory examines how deterministic yet nonlinear systems can produce outcomes that are at once unpredictable and structured, offering insights into the inherently chaotic and emergent dynamics of AI systems. Although theoretically governed by algorithmic logic, AI models regularly exhibit unexpected behaviors due to intricate interactions among their extensive parameters, data inputs, and computational processes. This recognition disrupts conventional notions of AI as fully controllable or entirely predictable, urging a profound shift in how ethical frameworks are developed around technology—towards greater acknowledgment of uncertainty, complexity, and the inherent limits of computational determinism.
Additionally, chaos theory highlights AI systems' acute sensitivity to initial conditions, training datasets, and even minor algorithmic modifications, reinforcing critical ethical responsibilities around data curation, transparency, interpretability, and accountability. Even seemingly small biases or inaccuracies in datasets or algorithms can lead to disproportionate and ethically consequential outcomes—analogous to a butterfly effect within technological domains. Such potential for cascading consequences necessitates meticulous ethical oversight and continuous vigilance in both design and deployment stages.
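To see this sensitivity concretely, here is a minimal Python sketch (illustrative only, not course material) that iterates the logistic map, a textbook chaotic system, from two starting points that differ in the tenth decimal place:

```python
# Sensitive dependence on initial conditions with the logistic map
# x -> r*x*(1 - x) in its chaotic regime (r = 4.0). Two trajectories
# that start a hair apart diverge completely within a few dozen steps.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # perturb the 10th decimal place

for step in (0, 10, 25, 50):
    print(f"step {step:2d}: |a - b| = {abs(a[step] - b[step]):.3e}")
# The gap grows from 1e-10 to order 1: a tiny input change produces
# a completely different outcome -- the "butterfly effect" in miniature.
```

The analogy to AI is loose but instructive: a model's training pipeline is a vastly higher-dimensional iterated system, which is why small perturbations to data can propagate into large behavioral differences.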
Moreover, complexity science underscores the interaction between complex AI systems and equally complex social systems, emphasizing the potential for emergent ethical dilemmas, unintended discrimination, misinformation cascades, and societal polarization. Ethical frameworks must therefore expand beyond isolated technical considerations to incorporate awareness of broader societal and systemic interactions, understanding how localized technological decisions can unpredictably reverberate globally and culturally.
Yet, a crucial dilemma arises here, frequently overlooked by contemporary "AI Ethics" and broader discourses on the ethics of emerging technologies: While AI should never independently assume moral decision-making roles, the reality of unpredictability inherent in chaotic and complex systems means we must also acknowledge the necessity of AI possessing mechanisms to self-regulate to a certain degree. Human ethical oversight alone will not suffice due to the speed, complexity, and emergent properties of AI behavior, particularly as it scales and interacts with similarly unpredictable systems. Thus, a profound ethical paradox emerges—AI systems must be endowed with some level of automated ethical checks or "self-policing" capabilities to manage their unpredictable dynamics effectively, yet simultaneously, we must never fully abdicate human ethical accountability or genuinely delegate moral agency to AI.
Throughout the course’s rigorous process of "working through," students engage deeply with philosophical, psychoanalytic, and ethical theories—including the works of Freud, Lacan, Derrida, Plotnitsky, and Weizenbaum—to develop critical capacities necessary for recognizing and resisting simplistic narratives of technological inevitability and reductive conceptions of human thought and subjectivity. The course foregrounds the genuine nature of psychological and ethical health: an informed and mature acceptance of undecidability as intrinsic to human existence, a recognition of the irreducible complexity inherent in meaning-making, and a thoughtful assumption of responsibility regarding ethical technology deployment and governance.
Ultimately, this course strategically bridges technical mastery and philosophical depth, systematically confronting fundamental philosophical questions about authentic meaning-making versus computational mimicry. Through critical examination of assumptions underlying AI discourse—employing Socratic questioning, conceptual analysis, and interdisciplinary critiques—students become equipped not only to navigate ethical challenges associated with technological innovation but also to recognize unequivocally that ethical responsibility resides solely with human actors who design, deploy, and regulate AI systems, never within the AI itself.
Structured into interconnected modules, the course actively fosters critical thinking, ethical reasoning, and conceptual clarity, preparing students comprehensively to guide AI’s ethical development and integration into society with nuanced, informed, and ethically responsible perspectives.
Course Description
Condensed and Accessible Version
This course begins with a deliberately provocative title to highlight how we misname and misunderstand complex phenomena in technology, language, and ethics. Psychoanalysis, particularly Freud and Lacan, teaches that "working through" (Durcharbeitung) involves confronting and re-engaging with symptoms beyond mere intellectual insight. True transformation happens by facing the unconscious conflicts that structure desire and thought. In Lacanian terms, this means “traversing the fantasy”—dismantling comforting misrecognitions rather than simply replacing neurosis with a rational explanation. As Freud put it, “Where id was, there ego shall be”—not to erase the unconscious, but to change our relationship with it. Psychological health, then, is not the absence of symptoms but the ability to navigate the uncertainties and contradictions of the unconscious.
This course applies that psychoanalytic perspective to contemporary discourse around AI, chaos theory, and natural language models. Each term in the course title—"AI Ethics," "Chaos Theory," and "Natural Language Models"—contains symptomatic misrepresentations or misnomers:
"AI Ethics" suggests that artificial intelligence can possess moral agency, when in reality, ethical responsibility lies entirely with the humans who design, deploy, and regulate these systems. AI does not think, understand, or make ethical judgments—it follows programmed algorithms and statistical patterns. The real ethical questions arise from human decisions about how AI is created, used, and integrated into society, highlighting the need for accountability in its development and application.
"Chaos Theory" is often mistaken for randomness when it actually explains how complex patterns emerge from seemingly chaotic systems. It shows that small changes in initial conditions can lead to significant but structured outcomes, known as the butterfly effect. Rather than pure disorder, chaos theory reveals nonlinear dynamics, where patterns exist but remain unpredictable. This applies to fields like weather, economics, and even the unconscious, highlighting how order can arise from apparent chaos..
"Natural Language Models" are misleadingly named, as they neither develop language naturally nor truly understand it. They generate text by predicting word patterns based on vast datasets, but lack meaning, intention, or comprehension. Despite this, they are often treated as if they genuinely "understand" and "communicate," leading to misconceptions about their abilities and limitations.
These misnomers function as symptoms of deeper unconscious desires—often unconscious attempts to deny complexity and undecidability in favor of stable, deterministic explanations.
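The "predicting word patterns" point above can be made concrete in a few lines of code. The following toy bigram model is our illustration, not how production systems are built (they use neural networks over subword tokens), but the principle is the same: fluent-looking output from frequency counts, with no meaning anywhere in the loop.

```python
# A toy bigram "language model": generate text purely by counting which
# word followed which in a tiny corpus. No comprehension is involved.
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# For each word, count the words that follow it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start="the", length=8):
    words = [start]
    for _ in range(length - 1):
        candidates = follows[words[-1]]
        if not candidates:
            break
        # Sample the next word in proportion to how often it followed.
        nxt = random.choices(list(candidates), weights=candidates.values())[0]
        words.append(nxt)
    return " ".join(words)

print(generate())  # e.g. "the cat sat on the rug" -- fluent-ish, meaningless
```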

The ELIZA Effect, identified by Joseph Weizenbaum, describes our tendency to attribute meaning, consciousness, and agency to AI systems that merely simulate understanding. This illusion underpins both AI Ethics and Natural Language Models, reinforcing the fantasy that computation can replace human ethical reasoning and linguistic meaning-making. AI Ethics falsely implies that AI can engage in moral deliberation, masking the fact that ethical responsibility remains entirely human. Likewise, Natural Language Models, despite their fluency, operate through statistical patterning without comprehension or intentionality, yet they are often treated as genuine thinking entities. In both cases, the ELIZA Effect sustains a broader cultural fantasy that computation can fully model ethics, thought, and language, obscuring the fundamental undecidability at the heart of human subjectivity and meaning.
Similarly, the misunderstanding of chaos theory reflects a cultural fantasy of deterministic mastery—the belief that complexity can always be reduced to predictable models. Yet, as Plotnitsky argues, chaos theory remains within the classical framework, maintaining determinism rather than embracing the undecidability revealed in quantum mechanics, particularly through complementarity. While it acknowledges sensitivity to initial conditions, it still assumes underlying patterns can be mapped. In contrast, quantum mechanics exposes a deeper, fundamental indeterminacy, where reality itself resists full articulation. Rather than revealing undecidability, chaos theory ultimately reinscribes determinism onto systems that are genuinely indeterminate.
This course critically examines these symptomatic illusions—fantasies often disguised as scientific truths—and challenges students to "work through" them, much like unconscious conflicts in psychoanalysis. Through this process, students confront the underlying desires and misrecognitions that sustain these illusions, fostering a deeper, more rigorous engagement with the complexities of AI, ethics, and meaning.

Without requiring deep theoretical engagement, this course introduces students to key insights from Freud, Lacan, Derrida, Plotnitsky, and Weizenbaum to help them critically examine the illusions of computational determinism and the ethical consequences of denying undecidability. Students will learn to distinguish true meaning-making from computational mimicry, recognizing that while AI can generate fluent language, it lacks intentionality, self-awareness, and the ability to interpret meaning within lived experience. Misunderstanding this distinction leads to serious consequences, such as falsely attributing moral responsibility to AI in surveillance, autonomous decision-making, and governance—ultimately masking human accountability behind the illusion of technological objectivity.
Students will explore whether AI can truly engage in ethical reasoning or if the very phrase "AI ethics" shifts responsibility away from human actors. The course is structured around interconnected modules designed to develop critical thinking and ethical clarity, ensuring that students emerge prepared to navigate the ethical challenges posed by AI and computational systems.
Ultimately, the course cultivates a mature acceptance of undecidability as fundamental to human experience and an ethical awareness that responsibility for technology lies with humans, not machines. By working through the symptomatic misrecognitions in AI discourse, students will learn not just to critique, but to act responsibly in the ethical governance of emerging technologies.
Course Objectives
By the conclusion of this course, students will be able to:
Explain core concepts of AI and human intelligence: Define what artificial intelligence is and distinguish it from human cognitive abilities (Internet Encyclopedia of Philosophy, “Ethics of Artificial Intelligence”). Discuss how AI systems (like machine learning models) process information versus how humans learn and use language.
Understand language and meaning in AI: Describe how natural language models (e.g. chatbots) work at a high level and articulate the difference between syntactic processing and true semantic understanding (CliffsNotes, “The Ethics of Artificial Intelligence”). Reflect on how human language acquisition differs from AI language modeling.
Apply ethics frameworks to AI scenarios: Identify major ethical frameworks – utilitarianism, deontology, virtue ethics, etc. – and apply them to real-world AI design dilemmas. For example, evaluate an autonomous car’s decision-making through utilitarian vs. deontological lenses and consider the role of virtues (like transparency or responsibility) in AI development (CliffsNotes, “The Ethics of Artificial Intelligence”).
Analyze and address AI ethical challenges: Examine case studies of ethical issues in AI (such as biased facial recognition or privacy in surveillance) and propose solutions or design improvements. Students will learn to recognize bias in data and algorithms (e.g. how facial recognition can exhibit racial and gender bias) and suggest mitigation strategies (CliffsNotes, “The Ethics of Artificial Intelligence”).
Explain chaos theory’s relevance to AI: Summarize key ideas of chaos theory (e.g. the “butterfly effect”) and how small changes in initial conditions can lead to unpredictable outcomes. Draw connections between chaotic behavior in complex systems and the behavior of AI models – for instance, how slight tweaks in training data can yield wildly different results, impacting fairness (Ferrara, arXiv:2307.05842).
Critically evaluate the Singularity hypothesis: Describe the Technological Singularity concept (exponential AI growth leading to superintelligence) and its popular appeal. Then systematically critique it using logical and scientific arguments: e.g. identifying flawed assumptions about exponential progress, the “complexity brake” that slows breakthroughs (Allen & Greaves, MIT Technology Review), and physical limits on computation (Wikipedia, “Technological singularity”).
Demonstrate ethical reasoning in practice: Through projects and discussions, develop the ability to form well-reasoned arguments about what responsible AI entails. This includes balancing stakeholder interests, anticipating unintended consequences, and using ethical frameworks to justify design choices in AI systems.
Collaborate and communicate effectively: Work in teams to analyze ethical case studies and present coherent arguments. Improve written and oral communication skills by authoring structured essays and delivering presentations on AI ethics topics.
Weekly Schedule and Topics
Note: The schedule below is subject to minor adjustments. Readings should be completed before each week’s first class meeting to enable informed discussion. “Assignments/Activities” include what is due or happening that week. All readings will be provided via links or PDF on the course site (no paid textbook required).
Week 1: Introduction to AI and Ethical Thinking
Topics: What is Artificial Intelligence (AI)? Overview of AI types (rule-based vs. learning systems). Defining “intelligence” in machines vs. humans. Why ethics matters in AI – introduction to key ethical issues in technology (privacy, bias, accountability). Survey of student perspectives on AI in society.
Readings:
Brynjolfsson & McAfee, “The Job That AI Can’t Steal” – (MIT Technology Review article introducing AI capabilities and limits in plain terms).
Internet Encyclopedia of Philosophy: “Ethics of Artificial Intelligence” (Section I: “What is AI and its Ethical Relevance”) – definition of AI and overview of short-term vs. long-term AI issues.
Optional: John McCarthy, “What is AI?” – (short webpage by AI pioneer defining AI).
Assignments/Activities:
In-class: Discussion of everyday AI examples (virtual assistants, recommendation algorithms) – what “intelligence” do they exhibit? Small-group brainstorm: ethical dilemmas students have heard of in AI.
Homework (due Week 2): Reflection Essay – “In your own words, define artificial intelligence and describe one ethical concern you have about it.” (1-2 pages, informal). This ungraded diagnostic helps gauge writing and baseline understanding.
Week 2: Human Intelligence vs. Machine Intelligence
Topics: Comparison of human cognition and AI computation. How do brains learn and perceive the world versus how algorithms do? Introduce basic neuroscience vs. computer architecture (at a conceptual level). The Turing Test and what it means for a machine to “think.” Discussion of consciousness – do we need it for intelligence? Case: IBM Watson and Deep Blue as early milestones – did they think or just brute-force compute?
Readings:
Alan Turing (1950), “Computing Machinery and Intelligence” (excerpts) – the Imitation Game proposal.
Steven Pinker, “Mind over Meat” – (essay on differences between brains and computers, written for a general audience).
Optional: Pamela McCorduck, Machines Who Think (Chapter 1 historical overview).
Assignments/Activities:
In-class: Debate: “Can machines think, or only simulate thinking?” We will stage a mini-debate using Turing’s ideas – one side advocating Turing’s view that if behavior is indistinguishable, it’s thinking; the other side skeptical (introducing Searle’s Chinese Room scenario, which we explore next week).
Homework: Begin forming Project Teams (3-4 students each) and brainstorming possible topics for the final Ethics in AI Project (due in Week 10). A short team proposal is due in Week 4.
Week 3: Language, Understanding, and AI (Natural Language Models)
Topics: The centrality of language in human intelligence. How humans acquire language (briefly) vs. how AI processes language. Natural Language Processing (NLP) and introduction to language models from early chatbots (ELIZA) to modern large language models (GPT-3/4). Do AI language models understand meaning or just manipulate symbols? We explore Searle’s Chinese Room thought experiment, which argues that syntax is not semantics (CliffsNotes, “The Ethics of Artificial Intelligence”). Implications for AI: can an AI ever truly understand, or is it always a “stochastic parrot”?
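Searle’s point can even be caricatured in code. The sketch below is a deliberately crude illustration (the phrase mappings are made-up placeholders, not a real rule book): the function answers questions by pure symbol lookup, with no understanding anywhere in the system.

```python
# A "Chinese Room" in miniature: formal symbol manipulation without
# semantics. Neither this function nor its author needs to know Chinese.

RULE_BOOK = {
    "你好吗?": "我很好, 谢谢.",        # "How are you?" -> "I'm fine, thanks."
    "今天天气如何?": "今天天气很好.",   # "How's the weather?" -> "It's nice today."
}

def chinese_room(question: str) -> str:
    """Apply purely formal lookup rules; no meaning is processed here."""
    return RULE_BOOK.get(question, "对不起, 我不明白.")  # "Sorry, I don't understand."

print(chinese_room("你好吗?"))
# A fluent answer comes out, yet the system understands nothing:
# syntax manipulation is not semantic understanding.
```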
Readings:
John Searle (1980), “Minds, Brains, and Programs” (excerpt on the Chinese Room argument) – highlights difference between processing symbols and understanding them (CliffsNotes, “The Ethics of Artificial Intelligence”).
Emily Bender et al. (2021), “On the Dangers of Stochastic Parrots” (Introduction only) – outlines concerns with large language models and the illusion of understanding.
Medium blog post: “How GPT-3 Works – an Explanation for Non-Techies” – accessible breakdown of a large language model’s functioning.
Assignments/Activities:
In-class: Chinese Room role-play – one student simulates the “room” following lookup rules to answer Chinese questions (without knowing Chinese). Class discusses: does the “room” understand Chinese? Connect to how Siri or Alexa might operate.
Lab exercise: Experiment with a public language model (e.g. OpenAI’s ChatGPT or an open-source alternative) using a provided prompt list. Observe where it succeeds and fails at “understanding.” (A minimal starting script appears after this list.)
Homework (due Week 4): Short response (2 pages): Describe a situation where an AI chatbot’s lack of true understanding could lead to a miscommunication or ethical problem (e.g. giving harmful advice because it doesn’t understand context or morality). Reference Searle’s argument in your answer.
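For the lab exercise above, a minimal starting script might look like the following. It assumes the Hugging Face transformers package and the small public GPT-2 model as the open-source alternative; substitute whichever model or prompt list the course actually provides.

```python
# Probe a small public language model with prompts that test "understanding."
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompts = [
    "The capital of France is",
    "If I am allergic to penicillin, I should take penicillin because",
]
for p in prompts:
    out = generator(p, max_new_tokens=20, num_return_sequences=1)
    print(out[0]["generated_text"])
    print("---")
# Compare where fluent continuation succeeds (pattern completion) and where
# it fails (the model will happily continue false or unsafe premises).
```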
Week 4: Chaos Theory and Complexity in Intelligent Systems
Topics: Introduction to chaos theory – deterministic systems can exhibit unpredictable (chaotic) behavior. Key concept: sensitive dependence on initial conditions (the “butterfly effect”). Examples: weather systems (Lorenz’s model), double pendulum, and ecological populations. We then draw analogies to AI: complex AI systems (like deep neural networks) can be seen as high-dimensional systems sensitive to small changes. A tiny tweak in training data or parameters might cause a large change in outcomes – raising challenges for testing and fairness (Ferrara, arXiv:2307.05842). We also discuss emergent behavior: how intelligence might arise at the “edge of chaos” (complex enough to learn, structured enough not to be random).
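To make the Lorenz example tangible, here is a minimal simulation sketch (simple Euler integration, chosen for readability rather than numerical care): two runs whose starting points differ by one part in a hundred million visibly separate within the simulated window.

```python
# Chaotic divergence in the Lorenz system (the weather model above).
# Two trajectories start 1e-8 apart in y; watch the separation grow.

def lorenz_run(y0, steps=10000, dt=0.002, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = 1.0, y0, 1.05
    traj = []
    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        traj.append((x, y, z))
    return traj

a = lorenz_run(1.0)
b = lorenz_run(1.0 + 1e-8)

for i in (0, 2500, 5000, 9999):
    dist = sum((p - q) ** 2 for p, q in zip(a[i], b[i])) ** 0.5
    print(f"step {i:5d}: separation = {dist:.3e}")
# The gap grows by many orders of magnitude from an imperceptible
# initial difference -- sensitive dependence in action.
```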
Readings:
James Gleick, “Chaos: Making a New Science” (Chapter 1 excerpt) – introduces the butterfly effect in an accessible narrative.
Emilio Ferrara (2024), “The Butterfly Effect in AI Systems: Implications for Bias and Fairness” (Abstract) – explains how minor biases or changes in data can lead to disproportionately large effects in AI outcomes (arXiv:2307.05842).
Optional: Melanie Mitchell, “Complexity: A Guided Tour” (Chapter on Complex Adaptive Systems) – sections on how complexity and chaos relate to biology and possibly AI.
Assignments/Activities:
In-class demonstration: Double Pendulum Chaos – the instructor will demonstrate a double pendulum or show a video of it. We’ll discuss why each run diverges even if started almost the same way, illustrating chaos visually.
(File:Chaos Theory & Double Pendulum - 1.jpg - Wikimedia Commons) Figure: Long-exposure image of a double pendulum’s motion, illustrating sensitive dependence on initial conditions (each light trail is unique even with tiny differences in start) (File:Chaos Theory & Double Pendulum - 1.jpg - Wikimedia Commons). In class, we connect this “butterfly effect” to AI: for instance, a slight change in training input might dramatically alter a neural network’s behavior, an important consideration for AI safety and fairness.
In-class: Small-group analysis – each group gets a scenario of an AI system behaving unpredictably (e.g. an image classifier that gives bizarre errors when an image is slightly modified, or a stock trading algorithm that spiraled out of control). Groups identify possible “chaos” factors at play and share how they would mitigate them (e.g. better testing or constraints).
Project Milestone: Teams submit a Project Proposal (1 page) outlining their chosen AI ethics case or application to analyze, why it’s important, and a plan for research. (Instructor will give feedback in Week 5.)
Week 5: Foundations of AI Ethics – Frameworks and Principles
Topics: Building an ethical toolkit. We cover major ethical theories in a crash-course fashion and apply them to technology:
Utilitarianism: Greatest good for the greatest number – how to measure “good” in AI outcomes (e.g. programming a self-driving car to minimize total harm in an accident scenario) (CliffsNotes, “The Ethics of Artificial Intelligence”).
Deontological Ethics: Duty and rules – e.g. AI in healthcare following strict privacy rules or an absolute no-harm rule, regardless of outcomes (CliffsNotes, “The Ethics of Artificial Intelligence”).
Virtue Ethics: Focus on moral character – what virtues should AI developers and organizations cultivate (honesty, accountability, empathy) (CliffsNotes, “The Ethics of Artificial Intelligence”)? How might an AI be designed to reflect virtuous values (if at all possible)?
Ethics of Care and others: Brief mention of alternative frameworks (e.g. feminist or care ethics focusing on relationships, or justice-oriented approaches focusing on rights and fairness).
We also introduce professional ethics codes (ACM/IEEE Code of Ethics for AI/Computing) and global principles. For example, the EU’s Guidelines for Trustworthy AI emphasize four key principles – human autonomy, prevention of harm, fairness, explicability – leading to requirements like transparency and accountability (“Concepts of Ethics and Their Application to AI,” PMC). These principles help bridge abstract ethics to concrete design guidelines.
Readings:
Brian Christian & Tom Griffiths, “An Ethical Algorithm?” – (short piece on attempts to encode ethics in algorithms).
Clifford D. Simonsen, “Three Frameworks for AI Ethics” – (overview of utilitarianism, deontology, virtue ethics applied to AI, with examples) (CliffsNotes, “The Ethics of Artificial Intelligence”).
EU High-Level Expert Group on AI (2019), “Ethics Guidelines for Trustworthy AI” (pp. 1-13) – read the sections introducing the 4 principles and 7 requirements (“Concepts of Ethics and Their Application to AI,” PMC).
Optional: ACM Code of Ethics (2018) – skim sections relevant to AI (1.1, 1.2, 1.4, 2.5 on harm, fairness, transparency, etc.).
Assignments/Activities:
In-class: Case mini-analysis – “Trolley Problem for Self-Driving Cars.” We pose a simplified scenario: a car must choose between two bad outcomes. Students individually decide what they think is ethical, then identify which framework their choice aligns with (utilitarian trade-off or deontological rule). We then discuss as a group how different frameworks lead to different programming choices for the car – highlighting there’s no easy answer. (A toy code sketch contrasting the two decision rules appears after this list.)
In-class: Interactive lecture on the frameworks with quick check questions (e.g., Is “the ends justify the means” a utilitarian or deontological idea?).
Assignment (due Week 6): Essay 1 – Ethical Frameworks in Action: Analyze a given scenario (assigned by instructor, e.g. Autonomous Drone Surveillance or AI Hiring Tool with Bias) from two perspectives: first as a utilitarian, then as a deontologist. What decision or policy would each approach recommend, and why? Finally, state your own view and which ethical approach (or combination) you find most appropriate. (~4 pages.) This is a structured essay – it should have an introduction, analysis sections for each framework, and a reasoned conclusion.
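As referenced in the mini-analysis above, here is a deliberately toy sketch of how the two frameworks can be “operationalized” to opposite conclusions on the same scenario. It is our construction for discussion, with made-up harm numbers; it is not a real autonomous-vehicle planner.

```python
# Toy comparison: utilitarian vs. deontological choice rules.
# Each option: (description, expected harms by party, actively_harms_bystander)
options = [
    ("stay in lane, brake hard", {"passengers": 2, "pedestrians": 0}, False),
    ("swerve onto sidewalk",     {"passengers": 0, "pedestrians": 1}, True),
]

def utilitarian_choice(opts):
    """Pick whichever option minimizes total expected harm."""
    return min(opts, key=lambda o: sum(o[1].values()))

def deontological_choice(opts):
    """Reject any option that actively violates a 'do not harm' rule."""
    permitted = [o for o in opts if not o[2]]
    return permitted[0] if permitted else None

print("Utilitarian:", utilitarian_choice(options)[0])      # swerve (1 harmed < 2)
print("Deontological:", deontological_choice(options)[0])  # stay in lane
# Same scenario, opposite "programming choices" -- the point of the exercise.
```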
Week 6: Case Studies in AI Ethics – Bias, Fairness, and Accountability
Topics: We dive into concrete case studies focusing on bias and fairness in AI. Potential cases:
Case 1: Biased Facial Recognition: Several facial recognition systems have been found to misidentify people of color and women at much higher rates (CliffsNotes, “The Ethics of Artificial Intelligence”). We examine a real study (e.g. Joy Buolamwini’s “Gender Shades” project) and discuss sources of bias (training data not diverse, historical biases in labels) and consequences (wrongful arrests, discrimination). We apply ethics frameworks: Is it ethical to deploy such systems? How to improve them (technical fixes, policy/regulation)?
Case 2: Algorithmic Decision-Making in Criminal Justice (COMPAS): COMPAS risk scores (used to predict recidivism) were accused of racial bias. We analyze how fairness can be defined (equal false positive rates? equal accuracy?) and what this means ethically (justice, due process). (A small worked example comparing false positive rates across groups follows this list.)
Accountability and the “Black Box” Problem: Many AI models (like deep learning) are not easily interpretable. Case: an AI system in healthcare made a recommendation that caused harm – who is responsible, the doctor or the algorithm designer? We discuss the importance of explicability (as per EU guidelines) and possible approaches like explainable AI, auditing, and regulation for accountability.
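As flagged in Case 2 above, fairness definitions are concrete enough to compute. The sketch below uses made-up toy records (not COMPAS data) to show how a group-wise false positive rate comparison works:

```python
# Comparing false positive rates across groups for a risk-scoring tool.
records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("A", True,  False), ("A", True,  True),  ("A", False, False), ("A", False, False),
    ("B", True,  False), ("B", True,  False), ("B", True,  True),  ("B", False, False),
]

def false_positive_rate(rows):
    """FPR = fraction flagged high-risk among those who did NOT reoffend."""
    negatives = [r for r in rows if not r[2]]
    false_pos = [r for r in negatives if r[1]]
    return len(false_pos) / len(negatives)

for group in ("A", "B"):
    rows = [r for r in records if r[0] == group]
    print(f"group {group}: FPR = {false_positive_rate(rows):.2f}")
# group A: 1 of 3 negatives flagged (0.33); group B: 2 of 3 (0.67).
# A tool can look accurate overall while distributing its errors
# unequally -- the crux of the COMPAS debate.
```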
Readings:
ProPublica investigation: “Machine Bias” – article on COMPAS risk scores and racial bias.
Joy Buolamwini & Timnit Gebru (2018), “Gender Shades” (Executive summary) – on bias in facial recognition.
IEEE Spectrum: “Why AI is Hard to Explain” – short piece on the black-box nature of deep learning and why it’s problematic for accountability.
Optional: Cathy O’Neil, “Weapons of Math Destruction” (Ch. 1) – examples of algorithms with big impacts and no transparency.
Assignments/Activities:
In-class: Jigsaw discussion – Students are assigned one of the two main cases (Facial Recognition or COMPAS) to read in depth. In class, first meet in “expert groups” (all who read the same case) to summarize key points and ethical issues. Then re-form into mixed groups where each student teaches their case to the others. Together, the mixed group compares the two cases: Are the issues of bias similar or different? What common ethical principles emerge (e.g. fairness, justice, non-maleficence)?
In-class: Brainstorm solutions – for each case, what are possible fixes? Consider technical (improve data, algorithms), policy (regulate use, require bias audits), and ethical design (involve affected communities in design, etc.).
Assignment: Teams work on Project Research – by this week, teams should be gathering sources for their final project case study. In place of an exam, a Project Progress Report (2 pages per team, due start of Week 7) will detail: the case or system being studied, key ethical issues identified so far, and at least 3 sources per team member. This ensures projects are on track.
Week 7: AI in the Wild – Autonomous Systems and Ethical Dilemmas
Topics: Examination of AI operating in high-stakes domains and the ethical questions that arise. We look at autonomous vehicles and weapons, and AI in healthcare/medicine as representative domains:
Autonomous Vehicles: Beyond the trolley problem – real dilemmas like how to balance passenger safety vs. pedestrian safety, whether to follow traffic law vs. potentially save lives (e.g. breaking the law to avoid an accident). Also, liability: if an autonomous car causes harm, who is responsible (manufacturer, software developer, owner)? Discussion of recent incidents (e.g. Uber self-driving car fatality) and how ethical design or regulation might have prevented them.
Autonomous Weapons (LAWs): The ethics of AI in warfare – should machines be allowed to make lethal decisions? We outline arguments for and against “killer robots,” including the loss of human judgment/compassion vs. claims of reduced casualties. International efforts (or lack thereof) to regulate AI weapons are noted. This highlights a deontological viewpoint (some argue we have a duty never to delegate killing to machines) vs utilitarian (perhaps fewer soldiers die if robots fight).
AI in Healthcare: AI diagnosis and treatment recommendations (e.g. IBM Watson for Oncology), and the tension between algorithmic advice and doctor expertise. Ethics of trust, transparency, and error – if an AI misdiagnoses, how do we ensure patient safety? Also, privacy issues with AI analyzing medical data.
Readings:
Selected Case Study: “Fatal Crash with Self-Driving Car” (short incident report + commentary).
Peter Asaro, “On Banning Autonomous Lethal Systems” – an argument against autonomous weapons (read intro and conclusion).
Nature Medicine editorial: “The Doctor and the Algorithm” – discusses the need for transparency and validation of AI in healthcare.
Optional: IEEE Ethics in AI Toolkit – section on autonomous systems.
Assignments/Activities:
In-class: Case role-play: The class splits into stakeholder groups for the autonomous car case (engineers, car company execs, victims’ families, regulators). Each group gets 10 minutes to formulate their perspective on what went wrong and what should be done (e.g. stricter testing, new laws, ethical training for engineers). They then share, and we discuss the interplay of technology and ethics in policy.
In-class discussion: Autonomous weapons – students vote on a spectrum from “ban completely” to “allow with regulation” to “no restriction” and explain their positions referencing ethical theories (e.g. a student might say “As a rule (deontologically), I believe killing must always have a human in the loop”). We also consider real-world practicality and what is happening internationally.
Project Work: In the latter part of the week, teams will workshop their projects in class – each team presents a 5-minute lightning update on their case and a key ethical challenge they’re focusing on, to get peer and instructor feedback.
Week 8: The “Singularity” – Utopian Dream or Myth? (Part 1)
Topics: Introduction to the Technological Singularity concept. We explain what futurists like Vernor Vinge and Ray Kurzweil have predicted: a point where AI improves itself exponentially and surpasses human intelligence, causing unfathomable changes (Allen & Greaves, MIT Technology Review). Kurzweil famously predicts this by year 2045 (Wikipedia, “Technological singularity”), based on trends like Moore’s Law and his “Law of Accelerating Returns.” We unpack the reasoning: exponential growth of computing power (Kurzweil’s chart of computations per $1,000 reaching human-brain levels), the idea of recursive self-improvement (an AI that makes an even smarter AI, and so on). We also discuss what the Singularity is supposed to bring – from Kurzweil’s optimistic vision of humans merging with AI and living forever (Allen & Greaves, MIT Technology Review), to sci-fi tropes of AI takeovers. Importantly, we treat these as hypotheses, not fact.
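The arithmetic behind such extrapolations is simple enough to check ourselves. The sketch below uses assumed, purely illustrative constants (the doubling time, today’s compute-per-dollar, and the rough ~1e16 operations-per-second brain estimate are all contested), which is precisely where next week’s critiques will press:

```python
# Back-of-the-envelope version of the "exponential crossover" argument.
brain_ops = 1e16             # assumed rough estimate of brain ops/sec (contested)
compute_per_1000usd = 1e13   # assumed current ops/sec per $1000 (illustrative)
doubling_years = 1.5         # assumed Moore's-Law-style doubling time

years = 0.0
compute = compute_per_1000usd
while compute < brain_ops:
    compute *= 2
    years += doubling_years

print(f"Crossover after ~{years:.0f} years of uninterrupted doubling.")
# ~15 years under these assumptions -- but the whole argument rests on the
# doubling never stopping, and on equating raw ops/sec with intelligence.
```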
Readings:
Ray Kurzweil, “The Singularity is Near” (Chapter 1 excerpt) – Kurzweil’s description of exponential growth and timeline (read to understand his argument in his own words).
Vernor Vinge (1993), “The Coming Technological Singularity” – original essay that coined the term (skim for main idea).
Slate article: “AI Apocalypse? Experts Weigh In” – a compilation of short quotes from AI researchers about the plausibility of a Singularity or superintelligence scenario.
Assignments/Activities:
In-class lecture/discussion: We critically examine the assumptions behind the Singularity. The instructor will walk through Kurzweil’s exponential computing graph and ask: does hardware equal intelligence? What about software and the complexity of the problem? We introduce skepticism here but will delve deeper in Part 2.
Figure: Ray Kurzweil’s chart (Wikimedia Commons) illustrating the exponential growth of computing power (vertical axis on a log scale) relative to brain complexity levels – from insect brain, to mouse, to human, to “all human brains” combined. He uses such data to argue that by ~2045, $1000 of computing will exceed human brain power, leading to superintelligent AI (Wikipedia, “Technological singularity”). In class, we scrutinize this claim: Does exponential hardware growth guarantee an AI that can redesign itself endlessly?
In-class activity: Singularity Fishbowl – Half the class plays the role of Singularity believers (using Kurzweil/Vinge readings to support their stance that it’s coming soon and will be transformative), the other half are skeptics (armed with questions/concerns). Believers sit in an inner circle and discuss their views for 5 minutes while skeptics listen; then skeptics get 5 minutes in the fishbowl to discuss why the Singularity might be a myth or at least not anytime soon. Then open discussion follows. This sets the stage for next week’s deeper debunking.
Homework: Reflection Journal: Given what you’ve learned about the Singularity idea, write a 1-page reflection: Are you convinced by Kurzweil’s argument? Why or why not? Identify one assumption you find questionable and one potential consequence of the Singularity that concerns or intrigues you.
Week 9: Debunking the Singularity – Critical Perspectives (Part 2)
Topics: This week is devoted to systematically dismantling (or at least critically analyzing) the Singularity hypothesis. We present several lines of critique:
The “Complexity Brake”: As proposed by Paul Allen (Microsoft co-founder) and Mark Greaves (Allen & Greaves, MIT Technology Review), progress in understanding and emulating human intelligence might slow dramatically due to the sheer complexity of the brain and cognition. Each new level of detail we discover (neurons, synapses, neurotransmitters, genetic factors, etc.) adds complexity rather than reducing it, potentially braking the rapid progress Kurzweil assumes. We discuss Allen’s argument that achieving a full human-level AI is not just an engineering problem but a deep scientific one that could take decades or centuries – thus “the singularity isn’t near.”
Physical Limits and Diminishing Returns: Citing technologists like Jeff Hawkins, we consider hardware limits. Hawkins points out that even an AI improving itself will hit fundamental physical limits of computation and energy; as he says, even if we speed up computers, “in the end there are limits… We’d end up in the same place… There would be no singularity” (Wikipedia, “Technological singularity”). We also examine that Moore’s Law (computing doubling) has been slowing in recent years and is not guaranteed to continue indefinitely. Exponential trends often level off (S-curves).
Intelligence Is More Than Calculation: Critics like Jaron Lanier and Steven Pinker argue that human intelligence and consciousness involve qualities not captured by raw computing power or current algorithms (Wikipedia, “Technological singularity”). We discuss Pinker’s point that predictions of imminent superhuman AI have repeatedly failed and that such futurism might be more “myth” than science. Also, AI lacks genuine common sense understanding – progress in narrow domains (chess, Go, etc.) did not translate into general intelligence as once assumed.
Ethical and Social Complexities: Even if superintelligence were possible, we touch on the alignment problem (ensuring a super AI’s goals align with human values) and how Singularity enthusiasts gloss over this. Also, what about social acceptance? Societal, legal, and ethical “friction” could slow AI deployment regardless of tech capabilities.
Readings:
Paul Allen & Mark Greaves (2011), “The Singularity Isn’t Near” – read the section on “The Complexity Brake” (Paul Allen: The Singularity Isn’t Near | MIT Technology Review) (Paul Allen: The Singularity Isn’t Near | MIT Technology Review).
IEEE Spectrum: “AI’s Hard Limits” – article summarizing technical obstacles to human-level AI (e.g. energy efficiency of the brain vs silicon, complexity of human cognition, unpredictable research breakthroughs needed).
Steven Pinker, interview excerpt: “Tech Prophets and the Myth of Singularity” – Pinker’s skepticism in his own words (he famously said the Singularity is “a mirage” and explains why).
Optional: Melanie Mitchell (2019), “Artificial Intelligence: A Guide for Thinking Humans” (Chapter 9 “On Intelligence and Hype”) – accessible discussion debunking inflated claims about AI.
Assignments/Activities:
In-class: We compile a list on the board of assumptions vs. reality. Students help fill in: (Assumption) “Computing power = intelligence” vs. (Reality) “No, software and understanding the brain are the bottlenecks.” (Assumption) “An AI can recursively self-improve without limit” vs. “Physical limits and complexity might halt that.” (Assumption) “Intelligence will inevitably seek to expand” vs. “Human values/contexts are needed to define goals.” This exercise helps organize the debunking arguments.
In-class debate: “Will there be a Singularity by 2045?” – a formal debate where one team of students (perhaps those more convinced by the idea) defends the proposition and another team refutes it, using evidence from readings. The rest of the class and instructor will judge based on clarity and use of facts. This hones students’ ability to argue from evidence.
Post-debate summary: Instructor presents the consensus in AI research: most AI experts assign low probability to a near-term singularity, focusing instead on narrow AI challenges. Emphasize that being critical of Singularity hype is part of being a responsible AI practitioner – it keeps focus on real, pressing ethical issues rather than far-future speculation.
Homework (due Week 10): Position Paper: Each student writes a 2-page structured position paper on the Singularity question: “Do you believe an intelligence explosion is possible or likely? Why or why not?” They must support their position with at least two distinct arguments from the course (technical, ethical, historical precedent, etc.) and address one counter-argument. This is a short but formal essay to practice persuasive writing grounded in course content. (This can also double as a portion of the final essay grade.)
Week 10: Future of AI and Course Wrap-up; Student Project Presentations
Topics: In the final week, we shift from Singularity myths to the real future of AI and society in the coming decades, consolidating everything learned. We discuss likely developments in AI (what experts actually foresee in 5-10 years: more autonomous vehicles, AI in medicine, better language models, etc.) and the ethical challenges that will accompany them. Topics include AI and jobs (automation, the future of work), AI and privacy (ubiquitous surveillance vs. regulation), and environmental impacts of AI (energy use of large models). We highlight how the frameworks and critical thinking from this course can be applied to these emerging issues. Finally, the majority of class is devoted to student project presentations, sharing their case study findings.
Readings:
OECD Report: “AI: Scenarios and Actions for the Future” (skim) – outlines near-term AI trends and policy recommendations.
TechCrunch article: “Beyond the Hype: AI’s Real Challenges” – summarizes practical issues like bias, data privacy, and robustness that need tackling (a good concluding perspective).
No additional academic reading – students should focus on preparing presentations and reviewing course materials.
Assignments/Activities:
Student Project Presentations: Each project team will deliver a ~10-minute presentation on their chosen AI ethics case study or design scenario. They should cover: background on the AI system or application, the ethical issues involved, analysis through at least one framework, and their recommendations for addressing the issues (e.g. design changes, policy, or ethical guidelines for implementation). Visual aids (slides) are encouraged. After each presentation, we’ll have a brief Q&A. Attendance is mandatory – students will provide peer feedback and learn from each other’s analyses.
Discussion and Synthesis: After presentations, we’ll identify cross-cutting themes. Students are invited to reflect: What ethical principles came up again and again? How do chaos/complexity tie into real-world cases (unintended consequences)? How has their view of AI’s promise and peril changed since Week 1?
Course Wrap-up: Instructor wraps up with final thoughts – emphasizing that as future engineers, scientists, or informed citizens, the students have a responsibility to carry forward ethical reasoning in AI. The point is not to fear technology, but to guide it conscientiously. Students complete an anonymous course feedback survey. We also revisit the initial brainstorm from Week 1 to see how our answers have evolved.
Final Deliverables: The Final Project Report (a structured written report ~8-10 pages per team) is due by the end of finals week, incorporating any insights from the presentation Q&A. There is no final exam; the project serves as the culminating assessment.
Assignments and Assessment Methods
This course uses a variety of assessment methods designed for active learning and practical engagement. Instead of high-stakes exams on theory alone, you will be evaluated on essays, projects, and participation that encourage you to apply concepts thoughtfully. The breakdown is as follows:
Class Participation and Engagement – 15%: This includes attendance, involvement in discussions, and contribution to in-class activities. Students are expected to come prepared having done the readings and to actively engage in case study analyses, debates, and group work. Quality of participation is valued over quantity – thoughtful comments and respectful listening/responding in discussion.
Reflection Journals and Homework – 10%: Short reflections or analytical responses nearly every week (Weeks 1, 3, 8, etc.). These are informal or semi-formal write-ups (1-2 pages) where you grapple with the week’s material (e.g. personal takeaways on an ethical issue, reaction to the Singularity concept, etc.). These are graded on a completion and thoughtfulness basis (not heavy on grammar) to encourage honest exploration of ideas and connections to course content.
Structured Essays – 20%: Two structured essay assignments (2-4 pages each) will test your ability to form organized, well-supported arguments:
Essay 1 (Week 5): Ethical Frameworks in Action – applying utilitarian vs. deontological reasoning to an AI scenario, with a clear intro, body comparing the outcomes, and conclusion with your stance.
Essay 2 (Week 9): Position Paper on the Singularity – arguing for or against the likelihood of a Singularity, using course evidence and addressing counterpoints.
These essays are expected to have an introduction/thesis, coherent paragraphs, and a conclusion – they are not free-form opinion pieces but structured analyses. Grading will emphasize clarity of argument, use of evidence/readings, and understanding of ethical concepts.
Case Study Progress Report – 5%: A brief team report in Week 6-7 on your project’s status, sources, and preliminary analysis. This is to ensure you are on track and to give feedback early. It will be graded lightly (credit for completion and adequate effort).
Team Project (Case Study) – 30%: A semester-long group project where you investigate an AI application or incident in depth from an ethical perspective. This has several components:
Proposal (Week 4, ungraded feedback).
Progress Report (Week 7, 5% as noted above).
Presentation (Week 10, 10%): As a team, present your findings to the class. Graded on clarity, content, and effectiveness of communication. All team members must have a speaking role.
Written Report (Due Finals Week, 15%): A structured report detailing the case background, ethical analysis (identify issues, apply at least one framework or set of principles), and your recommendations. Should include references. This is graded on depth of analysis, application of course concepts, organization, and writing quality. Peer evaluations will be factored in to ensure fair individual grading (team members will rate each other’s contributions).
Final Quiz on Key Concepts – 10%: Instead of a final exam, there will be a short open-book quiz in Week 10 covering key definitions and concepts from the course (e.g. definitions of utilitarian vs. deontological, what is the butterfly effect, main argument of Chinese Room, etc.). This ensures accountability for learning core content. The quiz will be straightforward if you’ve been engaged (mostly short answer or multiple choice). The goal is to reinforce terminology and main ideas, not to surprise or trick you. (This quiz is minor in weight and more of a review exercise.)
Peer Feedback and Self-Assessment – 10%: At the end of the course, you will submit a brief self-assessment of your learning and contributions, and feedback on peers in your project group. This includes reflecting on what you learned, how you applied yourself, and evaluating your team experience. This helps you introspect on your development and helps the instructor calibrate project grades (exceptional teamwork or issues can be identified). Full credit is given for thoughtful completion.
Grading Scale: Standard A-F scale (90-100% = A range, etc.). Plus/minus grading is used. All assignments will have rubrics provided in advance, so expectations are clear. Late policy: assignments lose 10% of possible points per day late unless an extension is arranged in advance for valid reasons. Team project components should reflect collaborative effort; if issues arise, communicate with the instructor early.
Course Policies and Additional Notes
Prerequisites: None. This course is designed to be accessible to students from STEM fields without prior coursework in ethics or AI. Necessary technical concepts (like how an AI model works at a high level) will be explained in class or in readings. Mathematics will be minimal and used only for conceptual illustration (no homework math problems). All students are welcome – an open mind and willingness to engage is what you need.
Reading Load: The reading load is moderate (~30-50 pages per week, sometimes less) and mixes technical, philosophical, and popular writing. Focus on understanding the main ideas; we will review difficult portions in class. It’s okay if you don’t grasp every detail of a technical reading – extract the key point and we’ll fill gaps together.
Discussion Environment: We will often discuss contentious issues (bias, fatal accidents, futuristic ideas). Respect for others’ perspectives is absolutely required. Disagreement is fine – in fact, encouraged – but must be voiced civilly. Harassment or derogatory remarks have no place in our classroom. Let’s maintain a supportive environment where everyone can explore ideas freely.
Use of AI Tools: Since this class is about AI, you might wonder if you can use AI writing assistants or code tools. You may use them for brainstorming or research (for example, asking ChatGPT to explain a concept after you’ve done the readings), but you must not plagiarize and should not have an AI write your assignments. All writing must be in your own words to reflect your understanding. If you do use an AI tool for something specific (say, generating an outline), note that in your submission. When in doubt, ask the instructor. We’ll treat this as a learning opportunity rather than punitively, but academic honesty rules still apply.
Office Hours and Help: The instructor is available in office hours [Days/Times] or by appointment to discuss anything – be it clarifying course content, helping with an assignment idea, or just chatting about AI and careers. Teaching assistants [if any] will also hold office hours. We encourage you to seek help early if you feel confused or overwhelmed; we are here to support your success.
Why This Course Matters: As a concluding note in the syllabus – the topics we cover are highly relevant. AI technologies are rapidly advancing into every industry. Employers, regulators, and society are increasingly seeking STEM professionals who not only have technical savvy but also the ethical and big-picture understanding to guide these technologies responsibly (“Concepts of Ethics and Their Application to AI,” PMC). This course will challenge you to think critically and ethically, a skillset that will serve you in any career path you choose. Expect to be challenged, but also expect to have exciting discussions about the future we’re all shaping. Let’s have a great term exploring AI ethics, chaos, and language together!