
AI Ethics, Commodification, and the Teleology of Desire

When we talk about ethical AI, we often assume that artificial intelligence systems can—or at least should—be inherently ethical. But what if the problem isn't with AI itself, but with the deeper human desires and economic systems that shape its creation?


AI today is fundamentally produced as a commodity: developed, marketed, and sold within a capitalist framework. Commodities inherently embody a human intention or teleology—they exist to satisfy desires, whether practical, social, or economic. Yet this teleology is not genuinely embedded within AI systems themselves, which have no intrinsic desires, intentions, or subjectivity. Rather, the goals and desires we ascribe to AI are projections of our own human motivations—profit-seeking, efficiency, utility, and power.


This raises critical philosophical and ethical questions. If AI is purely algorithmic, devoid of consciousness, meaning-making, or genuine desire, then what we often call "AI intentionality" is merely the externalized projection of human will, especially capitalist will. AI systems become mirrors reflecting back to us our own desires and aims, embedding within their algorithms the human ambition for profit and instrumental utility.


Here lies the central tension in ethical AI design. Advocates of ethical AI implicitly assume developers will privilege social responsibility, accountability, and collective well-being over commodification, profitability, or instrumental rationality. But how realistic is this assumption within market-driven societies?



From a psychoanalytic perspective (particularly Lacanian), this tension can be understood as the ethical imperative of the Symbolic—our collective obligation toward social good—versus the Imaginary and Symbolic lure of capitalist desire. Derridean philosophy adds a crucial insight here, emphasizing how ethical responsibility is never purely calculable or algorithmically stable. It always demands facing undecidability, ambiguity, and relational complexity, precisely the dimensions capitalism prefers to ignore or eliminate.


Adding complexity to this scenario is China's DeepSeek, an advanced AI model developed not primarily as a commodity but as a strategic instrument of power. Unlike conventional market-driven AI commodification, China's approach to AI development reflects a different teleology: geopolitical influence and control. Whoever develops and controls the dominant AI system—much like TikTok in the social media realm—gains enormous cultural, economic, and political leverage globally. This shifts the AI discussion away from mere commodification toward strategic geopolitical concerns. Here, AI is understood less as a product for profit alone and more as a critical infrastructure that confers profound influence and soft power.


Thus, ethical AI development faces a fundamental contradiction: ethical goals presuppose human social responsibility transcending profit and geopolitical power ambitions, while commodification and strategic dominance prioritize calculability, efficiency, predictability, and control—values antithetical to genuine ethics, relational openness, or undecidability.


To truly pursue ethical AI, then, we must acknowledge the embedded desires and teleologies shaping these technologies. Rather than treating ethics as merely an add-on, we need a critical dialogue about how human intention, capitalist desire, geopolitical strategy, and relational ethical commitments intersect within the very algorithms we build.


In short, genuinely ethical AI can emerge only if we consciously resist commodification pressures and strategic instrumentalization, confronting head-on the desires and teleological assumptions embedded within AI. Without this critical reflection, our AI systems will continue to enact not ethics but merely the invisible hand of commodification or the visible hand of geopolitical control, leaving genuine ethical responsibility beyond algorithmic reach.

