
What do I think about AGI

A few days ago, I chatted with a clever 16-year-old who, after learning what I do for work, asked me what I think about AGI (artificial general intelligence). I explained that I see AGI primarily as a cult: a narrative constructed by Silicon Valley actors that masks a project fundamentally about profit accumulation and power consolidation behind the veneer of building a "supernatural intelligence".

In the last few weeks, I have been avidly reading Karen Hao's book, Empire of AI, which articulates these mechanisms with clarity. I suggest everyone read it to learn more about the current philosophy of scale, extraction, and technological imperialism pioneered by Sam Altman and OpenAI.

I mentioned something the book emphasises: the hidden costs of generative AI that we systematically ignore. We imagine generative AI as a weightless technology floating in the cloud, at our fingertips and available whenever we want. Yet, we overlook its profound materiality and its devastating impact on marginalised communities.

I shared the example of data annotators in Kenya and Venezuela who are forced to process disturbing material describing violence and atrocities. The psychological toll is real: many of these workers have developed post-traumatic stress and other serious mental health problems. Their labour remains invisible, yet it is essential to every generative AI system we use.

The teenager was surprised. "Nobody talks about this", he said. "What you hear about is the existential threat, the futuristic robots-taking-over-the-world scenarios." His observation aligns with what Empire of AI argues: certain fictitious narratives about AI are deliberately amplified to obscure the real stories of the people and natural resources consumed under the heavy weight of these technologies.

Then came his most honest admission: he uses GenAI in school, as do all his classmates. This confession did not really surprise me. Even though he feels it is making him intellectually dumber, he continues to use it anyway.

This resonated with something I'd recently read: that educational institutions have a responsibility to prevent deskilling. Yet that's precisely what's happening with generative AI.

The question that haunts me now is how we cultivate critical thinking when the very tools designed to assist us are eroding our capacity to think independently. How do we resist a technology that promises convenience while dismantling the intellectual resilience we need?

Published by Daniele Di Mitri

Daniele Di Mitri is a professor of Multimodal Learning Technologies at the German University of Digital Science. At the German UDS, he leads the research group "Augmented Feedback" and coordinates the master's in Advanced Digital Realities. He is an associated researcher at the DIPF - Leibniz Institute for Research and Information in Education and a lecturer at the Goethe University of Frankfurt, Germany. Daniele Di Mitri received his PhD in Learning Analytics and Wearable Sensor Support from the Open University of the Netherlands. His current research focuses on developing AI-driven, multimodal learning technologies to enhance digital education. It aims to create innovative, responsible solutions that improve learning experiences through advanced feedback systems and the ethical integration of technology. He is a "Johanna Quandt Young Academy" fellow and was elected "AI Newcomer 2021" at the KI Camp by the German Informatics Society. He is a member of CrossMMLA, a special interest group of the Society for Learning Analytics Research, and the chair of the special interest group on AI for Education of the European Association for Technology-Enhanced Learning.
