
What do I think about AGI?

A few days ago, I chatted with a clever 16-year-old who, after learning what I do for work, asked me what I think about AGI (artificial general intelligence). I explained that I see AGI primarily as a cult: a narrative constructed by Silicon Valley actors that masks a project fundamentally about profit accumulation and power consolidation behind the veneer of building a "supernatural intelligence".

In the last few weeks, I have been avidly reading Karen Hao's book, Empire of AI, which articulates these mechanisms with clarity. I suggest everyone read it to learn more about the current philosophy of scale, extraction, and technological imperialism pioneered by Sam Altman and OpenAI.

I mentioned something the book emphasises: the hidden costs of generative AI that we systematically ignore. We imagine generative AI as a weightless technology floating in the cloud, at our fingertips and available whenever we want. Yet, we overlook its profound materiality and its devastating impact on marginalised communities.

I shared the example of data annotators in Kenya and Venezuela who are forced to process disturbing AI-generated material describing violence and atrocities. The psychological toll is real: many of these workers have developed post-traumatic stress and other serious mental health conditions. Their labour remains invisible, yet it is essential to every generative AI system we use.

The teenager was surprised. "Nobody talks about this", he said. "All you hear about is the existential threat, the futuristic robots-taking-over-the-world scenarios." His observation aligns with what Empire of AI argues: certain fictitious narratives about AI are deliberately amplified to obscure the real stories of the people and natural resources consumed under the heavy weight of these technologies.

Then came his most honest admission: he uses GenAI in school, as do all his classmates. This confession did not really surprise me. Even though he feels it is dulling his thinking, he continues to use it anyway.

This resonated with something I'd recently read: that educational institutions have a responsibility to prevent deskilling. Yet that's precisely what's happening with generative AI.

The question that haunts me now is how we cultivate critical thinking when the very tools designed to assist us are eroding our capacity to think independently. How do we resist a technology that promises convenience while dismantling the intellectual resilience we need?

Published by Daniele Di Mitri

Daniele Di Mitri is a research group leader at the DIPF - Leibniz Institute for Research and Information in Education and a lecturer at the Goethe University of Frankfurt, Germany. Daniele received his PhD from the Open University of the Netherlands (2020) with a thesis entitled "The Multimodal Tutor", on learning analytics and wearable sensor support. His research focuses on collecting and analysing multimodal data during physical interactions for automatic feedback and human behaviour analysis. Daniele's current research focuses on designing responsible Artificial Intelligence applications for education and human support. He is a "Johanna Quandt Young Academy" fellow and was elected "AI Newcomer 2021" at the KI Camp by the German Informatics Society. He is a member of the editorial board of the Frontiers in Artificial Intelligence journal, a member of CrossMMLA, a special interest group of the Society for Learning Analytics Research, and chair of the Learning Analytics Hackathon (LAKathon) series.
