
What do I think about AGI?

A few days ago, I chatted with a clever 16-year-old who, after learning what I do for work, asked me what I think about AGI (artificial general intelligence). I explained that I see AGI primarily as a cult: a narrative constructed by Silicon Valley actors that masks what is fundamentally a project of profit accumulation and power consolidation behind the veneer of building a "supernatural intelligence".

Over the last few weeks, I have been avidly reading Karen Hao's book, Empire of AI, which articulates these mechanisms with clarity. I suggest everyone read it to learn more about the current philosophy of scale, extraction, and technological imperialism pioneered by Sam Altman and OpenAI.

I mentioned something the book emphasises: the hidden costs of generative AI that we systematically ignore. We imagine generative AI as a weightless technology floating in the cloud, at our fingertips and available whenever we want. Yet, we overlook its profound materiality and its devastating impact on marginalised communities.

I shared the example of data annotators in Kenya and Venezuela who are made to process disturbing content, including material describing violence and atrocities. The psychological toll is real: many of these workers have developed post-traumatic stress and other serious mental health problems. Their labour remains invisible, yet it is essential to every generative AI system we use.

The teenager was surprised. "Nobody talks about this," he said. "What you hear about is the existential threat, the futuristic robots-taking-over-the-world scenarios." His observation aligns with what Empire of AI argues: certain fictitious narratives about AI are deliberately amplified to obscure the real stories of the people and natural resources consumed under the heavy weight of these technologies.

Then came his most honest admission: he uses GenAI in school, as do all his classmates. The confession did not really surprise me. Even though he feels it is making him intellectually dumber, he continues anyway.

This resonated with something I'd recently read: that educational institutions have a responsibility to prevent deskilling. Yet that's precisely what's happening with generative AI.

The question that haunts me now is how we cultivate critical thinking when the very tools designed to assist us are eroding our capacity to think independently. How do we resist a technology that promises convenience while dismantling the intellectual resilience we need?


Impact of AI Act on Affective Computing

As we get closer to the enactment of the #AIAct, I want to share a few thoughts on banning #emotionrecognition in education applications.
 
While certainly motivated by a good cause, this ban risks hindering much of the community's progress in affective computing in education.
As my colleague @deniziren puts it:
"Computational services lacking empathy or emotion-aware capabilities are merely blunt tools. How can we hope to address the human-AI alignment problem without enabling AI to understand human emotions?"
https://www.linkedin.com/pulse/impact-ai-act-affective-computing-deniz-iren-phd-pmp/ 

Technology-assisted emotion recognition is helpful in various contexts, such as supporting people with autism spectrum disorder (ASD) or Asperger syndrome.
Facial expression is not the only proxy for users' emotional state; the same can be inferred from speech, physiological data, etc. Do we need to ban them all? What would this mean for education research?

There are techniques we have been using that enable emotion recognition while preserving user privacy (see: https://link.springer.com/chapter/10.1007/978-3-031-16290-9_4).

Ultimately, technology is never the problem per se; what is problematic is how it is used and with what intention. Banning a particular technology, such as emotion recognition, therefore also blocks well-intentioned initiatives.

Creative ideas for using AI in Education


Just looked at this #OpenAccess book, which proposes a crowdsourced collection of tools for creatively using AI in education -> https://zenodo.org/record/8072950.

Almost all the ideas revolve around genAI tools like ChatGPT, Midjourney, and DALL·E 2.

It was an exciting read, but I found no mention of adaptive learning, intelligent tutoring systems, or learning analytics.