
A controlled way to better teaching and learning with AI

"A controlled way to better teaching and learning with AI" is my latest interview on the DIPF blog, where I talk about the "alignment problem" in education and what my team and I plan to do in the HyTea project.

Interview by Philip Stirm for DIPFblog

(Original German title: „Ein kontrollierter Weg zu besserem Lehren und Lernen mit KI")

Artificial intelligence (AI) has the potential to support teaching and learning in many automated ways. However, the contributions of the new technology do not always match the expectations and values of human users. The research and development project "HyTea – Model for Hybrid Teaching" is investigating how this problem can be addressed. In this interview, project leader Dr. Daniele Di Mitri explains the project in more detail and how he and his team are proceeding.

The so-called alignment problem plays an important role in Artificial Intelligence ethics research and the development of corresponding tools. Can you explain what this problem is about?

The alignment problem is an open challenge in AI development and research; it connects to the much more discussed problem of “human control” of AI. Generally, AI systems are autonomous agents that must fulfil a predefined function; for example, they have to navigate from point A to point B. The relevant question is to what extent and under what conditions the AI agent will execute the predefined plan. And while doing so, how do we ensure that the AI remains within ethical and moral boundaries? We already know that AI does not have a moral compass or a predefined definition of right or wrong. Therefore, it needs precise guardrails to ensure it does not harm humans.

This discussion has long been explored and has fired the popular imagination, from Isaac Asimov's famous story collection "I, Robot" to Hollywood films such as "2001: A Space Odyssey". But only recently has the alignment problem become a hot topic in the AI community. With the discovery of new capabilities of generative models such as GPT, the community is now discussing to what extent AI systems can act autonomously and how far humans have to remain in control. Several influential voices on AI, most of whom gravitate around Silicon Valley, speculate about this issue, claiming that AI poses a serious existential threat to humanity and that we will soon have to deal with deadly machines with superhuman capabilities. I draw a line here: I am aware of the models' real capabilities, and I believe that further speculation about AI doom scenarios is purely science fiction.

I believe it is more important to discuss and research AI's ramifications for society and the threats it poses today – not those of tomorrow. AI systems are already closely integrated into various social contexts, which can have profound consequences. I am referring, for example, to the systematic discrimination against women or people of colour, the propaganda that generative AI can foster and spread, the generation of fake news, the infringement of intellectual property, the power consumption and carbon footprint of large AI models, and similar issues.

In what specific context do you deal with this issue in your project?

In the HyTea project, we deal with the alignment problem in education. We ask in particular how teachers and learners can maintain control of AI systems and how they can correct potential failures. In education, AI systems often work as "expert software", i.e. systems that provide feedback, guidance and personalised support to learners when the human teacher is distant or unavailable. The AI is therefore in the position of the more knowledgeable counterpart, which can make it difficult for the user to detect whether the AI is giving erroneous recommendations or feedback.

In our project, we focus on helping people prepare presentations with the support of an AI system, the "AI Presentation Trainer". This system guides students in composing meaningful presentations and allows them to prepare optimally for the actual live presentation. It is a camera-based system that can automatically detect common mistakes such as crossing the arms, looking or pointing back at the slides, not pausing enough, and many others. With my colleague Dr. Jan Schneider, I have already conducted considerable research on this topic, and we found that systems like this can be highly beneficial for students, especially in higher education. While students are asked to prepare and deliver presentations in their courses, they are typically not offered the chance to train their oral presentation skills. Using intelligent software like the Presentation Trainer leads to more practice and therefore to better presentation performance.
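To give a rough idea of what such camera-based mistake detection can look like, here is a minimal sketch in Python that flags crossed arms in webcam frames using the open-source MediaPipe pose estimator and OpenCV. This is only an illustrative sketch of one possible approach; the library choice, the landmark-based heuristic and the mid-line threshold are my own assumptions and not the actual implementation of the AI Presentation Trainer.

```python
# Illustrative sketch only: one possible way to flag "crossed arms" from a webcam.
# NOT the HyTea / AI Presentation Trainer code; libraries and heuristic are assumptions.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose


def arms_crossed(landmarks) -> bool:
    """Heuristic: both wrists have crossed the shoulder mid-line.

    Assumes the presenter faces the camera and the image is not mirrored,
    so the anatomical left side appears at larger x values.
    """
    lw = landmarks[mp_pose.PoseLandmark.LEFT_WRIST.value]
    rw = landmarks[mp_pose.PoseLandmark.RIGHT_WRIST.value]
    ls = landmarks[mp_pose.PoseLandmark.LEFT_SHOULDER.value]
    rs = landmarks[mp_pose.PoseLandmark.RIGHT_SHOULDER.value]
    mid_x = (ls.x + rs.x) / 2.0
    return lw.x < mid_x and rw.x > mid_x


def run(camera_index: int = 0) -> None:
    cap = cv2.VideoCapture(camera_index)
    with mp_pose.Pose() as pose:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB images; OpenCV delivers BGR frames.
            result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.pose_landmarks and arms_crossed(result.pose_landmarks.landmark):
                print("Feedback: try not to cross your arms while presenting.")
            cv2.imshow("Presentation feedback sketch", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
                break
    cap.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    run()
```

In a real system, a single-frame heuristic like this would of course be smoothed over time and combined with audio cues (for pauses) and slide tracking before any feedback is shown to the learner.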

As part of the HyTea project, we are exploring how tools like the AI Presentation Trainer can be integrated into existing higher education courses. To tackle the alignment problem, we want to create a Teacher Dashboard that summarises the students' interactions with the Presentation Trainer and allows the teacher to review and correct the AI-generated feedback before it reaches the students. Furthermore, we want to investigate to what extent students improve their presentation skills when they follow a guided procedure to prepare the content of their presentations. We are also working on increasing teachers' and students' acceptance of software like the Presentation Trainer and identifying optimal practices for using it in a course.


What exactly are you planning to do?

As the project's first step, we interviewed a pool of experts in presentation training and public speaking, from whom we collected valuable information on how to improve presentation training software. Interviewing experts and collecting requirements is part of our participatory design method for achieving a human-centred and responsible AI design. In this way, each stakeholder's opinions are taken seriously in the design and development process.

In the following steps, we plan to improve the Presentation Trainer iteratively in existing seminars given to informatics Bachelor's and Master's students at Goethe University Frankfurt. For three consecutive semesters, we will roll out a repeated study in which we ask the students to prepare their initial, mid-term and final presentations with the Presentation Trainer and to suggest additional features. We will record and rate the presentations created in this way, and through a participant questionnaire we want to collect students' opinions on the system's usability and ease of use. In this way, we aim to see in more detail which features of the AI Presentation Trainer positively correlate with presentation performance.

In this joint study, doctoral candidate Nina Mouhammad will primarily investigate the relevance of properly selecting and composing the advice given to students and the presentation content. Doctoral candidate Stefan Hummel will investigate which features lead to better usability and user acceptance of the AI Presentation Trainer.

What do you want to achieve as a result and how can the general public benefit from this?

We aim to develop the new system within the digital learning ecosystem used by DIPF and Goethe University's eLearning facility "StudiumDigitale". This ecosystem relies heavily on digital learning platforms like Moodle and combines in-person learning with digital learning tools. We aim to integrate the new Presentation Trainer smoothly so that it seamlessly complements this existing hybrid and flexible teaching and learning model.

The new AI Presentation Trainer will become open-source software that any institution can host and make available to its students and staff. We take user privacy very seriously and are developing the system so that each institution can ensure that its data remains secure and is not shared with other parties without explicit consent. Ideally, interested institutions or individual teachers will integrate our software into their courses. This could lead to an AI system explicitly designed for education that becomes widely adopted and accepted by students, teachers and educational institutions.

Thank you for the overview!