
New paper: A Human-centric Approach to Explain Evolving Data

A recent study led by my colleague Gabriella Casalino at the University of Bari highlights the importance of transparency and explainability in machine learning models used in educational environments.

As we embrace the technological shift driven by AI in education, it is imperative to address the ethical considerations surrounding AI applications in educational settings. Transparency and explainability are central to meeting that obligation, and this study puts them front and centre.

At the forefront of this study is the introduction of DISSFCM, a dynamic incremental classification algorithm that harnesses fuzzy logic to analyze and interpret students' interactions within learning platforms. By offering human-centric explanations, the research endeavours to deepen stakeholders' understanding of how AI models arrive at decisions in educational contexts.
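
To give a concrete flavour of what a human-centric, fuzzy explanation can look like, here is a minimal sketch in Python. The centroids, feature names, and profile labels are hypothetical illustrations, not the authors' actual model; only the membership formula is the standard fuzzy c-means one.

```python
# Illustrative sketch only: turning fuzzy memberships into a human-readable
# explanation. Centroids, features, and labels are hypothetical; the actual
# DISSFCM model in the paper differs.
import numpy as np

def fuzzy_memberships(x, centroids, m=2.0):
    """Standard fuzzy c-means membership of point x in each cluster."""
    d = np.linalg.norm(centroids - x, axis=1) + 1e-12  # avoid division by zero
    inv = (1.0 / d) ** (2.0 / (m - 1.0))
    return inv / inv.sum()

# Hypothetical cluster centroids over two behavioural features:
# [weekly logins, quiz completion rate]
centroids = np.array([[2.0, 0.2],   # "at-risk" profile
                      [5.0, 0.6],   # "average" profile
                      [9.0, 0.9]])  # "engaged" profile
labels = ["at-risk", "average", "engaged"]

student = np.array([4.0, 0.5])
u = fuzzy_memberships(student, centroids)

# A linguistic summary instead of a hard class label
for label, degree in zip(labels, u):
    print(f"The student matches the '{label}' profile to degree {degree:.2f}")
```

Because each student belongs to every profile to some degree, the explanation reads as graded statements rather than a single opaque verdict, which is what makes the fuzzy approach naturally human-centric.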

One of the key strengths of the DISSFCM algorithm lies in its adaptability. It dynamically adjusts its model in response to changes in data, ensuring resilience and reliability in educational data analytics. This adaptability enhances the algorithm's performance and instills confidence in the insights derived from educational data.
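
For readers curious about the mechanics, the following toy sketch illustrates the general idea of chunk-wise adaptation using fuzzy c-means updates on a drifting data stream. It is a deliberate simplification with made-up data; the actual DISSFCM algorithm, including its semi-supervision and cluster management, is described in the paper.

```python
# Hypothetical sketch of chunk-wise model adaptation, loosely inspired by
# incremental fuzzy clustering; not the authors' DISSFCM implementation.
import numpy as np

def fcm_memberships(X, centroids, m=2.0):
    """Fuzzy c-means memberships of each row of X in each cluster."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
    inv = (1.0 / d) ** (2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

def update_centroids(X, U, m=2.0):
    """One fuzzy c-means centroid update from current memberships."""
    W = U ** m
    return (W.T @ X) / W.sum(axis=0)[:, None]

rng = np.random.default_rng(0)
centroids = rng.normal(size=(3, 2))  # initial model: 3 clusters, 2 features

for chunk_id in range(5):            # data arrives in chunks over time
    # a slowly drifting stream of synthetic interaction data
    chunk = rng.normal(loc=chunk_id * 0.3, size=(50, 2))
    for _ in range(10):              # a few FCM iterations per chunk
        U = fcm_memberships(chunk, centroids)
        centroids = update_centroids(chunk, U)
    # the model follows the drift: centroids move with the incoming data
    print(f"chunk {chunk_id}: centroid mean = {centroids.mean(axis=0).round(2)}")
```

The key point the sketch conveys is that the model is never frozen: each incoming chunk refines the clusters, so the analytics keep pace with how student behaviour actually evolves.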

Transparency and ethical standards are paramount in AI practices, particularly in educational settings. By upholding these principles, we can build trust and ensure fairness in deploying educational technologies. The study sheds light on the evolving landscape of AI integration in education and emphasizes the pivotal role of explainable AI in fostering trust and understanding among stakeholders.

As we navigate the intersection of AI and education, prioritizing transparency and explainability will be instrumental in shaping a future where technology enhances learning experiences while upholding ethical standards. By embracing these principles, we can pave the way for a more transparent and accountable educational ecosystem powered by AI.

Reference:

G. Casalino, G. Castellano, D. Di Mitri, K. Kaczmarek-Majer and G. Zaza, "A Human-centric Approach to Explain Evolving Data: A Case Study on Education," 2024 IEEE International Conference on Evolving and Adaptive Intelligent Systems (EAIS), Madrid, Spain, 2024, pp. 1-8, doi: 10.1109/EAIS58494.2024.10569098.

The paper also received an award at the EAIS conference.



Published by Daniele Di Mitri

Daniele Di Mitri is a research group leader at the DIPF - Leibniz Institute for Research and Information in Education and a lecturer at Goethe University Frankfurt, Germany. Daniele received his PhD in learning analytics and wearable sensor support from the Open University of the Netherlands (2020) with the thesis "The Multimodal Tutor". His research focuses on collecting and analysing multimodal data during physical interactions for automatic feedback and human behaviour analysis. His current research focuses on designing responsible Artificial Intelligence applications for education and human support. He is a "Johanna Quandt Young Academy" fellow and was elected "AI Newcomer 2021" at the KI Camp by the German Informatics Society. He is a member of the editorial board of the journal Frontiers in Artificial Intelligence, a member of CrossMMLA, a special interest group of the Society for Learning Analytics Research, and chair of the Learning Analytics Hackathon (LAKathon) series.
