Categories: Conference article

Preserving Privacy in Multimodal Learning Analytics with Visual Animation of Kinematic Data

A recent study addresses the growing concern of data privacy in multimodal learning analytics (MMLA). The research investigates the potential of visual animations as an alternative to traditional video recordings for analysing sensitive data, particularly in educational settings.

MMLA involves collecting and analysing data from various sources, including video recordings, to gain insights into learning behaviours and outcomes. However, the use of video can raise significant privacy concerns, especially when it contains identifiable information about individuals. This has led to ethical dilemmas regarding the use of such data in research.

The study, based on the master's thesis of Aleksandr Epp, introduces the Kinematic Animation Tool (KAT) to address these privacy issues. The tool allows researchers to visualise kinematic data without relying on video footage, thereby mitigating privacy risks. KAT operates in a web browser, making it accessible and user-friendly for researchers in various environments.
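
To illustrate the general idea of replacing video with abstract kinematic animations, here is a minimal Python sketch that animates a handful of joint positions over time. This is not the actual KAT implementation (which runs in the browser); the data, the number of joints, and the plotting choices are all illustrative assumptions.

```python
# Minimal sketch (not KAT itself): render kinematic key points as an animated
# point cloud instead of showing identifiable video frames.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

# Hypothetical kinematic data: (frames, joints, 2) array of x/y coordinates,
# e.g. as exported from a pose-estimation or motion-capture pipeline.
rng = np.random.default_rng(0)
frames, joints = 100, 5
data = np.cumsum(rng.normal(scale=0.01, size=(frames, joints, 2)), axis=0) + 0.5

fig, ax = plt.subplots()
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
scatter = ax.scatter(data[0, :, 0], data[0, :, 1])

def update(i):
    # Redraw only abstract joint positions; no video pixels are involved.
    scatter.set_offsets(data[i])
    return scatter,

anim = FuncAnimation(fig, update, frames=frames, interval=40, blit=True)
plt.show()
```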

The study involved a field experiment where participants annotated data sets using both animations and video recordings to assess the quality of the annotations. The results indicated that the inter-rater agreement between the two methods was high, suggesting that animations can serve as a viable alternative to videos in the data annotation process. This finding is significant as it demonstrates that the quality of data analysis can be maintained while enhancing privacy.
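
Inter-rater agreement of this kind is commonly quantified with measures such as Cohen's kappa. The following minimal sketch shows how such an agreement check can be computed; the labels are invented, and this is not the study's actual analysis.

```python
# Minimal sketch: agreement between annotations made from video and from animation.
from sklearn.metrics import cohen_kappa_score

labels_from_video = ["gesture", "idle", "gesture", "writing", "idle", "gesture"]
labels_from_animation = ["gesture", "idle", "gesture", "writing", "gesture", "gesture"]

kappa = cohen_kappa_score(labels_from_video, labels_from_animation)
print(f"Inter-rater agreement (Cohen's kappa): {kappa:.2f}")
```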

The successful integration of the KAT into existing multimodal data analysis frameworks suggests that researchers can conduct studies without the ethical concerns associated with video recordings. This approach not only protects participants' privacy but also encourages broader participation in MMLA research.

This study provides a valuable contribution to the ongoing discussion about data privacy in research. By demonstrating the effectiveness of visual animations in data analysis, it offers a practical solution for researchers looking to balance the need for quality insights with ethical considerations. As learning analytics continues to evolve, adopting tools like the KAT may be crucial in promoting responsible research practices.
In summary, visual animations represent a promising advancement in privacy-preserving data analysis, allowing researchers to explore learning behaviours while safeguarding participant information.

Full citation:

Di Mitri, D., Epp, A., Schneider, J. (2024). Preserving Privacy in Multimodal Learning Analytics with Visual Animation of Kinematic Data. In: Casalino, G., et al. Higher Education Learning Methodologies and Technologies Online. HELMeTO 2023. Communications in Computer and Information Science, vol 2076. Springer, Cham. https://doi.org/10.1007/978-3-031-67351-1_45

Categories: Conference article

New paper: A Human-centric Approach to Explain Evolving Data

A recent study led by my colleague Gabriella Casalino at the University of Bari highlights the importance of transparency and explainability in Machine Learning models used in educational environments. 

As we embrace the technological shift driven by AI in education, it is imperative to address the ethical considerations surrounding AI applications in educational settings; transparency and explainability are central among these concerns.

At the forefront of this study is the introduction of DISSFCM, a dynamic incremental classification algorithm that uses fuzzy logic to analyze and interpret students' interactions within learning platforms. By offering human-centric explanations, the research aims to deepen stakeholders' understanding of how AI models arrive at decisions in educational contexts.

One of the key strengths of the DISSFCM algorithm lies in its adaptability. It dynamically adjusts its model in response to changes in data, ensuring resilience and reliability in educational data analytics. This adaptability enhances the algorithm's performance and instills confidence in the insights derived from educational data.
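
To make the idea of incremental adaptation concrete, here is a minimal sketch of plain fuzzy c-means applied chunk by chunk, reusing the previous centroids as the starting point for each new chunk so the model follows drifting data. This illustrates the general principle only, not the DISSFCM algorithm itself; all data and parameter values are invented.

```python
# Minimal sketch: fuzzy c-means updated incrementally over a stream of data chunks.
import numpy as np

def fcm_step(X, centroids, m=2.0):
    """One fuzzy c-means iteration: update memberships, then centroids."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
    u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1)), axis=2)
    um = u ** m
    new_centroids = (um.T @ X) / um.sum(axis=0)[:, None]
    return u, new_centroids

rng = np.random.default_rng(1)
centroids = rng.normal(size=(3, 2))                      # 3 clusters, 2 features
for chunk_id in range(5):                                # simulated data stream
    X = rng.normal(loc=chunk_id * 0.1, size=(200, 2))    # slowly drifting chunk
    for _ in range(20):                                   # a few FCM iterations
        _, centroids = fcm_step(X, centroids)
    print(f"chunk {chunk_id}: centroids\n{centroids.round(2)}")
```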

Transparency and ethical standards are paramount in AI practices, particularly in educational settings. By upholding these principles, we can build trust and ensure fairness in the deployment of educational technologies. The study sheds light on the evolving landscape of AI integration in education and emphasizes the pivotal role of explainable AI in fostering trust and understanding among stakeholders.

As we navigate the intersection of AI and education, prioritizing transparency and explainability will be instrumental in shaping a future where technology enhances learning experiences while upholding ethical standards. By embracing these principles, we can pave the way for a more transparent and accountable educational ecosystem powered by AI.

Reference to the article: 

G. Casalino, G. Castellano, D. Di Mitri, K. Kaczmarek-Majer and G. Zaza, "A Human-centric Approach to Explain Evolving Data: A Case Study on Education," 2024 IEEE International Conference on Evolving and Adaptive Intelligent Systems (EAIS), Madrid, Spain, 2024, pp. 1-8, doi: 10.1109/EAIS58494.2024.10569098.

The paper also received an award at the EAIS 2024 conference.



Categories: Presentations

Invited talk at the University of the Philippines

 

On June 19th, I was invited to give an online talk at the University of the Philippines. The title of my talk was "Intelligent Tutors, Learning Analytics, and Multimodal Technologies," and it served as the kickoff guest lecture for the webinar series hosted by the Intelligent Systems Center of the University of the Philippines. At its peak, the lecture had over 170 participants connected online.

During the talk, I discussed how learners in the twenty-first century need continuous instruction and timely feedback to develop their competencies. In situations where human experts are not readily available, Artificial Intelligence (AI) systems can offer automatic, personalized, and real-time feedback to learners in distance learning settings. This allows learners to practice at their own pace while receiving continuous feedback. Moreover, AI feedback can extend beyond traditional cognitive tasks to provide input on physical learning tasks by integrating with immersive and multimodal technologies such as Augmented and Virtual Reality (AR/VR) or sensor-based systems.

I summarized the main insights of my research in AI in education and Multimodal Learning Analytics (MMLA), introducing the concept of "Multimodal Tutors". I demonstrated how MMLA can support distance teaching and learning with personalized feedback and adaptation. Through relevant use cases, I illustrated how AI and immersive technologies can be used to enhance feedback. Finally, I presented my research agenda for augmenting feedback with AI and how it can provide personalized and adaptive support to learners and teachers.

Categories: Journal article

From the Automated Assessment of Student Essay Content to Highly Informative Feedback: A Case Study

How can we provide students with highly informative feedback on their essays using natural language processing?

Check out our new paper, led by Sebastian Gombert, where we present a case study on using GBERT and T5 models to generate feedback for educational psychology students.

In this paper:

➡ We implemented a two-step pipeline that segments the essays and predicts codes from the segments. The codes are used to generate feedback texts that inform the students about the correctness of their solutions and the content areas they need to improve (a minimal sketch of this idea follows the list below).

➡ We used 689 manually labelled essays as training data for our models. We compared GBERT, T5, and bag-of-words baselines for scoring the segments and the codes. The results showed that the transformer-based models outperformed the baselines in both steps.

➡ We evaluated the feedback using a randomised controlled trial. The control group received essential feedback, while the treatment group received highly informative feedback based on our pipeline. We used a six-item survey to measure the perception of feedback.

➡ We found that highly informative feedback had positive effects on helpfulness and reflection. The students in the treatment group reported higher levels of satisfaction, usefulness, and learning than the students in the control group.

➡ Our paper demonstrates the potential of natural language processing for providing highly informative feedback on student essays. We hope that our work will inspire more research and practice in this area.
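
As referenced in the first bullet, the sketch below illustrates the general two-step idea: segment the essay, predict a content code per segment, and map each code to a feedback text. It is not the authors' pipeline; the segmentation rule, the code labels, and the feedback templates are invented placeholders, and the classifier is a stand-in for the fine-tuned GBERT/T5 models compared in the paper.

```python
# Minimal sketch of a segment -> code -> feedback pipeline (placeholders only).

def predict_code(segment: str) -> str:
    """Stand-in for a fine-tuned classifier (GBERT/T5 in the paper):
    maps an essay segment to a content code. Here: a toy keyword rule."""
    if "punishment" in segment.lower():
        return "MISCONCEPTION_PUNISHMENT"
    if "reinforcement" in segment.lower():
        return "CORRECT_REINFORCEMENT"
    return "UNKNOWN"

# Hypothetical mapping from predicted codes to feedback texts shown to students.
feedback_templates = {
    "CORRECT_REINFORCEMENT": "Your explanation of reinforcement is on the right track.",
    "MISCONCEPTION_PUNISHMENT": "Revisit punishment: it does not always improve learning.",
    "UNKNOWN": "This segment could not be matched to a content area.",
}

# Step 1: split the essay into segments (here: a naive sentence split).
essay = "Reinforcement strengthens behaviour. Punishment always improves learning."
segments = [s.strip() for s in essay.split(".") if s.strip()]

# Step 2 and 3: predict a code per segment and turn it into feedback.
for segment in segments:
    code = predict_code(segment)
    print(f"{segment!r} -> {code}: {feedback_templates[code]}")
```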

You can read the full paper here.

https://link.springer.com/article/10.1007/s40593-023-00387-6