Categories: Conference article

Preserving Privacy in Multimodal Learning Analytics with Visual Animation of Kinematic Data

A recently published study addresses the growing concern of data privacy in multimodal learning analytics (MMLA). It investigates whether visual animations can serve as an alternative to traditional video recordings when analysing sensitive data, particularly in educational settings.

MMLA involves collecting and analysing data from various sources, including video recordings, to gain insights into learning behaviours and outcomes. However, video can raise significant privacy concerns, especially when it contains identifiable information about individuals, creating ethical dilemmas around the use of such data in research.

The study, based on the master's thesis of Aleksandr Epp, introduces the Kinematic Animation Tool (KAT) to address these privacy issues. The tool allows researchers to visualise kinematic data without relying on video footage, thereby mitigating privacy risks. KAT runs in a web browser, making it accessible and easy to use for researchers in a variety of environments.
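As an illustration of the underlying idea (not the KAT itself, which is a browser-based tool), the following minimal Python sketch animates a sequence of body keypoints exported from a pose-estimation step; the file name and array layout are assumptions for the example.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

# frames: (n_frames, n_joints, 2) array of x/y joint positions,
# e.g. exported from a pose-estimation pipeline instead of raw video.
# "keypoints.npy" is a hypothetical file name used for this sketch.
frames = np.load("keypoints.npy")

fig, ax = plt.subplots()
scat = ax.scatter(frames[0, :, 0], frames[0, :, 1])
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)

def update(i):
    # move the plotted joints to their positions in frame i
    scat.set_offsets(frames[i])
    return (scat,)

anim = FuncAnimation(fig, update, frames=len(frames), interval=40)
plt.show()
```

Because only joint coordinates are rendered, the animation conveys movement without exposing faces or other identifying visual detail, which is the privacy advantage the paper builds on.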

The study involved a field experiment where participants annotated data sets using both animations and video recordings to assess the quality of the annotations. The results indicated that the inter-rater agreement between the two methods was high, suggesting that animations can serve as a viable alternative to videos in the data annotation process. This finding is significant as it demonstrates that the quality of data analysis can be maintained while enhancing privacy.
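The paper is the authoritative source for how agreement was measured; as a rough illustration of what inter-rater agreement means in practice, Cohen's kappa is one common statistic for comparing two sets of categorical annotations. The labels below are purely hypothetical.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical annotations of the same learning episodes, produced once
# from the video recording and once from the kinematic animation.
labels_video     = ["correct", "mistake", "correct", "hesitation", "correct"]
labels_animation = ["correct", "mistake", "correct", "correct",    "correct"]

kappa = cohen_kappa_score(labels_video, labels_animation)
print(f"Cohen's kappa: {kappa:.2f}")  # values near 1 indicate strong agreement
```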

The successful integration of the KAT into existing multimodal data analysis frameworks suggests that researchers can conduct studies without the ethical concerns associated with video recordings. This approach not only protects participants' privacy but also encourages broader participation in MMLA research.

This study provides a valuable contribution to the ongoing discussion about data privacy in research. By demonstrating the effectiveness of visual animations in data analysis, it offers a practical solution for researchers looking to balance quality insights with ethical considerations. As learning analytics continues to evolve, adopting tools like the KAT may be crucial in promoting responsible research practices.
In summary, visual animations represent a promising advancement in privacy-preserving data analysis, allowing researchers to explore learning behaviours while safeguarding participant information.

Full citation:

Di Mitri, D., Epp, A., Schneider, J. (2024). Preserving Privacy in Multimodal Learning Analytics with Visual Animation of Kinematic Data. In: Casalino, G., et al. Higher Education Learning Methodologies and Technologies Online. HELMeTO 2023. Communications in Computer and Information Science, vol 2076. Springer, Cham. https://doi.org/10.1007/978-3-031-67351-1_45

Categories: Conference article

New paper: A Human-centric Approach to Explain Evolving Data

A recent study led by my colleague Gabriella Casalino at the University of Bari highlights the importance of transparency and explainability in Machine Learning models used in educational environments. 

As we embrace the technological shift driven by AI in education, it is imperative to address the ethical considerations that come with it, and the transparency and explainability of the underlying models are central among them.

At the forefront of this study is DISSFCM, a dynamic incremental classification algorithm that harnesses fuzzy logic to analyze and interpret students' interactions within learning platforms. By offering human-centric explanations, the research aims to deepen stakeholders' understanding of how AI models arrive at their decisions in educational contexts.
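DISSFCM itself is described in the paper; as a rough intuition for its fuzzy-logic core, the sketch below shows the standard fuzzy c-means membership and centre updates, in which every sample belongs to each cluster with a degree between 0 and 1 rather than a hard label. This is a simplification for illustration, not the authors' algorithm.

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """Degrees of membership of each sample in each cluster (rows sum to 1)."""
    # distance of every sample to every cluster centre
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    d = np.fmax(d, 1e-12)  # guard against zero distances
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

def fcm_centers(X, U, m=2.0):
    """Update cluster centres as membership-weighted means of the samples."""
    W = U ** m
    return (W.T @ X) / W.sum(axis=0)[:, None]
```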

One of the key strengths of the DISSFCM algorithm lies in its adaptability. It dynamically adjusts its model in response to changes in data, ensuring resilience and reliability in educational data analytics. This adaptability enhances the algorithm's performance and instills confidence in the insights derived from educational data.
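A toy way to picture this incremental behaviour, assuming the data arrives in chunks, is to refit the fuzzy clusters on each new chunk while warm-starting from the previous centres, reusing the helper functions sketched above. The real DISSFCM additionally exploits partial labels and adapts the number of clusters, which this sketch omits.

```python
def incremental_fcm(chunks, centers, m=2.0, iters=20):
    """Track drifting data by refitting on each chunk, seeded with the
    centres learned from the previous chunk."""
    for X in chunks:  # X: (n_samples, n_features) array for one chunk
        for _ in range(iters):
            U = fcm_memberships(X, centers, m)
            centers = fcm_centers(X, U, m)
        yield centers.copy()  # model snapshot after this chunk
```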

Transparency and ethical standards are paramount in AI practices, particularly in educational settings. By upholding these principles, we can build trust and ensure fairness in the deployment of educational technologies. The study sheds light on the evolving landscape of AI integration in education and emphasizes the pivotal role of explainable AI in fostering trust and understanding among stakeholders.

As we navigate the intersection of AI and education, prioritizing transparency and explainability will be instrumental in shaping a future where technology enhances learning experiences while upholding ethical standards. By embracing these principles, we can pave the way for a more transparent and accountable educational ecosystem powered by AI.

Reference to the article: 

G. Casalino, G. Castellano, D. Di Mitri, K. Kaczmarek-Majer and G. Zaza, "A Human-centric Approach to Explain Evolving Data: A Case Study on Education," 2024 IEEE International Conference on Evolving and Adaptive Intelligent Systems (EAIS), Madrid, Spain, 2024, pp. 1-8, doi: 10.1109/EAIS58494.2024.10569098.

The paper also received an award at the EAIS conference.

