Categories: Multimodal Tutor

Detecting Mistakes in CPR Training with Multimodal Data and Neural Networks

This journal article, the fourth and main experiment of my PhD thesis, was published in gold Open Access in the MDPI Sensors journal as part of the Special Issue Advanced Sensors Technology in Education, authored by Daniele Di Mitri, Jan Schneider, Marcus Specht and Hendrik Drachsler.

Abstract

This study investigated to what extent multimodal data can be used to detect mistakes during Cardiopulmonary Resuscitation (CPR) training.

We complemented the Laerdal QCPR ResusciAnne manikin with the Multimodal Tutor for CPR, a multi-sensor system consisting of a Microsoft Kinect for tracking body position and a Myo armband for collecting electromyogram information. We collected multimodal data from 11 medical students, each of them performing two sessions of two-minute chest compressions (CCs). We gathered in total 5254 CCs, all labelled according to five performance indicators corresponding to common CPR training mistakes. Three of the five indicators, CC rate, CC depth and CC release, were assessed automatically by the ResusciAnne manikin. The remaining two, related to arm and body position, were annotated manually by the research team. We trained five neural networks, one for classifying each of the five indicators. The results of the experiment show that multimodal data enable accurate mistake detection when compared against the ResusciAnne manikin baseline. We also show that the Multimodal Tutor for CPR can detect additional CPR training mistakes, such as the correct use of arms and body weight, which until now could be identified only by human instructors. Finally, to inform user feedback in future implementations of the Multimodal Tutor for CPR, we administered a questionnaire collecting the participants' views on valuable feedback aspects of CPR training.
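To make the classification pipeline concrete, below is a minimal sketch of how one of the five per-indicator classifiers could be set up: each chest compression is summarised as a fixed-length feature vector fused from Kinect joint positions and Myo electromyogram channels, and a small feed-forward network outputs a binary correct/mistake label. The feature dimensions, layer sizes and choice of Keras are illustrative assumptions, not the configuration used in the published study.

```python
# Illustrative sketch only: feature dimensions, architecture and
# hyper-parameters are assumptions, not the settings from the paper.
import numpy as np
from tensorflow import keras

N_KINECT_FEATURES = 75   # assumed: 25 Kinect joints x (x, y, z)
N_MYO_FEATURES = 8       # assumed: 8 Myo EMG channels, one summary value each
N_FEATURES = N_KINECT_FEATURES + N_MYO_FEATURES

def build_indicator_classifier(n_features: int = N_FEATURES) -> keras.Model:
    """Binary classifier for one CPR performance indicator (e.g. CC depth)."""
    model = keras.Sequential([
        keras.layers.Input(shape=(n_features,)),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dropout(0.3),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),  # 1 = mistake, 0 = correct
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# One model per indicator: rate, depth, release, arm posture, body weight.
indicators = ["cc_rate", "cc_depth", "cc_release", "arm_position", "body_weight"]
models = {name: build_indicator_classifier() for name in indicators}

# Random placeholder data standing in for the 5254 labelled chest compressions.
X = np.random.rand(5254, N_FEATURES).astype("float32")
y = np.random.randint(0, 2, size=(5254,))
models["cc_depth"].fit(X, y, epochs=3, batch_size=64, validation_split=0.2)
```

Training one small network per indicator, rather than a single multi-label model, mirrors the study's design of five separate classifiers and keeps each mistake type independently interpretable.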


Keywords: multimodal data; neural networks; psychomotor learning; training mistakes; medical simulation; learning analytics; signal processing; activity recognition; sensors

Published by Daniele Di Mitri

Daniele Di Mitri is a professor of Multimodal Learning Technologies at the German University of Digital Science. At the German UDS, he leads the research group "Augmented Feedback" and coordinates the master's programme in Advanced Digital Realities. He is an associated researcher at the DIPF - Leibniz Institute for Research and Information in Education and a lecturer at the Goethe University of Frankfurt, Germany. Daniele Di Mitri received his PhD in Learning Analytics and Wearable Sensor Support from the Open University of the Netherlands. His current research focuses on developing AI-driven, multimodal learning technologies to enhance digital education, aiming to create innovative, responsible solutions that improve learning experiences through advanced feedback systems and the ethical integration of technology. He is a "Johanna Quandt Young Academy" fellow and was elected "AI Newcomer 2021" at the KI Camp organised by the German Informatics Society. He is a member of CrossMMLA, a special interest group of the Society for Learning Analytics Research, and chairs the special interest group on AI for Education of the European Association for Technology-Enhanced Learning.
