Hector Research Institute
Attention is a key factor in successful learning, with research indicating strong associations between (in)attention and learning outcomes. This dissertation advanced the field by focusing on the automated detection of attention-related processes using eye tracking, computer vision, and machine learning, offering a more objective, continuous, and scalable assessment than traditional methods such as self-reports or observations. It introduced novel computational approaches for assessing various dimensions of (in)attention in online and classroom learning settings, addressing the challenges of precise, fine-grained assessment, generalizability, and in-the-wild data quality. First, this dissertation explored the automated detection of mind wandering, a shift of attention away from the learning task. Aware and unaware mind wandering were distinguished using a novel multimodal approach that integrated eye-tracking, video, and physiological data. Furthermore, the generalizability of scalable webcam-based detection across diverse tasks, settings, and target groups was examined. Second, this thesis investigated attention indicators during online learning. Eye-tracking analyses revealed significantly greater gaze synchronization among attentive learners. Third, it addressed attention-related processes in classroom learning by detecting hand-raising, an indicator of behavioral engagement, using a novel view-invariant and occlusion-robust skeleton-based approach. In sum, this thesis advanced the automated assessment of attention-related processes within educational settings by developing and refining methods for detecting mind wandering, on-task behavior, and behavioral engagement. It bridges educational theory with advanced methods from computer science, enhancing our understanding of attention-related processes that significantly impact learning outcomes and educational practices.
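To illustrate the kind of gaze-synchronization analysis mentioned above, the following is a minimal sketch, not the dissertation's actual pipeline: it quantifies synchronization among learners as the mean pairwise Pearson correlation of time-aligned gaze coordinates. The array shapes, function name, and data are assumptions chosen for illustration only.

```python
# Minimal illustrative sketch (assumed setup, not the dissertation's method):
# gaze synchronization as the mean pairwise Pearson correlation of
# time-aligned (x, y) gaze traces across learners.
from itertools import combinations

import numpy as np


def gaze_synchronization(gaze: np.ndarray) -> float:
    """Mean pairwise correlation of gaze traces.

    gaze: array of shape (n_learners, n_timepoints, 2) holding
    time-aligned (x, y) gaze coordinates for each learner.
    """
    n_learners = gaze.shape[0]
    correlations = []
    for i, j in combinations(range(n_learners), 2):
        # Correlate horizontal and vertical traces separately, then average.
        r_x = np.corrcoef(gaze[i, :, 0], gaze[j, :, 0])[0, 1]
        r_y = np.corrcoef(gaze[i, :, 1], gaze[j, :, 1])[0, 1]
        correlations.append((r_x + r_y) / 2)
    return float(np.mean(correlations))


# Hypothetical usage: 5 learners, 300 gaze samples each (random data).
rng = np.random.default_rng(0)
gaze = rng.random((5, 300, 2))
print(f"Mean pairwise gaze synchronization: {gaze_synchronization(gaze):.3f}")
```

In practice, such a score would be computed on gaze recordings resampled to a common time base; higher values would indicate that learners look at similar regions at similar times, which the abstract reports as being associated with attentive states.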
Generative AI (GenAI) tools such as ChatGPT allow users, including school students without prior AI expertise, to explore and address a wide range of tasks. Surveys show that most students aged eleven and older already use these tools for school-related activities. However, little is known about how they actually use GenAI and how it affects their learning. This study addresses this gap by examining middle school students' ability to ask effective questions and critically evaluate ChatGPT responses, two essential skills for active learning and productive interactions with GenAI. Sixty-three students aged 14 to 15 were tasked with solving science investigation problems using ChatGPT. We analyzed their interactions with the model as well as their resulting learning outcomes. Findings show that students often over-relied on ChatGPT in both the question-asking and answer-evaluation phases. Many struggled to formulate clear questions aligned with task goals and had difficulty judging the quality of responses or knowing when to seek clarification. As a result, their learning performance remained moderate: their explanations of the scientific concepts tended to be vague, incomplete, or inaccurate, even after unrestricted use of ChatGPT. This pattern held even in domains where students reported strong prior knowledge. Furthermore, students' self-reported understanding and use of ChatGPT were negatively associated with their ability to select effective questions and evaluate responses, suggesting misconceptions about the tool and its limitations. In contrast, higher metacognitive skills were positively linked to better question-asking and answer-evaluation skills. These findings underscore the need for educational interventions that promote AI literacy and foster question-asking strategies to support effective learning with GenAI.