- Barkovska, Olesia (orcid.org/0000-0001-7496-4353), Liapin, Yaroslav (orcid.org/0009-0008-9881-733X), Ruban, Igor (orcid.org/0000-0002-4738-3286), Rosinskiy, Dmytro (orcid.org/0000-0002-0725-392X) and Tkachov, Vitalii (orcid.org/0000-0002-6524-9937) (2025) Analysis of neural network hyperparameters for predicting user gaze direction in adaptive learning systems. Information Technologies and Learning Tools, 4 (108), pp. 263-278. ISSN 2076-8184
Abstract
This study explores the application of convolutional neural networks (CNNs) for real-time gaze direction prediction within adaptive learning environments. In response to the increasing demand for personalized education, the ability to monitor learners' attention presents novel opportunities for dynamic content adaptation in learning management systems (LMS) and training management systems (TMS). The research evaluates the performance of four CNN architectures – LeNet, AlexNet, VGGNet, and ResNet – under varying hyperparameter configurations, including batch size, optimizer type, and activation function. Experiments employ the synthetic UnityEyes dataset alongside a custom test set comprising video recordings of actual user behavior. The results indicate that ResNet and VGGNet achieve the highest accuracy (up to 85%) and the lowest loss values when combined with Swish or GELU activation functions and AdamW or SGD optimizers. These findings underscore the importance of selecting appropriate architectures and hyperparameters for effective gaze-tracking applications. Furthermore, the study proposes a novel method for determining the user's area of visual attention through gaze vector analysis, implemented using TensorFlow, Keras, OpenCV, and Mediapipe. This approach enables real-time adaptation of learning content – such as prompts, interface modifications, or supplementary explanations – based on observed visual activity. In contrast to traditional static methods, the proposed solution enables dynamic personalization of the learning experience. A comparative analysis of model accuracy and training efficiency demonstrates the potential of gaze-based systems to enhance inclusive educational technologies. Future research will aim to improve system robustness against variations in lighting, head orientation, and partial facial occlusion, as well as to integrate gaze-controlled content navigation modules tailored for learners with special educational needs.
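To make the reported hyperparameter comparison concrete, the sketch below shows how one such configuration (a Swish-activated CNN compiled with AdamW, trained at a fixed batch size) might be set up in TensorFlow/Keras, one of the frameworks named in the abstract. This is not the authors' implementation: the architecture, input shape, target encoding (pitch/yaw regression), and all numeric values are illustrative assumptions.

```python
# Hypothetical sketch of a gaze-direction CNN, illustrating the kind of
# hyperparameter choices compared in the paper (activation function, optimizer,
# batch size). Architecture and I/O shapes are assumptions, not the authors' code.
import tensorflow as tf

IMG_SHAPE = (36, 60, 1)   # assumed grayscale eye-patch size
ACTIVATION = "swish"      # alternatives evaluated in the study include "gelu"
BATCH_SIZE = 32           # example value for a batch-size sweep

def build_gaze_cnn(activation: str = ACTIVATION) -> tf.keras.Model:
    """Small VGG-style stack regressing a (pitch, yaw) gaze vector from an eye patch."""
    inputs = tf.keras.Input(shape=IMG_SHAPE)
    x = inputs
    for filters in (32, 64, 128):
        x = tf.keras.layers.Conv2D(filters, 3, padding="same", activation=activation)(x)
        x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.Flatten()(x)
    x = tf.keras.layers.Dense(128, activation=activation)(x)
    outputs = tf.keras.layers.Dense(2)(x)  # pitch and yaw (assumed target encoding)
    return tf.keras.Model(inputs, outputs)

model = build_gaze_cnn()
# AdamW is available in tf.keras.optimizers from TensorFlow 2.11 onward;
# SGD is another optimizer type evaluated in the paper.
model.compile(
    optimizer=tf.keras.optimizers.AdamW(learning_rate=1e-3, weight_decay=1e-4),
    loss="mse",
    metrics=["mae"],
)
model.summary()
# Training on UnityEyes-style data would then look roughly like:
# model.fit(train_ds.batch(BATCH_SIZE), validation_data=val_ds.batch(BATCH_SIZE), epochs=20)
```

In a comparison of the kind described above, the same training loop would be rerun while swapping the activation string, the optimizer object, and the batch size, keeping the rest of the pipeline fixed so that accuracy and loss differences can be attributed to those hyperparameters.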