Digital Library NAES of Ukraine

Analysis of neural network hyperparameters for predicting user gaze direction in adaptive learning systems

- Barkovska, Olesia (orcid.org/0000-0001-7496-4353), Liapin, Yaroslav (orcid.org/0009-0008-9881-733X), Ruban, Igor (orcid.org/0000-0002-4738-3286), Rosinskiy, Dmytro (orcid.org/0000-0002-0725-392X) and Tkachov, Vitalii (orcid.org/0000-0002-6524-9937) (2025) Analysis of neural network hyperparameters for predicting user gaze direction in adaptive learning systems. Information Technologies and Learning Tools, 4 (108). pp. 263-278. ISSN 2076-8184

Full text: Olesia Barkovska.pdf - Published Version (815kB)

Abstract

This study explores the application of convolutional neural networks (CNNs) for real-time gaze direction prediction within adaptive learning environments. In response to the increasing demand for personalized education, the ability to monitor learners' attention presents novel opportunities for dynamic content adaptation in learning management systems (LMS) and training management systems (TMS). The research evaluates the performance of four CNN architectures – LeNet, AlexNet, VGGNet, and ResNet – under varying hyperparameter configurations, including batch size, optimizer type, and activation function. Experiments employ the synthetic UnityEyes dataset alongside a custom test set comprising video recordings of actual user behavior. The results indicate that ResNet and VGGNet achieve the highest accuracy (up to 85%) and the lowest loss values when combined with Swish or GELU activation functions and AdamW or SGD optimizers. These findings underscore the importance of selecting appropriate architectures and hyperparameters for effective gaze-tracking applications. Furthermore, the study proposes a novel method for determining the user's area of visual attention through gaze vector analysis, implemented using TensorFlow, Keras, OpenCV, and Mediapipe. This approach enables real-time adaptation of learning content – such as prompts, interface modifications, or supplementary explanations – based on observed visual activity. In contrast to traditional static methods, the proposed solution enables dynamic personalization of the learning experience. A comparative analysis of model accuracy and training efficiency demonstrates the potential of gaze-based systems to enhance inclusive educational technologies. Future research will aim to improve system robustness against variations in lighting, head orientation, and partial facial occlusion, as well as to integrate gaze-controlled content navigation modules tailored for learners with special educational needs.
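For readers who want a concrete picture of the hyperparameter comparison the abstract describes, the sketch below configures a small Keras CNN with a Swish activation and the AdamW optimizer, two of the settings the study reports as performing best. This is a minimal illustration under stated assumptions, not the authors' implementation: the input shape, layer sizes, number of gaze classes, learning rate, and weight decay are all placeholders, and the architecture is a generic VGG-style stack rather than any of the four models evaluated in the paper.

```python
# Minimal sketch of a gaze-direction classifier in Keras, assuming TensorFlow >= 2.11
# (for tf.keras.optimizers.AdamW). All sizes below are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_GAZE_CLASSES = 9  # assumed grid of screen regions; the paper does not fix this value here


def build_gaze_cnn(input_shape=(64, 96, 1), activation="swish"):
    """Small VGG-style stack exposing the hyperparameters the study compares:
    activation function, optimizer type, and (at fit time) batch size."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, padding="same", activation=activation),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding="same", activation=activation),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation=activation),
        layers.Dense(NUM_GAZE_CLASSES, activation="softmax"),
    ])
    # AdamW (alongside SGD) is one of the optimizers the abstract reports as strongest;
    # the learning rate and weight decay here are assumed defaults, not the paper's values.
    model.compile(
        optimizer=tf.keras.optimizers.AdamW(learning_rate=1e-3, weight_decay=1e-4),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model


model = build_gaze_cnn()
model.summary()
# Batch size is one of the tuned hyperparameters; with real training data one would call:
# model.fit(train_images, train_labels, batch_size=32, epochs=20)
```

Swapping the `activation` argument to "gelu" or the optimizer to `tf.keras.optimizers.SGD` reproduces the other configurations named in the abstract; the face and eye-region preprocessing that would feed such a model (via OpenCV and Mediapipe) is outside the scope of this sketch.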

Item Type: Article
Keywords: gaze direction; adaptive learning; CNN; personalization; machine learning; LMS.
Subjects: Science and knowledge. Organization. Computer science. Information. Documentation. Librarianship. Institutions. Publications > 00 Prolegomena. Fundamentals of knowledge and culture. Propaedeutics > 004 Computer science and technology. Computing. Data processing > 004.01/.08 Special auxiliary subdivision for computing > 004.03 System types and characteristics
Science and knowledge. Organization. Computer science. Information. Documentation. Librarianship. Institutions. Publications > 00 Prolegomena. Fundamentals of knowledge and culture. Propaedeutics > 004 Computer science and technology. Computing. Data processing > 004.9 Application-oriented computer-based techniques > 004.93 Pattern information processing
Science and knowledge. Organization. Computer science. Information. Documentation. Librarianship. Institutions. Publications > 3 Social Sciences > 37 Education > 37.01/.09 Special auxiliary table for theory, principles, methods and organization of education > 37.04 Education in relation to the educand, pupil. Guidance
Divisions: Institute for Digitalisation of Education > Generic resource
Depositing User: Алла Почтарьова
Date Deposited: 23 Dec 2025 16:59
Last Modified: 23 Dec 2025 16:59
URI: https://lib.iitta.gov.ua/id/eprint/747849
