Paper 3
Abstract: The growing popularity of online learning brings with it inherent challenges
that must be addressed, particularly in enhancing teaching effectiveness. Artificial intelli-
gence (AI) offers potential solutions by identifying learning gaps and providing targeted
improvements. However, to ensure their reliability and effectiveness in educational con-
texts, AI models must be rigorously evaluated. This study aimed to evaluate the perfor-
mance and reliability of an AI model designed to identify the characteristics and indica-
tors of engaging teaching videos. The research employed a design-based approach, incor-
porating statistical analysis to evaluate the AI model’s accuracy by comparing its assess-
ments with expert evaluations of teaching videos. Multiple metrics were employed, in-
cluding Cohen’s Kappa, Bland–Altman analysis, the Intraclass Correlation Coefficient
(ICC), and Pearson/Spearman correlation coefficients, to compare the AI model’s results
with those of the experts. The findings indicated low agreement between the AI model’s
assessments and those of the experts. Cohen’s Kappa values were low, suggesting mini-
mal categorical agreement. Bland–Altman analysis showed moderate variability with sub-
stantial differences in results, and both Pearson and Spearman correlations revealed weak
relationships, with values close to zero. The ICC indicated moderate reliability in quanti-
Academic Editor: Will W. K. Ma
tative measurements. Overall, these results suggest that the AI model requires continuous
Received: 11 February 2025 updates to improve its accuracy and effectiveness. Future work should focus on expand-
Revised: 19 March 2025
ing the dataset and utilise continual learning methods to enhance the model’s ability to
Accepted: 20 March 2025
Published: 23 March 2025
learn from new data and improve its performance over time.
Citation: Verma, N., Getenet, S., Keywords: AI; video conferencing; online student engagement; teachers’ behaviours;
Dann, C., & Shaik, T. (2025).
teachers’ movements; design-based research
Evaluating an Artificial Intelligence
(AI) Model Designed for Education
to Identify Its Accuracy: Establishing
the Need for Continuous AI Model
Updates. Education Sciences, 15(4), 1. Introduction
403. https://doi.org/10.3390/
Over the past decade, there has been substantial growth in online education within
educsci15040403
higher education institutions. This growth is due to its flexibility, accessibility, and cost
Copyright: © 2025 by the authors.
efficiency (Castro & Tumibay, 2021; Dhawan, 2020). Further, COVID-19 has compelled
Licensee MDPI, Basel, Switzerland.
higher education institutes worldwide to transition to online learning (Xie et al., 2021).
This article is an open access article
distributed under the terms and con-
Due to this sudden change, teachers encounter notable challenges in adapting to online
ditions of the Creative Commons At- learning, with student engagement emerging as the most prominent challenge (Alenezi
tribution (CC BY) license (https://cre- et al., 2022). Studies have highlighted that fostering online student engagement is more
ativecommons.org/licenses/by/4.0/). complex than engaging students in traditional face-to-face learning (Gillett-Swan, 2017;
Hew, 2016). The potential of online learning and its trends brings forth new opportunities
but also poses various challenges (Liang & Chen, 2012).
Incorporating AI can assist in addressing these challenges by identifying and evalu-
ating discrepancies and offering suggestions for enhancing teaching effectiveness. AI
opens up new avenues for learning and teaching (Limna et al., 2022). AI technologies’
abilities to quickly analyse large datasets, recognise patterns, and make predictions sup-
port more personalised and effective learning experiences (Harry & Sayudin, 2023; Shaikh
et al., 2022; Tahiru, 2021). For instance, AI-powered systems can recommend personalised
learning paths, automate grading, and enhance educational resources (Nguyen, 2023).
However, a critical challenge lies in evaluating the accuracy of AI models, especially when
they are tasked with assessing complex human behaviours and movements, such as those
of teachers, aimed at encouraging student engagement. Despite its potential, there is still
much to learn about how accurately AI can interpret and predict the behaviours that en-
hance student engagement in online learning environments.
This study employed design-based research (DBR) to address these gaps by design-
ing an AI model to identify engagement-enhancing teacher behaviours and movements
during video conferences. During the initial phase of this DBR, the authors conducted a
systematic literature review to determine the characteristics and indicators of engaging
teaching videos (Verma et al., 2023b). In the second phase, the authors, with the assistance
of an AI expert, trained an AI model to replace the manual annotation of teaching videos
based on teachers’ behaviours and movements (Verma et al., 2023a), which expedites the
process as manual annotation was identified as time-consuming (Beaver & Mueen, 2022).
The identified characteristics and indicators were then applied to train the AI model using
deep learning as an AI methodology. The current phase focuses on evaluating the AI
model to ensure its accuracy and determine whether continuous AI model updates are
necessary. Specifically, this study seeks to address the following research questions:
“How accurately can an AI model generate a report for characteristics and indi-
cators of engaging teaching videos based on teachers’ behaviours and move-
ments?” (RQ1)
“Why is it important to continuously update the AI model designed to enhance
online learning and teaching?” (RQ2)
By addressing these questions, this research aims to contribute to the ongoing effort to
accurately and sustainably integrate AI into online learning.
2. Background
This section consists of three subsections. Section 2.1 presents the three distinct
phases of the DBR, with a special focus on the current phase. Section 2.2 explores existing
studies on evaluation methods in the field of education. Finally, Section 2.3 reviews studies that discuss evaluation methods within AI.
categorisation of these indicators into the 11 main characteristics is backed by the significant findings from the reviewed studies and research concerning online student engagement. These characteristics were organised into three overarching domains: teachers' behaviours, movements, and use of technology (Verma et al., 2023b). Appendix A.1 illustrates the main theme, characteristics, and indicators of engaging teaching videos.
Researchers have demonstrated significant interest in examining the influence of
teachers’ behaviours and movements on online student engagement (Cents-Boonstra et
al., 2021; J. Ma et al., 2015). Verma et al. (2023b) strongly believe that the characteristics
and indicators outlined in Appendix A.1 can be used as a benchmark for improving teach-
ers’ performance in online learning. Educational institutions can implement these indica-
tors and characteristics of engaging teaching videos to enhance and regulate online teach-
ing practices. Educational institutions worldwide can use this information to develop and
offer training for teachers aimed at refining their skills in creating teaching videos that
effectively boost online student engagement. However, identifying these engaging char-
acteristics and indicators within recorded lecture videos requires human participation
(Verma et al., 2023a). This manual identification and analysis process demands a signifi-
cant amount of time and resources (Beaver & Mueen, 2022). Additionally, this approach
may introduce human bias into the analysis. Therefore, in order to mitigate human bias
and maintain efficiency in identifying engaging teaching videos, the authors collaborated
with an AI expert to develop an AI model in phase 2. This tool generates a report on the
characteristics and indicators of engaging teaching videos (Verma et al., 2023a).
In the second phase, the educational experts annotated 25 recorded lecture videos.
The recorded lecture videos were presented to higher education students by lecturers
from a university in Australia. The videos encompass a range of fields, including law,
business, health, education, arts, and sciences, with an average length of 01:28:37 (Verma
et al., 2023a). There were 13 female and 12 male speakers featured in the videos, and the
authors secured ethical approval from the local university under the ethics approval num-
ber H20REA185. The manual annotation of these videos was performed individually us-
ing the Visual Geometry Group (VGG) Image Annotator (VIA) (Version 3) tool accessible
from https://www.robots.ox.ac.uk/~vgg/software/via/app/via_video_annotator.html (ac-
cessed on 11 January 2024). The manual annotation was carried out at the indicator level.
Through the manual annotation of 25 recorded lecture videos, the authors identified 7
characteristics and 15 descriptive indicators, as detailed in Table 1. Based on the outcomes
of this manual annotation, the AI expert assisted the authors during the development and
training of an AI model designed to identify the characteristics and indicators of engaging
teaching videos each time a video is processed.
Table 1. Characteristics and indicators identified in manual annotation (Verma et al., 2023a, p. 7).
Characteristic: Using Nonverbal Cues. Indicators:
• Facial expressions
• Eye contact
• Appropriate body language
The engaging characteristics and indicators identified through manual video annotation were utilised to train prototype 1. Recognising challenges like misleading metrics and class imbalance, the model was refined in prototype 2 by implementing an oversampling technique. With oversampling, the model improved further and demonstrated promising results, achieving an average precision, recall, F1-score, and balanced accuracy of 68%, 75%, 73%, and 79%, respectively, in categorising the annotated videos at the indicator level (Verma et al., 2023a).
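A minimal sketch of the oversampling idea described above (this is an illustration of random oversampling, not the authors' implementation; the sample data and label names are invented):

```python
# Random oversampling: duplicate minority-class samples at random until
# every indicator label is as frequent as the majority class.
import random

def oversample(samples, seed=42):
    """samples: list of (frame_id, label). Returns a class-balanced list."""
    rng = random.Random(seed)
    by_label = {}
    for s in samples:
        by_label.setdefault(s[1], []).append(s)
    target = max(len(v) for v in by_label.values())
    balanced = []
    for group in by_label.values():
        balanced.extend(group)
        # Draw extra copies at random until the class reaches the target size.
        balanced.extend(rng.choice(group) for _ in range(target - len(group)))
    return balanced

# Imbalanced toy data: 6 frames of one indicator, 2 of another.
data = [(1, "Eye contact")] * 6 + [(2, "Gestures")] * 2
counts = {}
for _, lbl in oversample(data):
    counts[lbl] = counts.get(lbl, 0) + 1
print(counts)
```

Balancing the classes this way prevents a classifier from scoring well simply by predicting the majority indicator, which is the "misleading metrics" problem the prototype refinement targeted.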
The developed model has the potential to support higher education institutions in
establishing moderation in lecture delivery. Moreover, it can significantly influence teach-
ing and learning by providing teachers with reports on their technology utilisation effec-
tiveness and identifying engagement-enhancing behaviours and movements present or
lacking during their lecture delivery. To ensure the AI model’s effectiveness and accuracy
in generating reports, the current study evaluates its performance using a range of met-
rics.
incorporate into quantitative tools during the developmental stage, as Sandelowski (2000)
suggested.
Chiu (2021) applied questionnaires in their study and adopted a quantitative analysis
method to evaluate the model they provided, where they leveraged digital tools to fulfil
the requirements of competence, relatedness, and autonomy, leading to active student en-
gagement in online learning. A questionnaire serves as a methodical approach for gather-
ing primary quantitative data in the literature. It typically consists of a sequence of written
inquiries to which respondents are required to provide responses (Bell, 1999).
Lee et al. (2019) incorporated expert opinions and conducted reliability and validity
analyses to ensure the accuracy and consistency of the model they proposed to enhance
student engagement in e-learning environments. Expert opinion refers to a judgment by
an individual with superior knowledge in a specific domain. It encompasses two key com-
ponents: expertise and domain specificity (Pingenot & Shanteau, 2009).
3. Methods
The authors utilised a DBR approach to develop an AI model that generates reports
on teachers’ behaviours and movements whenever it processes a recorded lecture video.
The DBR methodology has gained recognition in educational research, with many re-
searchers highlighting its ability to support the development of practical research pro-
cesses (Tinoca et al., 2022). Following the principles of the DBR methodology, this study
has unfolded in three distinct phases. The phases of the DBR process are summarised in
Figure 1.
Phase 1, systematic literature review: This phase involves a systematic review of the
existing literature to identify the characteristics and indicators of engaging teaching vid-
eos. By analysing previous research, a foundational understanding of what constitutes
effective teacher behaviours and movements in online teaching environments is estab-
lished. In this study, the authors identified 47 indicators and 11 characteristics categorised
into three main themes (see Appendix A). These identified indicators then guided the de-
velopment of the AI model in subsequent phases.
Phase 2, designing an AI model, involves video annotation to create an AI model
capable of analysing the characteristics and indicators identified in Phase I, to recognise
and evaluate teachers’ engagement-enhancing behaviours and movements in recorded
lecture videos using Zoom. The model was designed through two prototypes.
AI process
The authors developed a deep learning model to learn a teacher’s movements in a
recording with the support of an AI expert. This is achieved by recording the temporal
coordinates extracted from the tool’s manual video annotation. Temporal coordinates are
markers in the video timeline that help identify specific points in time. Selected lecture
videos were split based on these coordinates, and we transformed them into a stack of
image frames. The pre-processed frames were then labelled with corresponding teaching
indicators, and we prepared the data model for training. Next, the data were split into two
sets—training and testing—for model training and evaluation. An AI expert fed the train-
ing set to the convolutional neural network (CNN) model to learn the actions in the image
frames and their corresponding labels. Finally, the test set was used to evaluate the per-
formance of the CNN model.
Data pre-processing
Educ. Sci. 2025, 15, 403 7 of 22
During the data pre-processing step, the AI expert captured the temporal coordinates
provided by the video annotation tool. For example, suppose a lecture recording dis-
played the teaching indicator “Clear and concise explanation of information” at the tem-
poral coordinates (3051.315, 3053.256). In that case, the recorded lecture was divided into video segments that highlight and extract the teaching indicator. Each video segment was then split into image frames, and each frame was annotated with the "Clear and concise explanation of information" teaching indicator. These annotated image frames are repre-
sented as 2D matrices and serve as inputs for the convolution layer of the deep learning
model, as described in the subsequent subsection.
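The mapping from temporal coordinates to labelled frames can be sketched as follows (an assumed workflow, not the authors' code; the frame rate of 25 fps is an assumption, and the coordinate pair is the example quoted above):

```python
# Map a temporal-coordinate pair (seconds) from the annotation tool to
# frame indices, then pair every frame with its teaching indicator label.

def segment_to_frames(start_s, end_s, fps=25.0):
    """Return the inclusive range of frame indices covering the segment."""
    first = int(start_s * fps)
    last = int(end_s * fps)
    return first, last

def label_frames(segment, label, fps=25.0):
    """Pair each frame index in the segment with its teaching indicator."""
    first, last = segment_to_frames(*segment, fps=fps)
    return [(i, label) for i in range(first, last + 1)]

# The coordinates below are the example pair quoted in the text.
pairs = label_frames((3051.315, 3053.256),
                     "Clear and concise explanation of information")
print(len(pairs), pairs[0])
```

Each (frame index, label) pair then corresponds to one annotated image frame fed to the convolution layer.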
Deep learning model
The AI expert developed the CNN model as a deep learning approach for classifying
two-dimensional (2D) data images. The CNN model offers the advantage of reducing the
high dimensionality of images while preserving their information. Figure A2 illustrates
the learning process of the CNN model. First, the input image frames, pre-processed in
the previous step, are passed to a two-dimensional (2D) convolution layer, which uses a
set of filters to divide the image frame into smaller sub-images and analyse them individ-
ually. The convolution layer’s output is then passed to the pooling layer, which estimates
the maximum value over each feature region and creates a down-sampled feature map. The pooled features are then flattened into a one-dimensional feature vector and processed in the output layer of the CNN model. The output layer provides a probability for each label, and a threshold value is applied to these probabilities to classify the features into a label.
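The layer sequence described above can be illustrated with a didactic pure-Python sketch (not the authors' CNN: the toy frame, the filter values, and the label probabilities are all invented, and a real CNN learns its filter weights during training rather than using fixed ones):

```python
# Convolution -> max pooling -> flatten -> thresholded output, in miniature.

def conv2d(image, kernel):
    """Valid 2D convolution (cross-correlation) of two small matrices."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling that down-samples the feature map."""
    out = []
    for i in range(0, len(fmap) - size + 1, size):
        row = []
        for j in range(0, len(fmap[0]) - size + 1, size):
            row.append(max(fmap[i + a][j + b]
                           for a in range(size) for b in range(size)))
        out.append(row)
    return out

def flatten(fmap):
    """Flatten the pooled feature map into a single feature vector."""
    return [v for row in fmap for v in row]

def classify(label_probs, threshold=0.5):
    """Apply the output-layer threshold to per-label probabilities."""
    return [lbl for lbl, p in label_probs.items() if p >= threshold]

# Toy 4x4 'frame' with a vertical edge, and a 2x2 edge-like filter.
frame = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[1, -1],
          [1, -1]]
features = flatten(max_pool(conv2d(frame, kernel)))
print(features)
print(classify({"Eye contact": 0.81, "Gestures": 0.32}))
```

In the real model, the flattened vector feeds a dense output layer; here the final step simply shows the thresholding of per-label probabilities.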
As shown in Figure 1, the present study focuses on Phase 3 of this DBR, in which the authors evaluated the AI model to ensure its accuracy and determine
whether continuous updates are required. The authors have used multiple statistical
methods to ensure the model’s accuracy. As part of the evaluation process, the model
processed two recorded lecture videos and then generated results, identifying indicators
of engaging teaching videos. Meanwhile, human experts who are well-versed in the do-
main independently analysed the same set of videos and provided their findings. The AI
model was evaluated using multiple statistical methods to identify the statistical agree-
ment and consistency between the findings of an AI model and two human experts in
evaluating specific segments of video data.
3.2.2. AI Reports
The AI model employed a deep learning model known as a convolutional neural
network (CNN) to process the same set of recorded lecture videos. Its main goal was to
identify the teachers’ engagement-enhancing behaviours and movements based on the
characteristics and indicators it had been trained with, similar to what the human experts
utilised for manual annotation. By examining visual cues and patterns, the model gener-
ated detailed reports highlighting the teachers’ behaviours and movements that enhance
student engagement.
4. Results
Tables 2 and 3 outline the analyses conducted on each video, supporting the comparative evaluations undertaken by both human experts and the AI model. Table 4 presents the statis-
tical agreement and consistency analysis between the AI model and experts evaluating
video 1 and video 2 data. The combined analysis results are discussed in detail, pointing
out the findings for each statistical method used.
Table 4. Statistical agreement and consistency analysis between the AI tool and experts.
−0.02, reflecting a weak negative linear relationship. Similarly, the Spearman correlation
coefficients showed a weak positive rank-order correlation of 0.09 with Expert 1 and a
weak negative rank-order correlation of −0.10 with Expert 2. These results suggest that the
AI model’s findings have a minimal linear or monotonic relationship with the expert as-
sessments.
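For readers who wish to reproduce this style of analysis, the agreement statistics used here can be computed as in the following pure-Python sketch (not the authors' code: the two score series are invented for illustration, and the ICC computation is omitted for brevity):

```python
# Agreement and consistency metrics between two raters' scores.

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(x):
    """Average ranks (1-based); tied values share the mean rank."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and x[order[j + 1]] == x[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation = Pearson applied to the ranks."""
    return pearson(ranks(x), ranks(y))

def cohen_kappa(a, b):
    """Cohen's kappa for two categorical label sequences."""
    n = len(a)
    labels = sorted(set(a) | set(b))
    po = sum(1 for u, v in zip(a, b) if u == v) / n
    pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (po - pe) / (1 - pe)

def bland_altman(x, y):
    """Mean difference (bias) and 95% limits of agreement."""
    d = [a - b for a, b in zip(x, y)]
    bias = sum(d) / len(d)
    sd = (sum((v - bias) ** 2 for v in d) / (len(d) - 1)) ** 0.5
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical per-segment counts of an indicator from the AI and an expert.
ai     = [3, 1, 4, 2, 5, 0, 2, 3]
expert = [2, 2, 1, 3, 4, 1, 0, 3]
print(round(pearson(ai, expert), 3), round(spearman(ai, expert), 3))
print(round(cohen_kappa(["P", "A", "P", "P", "A"],
                        ["P", "P", "P", "A", "A"]), 3))
bias, (lo_lim, hi_lim) = bland_altman(ai, expert)
print(round(bias, 3), round(lo_lim, 3), round(hi_lim, 3))
```

Libraries such as scipy or scikit-learn provide equivalent (and more robust) implementations; the point here is only to make the metric definitions concrete.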
The statistical analyses reveal that the AI model’s assessments exhibit slight to mod-
erate agreement and consistency with those of the human experts. While there is some
level of alignment, the relatively low agreement metrics indicate that there is significant
room for improvement in the AI model’s performance. Enhancing the AI model, perhaps
through additional training with a more diverse dataset or by refining its algorithms,
could potentially increase its reliability and consistency with expert evaluations. This
would be crucial for ensuring the AI tool’s effectiveness and accuracy in real-world appli-
cations.
5. Discussion
Researchers (e.g., Apicella et al., 2022; Giang et al., 2022; Shekhar et al., 2018) have
developed various evaluation methods such as interviews, case studies, mixed-methods
approaches, and questionnaires to validate instruments and ensure their effectiveness in
education. However, existing research in education lacks evaluation methods specifically
designed for measuring online student engagement using AI models (Heeg & Avraami-
dou, 2023; Huang et al., 2023). Therefore, the authors employed multiple statistical meth-
ods to measure the developed AI model’s accuracy and identify whether it requires con-
tinuous model updates.
revealed weak relationships: 0.09 with Expert 1 and −0.02 with Expert 2 for Pearson, and
0.09 and −0.10 for Spearman, respectively. These findings highlight significant room for
improvement in the AI model’s performance, suggesting that a further update is needed
to enhance its accuracy and consistency with expert evaluations.
In relation to RQ2 ("Why is it important to continuously update the AI model designed to enhance online learning and teaching?"): the evaluation findings indicate only a
slight to moderate alignment of the AI model’s performance outcome with the experts’
analysis results, emphasising the need for further improvement through continuous
model updates. Apart from the findings of this study, various factors support the im-
portance of continuously updating AI models. AI models are trained and rely on historical
data, which may become outdated as the data environment evolves. Such changes can
significantly impact the AI model’s performance, making regular updates necessary to
keep the model’s performance from declining (Li et al., 2023). Roshanaei et al. (2024) de-
scribe regular updates and patches for AI models as the process of refreshing them to
address any weaknesses in their design or data handling processes. AI models need to be
regularly updated to keep up with new information (Ocaña & Opdahl, 2023). Pianykh et
al. (2020) recommend incorporating feedback from match results and adjusting algorithms as part of the continuous training and updating of AI models to improve their
predictive accuracy over time. Further, model updates can be influenced by other factors
such as the availability of new or higher-quality training data, user feedback, learning
algorithm advancements, and the need to ensure fairness in the model (X. Wang & Yin,
2023). Murtaza et al. (2022) highlight that continuously updating AI learning models with
new training data can enhance the learning experience. Therefore, keeping models up to
date ensures that AI models can continuously offer relevant, effective, and fair support in
online learning environments.
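One hypothetical way to operationalise such continuous updating is to track agreement with periodic expert spot-checks and flag the model for retraining when agreement degrades. The sketch below is illustrative only; the class name, window size, and threshold are all invented, not drawn from this study:

```python
# Flag a model for retraining when its rolling agreement with expert
# spot-checks falls below an acceptable floor.
from collections import deque

class UpdateMonitor:
    def __init__(self, window=20, floor=0.6):
        self.scores = deque(maxlen=window)   # recent agreement scores
        self.floor = floor                   # minimum acceptable rolling mean

    def record(self, agreement):
        """Log one expert-vs-model agreement score in [0, 1]."""
        self.scores.append(agreement)

    def needs_update(self):
        """True once the rolling mean falls below the floor."""
        if not self.scores:
            return False
        return sum(self.scores) / len(self.scores) < self.floor

monitor = UpdateMonitor(window=4, floor=0.6)
for score in [0.8, 0.5, 0.4, 0.4]:   # agreement drifting downwards
    monitor.record(score)
print(monitor.needs_update())
```

A trigger of this kind would let an institution schedule retraining on newly annotated videos only when the model's agreement with experts actually declines, rather than on a fixed calendar.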
5.2. Implications
This study holds significant implications for the use of AI models in education.
Firstly, this three-phase research project provides the characteristics and indicators of en-
gaging teaching videos that can improve online student engagement. These characteristics
and indicators can help teachers and educational institutions enhance their pedagogical
approaches.
Secondly, this study provides a procedure to train AI models for education. Further,
by creating an AI model in phase 2, this research proves that AI can be used to create
models and tools to replace the manual identification process. This can avoid challenges
such as time consumption, cost, and potential human bias. According to De Silva et al.
(2024), one of the multifaceted benefits of AI is its ability to automate processes, leading
to increased efficiency in terms of both time and cost.
Thirdly, this study highlights the importance of model monitoring and validation.
Monitoring and validating AI systems to ensure accuracy and fairness are crucial. Al-
doseri et al. (2023) highlighted that inaccurate, biased, or irrelevant outcomes derived
from low-quality data can have adverse effects on decision-making processes grounded
in AI outputs, emphasising the importance of validation to enable AI systems to generate
dependable and valuable outcomes. Thus, this study employed various metrics to guar-
antee the reliability of the evaluation results for the developed AI model, assessing its
accuracy and identifying the importance of continuous AI model updates. This establishes
the need for a policy that requires educational institutions to regularly enhance and up-
date AI models to maintain accuracy and reliability and ensure the models remain rele-
vant.
Moreover, if the AI model accurately identifies these characteristics and indicators of engaging teaching videos, it can provide teachers with significant support in
various aspects, such as saving time, enhancing learning, and reinforcing professional de-
velopment. Regarding professional growth and continual improvement, AI-generated re-
ports are instrumental in aiding teachers in recognising both the strong points and areas
needing improvement in their lecture delivery concerning engagement. Similarly, pro-
cessing engaging recorded lecture videos using the AI model provides teachers with val-
uable insights into what resonates most effectively with their students. This empowers
them to make well-informed decisions for future learning experiences, ultimately result-
ing in improved teaching and learning outcomes. Further, this research also provides a
manual annotation procedure that can assist AI engineers in developing similar AI mod-
els.
7. Conclusions
As detailed in the explanation of findings, the AI model evaluation involved various
statistical methods used to perform a statistical agreement and consistency analysis, com-
paring the AI model’s findings with those of human experts. The results showed relatively
low agreement between the AI model’s ability to identify the characteristics and indicators
of engaging teaching videos and the experts’ analysis. While the AI model shows poten-
tial, the results highlight significant room for improvement, suggesting further updates
are needed to improve the model’s accuracy and achieve strong to excellent alignment
with expert evaluations.
Funding: This research did not receive any specific grant from public, commercial, or not-for-profit
funding agencies.
Institutional Review Board Statement: This research obtained ethics approval from the local uni-
versity under the ethics approval number H20REA185, approval date 19 February 2021.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the
study.
Data Availability Statement: Please contact the authors for a data request.
Conflicts of Interest: The authors declared no potential conflicts of interest with respect to the re-
search, authorship, and/or publication of this article.
Abbreviations
Abbreviation Definition
AI Artificial Intelligence
CNN Convolutional Neural Network
COVID-19 Coronavirus Disease 2019
DBR Design-based Research
VIA VGG Image Annotator
Appendix A
Appendix A.1
Main theme, characteristics, and indicators of engaging teaching videos (Verma et
al., 2023a, p. 11).
Displaying Enthusiasm:
• Motivating students
• Displaying positive emotion
Using Nonverbal Cues (main theme: Teachers' Movements):
• Facial expressions
• Gestures
• Eye gazes
• Silence
• Eye contact
• Physical proximity
• Appropriate body language
Step 1: Creating a new project: Open the VIA annotation tool by clicking the link
above. Add the project name on the top left-hand side (refer to Figure 1). The project name
should be the same as the recorded lecture name.
Step 2: Adding a video file: The second step is to add a video by clicking the plus
icon (refer to Figure A1). Select the video to be annotated from the desktop or cloud stor-
age.
Step 3: Define the attributes: Once the video is added, define the attributes by click-
ing on 1 (refer to Figure A2). In this step, two attributes have been created by typing the
attribute name in 2 (refer to Figure A2) and clicking Create. In this project, the first attrib-
ute was created to identify the engaging teaching video indicators and the second to high-
light the presenter’s location in the video.
While defining the attributes, the following information was inserted (refer to Figure
A3):
Attribute 1: The name of the first attribute is “Engaging teaching video indicators”.
The anchor is set to “Temporal Segment in Video or Audio” as researchers identified the
indicators in small video segments. The text function is selected for the input type (refer to Figure A4).
Attribute 2: The name of the second attribute is “Presenter location”. The attribute is
created to signal the presenter’s location in the video. The anchor is set to “Spatial region
in a video frame” as an area is highlighted to indicate the presenter’s location. The input
type is set as Select. In the options section, the researchers typed "presenter" to define the selectable option, as follows:
Name = Presenter location
Anchor = Spatial region in a video frame
Input Type = Select
Options = *Presenter (Note: if there are multiple presenters in a video, we can add *Presenter 1, Presenter 2)
Indicators and descriptions:
1. Encouraging students' participation in discussion: Teachers to engage students in discussions or debates to attract their interest and motivate a deeper understanding.
2. Encouraging students to share their knowledge and ideas: Teachers to ask for students' participation in active learning methods by sharing their perceptions, knowledge, and ideas.
3. Encouraging students to ask questions: Teachers to create a safe and open environment that allows students to ask their questions, to enhance the student interaction experience.
4. Encouraging collaborative learning activities: Teachers to create opportunities for students to interact with each other through group activities or collaborative work.
5. Encouraging meaningful interaction: Teachers to construct a welcoming and efficient online learning environment by fostering regular and meaningful communication with students and providing meaningful answers to students' enquiries.
6. Providing learning resources: Teachers to provide students with various learning resources, videos, etc., to increase students' active participation.
7. Giving clear instructions: Teachers to be clear and detailed in communicating the instructions, expectations, roles, and responsibilities, to show commitment to meeting the course goals.
8. Outlining the learning objectives: Teachers to clearly outline and communicate the topics and instructions to increase student engagement in online learning.
9. Using appropriate changes in tone of voice: Teachers to read and respond to perceived restlessness by using appropriate changes in tone of voice or changes in direction.
Step 6: Identifying the indicators from the video: Manual annotation is performed
after defining the attributes and indicating the presenter’s location. In this process, the
video is played, and indicators are identified in small segments (refer to arrows in Figure
A7). To start the temporal segment, click “a”, and to stop it, click “Shift” + “a”.
Step 7: Saving and Exporting the Project for Machine Learning: Once the annota-
tion is complete, save the project by clicking on 1 and selecting the project’s location. Sim-
ilarly, click on 2 to export the project (refer to Figure A8).
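Once exported, the project JSON can be parsed programmatically to recover the temporal segments. The sketch below is a hedged illustration: the key names ("metadata", "z", "av") follow the VIA 3 export schema as commonly documented, but the exact layout may differ between VIA versions, so verify against your own export; the sample record is invented around the example coordinates used earlier in the paper:

```python
# Extract (start, end, value) temporal annotations from a VIA 3 project export.
import json

sample_export = json.loads("""
{
  "metadata": {
    "1_abc123": {
      "vid": "1",
      "z": [3051.315, 3053.256],
      "xy": [],
      "av": {"1": "Clear and concise explanation of information"}
    }
  }
}
""")

def temporal_segments(project):
    """Return (start_s, end_s, value) for every temporal annotation."""
    out = []
    for entry in project["metadata"].values():
        if len(entry.get("z", [])) == 2:   # temporal segments carry 2 coords
            start, end = entry["z"]
            value = next(iter(entry["av"].values()), "")
            out.append((start, end, value))
    return out

print(temporal_segments(sample_export))
```

Parsing the export this way yields exactly the temporal-coordinate pairs that the data pre-processing step consumes.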
References
Aldoseri, A., Al-Khalifa, K. N., & Hamouda, A. M. (2023). Re-thinking data strategy and integration for Artificial Intelligence: Con-
cepts, opportunities, and challenges. Applied Sciences, 13(12), 7082. https://doi.org/10.3390/app13127082.
Alenezi, E., Alfadley, A. A., Alenezi, D. F., & Alenezi, Y. H. (2022). The sudden shift to distance learning: Challenges facing teachers.
Journal of Education and Learning, 11(3), 14. https://doi.org/10.5539/jel.v11n3p14.
Apicella, A., Arpaïa, P., Frosolone, M., Improta, G., Moccaldi, N., & Pollastro, A. (2022). EEG-based measurement system for moni-
toring student engagement in learning 4.0. Scientific Reports, 12(1), 5857. https://doi.org/10.1038/s41598-022-09578-y.
Ashwin, T. S., & Guddeti, R. M. R. (2019). Automatic detection of students’ affective states in classroom environment using hybrid
convolutional neural networks. Education and Information Technologies, 25(2), 1387–1415. https://doi.org/10.1007/s10639-019-
10004-6.
Beaver, I., & Mueen, A. (2022). On the care and feeding of virtual assistants: Automating conversation review with AI. AI Magazine,
42(4), 29–42. https://doi.org/10.1609/aaai.12024.
Behera, A., Matthew, P., Keidel, A., Vangorp, P., Fang, H., & Canning, S. (2020). Associating facial expressions and upper-body
gestures with learning tasks for enhancing intelligent tutoring systems. International Journal of Artificial Intelligence in Education,
30(2), 236–270. https://doi.org/10.1007/s40593-020-00195-2.
Bell, J. (1999). Doing your research project: A guide for first-time researchers in education and social science (3rd ed.). Open University Press.
Castro, M. D. B., & Tumibay, G. M. (2021). A literature review: Efficacy of online learning courses for higher education institution
using meta-analysis. Education and Information Technologies, 26, 1367–1385. https://doi.org/10.1007/s10639-019-10027-z.
Cents-Boonstra, M., Lichtwarck-Aschoff, A., Lara, M. M., & Denessen, E. (2021). Patterns of motivating teaching behaviour and student engagement: A microanalytic approach. European Journal of Psychology of Education, 37, 227–255.
https://doi.org/10.1007/s10212-021-00543-3.
Chiu, T. K. F. (2021). Applying the self-determination theory (SDT) to explain student engagement in online learning during the
COVID-19 pandemic. Journal of Research on Technology in Education, 54(Suppl. 1), S14–S30.
https://doi.org/10.1080/15391523.2021.1891998.
De Silva, D., Kaynak, O., El-Ayoubi, M., Mills, N., Alahakoon, D., & Manic, M. (2024). Opportunities and challenges of generative artificial intelligence: Research, education, industry engagement, and social impact. IEEE Industrial Electronics Magazine, 2–17.
https://doi.org/10.1109/mie.2024.3382962.
Dhawan, S. (2020). Online learning: A panacea in the time of COVID-19 crisis. Journal of Educational Technology Systems, 49(1), 5–22.
https://doi.org/10.1177/0047239520934018.
Giang, T. T. T., Andre, J., & Lan, H. H. (2022). Student engagement: Validating a model to unify in-class and out-of-class contexts.
Journal of Education and Learning, 8(4), 1–14. https://doi.org/10.1177/21582440221140334.
Gillett-Swan, J. (2017). The challenges of online learning: Supporting and engaging the isolated learner. Journal of Learning Design,
10(1), 20–30. https://doi.org/10.5204/jld.v9i3.293.
Harry, A., & Sayudin, S. (2023). Role of AI in education. Interdiciplinary Journal and Humanity (Injurity), 2(3), 260–268.
https://doi.org/10.58631/injurity.v2i3.52.
Heale, R., & Twycross, A. (2018). What is a case study? Evidence-Based Nursing, 21(1), 7–8. https://doi.org/10.1136/eb-2017-102845.
Heeg, D. M., & Avraamidou, L. (2023). The use of artificial intelligence in school science: A systematic literature review. Educational
Media International, 60(2), 125–150. https://doi.org/10.1080/09523987.2023.2264990.
Hew, K. F. (2016). Promoting engagement in online courses: What strategies can we learn from three highly rated MOOCs. British
Journal of Educational Technology, 47(2), 320–341. https://doi.org/10.1111/bjet.12235.
Huang, A. Y. Q., Lu, O. H. T., & Yang, S. J. H. (2023). Effects of artificial intelligence–enabled personalised recommendations on
learners’ learning engagement, motivation, and outcomes in a flipped classroom. Computers & Education, 194, 104684.
https://doi.org/10.1016/j.compedu.2022.104684.
Kvale, S. (1996). InterViews: An introduction to qualitative research interviewing. Sage Publications.
Lee, J., Song, H., & Hong, A. J. (2019). Exploring factors, and indicators for measuring students’ sustainable engagement in e-Learning. Sustainability, 11(4), 985. https://doi.org/10.3390/su11040985.
Li, J., Lin, F., Yang, L., & Huang, D. (2023). AI service placement for Multi-Access Edge Intelligence systems in 6G. IEEE Transactions
on Network Science and Engineering, 10(3), 1405–1416. https://doi.org/10.1109/tnse.2022.3228815.
Liang, R., & Chen, D. T. V. (2012). Online learning: Trends, potential and challenges. Creative Education, 3(8), 1332.
https://doi.org/10.4236/ce.2012.38195.
Limna, P., Jakwatanatham, S., Siripipattanakul, S., Kaewpuang, P., & Sriboonruang, P. (2022). A review of artificial intelligence (AI)
in education during the digital era. Advance Knowledge for Executives, 1(1), 1–9. Available online: https://ssrn.com/abstract=4160798 (accessed on 5 January 2024).
Ma, J., Han, X., Yang, J., & Cheng, J. (2015). Examining the necessary condition for engagement in an online learning environment
based on learning analytics approach: The role of the instructor. The Internet and Higher Education, 24, 26–34.
https://doi.org/10.1016/j.iheduc.2014.09.005.
Ma, X., Xu, M., Dong, Y., & Sun, Z. (2021). Automatic student engagement in online learning environment based on Neural Turing
Machine. International Journal of Information and Education Technology, 11(3), 107–111. https://doi.org/10.18178/ijiet.2021.11.3.1497.
Murtaza, M., Ahmed, Y., Shamsi, J. A., Sherwani, F., & Usman, M. (2022). AI-based personalised e-learning systems: Issues, challenges, and solutions. IEEE Access, 10, 81323–81342. https://doi.org/10.1109/access.2022.3193938.
Nguyen, N. D. (2023). Exploring the role of AI in education. London Journal of Social Sciences, 6, 84–95.
https://doi.org/10.31039/ljss.2023.6.108.
Nikoloutsopoulos, S., Koutsopoulos, I., & Titsias, M. K. (2024, May 5–8). Kullback-Leibler reservoir sampling for fairness in continual
learning. 2024 IEEE International Conference on Machine Learning for Communication and Networking (ICMLCN) (pp. 460–
466), Stockholm, Sweden. https://doi.org/10.1109/icmlcn59089.2024.10624806.
Ocaña, M. G., & Opdahl, A. L. (2023). A software reference architecture for journalistic knowledge platforms. Knowledge-Based Systems, 276, 110750. https://doi.org/10.1016/j.knosys.2023.110750.
Pianykh, O. S., Langs, G., Dewey, M., Enzmann, D. R., Herold, C. J., Schoenberg, S. O., & Brink, J. A. (2020). Continuous learning AI in radiology: Implementation principles and early applications. Radiology, 297(1), 6–14. https://doi.org/10.1148/radiol.2020200038.
Pingenot, A., & Shanteau, J. (2009). Expert opinion. In M. W. Kattan (Ed.), Encyclopedia of medical decision making. Sage Publications,
Inc. Available online: https://www.researchgate.net/publication/263471207_Expert_Opinion (accessed on 2 January 2024).
Roshanaei, M., Khan, M. R., & Sylvester, N. N. (2024). Enhancing cybersecurity through AI and ML: Strategies, challenges, and future directions. Journal of Information Security, 15(3), 320–339. https://doi.org/10.4236/jis.2024.153019.
Sandelowski, M. (2000). Combining qualitative and quantitative sampling, data collection, and analysis techniques. Research in Nursing & Health, 23(3), 246–255. https://doi.org/10.1002/1098-240X(200006)23:3<246::AID-NUR9>3.0.CO;2-H.
Shaikh, A. A., Kumar, A., Jani, K., Mitra, S., García-Tadeo, D. A., & Devarajan, A. (2022). The role of machine learning and artificial intelligence for making a digital classroom and its sustainable impact on education during COVID-19. Materials Today: Proceedings, 56, 3211–3215. https://doi.org/10.1016/j.matpr.2021.09.368.
Shekhar, P., Prince, M. J., Finelli, C. J., DeMonbrun, M., & Waters, C. (2018). Integrating quantitative and qualitative research methods
to examine student resistance to active learning. European Journal of Engineering Education, 44(1–2), 6–18.
https://doi.org/10.1080/03043797.2018.1438988.
Tahiru, F. (2021). AI in education. Journal of Cases on Information Technology, 23(1), 1–20. https://doi.org/10.4018/jcit.2021010101.
Tinoca, L., Piedade, J., Santos, S., Pedro, A., & Gomes, S. (2022). Design-based research in the educational field: A systematic literature review. Education Sciences, 12(6), 410. https://doi.org/10.3390/educsci12060410.
Turner, D. J. (2010). Qualitative interview design: A practical guide for novice investigators. The Qualitative Report, 15(3), 754–760.
https://doi.org/10.46743/2160-3715/2010.1178.
Verma, N., Getenet, S., Dann, C., & Shaik, T. (2023a). Characteristics of engaging teaching videos in higher education: A systematic literature review of teachers’ behaviours and movements in video conferencing. Research and Practice in Technology Enhanced Learning, 18, 040. https://doi.org/10.58459/rptel.2023.18040.
Verma, N., Getenet, S., Dann, C., & Shaik, T. (2023b). Designing an artificial intelligence tool to understand student engagement based on teacher’s behaviours and movements in video conferencing. Computers & Education: Artificial Intelligence, 5, 100187. https://doi.org/10.1016/j.caeai.2023.100187.
Wang, C., Yang, Z., Li, Z. S., Damian, D., & Lo, D. (2024). Quality assurance for artificial intelligence: A study of industrial concerns, challenges and best practices. arXiv, arXiv:2402.16391. https://doi.org/10.48550/arxiv.2402.16391.
Wang, X., & Yin, M. (2023, April 23–28). Watch out for updates: Understanding the effects of model explanation updates in AI-assisted decision making. 2023 CHI Conference on Human Factors in Computing Systems (pp. 1–19), Hamburg, Germany.
https://doi.org/10.1145/3544548.3581366.
Weng, X., Ng, O.-L., & Chiu, T. K. F. (2023). Competency development of pre-service teachers during video-based learning: A systematic literature review and meta-analysis. Computers & Education, 199, 104790. https://doi.org/10.1016/j.compedu.2023.104790.
Xie, J., A, G., Rice, M. F., & Griswold, D. E. (2021). Instructional designers’ shifting thinking about supporting teaching during and
post-COVID-19. Distance Education, 42, 1–21. https://doi.org/10.1080/01587919.2021.1956305.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.