Beyond ChatGPT: Evaluating the Pedagogical Effectiveness of Large Language Models in Technology-Enhanced Learning Environments

Authors

  • Rian Setiawan, Universitas Pendidikan Ganesha, Bali, Indonesia
  • Ni Komang Candrawati, Universitas Pendidikan Ganesha, Singaraja, Indonesia
  • Anindya Aishwarya, Universitas Pendidikan Ganesha, Singaraja, Indonesia

DOI:

https://doi.org/10.63876/jets.v1i2.45

Keywords:

Large Language Models, Technology-Enhanced Learning, Pedagogical Effectiveness, AI in Education, ChatGPT

Abstract

The rapid adoption of Large Language Models (LLMs), particularly ChatGPT, has transformed technology-enhanced learning environments by enabling personalized, interactive, and scalable educational support. However, limited empirical evidence exists regarding their pedagogical effectiveness beyond surface-level engagement. This study investigates the instructional value of LLMs in fostering student learning outcomes, critical thinking, and self-regulated learning across diverse educational contexts. A mixed-methods approach was employed, combining quantitative analysis of student performance metrics with qualitative insights from learner and instructor feedback. The experimental design compared LLM-assisted learning with traditional digital learning tools across multiple cohorts in higher education settings. Results indicate that LLM integration significantly improves conceptual understanding and learner engagement, particularly in formative learning activities and feedback-driven tasks. Nonetheless, the findings also reveal challenges related to over-reliance, reduced cognitive effort, and concerns about content accuracy and academic integrity. The study further identifies key pedagogical factors influencing effectiveness, including prompt design, instructional scaffolding, and educator mediation. This research contributes to the emerging discourse on AI in education by providing a comprehensive evaluation framework for LLM-based learning systems. It highlights the need for balanced human-AI collaboration to maximize educational benefits while mitigating risks. The findings offer practical implications for educators, policymakers, and system designers aiming to integrate LLMs into sustainable and pedagogically sound learning ecosystems.

Published

2024-05-03

How to Cite

Setiawan, R., Candrawati, N. K., & Aishwarya, A. (2024). Beyond ChatGPT: Evaluating the Pedagogical Effectiveness of Large Language Models in Technology-Enhanced Learning Environments. Journal of Educational Technology and Society, 1(2), 91–99. https://doi.org/10.63876/jets.v1i2.45

Section

Articles