AI VERSUS HUMAN GRADERS: ASSESSING THE ROLE OF LARGE LANGUAGE MODELS IN HIGHER EDUCATION

Authors

  • MAHLATSE RAGOLANE Research and Content Development, School of Excellence, Regent Business School (Honoris United Universities), Johannesburg, South Africa
  • SHAHIEM PATEL Regent Business School (Honoris United Universities), Johannesburg, South Africa
  • PRANISHA SALIKRAM School of Commerce and Management, Regent Business School (Honoris United Universities), Durban, South Africa.

DOI:

https://doi.org/10.22159/ijss.2024v13i1.52834

Keywords:

Artificial Intelligence, LLMs, ChatGPT, Higher Education, Assessment, AI Grading

Abstract

While artificial intelligence (AI) grading is seeing increasing use and adoption, traditional educational practices are being forced to adapt and function alongside AI, especially in assessment grading. Human grading, by contrast, has long been the cornerstone of educational assessment. Conventionally, educators have assessed student work against established criteria, providing feedback intended to support learning and development. While human grading offers nuanced understanding and personalized feedback, it is also subject to limitations such as grading inconsistencies, biases, and significant time demands. This paper explores the role of large language models (LLMs), such as ChatGPT-3.5 and ChatGPT-4, in grading processes in higher education and compares their effectiveness with that of traditional human grading methods. The study uses both qualitative and quantitative methodologies and extends across multiple academic programs and modules, providing a comprehensive assessment of how AI can complement or replace human graders. In Study 1, we focused on (n=195) scripts across (n=3) modules and compared GPT-3.5, GPT-4, and human graders. Manually marked scripts exhibited an average mark difference of 24%. Subsequently, (n=20) scripts were assessed using GPT-4, which yielded a more precise evaluation, with a total average difference of 4%. There were individual instances where marks were higher, but these could not simply be attributed to marker judgment. In Study 2, the results of the first study highlighted the need for a comprehensive memorandum; we therefore identified (n=4341) scripts, of which (n=3508) were used. The study found that AI remains efficient when the memorandum is well structured. It was also found that while AI excels in scalability, human graders excel at interpreting complex answers, evaluating creativity, and detecting plagiarism. In Study 3, we evaluated formative assessments using GPT-4 (Statistics n=602, Business Statistics n=859, and Logistics Management n=522). The third study demonstrated that AI marking tools can effectively manage the demands of formative assessments, particularly in modules where the questions are objective and structured, such as Statistics and Logistics Management. The initial error in Statistics 102 highlighted the importance of a well-designed memorandum. The study concludes that AI tools can effectively reduce the burden on educators but should be integrated into a hybrid model in which human markers and AI systems work in tandem to achieve fairness, accuracy, and quality in assessments. This paper contributes to ongoing debates about the future of AI in education by emphasizing the importance of a well-structured memorandum and of human discretion in achieving balanced and effective grading solutions.
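For illustration, the average mark differences reported above can be read as mean absolute differences, in percentage points, between human and AI marks on the same scripts. The following is a minimal Python sketch of that reading; the mark values and variable names are hypothetical and are not data from the study.

# Illustrative only: mark values below are hypothetical, not taken from the study.
def mean_mark_difference(human_marks, ai_marks):
    """Mean absolute difference, in percentage points, between two sets of marks."""
    diffs = [abs(h - a) for h, a in zip(human_marks, ai_marks)]
    return sum(diffs) / len(diffs)

human = [62, 71, 55, 80, 68]   # hypothetical human marks out of 100
gpt4  = [66, 65, 58, 84, 71]   # hypothetical GPT-4 marks for the same scripts
print(round(mean_mark_difference(human, gpt4), 1))  # prints 4.0, i.e. a 4-point average difference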

References

Colonna, L. (2024). Teachers informed in the loop? An analysis of automatic assessment systems under Article 22 GDPR. International Data Privacy Law, 14(1), 3-18.

Defrijin, S., Mathijs, E., Gulinck, H., & Lauwers, L. (2007). Facilitating and Evaluating Farmer Innovations Toward More Sustainable Energy and Material Flows: A Case Study in Flanders. 8th European IFSA Symposium, 6-10 July 2008, Clermont-Ferrand, France. Retrieved September 17, 2024, from https://www.researchgate.net/publication/228785130_facilitating_and_evaluating_farmer_innovations_towards_more_sustainable_energy_and_material_flows_case-study_in_flanders

Funda, V., & Piderit, R. (2024). A Review of the Application of Artificial Intelligence in South African Higher Education. 2024 Conference on Information Communications Technology and Society (ICTAS), Durban, South Africa (pp. 44-50).

GameDevNews. (2023). The Evolution of AI Writing Models: From GPT-2 to the Future. Open AI and ChatGPT News, LinkedIn. Retrieved from https://www.linkedin.com/pulse/evolution-ai-writing-models-fromgpt-2-future-open-ai-gpt-news-95zlc

Gobrecht, A., Tuma, F., Möller, M., Zöller, T., Zakhvatkin, M., Wuttig, A., Sommerfeldt, H., & Schütt, S. (2024). Beyond human subjectivity and error: A novel AI grading system.

Huriye, A. Z. (2023). The ethics of artificial intelligence: Examining the ethical considerations surrounding the development and use of AI. American Journal of Technology, 2(1), 37-44.

Kamalov, F., Santandreu Calonge, D., & Gurrib, I. (2023). New era of artificial intelligence in education: Towards a sustainable multifaceted revolution. Sustainability, 15(16), 12451.

Kharbach, M. (2024). A Timeline of The Evolution of ChatGPT. Retrieved from https://www.educatorstechnology.com/2024/06/the-evolution-ofchatgpt.html

Khurana, D., Koli, A., Khatter, K., & Singh, S. (2023). Natural language processing: State of the art, current trends and challenges. Multimedia Tools and Applications, 82, 3713-3744.

Kortemeyer, G., Nöhl, J., & Onishchuk, D. (2024). Grading assistance for a handwritten thermodynamics exam using artificial intelligence: An exploratory study.

Kurzhals, H. D. (2022). Challenges and Approaches Related to AI-Driven Grading of Open Exam Questions in Higher Education: Human in the Loop, Computer Science, Education. Retrieved from https://essay.utwente.nl/90957/1/kurzhals_ba_bms.pdf

Minaee, S., Mikolov, T., Nikzad, N., Chenaghlu, M., Socher, R., Amatriain, X., & Gao, J. (2024). Large Language Models: A Survey.

Opesemowo, O., & Adekomaya, V. (2024). Harnessing artificial intelligence for advancing sustainable development goals in South Africa's higher education system: A qualitative study. International Journal of Learning, Teaching and Educational Research, 23, 67-86.

Patel, S., & Ragolane, M. (2024). Implementing artificial intelligence in higher education institutions in South Africa: Opportunities and challenges. Technium Education and Humanities, 9, 51-65.

Ragolane, M., & Patel, S. (2024). Transforming Educ-AI-tion in South Africa: Can AI-driven grading transform the future of higher education? Journal of Education and Teaching Methods, 3(1), 26-51.

Schleicher, A. (2018). Educating learners for their future, not our past. ECNU Review of Education, 1(1), 58-75.

Stoica, E. (2022). A Student's Take on Challenges of AI-driven Grading in Higher Education. TScIT 37, July 8, 2022, Enschede, The Netherlands. Retrieved from https://essay.utwente.nl/91784/1/stoica_ba_eemcs.pdf

VSO. (2019). The Action Research Guidebook: Progress is Only Possible by Working Together. Retrieved from https://www.vsointernational.org/sites/default/files/2020-04/vso-cambodia-action-research-guidebook-english.pdf

Walter, Y. (2024). Embracing the future of artificial intelligence in the classroom: The relevance of AI literacy, prompt engineering, and critical thinking in modern education. International Journal of Educational Technology in Higher Education, 21, 15.

Walvoord, B. E., & Johnson Anderson, V. (1998). Effective Grading: A Tool for Learning and Assessment. San Francisco: Jossey-Bass.

Zuber-Skerritt, O. (Ed.). (1991). Action Research for Change and Development (1st ed.). England, UK: Routledge.

Published

01-01-2025

How to Cite

MAHLATSE RAGOLANE, SHAHIEM PATEL, & PRANISHA SALIKRAM. (2025). AI VERSUS HUMAN GRADERS: ASSESSING THE ROLE OF LARGE LANGUAGE MODELS IN HIGHER EDUCATION. Innovare Journal of Social Sciences, 13(1), 1–10. https://doi.org/10.22159/ijss.2024v13i1.52834

Issue

Section

Original Article(s)