Exploring the Use of AI-Powered Chatbots and Writing Assistants on Academic Integrity in Zambia’s Higher Learning Institutions

Thelma, Chanda Chansa and Mumbi, Memory and Sain, Zohaib Hassan and Pedzisai, Domboka Robert and Mweemba, Boyd and Sylvester, Chisebe (2025) Exploring the Use of AI-Powered Chatbots and Writing Assistants on Academic Integrity in Zambia’s Higher Learning Institutions. Asian Journal of Research in Computer Science, 18 (4). pp. 285-300. ISSN 2581-8260

Full text not available from this repository.

Abstract

AI-powered chatbots and writing assistants are transforming academic practices in Zambia’s higher learning institutions, raising both opportunities and challenges concerning academic integrity. These tools enhance student learning by providing instant feedback, improving writing quality, and assisting with research; however, they also pose ethical concerns related to plagiarism, authenticity, and critical thinking skills. The ease of access to AI-generated content increases the risk of academic dishonesty, as students may misuse these technologies to complete assignments without genuine effort. Hence, this study was conducted to assess the effect of AI-powered chatbots and writing assistants on academic integrity. The study adopted a mixed-methods research design, combining quantitative and qualitative approaches. It was conducted in three higher learning institutions within Lusaka District of Zambia and sampled 345 respondents. Data were collected by distributing questionnaires to the selected participants and conducting individual interviews; document analysis was also used as a secondary data-collection tool. Quantitative data were analyzed using SPSS and Microsoft Excel, while qualitative data were analyzed thematically. The findings revealed that while these AI tools enhance students' access to instant academic support, they also contribute to increased risks of academic dishonesty. Additionally, the effectiveness of institutional policies in mitigating AI-related academic misconduct remains limited due to inadequate enforcement mechanisms and a lack of awareness among students and educators. The study also noted limitations, including limited awareness and usage, self-reported data bias, ethical and privacy concerns, rapidly evolving AI capabilities, and a lack of localized AI models.
Therefore, the study recommended that universities strengthen academic integrity policies by implementing standardized AI-detection tools, conducting regular faculty and student training, and fostering a culture of academic honesty through awareness campaigns and stricter enforcement mechanisms.

Item Type: Article
Subjects: Open Library Press > Computer Science
Depositing User: Unnamed user with email support@openlibrarypress.com
Date Deposited: 03 Apr 2025 09:28
Last Modified: 03 Apr 2025 09:28
URI: http://data.ms4sub.com/id/eprint/2173
