Abstract
With recent advances in technology, chatbots have become more than tools for conducting time-consuming, labor-intensive routine tasks; they now interact and produce content much like humans. Despite the efficacy and productivity of chatbots such as ChatGPT-4 and Bard, they carry significant ethical implications for the academic community, particularly students and researchers. The current study experiments with ChatGPT-4 and Bard by having them produce scientific articles to specific criteria, then applies topic modeling to assess how closely the content of the articles matches the required topic, and verifies the references, plagiarism, and accuracy of the chatbot-generated articles. The results indicate that the content is relevant to the topic and that ChatGPT-4 is more accurate than Bard: ChatGPT-4 achieved 96% accuracy, and most of its bibliographies are accurate, whereas Bard achieved 52%, and most of its bibliographies are incorrect or unavailable. Relying on a chatbot to produce scientific content is unethical because, despite its accuracy, it is not as accurate as a human and the content it generates requires thorough review. Furthermore, a chatbot alters its responses depending on who is questioning it, regardless of whether its answers are correct, as it is unable to defend its knowledge.
Al-Harbi, N. K., & Al-Shargabi, A. A. (2023). An Exploratory Analysis of using Chatbots in Academia. International Journal of Advanced Computer Science and Applications, 14(12), 122–132. https://doi.org/10.14569/IJACSA.2023.0141212