Abstract
Usage of large language models and chatbots will almost surely continue to grow, since they are so easy to use and so (incredibly) credible. I would be more comfortable with this reality if we encouraged more evaluations with humans in the loop, to arrive at a better characterization of when the machine can be trusted and when humans should intervene. This article describes a homework assignment in which I asked my students to use tools such as chatbots and web search to write a number of essays. Even after considerable discussion in class on hallucinations, many of the essays were full of misinformation that should have been fact-checked. Apparently, it is easier to believe ChatGPT than to be skeptical; fact-checking and web search are too much trouble.
Citation
Church, K. (2024, March 16). Emerging trends: When can users trust GPT, and when should they intervene? Natural Language Engineering. Cambridge University Press. https://doi.org/10.1017/S1351324923000578