Abstract
The emergence of pre-trained Large Language Models (LLMs) has opened up new possibilities for people to access language resources at their fingertips. Previously, patterns of language were difficult to derive from large-scale documents, which impeded people from processing and extracting the information contained within them. Observations of common users' practices and experiences suggest that LLMs may possess certain capacities for processing and working with not just human language, but also the associated knowledge. However, an LLM is, by construction, essentially language-centric: it is no more than a probabilistic model that represents and summarizes language patterns from large corpora, without deliberately incorporating other types of data or information (e.g., user behaviors, domain concepts) into its construction. Consequently, when using LLMs in the real world, it is not uncommon to appropriate and re-purpose an LLM for tasks it was not necessarily built for. In this poster, we present an exploratory study aimed at understanding how people interact with an LLM, ChatGPT, to obtain instructions for a problem-solving task: installing Python on a remote computer. The results reveal that users' literacy and expectations concerning LLMs can influence how they perceive and utilize them. Surprisingly, low-literacy participants with a limited understanding of LLMs appeared to benefit more, yielding implications for the design of user-centric AI/ML tools.
Citation
Zhu, Q., & Wang, H. C. (2023). Leveraging Large Language Model as Support for Human Problem Solving: An Exploration of Its Appropriation and Impact. In Proceedings of the ACM Conference on Computer Supported Cooperative Work, CSCW (pp. 333–337). Association for Computing Machinery. https://doi.org/10.1145/3584931.3606965