Towards Simple Hybrid Language Model Reasoning Through Human Explanations Enhanced Prompts


Abstract

Large pre-trained Language Models (LLMs) have reached state-of-the-art performance in various Natural Language Processing (NLP) tasks. However, an issue remains: these models may confidently output incorrect answers, flawed reasoning, or even entirely hallucinated answers. Truly integrating human feedback and corrections is difficult, as the traditional approach of fine-tuning is challenging and compute-intensive for LLMs, and the weights of the best-performing models are often not publicly available. However, the ability to interact with these models in natural language opens up new possibilities for Hybrid AI. In this work, we present a very early exploration of Human-Explanations-Enhanced Prompting (HEEP), an approach that aims to help LLMs learn from human annotators' input by storing corrected reasonings and retrieving them on the fly to integrate into the prompts given to the model. Our preliminary results support the idea that HEEP could represent an initial step towards a cheap alternative to fine-tuning and towards developing human-in-the-loop classification methods at scale, encouraging more efficient interactions between human annotators and LLMs.
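To make the described mechanism concrete, the sketch below illustrates the general idea of storing human-corrected reasonings and retrieving them at inference time to build a prompt. It is a minimal illustration only: the class and function names (Explanation, ExplanationStore, build_heep_prompt), the word-overlap retrieval, and the prompt template are assumptions for exposition and are not taken from the paper.

```python
# Illustrative sketch of the HEEP idea: store human-corrected reasonings,
# retrieve the most relevant ones for a new input, and prepend them to the
# prompt sent to the model. All names and the retrieval heuristic are
# hypothetical; the authors' actual implementation may differ.

from dataclasses import dataclass, field


@dataclass
class Explanation:
    question: str   # the input the annotator corrected
    reasoning: str  # the human-corrected reasoning
    answer: str     # the corrected answer


@dataclass
class ExplanationStore:
    explanations: list[Explanation] = field(default_factory=list)

    def add(self, explanation: Explanation) -> None:
        self.explanations.append(explanation)

    def retrieve(self, query: str, k: int = 3) -> list[Explanation]:
        # Toy relevance score: word overlap between the query and stored
        # questions. A real system would likely use a stronger retriever.
        query_words = set(query.lower().split())

        def score(e: Explanation) -> int:
            return len(query_words & set(e.question.lower().split()))

        return sorted(self.explanations, key=score, reverse=True)[:k]


def build_heep_prompt(store: ExplanationStore, question: str) -> str:
    """Assemble a prompt that includes retrieved human explanations."""
    examples = "\n\n".join(
        f"Question: {e.question}\nReasoning: {e.reasoning}\nAnswer: {e.answer}"
        for e in store.retrieve(question)
    )
    return (
        "Use the corrected reasonings below as guidance.\n\n"
        f"{examples}\n\nQuestion: {question}\nReasoning:"
    )
```

In this reading, the stored explanations act as retrieved few-shot demonstrations, so the model's behaviour can be steered by annotator corrections without touching its weights.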

Citation (APA)

Clavié, B., Soulié, G., Naylor, F., & Brightwell, T. (2023). Towards Simple Hybrid Language Model Reasoning Through Human Explanations Enhanced Prompts. In Frontiers in Artificial Intelligence and Applications (Vol. 368, pp. 379–381). IOS Press BV. https://doi.org/10.3233/FAIA230103
