Customizing Language Model Responses with Contrastive In-Context Learning


Abstract

Large language models (LLMs) are becoming increasingly important for machine learning applications. However, it can be challenging to align LLMs with our intent, particularly when we want generated content that is preferable to alternatives, or when we want the LLM to respond in a style or tone that is hard to describe. To address this challenge, we propose an approach that uses contrastive examples to better describe our intent. This involves providing positive examples that illustrate the true intent, along with negative examples that show what characteristics we want LLMs to avoid. The negative examples can be retrieved from labeled data, written by a human, or generated by the LLM itself. Before generating an answer, we ask the model to analyze the examples to teach itself what to avoid. This reasoning step provides the model with the appropriate articulation of the user’s need and guides it towards generating a better answer. We tested our approach on both synthesized and real-world datasets, including StackExchange and Reddit, and found that it significantly improves performance compared to standard few-shot prompting.
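As a rough sketch of how such a contrastive prompt might be assembled (this is not the authors' implementation; the example texts, the helper name `build_contrastive_prompt`, and the exact instruction wording are invented for illustration):

```python
# Minimal sketch of contrastive in-context prompting: positive examples show
# the desired style, negative examples show what to avoid, and an explicit
# analysis step asks the model to articulate the contrast before answering.
# The resulting string would be sent to any LLM completion API.

POSITIVE = [
    "Q: How do I undo my last git commit?\n"
    "A: Run `git reset --soft HEAD~1`; it keeps your changes staged.",
]
NEGATIVE = [
    "Q: How do I undo my last git commit?\n"
    "A: Just google it, there are lots of tutorials.",
]

def build_contrastive_prompt(question: str) -> str:
    """Assemble a prompt with contrastive examples and a self-analysis step."""
    parts = ["Here are answers the user prefers:\n"]
    parts += [f"{ex}\n" for ex in POSITIVE]
    parts.append("\nHere are answers the user wants to avoid:\n")
    parts += [f"{ex}\n" for ex in NEGATIVE]
    parts.append(
        "\nFirst, briefly analyze what distinguishes the preferred answers "
        "from the undesired ones. Then answer the new question in the "
        "preferred style.\n"
    )
    parts.append(f"\nQ: {question}\nA:")
    return "".join(parts)

print(build_contrastive_prompt("How do I squash my last three commits?"))
```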

Cite (APA)

Gao, X., & Das, K. (2024). Customizing Language Model Responses with Contrastive In-Context Learning. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, pp. 18039–18046). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v38i16.29760
