FocalPO: Enhancing Preference Optimizing by Focusing on Correct Preference Rankings


Abstract

Efficient preference optimization algorithms such as Direct Preference Optimization (DPO) have become a popular approach for aligning large language models (LLMs) with human preferences. These algorithms implicitly treat the LLM as a reward model and focus on training it to correct misranked preference pairs. However, recent work (Chen et al., 2024) empirically finds that DPO training rarely improves these misranked preference pairs, despite its gradient emphasizing these cases. We introduce FocalPO, a DPO variant that instead down-weights misranked preference pairs and prioritizes enhancing the model’s understanding of pairs that it can already rank correctly. Inspired by Focal Loss used in vision tasks, FocalPO achieves this by adding a modulating factor that dynamically scales the DPO loss. Our experiments demonstrate that FocalPO surpasses DPO and its variants on popular benchmarks such as Alpaca Eval 2.0 and Arena-Hard using Mistral-Base-7B and Llama-3-Instruct-8B, with the introduced hyperparameter fixed. Additionally, we empirically reveal how FocalPO affects training on correctly and incorrectly ranked sample groups, further underscoring its effectiveness.
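The abstract describes FocalPO as the DPO loss scaled by a focal-style modulating factor that down-weights misranked pairs and emphasizes correctly ranked ones. The sketch below illustrates one way such a loss could be written in PyTorch; it is not the paper's exact formulation. The function name focalpo_loss, the choice of sigmoid(beta * margin) ** gamma as the modulating factor, and the default gamma value are illustrative assumptions.

```python
# Minimal sketch of a FocalPO-style objective (assumptions noted above).
import torch
import torch.nn.functional as F


def focalpo_loss(policy_chosen_logps: torch.Tensor,
                 policy_rejected_logps: torch.Tensor,
                 ref_chosen_logps: torch.Tensor,
                 ref_rejected_logps: torch.Tensor,
                 beta: float = 0.1,
                 gamma: float = 2.0) -> torch.Tensor:
    """DPO loss scaled by a focal-style modulating factor (illustrative)."""
    # Implicit reward margin between chosen and rejected responses, as in DPO.
    margin = (policy_chosen_logps - ref_chosen_logps) \
           - (policy_rejected_logps - ref_rejected_logps)
    logits = beta * margin

    # Probability that the model already ranks the pair correctly.
    p_correct = torch.sigmoid(logits)

    # Standard DPO term: -log sigmoid(beta * margin).
    dpo_term = -F.logsigmoid(logits)

    # Assumed modulating factor p_correct ** gamma: small for misranked pairs
    # (p_correct << 0.5), so those pairs contribute less to the loss, while
    # pairs the model already ranks correctly are emphasized.
    return (p_correct ** gamma * dpo_term).mean()
```

As a usage note, the per-sequence log-probabilities would typically be summed token log-probs of each response under the policy and the frozen reference model, exactly as in a standard DPO training loop; only the loss function changes.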

Cite

APA

Liu, T., Yu, X., Zhou, W., Gu, J., & Tresp, V. (2025). FocalPO: Enhancing Preference Optimizing by Focusing on Correct Preference Rankings. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 2, pp. 256–267). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2025.acl-short.21
