
DPO-Shift: Shifting the Distribution of Direct Preference Optimization

2025-02-11 · Code Available

Xiliang Yang, Feng Jiang, Qianen Zhang, Lei Zhao, Xiao Li


Abstract

Direct Preference Optimization (DPO) and its variants have become increasingly popular for aligning language models with human preferences. These methods aim to teach models to better distinguish between chosen (or preferred) and rejected (or dispreferred) responses. However, prior research has identified that the probability of chosen responses often decreases during training, a phenomenon known as likelihood displacement. To tackle this challenge, in this work we introduce DPO-Shift to controllably shift the distribution of the chosen probability. Then, we show that DPO-Shift exhibits a fundamental trade-off between improving the chosen probability and sacrificing the reward margin, as supported by both theoretical analysis and experimental validation. Furthermore, we demonstrate the superiority of DPO-Shift over DPO on downstream tasks such as MT-Bench and a designed win rate experiment. We believe this study shows that the likelihood displacement issue of DPO can be effectively mitigated with a simple, theoretically grounded solution. Our code is available at https://github.com/Meaquadddd/DPO-Shift.
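The abstract does not spell out the loss function. The sketch below is one plausible reading, assuming DPO-Shift modifies the standard DPO logit by scaling the rejected-response log-ratio with a factor f(λ) ∈ (0, 1], which is one concrete way to "shift the distribution of the chosen probability." The function name `dpo_shift_loss`, the parameter `f_lambda`, and the default values are illustrative assumptions, not the repository's API; see the linked code for the authors' implementation.

```python
import torch
import torch.nn.functional as F

def dpo_shift_loss(policy_chosen_logps: torch.Tensor,
                   policy_rejected_logps: torch.Tensor,
                   ref_chosen_logps: torch.Tensor,
                   ref_rejected_logps: torch.Tensor,
                   beta: float = 0.1,
                   f_lambda: float = 0.75) -> torch.Tensor:
    """DPO-style pairwise loss with a shift factor on the rejected term.

    Assumed form, not the paper's verified definition: with f_lambda = 1.0
    this reduces to standard DPO; f_lambda < 1.0 down-weights the
    rejected-response log-ratio, relaxing the reward margin in exchange
    for a higher chosen probability.
    """
    # Log-ratios of policy vs. reference model, per preference pair.
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps
    # Shifted Bradley-Terry logit: rejected term scaled by f(lambda).
    logits = beta * (chosen_logratios - f_lambda * rejected_logratios)
    return -F.logsigmoid(logits).mean()

# Toy usage: sequence-level log-probabilities for a batch of 4 preference pairs.
policy_chosen = torch.randn(4) - 1.0
policy_rejected = torch.randn(4) - 2.0
ref_chosen = policy_chosen.detach() - 0.1
ref_rejected = policy_rejected.detach() + 0.1
loss = dpo_shift_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected)
print(loss.item())
```

This illustrates the trade-off the abstract describes: shrinking the rejected term lets the model keep a higher chosen log-probability at a given loss value, at the cost of a smaller reward margin between chosen and rejected responses.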
