Is DPO Superior to PPO for LLM Alignment? A Comprehensive Study

BibTeX

@inproceedings{Xu2024IsDS,
  author    = {Shusheng Xu and Wei Fu and Jiaxuan Gao and Wenjie Ye and Weiling Liu and Zhiyu Mei and Guangju Wang and Chao Yu and Yi Wu},
  title     = {Is DPO Superior to PPO for LLM Alignment? A Comprehensive Study},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  note      = {arXiv:2404.10719}
}
GitHub: ReaLHF

HTTPS: https://github.com/openpsi-project/ReaLHF
SSH:   git@github.com:openpsi-project/ReaLHF.git
CLI:   gh repo clone openpsi-project/ReaLHF