This paper introduces Perturbation Gradient (PG) losses, a new family of differentiable surrogate losses for decision-focused learning within the predict-then-optimize framework. These losses are shown to be Lipschitz continuous and to yield asymptotically vanishing excess regret even under model misspecification, a challenging setting in which they outperform existing benchmarks.
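To make the predict-then-optimize setting concrete, the sketch below shows a finite-difference, directional-derivative style surrogate for a linear downstream problem. This is a minimal illustration only: the unit-simplex feasible set, the helper names (`solve_lp`, `value`, `pg_surrogate`), the forward-difference form, and the step size `h` are all assumptions made for this example, not the paper's exact definition of a PG loss.

```python
# Illustrative sketch (not the paper's exact construction): a finite-difference
# surrogate loss for predict-then-optimize with a linear objective.
import numpy as np

def solve_lp(cost):
    """Downstream oracle: argmin_w cost @ w over the unit simplex.
    The simplex is chosen only to keep the example self-contained."""
    w = np.zeros_like(cost)
    w[np.argmin(cost)] = 1.0  # optimum sits at a vertex
    return w

def value(cost):
    """Optimal value V(c) = min_w c @ w; piecewise linear and Lipschitz in c."""
    return cost @ solve_lp(cost)

def pg_surrogate(c_hat, c_true, h=0.1):
    """Forward-difference approximation of the directional derivative of V at the
    predicted cost c_hat, taken in the direction of the prediction error.
    As h -> 0 this recovers (c_true - c_hat) @ solve_lp(c_hat) (Danskin's theorem);
    the finite-difference form inherits the Lipschitz continuity of V."""
    return (value(c_hat + h * (c_true - c_hat)) - value(c_hat)) / h

# Tiny usage example with hypothetical 3-dimensional cost vectors.
c_true = np.array([1.0, 2.0, 3.0])
c_hat = np.array([2.5, 2.0, 1.0])  # a poor prediction
print(pg_surrogate(c_hat, c_true, h=0.1))
```

Because the surrogate only queries the optimal-value function at perturbed cost vectors, it stays well defined and Lipschitz in the prediction even though the downstream argmin is piecewise constant.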