We introduce a novel approach to the global analysis of a difficult class
of nonconvex-nonsmooth optimization problems within the important framework of
Lagrangian-based methods. This genuinely nonlinear class captures many problems
arising in disparate modern fields of application. It features complex geometries,
and qualification conditions and other regularity properties need not hold
everywhere. To address these issues we work along several research lines to
develop an original general Lagrangian methodology which can deal, all at once,
with the above obstacles. A first innovative feature of our approach is to
introduce the concept of Lagrangian sequences for a broad class of algorithms.
Central to this methodology is the idea of turning an arbitrary descent method
into a multiplier method. Secondly, we equip these methods with a
transitional regime that allows us to identify, in finitely many steps, a zone
where the step-sizes of the algorithm can be tuned for the final converging regime.
Then, despite the min-max nature of Lagrangian methods, using an original
Lyapunov method we prove that every bounded sequence generated by the resulting
monitoring schemes is globally convergent to a critical point for some
fundamental Lagrangian-based methods in the broad semialgebraic setting. To the
best of our knowledge, these results are the first of their kind.
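As a hedged illustration only (the specific model, notation, and penalty parameter below are assumptions for exposition, not taken from the abstract), a prototypical member of such a nonconvex-nonsmooth class with linear coupling, its augmented Lagrangian, and the multiplier step that turns a primal descent method into a multiplier method can be sketched as:

```latex
% Assumed prototype model: nonconvex-nonsmooth objectives f, g with linear coupling
\min_{x,z}\; f(x) + g(z) \quad \text{subject to} \quad Ax + Bz = b.

% Augmented Lagrangian with multiplier y and penalty parameter \rho > 0
L_{\rho}(x,z,y) \;=\; f(x) + g(z) + \langle y,\, Ax + Bz - b \rangle
  \;+\; \frac{\rho}{2}\,\|Ax + Bz - b\|^{2}.

% After a primal descent step on L_{\rho} in (x,z), the multiplier (dual) update reads
y^{k+1} \;=\; y^{k} + \rho\,\bigl(Ax^{k+1} + Bz^{k+1} - b\bigr).
```

Any descent method applied to the primal variables of \(L_{\rho}\), followed by this dual update, yields a Lagrangian sequence in the sense alluded to above; the concrete schemes and step-size tuning are developed in the paper itself.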