optimization-and-control
This survey paper elucidates how diverse machine learning tasks, including generative modeling and network optimization, can be framed as the evolution of probability distributions over time. It provides a unified mathematical framework by connecting optimal transport and diffusion processes, clarifying their applications and distinct properties within advanced machine learning paradigms.
Worst-case generation plays a critical role in evaluating robustness and stress-testing systems under distribution shifts, in applications ranging from machine learning models to power grids and medical prediction systems. We develop a generative modeling framework for worst-case generation under a pre-specified risk, based on min-max optimization over continuous probability distributions in the Wasserstein space. Unlike traditional discrete distributionally robust optimization (DRO) approaches, which often suffer from scalability issues, limited generalization, and costly worst-case inference, our framework exploits the Brenier theorem to characterize the least favorable (worst-case) distribution as the pushforward of a transport map from a continuous reference measure, enabling a continuous and expressive notion of risk-induced generation beyond classical discrete DRO formulations. Based on the min-max formulation, we propose a Gradient Descent Ascent (GDA)-type scheme that updates the decision model and the transport map in a single loop, establishing global convergence guarantees under mild regularity assumptions, even in the possible absence of convexity-concavity. We also propose to parameterize the transport map with a neural network that can be trained simultaneously with the GDA iterations by matching the transported training samples, thereby achieving a simulation-free approach. The efficiency of the proposed method as a risk-induced worst-case generator is validated by numerical experiments on synthetic and image data.
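As an illustration of the min-max formulation, here is a minimal PyTorch sketch of a single-loop GDA update, assuming a toy squared-error risk, a linear decision model, a small MLP transport map, and a quadratic transport penalty with weight `lam`; none of these choices are specified by the paper.

```python
import torch
import torch.nn as nn

# Toy data: reference samples X with synthetic regression targets y.
torch.manual_seed(0)
X = torch.randn(256, 2)
y = (1.5 * X[:, :1] - X[:, 1:]).detach()

model = nn.Linear(2, 1)                                            # decision model (theta)
T = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 2))   # transport map (phi)
opt_theta = torch.optim.SGD(model.parameters(), lr=1e-2)
opt_phi = torch.optim.SGD(T.parameters(), lr=1e-2)
lam = 1.0   # weight on the transport (Wasserstein) cost -- illustrative

for it in range(500):
    Xw = T(X)                                    # pushforward: worst-case samples
    risk = ((model(Xw) - y) ** 2).mean()         # risk under the transported distribution
    cost = ((Xw - X) ** 2).sum(dim=1).mean()     # quadratic transport cost to the reference
    obj = risk - lam * cost                      # min_theta max_phi objective

    opt_theta.zero_grad()
    opt_phi.zero_grad()
    obj.backward()
    for p in T.parameters():                     # single loop: flip phi's gradients
        p.grad.neg_()                            # so opt_phi performs *ascent*
    opt_theta.step()                             # descent on the decision model
    opt_phi.step()                               # ascent on the transport map
```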
We propose a new notion of the formal tangent space to the Wasserstein space $\mathcal{P}(X)$ at a given measure. Modulo an integrability condition, we say that this tangent space is made of functions over $X$ which are valued in the probability measures over the tangent bundle to $X$. This generalization of previous concepts of tangent spaces allows us to define appropriate notions of parallel transport, $\mathcal{C}^{1,\alpha}$ regularity over $\mathcal{P}(X)$, and translation of a curve over $\mathcal{P}(X)$.
Since the 1990s, considerable empirical work has been carried out to train statistical models, such as neural networks (NNs), as learned heuristics for combinatorial optimization (CO) problems. When successful, such an approach eliminates the need for experts to design heuristics per problem type. Due to their structure, many hard CO problems are amenable to treatment through reinforcement learning (RL). Indeed, we find a wealth of literature training NNs using value-based, policy gradient, or actor-critic approaches, with promising results, both in terms of empirical optimality gaps and inference runtimes. Nevertheless, there has been a paucity of theoretical work undergirding the use of RL for CO problems. To this end, we introduce a unified framework to model CO problems through Markov decision processes (MDPs) and solve them using RL techniques. We provide easy-to-test assumptions under which CO problems can be formulated as equivalent undiscounted MDPs that provide optimal solutions to the original CO problems. Moreover, we establish conditions under which value-based RL techniques converge to approximate solutions of the CO problem with a guarantee on the associated optimality gap. Our convergence analysis provides: (1) a sufficient rate of increase in batch size and projected gradient descent steps at each RL iteration; (2) the resulting optimality gap in terms of problem parameters and targeted RL accuracy; and (3) the importance of a choice of state-space embedding. Together, our analysis illuminates the success (and limitations) of the celebrated deep Q-learning algorithm in this problem context.
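To make the MDP framing concrete, here is a minimal sketch assuming a toy 0/1 knapsack instance cast as an undiscounted MDP with state (next item index, remaining capacity), solved by tabular Q-learning; the instance and hyperparameters are purely illustrative, not the paper's setup.

```python
import random
from collections import defaultdict

# Toy CO problem: 0/1 knapsack as an undiscounted MDP.
values, weights, capacity = [6, 10, 12, 7], [1, 2, 3, 2], 5
n = len(values)

def step(state, action):
    """State = (next item index, remaining capacity); action 1 = take, 0 = skip.
    An infeasible 'take' (item too heavy) acts as a skip with zero reward."""
    i, cap = state
    reward = 0
    if action == 1 and weights[i] <= cap:
        reward, cap = values[i], cap - weights[i]
    return (i + 1, cap), reward

Q = defaultdict(float)
alpha, eps = 0.1, 0.2
for episode in range(10000):
    state = (0, capacity)
    while state[0] < n:
        a = random.randrange(2) if random.random() < eps else \
            max((0, 1), key=lambda act: Q[(state, act)])
        nxt, r = step(state, a)
        # Undiscounted TD target; terminal once all items have been decided.
        target = r if nxt[0] == n else r + max(Q[(nxt, 0)], Q[(nxt, 1)])
        Q[(state, a)] += alpha * (target - Q[(state, a)])
        state = nxt

# Greedy rollout of the learned policy recovers a (near-)optimal packing.
state, picked = (0, capacity), []
while state[0] < n:
    a = max((0, 1), key=lambda act: Q[(state, act)])
    if a == 1 and weights[state[0]] <= state[1]:
        picked.append(state[0])
    state, _ = step(state, a)
print(picked)   # for this instance, items {0, 1, 3} with total value 23
```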
In this paper, we examine the convergence landscape of multi-agent learning under uncertainty. Specifically, we analyze two stochastic models of regularized learning in continuous games -- one in continuous and one in discrete time -- with the aim of characterizing the long-run behavior of the induced sequence of play. In stark contrast to deterministic, full-information models of learning (or models with a vanishing learning rate), we show that the resulting dynamics do not converge in general. In lieu of this, we ask instead which actions are played more often in the long run, and by how much. We show that, in strongly monotone games, the dynamics of regularized learning may wander away from equilibrium infinitely often, but they always return to its vicinity in finite time (which we estimate), and their long-run distribution is sharply concentrated around a neighborhood thereof. We quantify the degree of this concentration, and we show that these favorable properties may all break down if the underlying game is not strongly monotone -- underscoring in this way the limits of regularized learning in the presence of persistent randomness and uncertainty.
In this paper, we propose objective-evaluation-free (OEF) variants of the proximal Newton method for nonconvex composite optimization problems and the regularized Newton method for unconstrained optimization problems, respectively, using inexact evaluations of gradients and Hessians. Theoretical analysis demonstrates that the global/local convergence rates of the proposed algorithms are consistent with those achieved when both objective function and derivatives are evaluated exactly. Additionally, we present an OEF regularized Newton and negative curvature algorithm that uses inexact derivatives to find approximate second-order stationary points for unconstrained optimization problems. The worst-case iteration/(sample) operation complexity of the proposed algorithm matches the optimal results reported in the literature.
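A minimal sketch of an objective-evaluation-free regularized Newton iteration, assuming noisy gradient/Hessian oracles and a gradient-norm regularization rule; both the noise model and the damping rule are illustrative, not the paper's exact scheme.

```python
import numpy as np

def oef_regularized_newton(grad, hess, x0, sigma=1.0, tol=1e-2, noise=1e-3,
                           rng=np.random.default_rng(0), max_iter=100):
    """Regularized Newton with inexact derivatives and no objective evaluations.

    Each step solves (H_k + sigma * ||g_k||^{1/2} I) d = -g_k, where g_k and
    H_k are noisy estimates of the gradient and Hessian at x_k.
    """
    x = x0.astype(float)
    for _ in range(max_iter):
        g = grad(x) + noise * rng.standard_normal(x.shape)            # inexact gradient
        if np.linalg.norm(g) < tol:                                   # accuracy ~ noise level
            break
        H = hess(x) + noise * rng.standard_normal((x.size, x.size))   # inexact Hessian
        H = 0.5 * (H + H.T)                                           # symmetrize the estimate
        reg = sigma * np.sqrt(np.linalg.norm(g))                      # gradient-norm damping
        x = x + np.linalg.solve(H + reg * np.eye(x.size), -g)
    return x

# Smooth test function f(x) = sum(x_i^2 + sin(x_i)).
f_grad = lambda x: 2 * x + np.cos(x)
f_hess = lambda x: np.diag(2 - np.sin(x))
print(oef_regularized_newton(f_grad, f_hess, np.ones(3)))
```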
Input features for color image classification are conventionally represented as vectors, matrices, or third-order tensors over the real field. Inspired by the success of quaternion data modeling for color images in image recovery and denoising tasks, we propose a novel method for color image classification, named the Low-rank Support Quaternion Matrix Machine (LSQMM), in which the RGB channels are treated as pure quaternions to effectively preserve the intrinsic coupling relationships among channels via quaternion algebra. To promote the low-rank structures resulting from strongly correlated color channels, a quaternion nuclear norm regularization term, serving as a natural extension of the conventional matrix nuclear norm to the quaternion domain, is added to the hinge loss in our LSQMM model. An Alternating Direction Method of Multipliers (ADMM)-based iterative algorithm is designed to solve the proposed quaternion optimization model efficiently. Experimental results on multiple color image classification datasets demonstrate that our approach offers advantages in classification accuracy, robustness, and computational efficiency compared to several state-of-the-art methods based on support vector machines, support matrix machines, and support tensor machines.
In this paper, we examine the robustness of Nash equilibria in continuous games, under both strategic and dynamic uncertainty. Starting with the former, we introduce the notion of a robust equilibrium as one that remains invariant to small -- but otherwise arbitrary -- perturbations of the game's payoff structure, and we provide a crisp geometric characterization thereof. Subsequently, we turn to the question of dynamic robustness, and we examine which equilibria may arise as stable limit points of the dynamics of "follow the regularized leader" (FTRL) in the presence of randomness and uncertainty. Despite their very distinct origins, we establish a structural correspondence between these two notions of robustness: strategic robustness implies dynamic robustness, and, conversely, the requirement of strategic robustness cannot be relaxed if dynamic robustness is to be maintained. Finally, we examine the rate of convergence to robust equilibria as a function of the underlying regularizer, and we show that entropically regularized learning converges at a geometric rate in games with affinely constrained action spaces.
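As a concrete instance of FTRL under full information, here is a sketch of entropically regularized learning (the logit choice map) in rock-paper-scissors; the game, stepsize, and horizon are illustrative.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())   # shift for numerical stability
    return e / e.sum()

# Entropic FTRL: with an entropy regularizer, the "leader" update over the
# cumulative payoff vector has the closed-form logit map above.
A = np.array([[ 0.0,  1.0, -1.0],
              [-1.0,  0.0,  1.0],
              [ 1.0, -1.0,  0.0]])      # player 1's payoff matrix (rock-paper-scissors)
eta, T = 0.1, 5000
S1, S2 = np.zeros(3), np.zeros(3)       # cumulative payoff vectors
avg_x, avg_y = np.zeros(3), np.zeros(3)
for t in range(T):
    x, y = softmax(eta * S1), softmax(eta * S2)
    avg_x += x / T
    avg_y += y / T
    S1 += A @ y                         # player 1 accumulates its payoff gradient
    S2 += -A.T @ x                      # zero-sum: player 2 sees negated payoffs
# Day-to-day play cycles, but the time-averaged strategies approach the
# uniform equilibrium -- consistent with non-convergence of the raw iterates.
print(avg_x, avg_y)
```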
Learned image reconstruction has become a pillar in computational imaging and inverse problems. Among the most successful approaches are learned iterative networks, which are formulated by unrolling classical iterative optimisation algorithms for solving variational problems. While the underlying algorithm is usually formulated in the functional analytic setting, learned approaches are often viewed as purely discrete. In this chapter we present a unified operator view for learned iterative networks. Specifically, we formulate a learned reconstruction operator, defining how to compute, and separately the learning problem, which defines what to compute. In this setting we present common approaches and show that many approaches are closely related in their core. We review linear as well as nonlinear inverse problems in this framework and present a short numerical study to conclude.
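A minimal PyTorch sketch of this operator view, assuming a learned unrolled gradient scheme for a linear inverse problem $Ax = y$: the reconstruction operator is a fixed number of unrolled steps (how to compute), trained in a supervised fashion against ground-truth signals (what to compute). Architecture and sizes are illustrative.

```python
import torch
import torch.nn as nn

class UnrolledGD(nn.Module):
    """Learned reconstruction operator: K unrolled gradient steps, each with
    its own small network acting on (current iterate, data-fidelity gradient)."""
    def __init__(self, n, K=8):
        super().__init__()
        self.steps = nn.ModuleList(
            nn.Sequential(nn.Linear(2 * n, 64), nn.ReLU(), nn.Linear(64, n))
            for _ in range(K))

    def forward(self, A, y):
        x = torch.zeros(y.shape[0], A.shape[1])
        for step in self.steps:
            grad = (x @ A.T - y) @ A    # gradient of 0.5 * ||A x - y||^2
            x = x - step(torch.cat([x, grad], dim=1))
        return x

# The learning problem: supervised training on (signal, data) pairs.
torch.manual_seed(0)
n, m = 16, 8                            # underdetermined: the learned prior helps
A = torch.randn(m, n) / m**0.5
net = UnrolledGD(n)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for it in range(2000):
    x_true = torch.randn(64, n)
    y = x_true @ A.T + 0.01 * torch.randn(64, m)
    loss = ((net(A, y) - x_true) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```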
Researchers from Ho Chi Minh City University of Technology and Teesside University investigated social welfare optimization in cooperation dilemmas, finding that strategies maximizing overall societal benefit often diverge from those solely minimizing institutional cost or maximizing cooperation frequency. Their work identifies distinct optimal incentive schemes when prioritizing social welfare in both well-mixed and structured populations.
Standard complexity analyses for weakly convex optimization rely on the Moreau envelope technique proposed by Davis and Drusvyatskiy (2019). The main insight is that nonsmooth algorithms, such as proximal subgradient, proximal point, and their stochastic variants, implicitly minimize a smooth surrogate function induced by the Moreau envelope. Meanwhile, explicit smoothing, which directly minimizes a smooth approximation of the objective, has long been recognized as an efficient strategy for nonsmooth optimization. In this paper, we generalize the notion of smoothable functions, which was proposed by Beck and Teboulle (2012) for nonsmooth convex optimization. This generalization provides a unified viewpoint on several important smoothing techniques for weakly convex optimization, including Nesterov-type smoothing and Moreau envelope smoothing. Our theory yields a framework for designing smooth approximation algorithms for both deterministic and stochastic weakly convex problems with provable complexity guarantees. Furthermore, our theory extends to the smooth approximation of non-Lipschitz functions, allowing for complexity analysis even when global Lipschitz continuity does not hold.
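As a concrete example of Moreau-envelope smoothing, here is a sketch for $f(x) = |x|$, whose envelope is the Huber function with an explicit $1/\mu$-Lipschitz gradient; the 1-D setting is chosen purely for clarity.

```python
import numpy as np

def moreau_envelope_abs(x, mu):
    """Moreau envelope of f(x) = |x|:  e_mu(x) = min_z |z| + (1/(2 mu)) (z - x)^2.
    The minimizer is the soft-thresholding prox, and e_mu is the Huber function."""
    prox = np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)
    return np.abs(prox) + (prox - x) ** 2 / (2 * mu)

def moreau_grad_abs(x, mu):
    """grad e_mu(x) = (x - prox_{mu f}(x)) / mu, which is (1/mu)-Lipschitz."""
    prox = np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)
    return (x - prox) / mu

# Explicit smoothing: run gradient descent on the smooth surrogate e_mu.
x, mu = 5.0, 0.5
for _ in range(100):
    x -= mu * moreau_grad_abs(x, mu)     # stepsize 1/L with L = 1/mu
print(x, moreau_envelope_abs(x, mu))     # x -> 0, the minimizer of |x|
```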
We propose a quasi-Newton-type method for nonconvex optimization with Lipschitz continuous gradients and Hessians. The algorithm finds an $\varepsilon$-stationary point within $\tilde{\mathrm{O}}(d^{1/4} \varepsilon^{-13/8})$ gradient evaluations, where $d$ is the problem dimension. Although this bound includes an additional logarithmic factor compared with the best known complexity, our method is parameter-free in the sense that it requires no prior knowledge of problem-dependent parameters such as Lipschitz constants or the optimal value. Moreover, it does not need the target accuracy $\varepsilon$ or the total number of iterations to be specified in advance. The result is achieved by combining several key ideas: momentum-based acceleration, quartic regularization for subproblems, and a scaled variant of the Powell-symmetric-Broyden (PSB) update.
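For reference, a sketch of the PSB update and a plain quasi-Newton loop built on it; the test function, regularization, and unit steps are illustrative and omit the paper's momentum and quartic-regularization components.

```python
import numpy as np

def psb_update(B, s, y):
    """Powell-symmetric-Broyden update: the symmetric matrix closest to B
    (in Frobenius norm) satisfying the secant equation  B_new @ s = y."""
    r = y - B @ s                        # residual of the secant equation
    ss = s @ s
    return B + (np.outer(r, s) + np.outer(s, r)) / ss \
             - (r @ s) * np.outer(s, s) / ss**2

# Quasi-Newton iteration on f(x) = sum(x_i^2 + sin(x_i)).
grad = lambda x: 2 * x + np.cos(x)
x = np.ones(3)
B = np.eye(3)                            # initial Hessian approximation
g = grad(x)
for _ in range(50):
    s = np.linalg.solve(B + 1e-3 * np.eye(3), -g)   # small damping for safety
    x_new = x + s
    g_new = grad(x_new)
    B = psb_update(B, s, g_new - g)      # secant pair (s, y) with y = g_new - g
    x, g = x_new, g_new
print(x, np.linalg.norm(g))
```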
Research establishes a theoretical condition for the efficacy of spectral gradient methods in deep learning, showing they are beneficial when the gradient's nuclear-to-Frobenius ratio is high relative to the low stable rank of post-activation matrices. This framework explains why these methods often excel in optimizing deep neural networks by adapting to the inherent degeneracy of internal representations.
In this paper, we revisit a classical adaptive stepsize strategy for gradient descent: the Polyak stepsize (\texttt{PolyakGD}), originally proposed in \cite{polyak1969minimization}. We study the convergence behavior of \texttt{PolyakGD} from two perspectives: tight worst-case analysis and universality across function classes. As our first main result, we establish the tightness of the known convergence rates of \texttt{PolyakGD} by explicitly constructing worst-case functions. In particular, we show that the $\mathcal{O}((1-\frac{1}{\kappa})^K)$ rate for smooth strongly convex functions and the $\mathcal{O}(1/K)$ rate for smooth convex functions are both tight. Moreover, we theoretically show that \texttt{PolyakGD} automatically exploits floating-point errors to escape the worst-case behavior. Our second main result provides new convergence guarantees for \texttt{PolyakGD} under both Hölder smoothness and Hölder growth conditions. These findings show that the Polyak stepsize is universal, automatically adapting to various function classes without requiring prior knowledge of problem parameters.
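A minimal sketch of \texttt{PolyakGD}, assuming the optimal value $f^*$ is known (as the stepsize requires); the ill-conditioned quadratic test problem is illustrative.

```python
import numpy as np

def polyak_gd(grad, f, x0, f_star, iters=200):
    """Gradient descent with the Polyak stepsize gamma_k = (f(x_k) - f*) / ||g_k||^2.
    Needs the optimal value f*, but no smoothness or strong-convexity constants."""
    x = x0.astype(float)
    for _ in range(iters):
        g = grad(x)
        gap = f(x) - f_star
        if gap <= 0 or not np.any(g):
            break
        x = x - (gap / (g @ g)) * g
    return x

# Ill-conditioned quadratic: f(x) = 0.5 * x^T diag(1, 100) x, with f* = 0.
D = np.array([1.0, 100.0])
f = lambda x: 0.5 * (D * x * x).sum()
grad = lambda x: D * x
print(polyak_gd(grad, f, np.array([1.0, 1.0]), f_star=0.0))
```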
The Polyak-Łojasiewicz (PŁ) inequality extends the favorable optimization properties of strongly convex functions to a broader class of functions. In this paper, we show that the richness of the class of PŁ functions is rooted in the nonsmooth case, since sufficient regularity forces them to be essentially strongly convex. More precisely, we prove that if $f$ is a $C^2$ PŁ function having a bounded set of minimizers, then it has a unique minimizer and is strongly convex on a sublevel set of the form $\{f \leq a\}$.
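For reference, the PŁ inequality with parameter $\mu > 0$ reads
$$\tfrac{1}{2}\,\|\nabla f(x)\|^2 \;\geq\; \mu\,\bigl(f(x) - \inf f\bigr) \qquad \text{for all } x;$$
it holds for every $\mu$-strongly convex function, but also for nonconvex examples such as $f(x) = x^2 + 3\sin^2 x$, which is smooth, globally nonconvex, and PŁ, yet strongly convex near its unique minimizer -- consistent with the result above.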
The connection between control algorithms for Markov decision processes and optimization algorithms has been implicitly and explicitly exploited since the introduction of the dynamic programming algorithm by Bellman in the 1950s. Recently, this connection has attracted a lot of attention for developing new control algorithms inspired by well-established optimization algorithms. In this paper, we make this analogy explicit across four problem classes with a unified solution characterization. This novel framework, in turn, allows for a systematic transformation of algorithms from one domain to the other. In particular, we identify equivalent optimization and control algorithms that have already been pointed out in the existing literature, but mostly in a scattered way. We also discuss the issues arising in providing theoretical convergence guarantees for these new control algorithms and provide simple yet effective techniques to solve them. The provided framework and techniques then lay out a concrete methodology for developing new convergent control algorithms.
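One instance of the analogy: value iteration is fixed-point iteration on the Bellman operator, just as gradient descent is fixed-point iteration on the map $x \mapsto x - \alpha \nabla f(x)$. A sketch on a randomly generated toy MDP (sizes and discount are illustrative):

```python
import numpy as np

# Value iteration = fixed-point iteration on the Bellman operator T,
# mirroring how gradient descent iterates the map x -> x - alpha * grad f(x).
n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a] = dist over s'
R = rng.random((n_states, n_actions))                             # rewards r(s, a)

def bellman(V):
    # (T V)(s) = max_a [ r(s, a) + gamma * E_{s' ~ P(.|s,a)} V(s') ]
    return (R + gamma * P @ V).max(axis=1)

V = np.zeros(n_states)
for _ in range(500):
    V_new = bellman(V)
    if np.max(np.abs(V_new - V)) < 1e-10:   # T is a gamma-contraction in sup norm
        break
    V = V_new
print(V)
```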
Optimal pulse patterns (OPPs) are a modulation technique in which a switching signal is computed offline through an optimization process that accounts for selected performance criteria, such as current harmonic distortion. The optimization determines both the switching angles (i.e., switching times) and the pattern structure (i.e., the sequence of voltage levels). This optimization task is a challenging mixed-integer nonconvex problem, involving integer-valued voltage levels and trigonometric nonlinearities in both the objective and the constraints. We address this challenge by reinterpreting OPP design as a periodic mode-selecting optimal control problem of a hybrid system, where selecting angles and levels corresponds to choosing jump times in a transition graph. This time-domain formulation enables the direct use of convex-relaxation techniques from optimal control, producing a hierarchy of semidefinite programs that lower-bound the minimal achievable harmonic distortion and scale subquadratically with the number of converter levels and switching angles. Numerical results demonstrate the effectiveness of the proposed approach.
Preliminary mission design of low-thrust spacecraft trajectories in the Circular Restricted Three-Body Problem is a global search characterized by a complex objective landscape and numerous local minima. Formulating the problem as sampling from an unnormalized distribution supported on neighborhoods of locally optimal solutions provides the opportunity to deploy Markov chain Monte Carlo methods and generative machine learning. In this work, we extend our previous self-supervised diffusion model fine-tuning framework to employ gradient-informed Markov chain Monte Carlo. We compare two algorithms -- the Metropolis-Adjusted Langevin Algorithm (MALA) and Hamiltonian Monte Carlo (HMC) -- both initialized from a distribution learned by a diffusion model. Derivatives of an objective function that balances fuel consumption, time of flight, and constraint violations are computed analytically using state transition matrices. We show that incorporating the gradient drift term accelerates mixing and improves convergence of the Markov chain for a multi-revolution transfer in the Saturn-Titan system. Among the evaluated methods, MALA provides the best trade-off between performance and computational cost. Starting from samples generated by a baseline diffusion model trained on a related transfer, MALA explicitly targets Pareto-optimal solutions. Compared to a random walk Metropolis algorithm, it increases the feasibility rate from 17.34% to 63.01% and produces a denser, more diverse coverage of the Pareto front. By fine-tuning a diffusion model on the generated samples and associated reward values with reward-weighted likelihood maximization, we learn the global solution structure of the problem and eliminate the need for a tedious separate data generation phase.
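For reference, a minimal sketch of MALA on a toy Gaussian target; in the mission-design setting above, the log-density gradient would instead come from the objective's analytic derivatives via state transition matrices.

```python
import numpy as np

def mala(log_pi, grad_log_pi, x0, step, n_samples, rng=np.random.default_rng(0)):
    """Metropolis-Adjusted Langevin Algorithm: Langevin proposal + MH correction."""
    def log_q(xp, x):
        # Log density (up to a constant, which cancels in the MH ratio) of the
        # proposal N(x + step * grad_log_pi(x), 2 * step * I) evaluated at xp.
        mu = x + step * grad_log_pi(x)
        return -np.sum((xp - mu) ** 2) / (4 * step)

    x, samples = np.asarray(x0, float), []
    for _ in range(n_samples):
        prop = x + step * grad_log_pi(x) \
                 + np.sqrt(2 * step) * rng.standard_normal(x.shape)
        log_alpha = log_pi(prop) - log_pi(x) + log_q(x, prop) - log_q(prop, x)
        if np.log(rng.random()) < log_alpha:    # Metropolis-Hastings accept/reject
            x = prop
        samples.append(x.copy())
    return np.array(samples)

# Toy target: standard 2-D Gaussian.
log_pi = lambda x: -0.5 * x @ x
grad_log_pi = lambda x: -x
chain = mala(log_pi, grad_log_pi, np.array([3.0, -3.0]), step=0.1, n_samples=5000)
print(chain.mean(axis=0), chain.std(axis=0))   # ~ (0, 0) and ~ (1, 1)
```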
In this article, we explore the use of various matrix norms for optimizing functions of weight matrices, a crucial problem in training large language models. Moving beyond the spectral norm underlying the Muon update, we leverage duals of the Ky Fan $k$-norms to introduce a family of Muon-like algorithms we name Fanions, which are closely related to Dion. By working with duals of convex combinations of the Ky Fan $k$-norms with either the Frobenius norm or the $\ell_\infty$ norm, we construct the families of F-Fanions and S-Fanions, respectively. Their most prominent members are F-Muon and S-Muon. We complement our theoretical analysis with an extensive empirical study of these algorithms across a wide range of tasks and settings, demonstrating that F-Muon and S-Muon consistently match Muon's performance, while outperforming vanilla Muon on a synthetic linear least squares problem.
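A hedged sketch of the underlying idea: under a spectral-norm geometry, the steepest-descent direction for a gradient $G = U\Sigma V^\top$ is $UV^\top$ (the Muon direction), and a Ky Fan $k$-norm analogue keeps only the top-$k$ singular directions. The truncation below is illustrative; the exact Fanion updates may differ.

```python
import numpy as np

def spectral_dual_step(G):
    """Muon-style direction: for G = U diag(s) Vt, return U @ Vt -- steepest
    descent when the update size is measured in the spectral norm."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return U @ Vt

def ky_fan_dual_step(G, k):
    """Illustrative Ky Fan k-norm analogue: keep only the top-k singular
    directions of the gradient, discarding the rest."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return U[:, :k] @ Vt[:k, :]

# Synthetic linear least squares: min_W ||X @ W - Y||^2 with matrix-normed updates.
rng = np.random.default_rng(0)
X, W_true = rng.standard_normal((128, 20)), rng.standard_normal((20, 10))
Y = X @ W_true
W = np.zeros((20, 10))
for t in range(1000):
    G = X.T @ (X @ W - Y) / len(X)                      # gradient of the quadratic loss
    W -= 0.1 * 0.997**t * ky_fan_dual_step(G, k=5)      # decaying stepsize for convergence
print(np.linalg.norm(X @ W - Y) / np.linalg.norm(Y))    # relative residual
```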
Newton's method is the most widespread high-order method, requiring the gradient and the Hessian of the objective function. However, two of its main disadvantages are its lack of global convergence and its high iteration cost. Both drawbacks are critical for modern optimization, motivated primarily by current applications in machine learning. In this paper, we introduce a novel algorithm to address these disadvantages. Our method can be implemented with various Hessian approximations, including ones that use only first-order information, so computational costs can be drastically reduced. It can also be adapted to a problem's geometry through the use of different Bregman divergences. The proposed method converges globally for both nonconvex and convex problems, with the same rates as other well-known methods that lack the aforementioned properties. We present experiments validating that our method performs according to the theoretical bounds and shows competitive performance among other Newton-based methods.
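A minimal sketch of a globally-safe regularized Newton-type step in the spirit described above, assuming the Euclidean squared distance as the Bregman divergence and a gradient-norm damping rule; the paper's exact regularization and divergence choices may differ, and the Hessian here could be replaced by any approximation.

```python
import numpy as np

def regularized_newton_step(g, H, L):
    """Minimize the quadratic model g^T d + 0.5 d^T H d plus the damping
    0.5 * L * ||g||^{1/2} * ||d||^2. The Euclidean damping used here is the
    simplest Bregman divergence; other divergences change the step geometry."""
    lam = L * np.sqrt(np.linalg.norm(g))
    return np.linalg.solve(H + lam * np.eye(len(g)), -g)

# Test function f(x) = sum(x_i^2 + sin(x_i)); H may be any Hessian approximation.
grad = lambda x: 2 * x + np.cos(x)
hess = lambda x: np.diag(2 - np.sin(x))
x, L = 3 * np.ones(4), 1.0
for _ in range(30):
    x = x + regularized_newton_step(grad(x), hess(x), L)
print(np.linalg.norm(grad(x)))   # gradient norm decreases toward stationarity
```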