mathematical-finance
True Volterra equations are inherently non-stationary and therefore do not admit genuine stationary regimes over finite horizons. This motivates the study of the finite-time behavior of the solutions to scaled inhomogeneous affine stochastic Volterra equations through the lens of a weaker notion of stationarity, referred to as a fake stationary regime, in the sense that all marginal distributions share the same expectation and variance. As a first application, we introduce the Fake stationary Volterra Heston model and derive a closed-form expression for its characteristic function. Having established this finite-time proxy for stationarity, we then investigate the asymptotic (long-time) behavior to assess whether genuine stationary regimes emerge in the limit. Using an extension of the exponential-affine transformation formula for these processes, we establish the existence of limiting distributions in the long run, which (unlike in the case of classical affine diffusion processes) may depend on the initial state of the process, unless the Volterra kernel coincides with the $\alpha$-fractional integration kernel, for which the dependence on the initial state vanishes. We then proceed to the construction of stationary processes associated with these limiting distributions. However, the dynamics in this long-term regime are analytically intractable, and the process itself is not guaranteed to be stationary in the classical sense over finite horizons. This highlights the relevance of finite-time analysis through the lens of the aforementioned fake stationarity, which offers a tractable approximation to stationary behavior in genuinely non-stationary Volterra systems.
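For orientation, the affine Volterra square-root dynamics underlying such models, together with the $\alpha$-fractional integration kernel mentioned above, can be written in a standard form (the paper's scaled inhomogeneous specification may differ):
\[
V_t = g_0(t) + \int_0^t K(t-s)\,\kappa\big(\theta - V_s\big)\,ds + \int_0^t K(t-s)\,\sigma\sqrt{V_s}\,dW_s,
\qquad
K_\alpha(t) = \frac{t^{\alpha-1}}{\Gamma(\alpha)},\ \ \alpha \in \left(\tfrac12, 1\right],
\]
where $g_0$ is a deterministic input curve (the inhomogeneity). For $\alpha = 1$ and $g_0 \equiv V_0$ one recovers the classical CIR/Heston variance process, written in integrated form.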
In this work, we introduce amortizing perpetual options (AmPOs), a fungible variant of continuous-installment options suitable for exchange-based trading. Traditional installment options lapse when holders cease their payments, destroying fungibility across units of notional. AmPOs replace explicit installment payments and the need for lapsing logic with an implicit payment scheme via a deterministic decay in the claimable notional. This amortization ensures all units evolve identically, preserving fungibility. Under the Black-Scholes framework, AmPO valuation can be reduced to an equivalent vanilla perpetual American option on a dividend-paying asset. This yields analytical expressions for the exercise boundaries and risk-neutral valuations of calls and puts. These formulas and relations allow us to derive the Greeks and study comparative statics with respect to the amortization rate. Illustrative numerical case studies demonstrate how the amortization rate shapes option behavior and reveal the resulting tradeoffs in the effective volatility sensitivity.
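Since the valuation is said to reduce to a vanilla perpetual American option on a dividend-paying asset, the textbook closed form for that building block may help fix ideas. The mapping of the amortization rate into the effective parameters $(r, q, K)$ is the paper's contribution and is not reproduced here; the sketch below only evaluates the standard perpetual American call.

```python
import numpy as np

def perpetual_american_call(S, K, r, q, sigma):
    """Textbook value and exercise boundary of a perpetual American call on an
    asset paying a continuous dividend yield q (q > 0 gives a finite boundary)."""
    # positive root beta > 1 of 0.5*sigma^2*b*(b-1) + (r - q)*b - r = 0
    a = 0.5 * sigma**2
    b = r - q - 0.5 * sigma**2
    beta = (-b + np.sqrt(b**2 + 4.0 * a * r)) / (2.0 * a)
    S_star = beta / (beta - 1.0) * K          # optimal exercise boundary
    if S >= S_star:
        return S - K, S_star                  # immediate exercise region
    A = (S_star - K) / S_star**beta           # value-matching constant
    return A * S**beta, S_star

print(perpetual_american_call(S=100.0, K=100.0, r=0.03, q=0.02, sigma=0.2))
```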
This paper develops a comprehensive theoretical framework that imports concepts from stochastic thermodynamics to model price impact and characterize the feasibility of round-trip arbitrage in financial markets. A trading cycle is treated as a non-equilibrium thermodynamic process, where price impact represents dissipative work and market noise plays the role of thermal fluctuations. The paper proves a Financial Second Law: under general convex impact functionals, any round-trip trading strategy yields non-positive expected profit. This structural constraint is complemented by a fluctuation theorem that bounds the probability of profitable cycles in terms of dissipated work and market volatility. The framework introduces a statistical ensemble of trading strategies governed by a Gibbs measure, leading to a free energy decomposition that connects expected cost, strategy entropy, and a market temperature parameter. The framework provides rigorous, testable inequalities linking microstructural impact to macroscopic no-arbitrage conditions, offering a novel physics-inspired perspective on market efficiency. The paper derives explicit analytical results for prototypical trading strategies and discusses empirical validation protocols.
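As a minimal illustration of the "Financial Second Law" in the simplest convex case (quadratic temporary impact against a martingale unaffected price; the paper's impact functionals are more general), consider a round trip $x_0 = x_T = 0$ executed at rate $\dot x_t$ at price $P_t + \eta\,\dot x_t$:
\[
\mathbb{E}[\mathrm{PnL}]
= \mathbb{E}\Big[-\int_0^T \dot x_t\,\big(P_t + \eta\,\dot x_t\big)\,dt\Big]
= \mathbb{E}\Big[\int_0^T x_t\,dP_t\Big] - \eta\,\mathbb{E}\Big[\int_0^T \dot x_t^2\,dt\Big]
= -\,\eta\,\mathbb{E}\Big[\int_0^T \dot x_t^2\,dt\Big] \le 0,
\]
using integration by parts, $x_0 = x_T = 0$, and the martingale property of $P$. The dissipated quadratic term is the analogue of thermodynamic work in the framework described above.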
We revisit Merton's continuous-time portfolio selection through a data-driven, distributionally robust lens. Our aim is to tap the benefits of frequent trading over short horizons while acknowledging that drift is hard to pin down, whereas volatility can be screened using realized or implied measures for appropriately selected assets. Rather than time-rectangular distributionally robust control -- which replenishes adversarial power at every instant and induces over-pessimism -- we place a single ambiguity set on the drift prior within a Bayesian Merton model. This prior-level ambiguity preserves learning and tractability: a minimax swap reduces the robust control to optimizing a nonlinear functional of the prior, enabling Karatzas-Zhao-type closed-form evaluation \cite{KZ98} for each candidate prior. We then characterize small-radius worst-case priors under Wasserstein uncertainty via an explicit asymptotically optimal pushforward of the nominal prior, and we calibrate the ambiguity radius through a nonlinear Wasserstein projection tailored to the Merton functional. Synthetic and real-data studies demonstrate reduced pessimism relative to DRC and improved performance over myopic DRO-Markowitz under frequent rebalancing.
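For orientation, recall the classical Merton fraction with known drift $\mu$, interest rate $r$, volatility $\sigma$ and CRRA parameter $\gamma$,
\[
\pi^* = \frac{\mu - r}{\gamma\,\sigma^2}.
\]
This display is background only: in the model above, $\mu$ is unknown and carries a prior, and robustness is imposed through a Wasserstein ball around that prior rather than around the return distribution at each instant.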
This paper considers a stochastic control problem with Epstein-Zin recursive utility under partial information (unknown market price of risk), in which the investor faces a liability constraint at the end of the investment period. Introducing liabilities is the main novelty of the model and appears for the first time in the literature on recursive utilities. This constraint leads to a fully coupled forward-backward stochastic differential equation (FBSDE) whose well-posedness has not been addressed in the literature. We derive an explicit solution to the FBSDE, in contrast with the existence and uniqueness results, without explicit expressions for the solutions, typically found in most related literature. Moreover, under minimal additional assumptions, we obtain the Malliavin differentiability of the solution of the FBSDE. We solve the problem completely and find explicit expressions for the optimal controls and the value function. Finally, we determine the utility loss that investors suffer from ignoring the fact that they can learn about the market price of risk.
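For reference, a standard continuous-time Epstein-Zin (Duffie-Epstein) aggregator with risk aversion $\gamma$, elasticity of intertemporal substitution $\psi$ and discount rate $\delta$ is (normalizations vary across the literature, so this may differ from the authors' exact specification):
\[
f(c, v) = \delta\,\frac{c^{1-1/\psi}}{1-1/\psi}\,\big((1-\gamma)v\big)^{1-\frac{1}{\theta}} \;-\; \delta\,\theta\, v,
\qquad \theta = \frac{1-\gamma}{1-1/\psi},
\]
which reduces to the time-additive power-utility aggregator $\delta\,\tfrac{c^{1-\gamma}}{1-\gamma} - \delta v$ when $\psi = 1/\gamma$.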
Generative AI can be framed as the problem of learning a model that maps simple reference measures into complex data distributions, and it has recently found a strong connection to the classical theory of the Schrödinger bridge problems (SBPs) due partly to their common nature of interpolating between prescribed marginals via entropy-regularized stochastic dynamics. However, the classical SBP enforces hard terminal constraints, which often leads to instability in practical implementations, especially in high-dimensional or data-scarce regimes. To address this challenge, we follow the idea of the so-called soft-constrained Schrödinger bridge problem (SCSBP), in which the terminal constraint is replaced by a general penalty function. This relaxation leads to a more flexible stochastic control formulation of McKean-Vlasov type. We establish the existence of optimal solutions for all penalty levels and prove that, as the penalty grows, both the controls and value functions converge to those of the classical SBP at a linear rate. Our analysis builds on Doob's h-transform representations, the stability results of Schrödinger potentials, Gamma-convergence, and a novel fixed-point argument that couples an optimization problem over the space of measures with an auxiliary entropic optimal transport problem. These results not only provide the first quantitative convergence guarantees for soft-constrained bridges but also shed light on how penalty regularization enables robust generative modeling, fine-tuning, and transfer learning.
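Schematically, with reference path measure $R$, initial law $\mu$, target terminal law $\nu$ and penalty weight $\lambda$, the two formulations compared above can be written as (a hedged restatement, not the paper's exact notation)
\[
\text{SBP:}\quad \min_{P:\;P_0=\mu,\;P_T=\nu} H(P\,\|\,R),
\qquad
\text{SCSBP:}\quad \min_{P:\;P_0=\mu} \; H(P\,\|\,R) \;+\; \lambda\, G\big(P_T, \nu\big),
\]
where $H$ is relative entropy and $G$ is a penalty on the terminal marginal. The convergence result described above says that, as $\lambda \to \infty$, the associated controls and value functions approach those of the hard-constrained SBP at a linear rate.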
We derive the arbitrage gains or, equivalently, Loss Versus Rebalancing (LVR) for arbitrage between \textit{two imperfectly liquid} markets, extending prior work that assumes the existence of an infinitely liquid reference market. Our result highlights that the LVR depends on the relative liquidity and relative trading volume of the two markets between which arbitrage gains are extracted. Our model assumes that trading costs on at least one of the markets are quadratic. This assumption holds well in practice, with the exception of highly liquid major pairs on centralized exchanges, for which we discuss extensions to other cost functions.
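A minimal two-market illustration under the quadratic-cost assumption (the notation below is hypothetical): if the instantaneous price gap is $\Delta$ and trading a quantity $q$ on markets $1$ and $2$ costs $\tfrac{\lambda_1}{2}q^2$ and $\tfrac{\lambda_2}{2}q^2$ respectively, the arbitrageur solves
\[
\max_{q}\; q\,\Delta - \tfrac{\lambda_1+\lambda_2}{2}\,q^2
\;\;\Longrightarrow\;\;
q^* = \frac{\Delta}{\lambda_1+\lambda_2},
\qquad
\text{gain} = \frac{\Delta^2}{2(\lambda_1+\lambda_2)},
\]
so the extracted arbitrage, and hence the LVR borne by each venue, depends on the relative liquidity parameters $\lambda_1, \lambda_2$, consistent with the result described above.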
We study mean field portfolio games under Epstein-Zin preferences, which naturally encompass the classical time-additive power utility as a special case. In a general non-Markovian framework, we establish a uniqueness result by proving a one-to-one correspondence between Nash equilibria and the solutions to a class of BSDEs. Key ingredients in our approach are a necessary stochastic maximum principle tailored to Epstein-Zin utility and a nonlinear transformation. In the deterministic setting, we further derive an explicit closed-form solution for the equilibrium investment and consumption policies.
We introduce a framework for systemic risk modeling in insurance portfolios using jointly exchangeable arrays, extending classical collective risk models to account for interactions. We establish central limit theorems that asymptotically characterize total portfolio losses, providing a theoretical foundation for approximations in large portfolios and over long time horizons. These approximations are validated through simulation-based numerical experiments. Additionally, we analyze the impact of dependence on portfolio loss distributions, with a particular focus on tail behavior.
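A minimal simulation sketch of the kind of jointly exchangeable loss array considered here, in Aldous-Hoover form $X_{ij} = f(U_i, U_j, \varepsilon_{ij})$ (the interaction loss function $f$ below is hypothetical), which can be used to eyeball the quality of a normal approximation to the total portfolio loss:

```python
import numpy as np

rng = np.random.default_rng(0)

def total_losses(n, trials=2000):
    """Simulate standardized total losses of a jointly exchangeable array
    X_{ij} = f(U_i, U_j, eps_{ij}) summed over pairs i < j."""
    totals = np.empty(trials)
    for t in range(trials):
        u = rng.uniform(size=n)                      # latent factor per policyholder
        eps = rng.exponential(size=(n, n))           # idiosyncratic pair-level severity
        x = 0.5 * (u[:, None] + u[None, :]) * eps    # hypothetical symmetric f
        totals[t] = x[np.triu_indices(n, k=1)].sum()
    return (totals - totals.mean()) / totals.std()

z = total_losses(n=200)
print("sample skewness:", float((z**3).mean()))      # small if a CLT regime applies
```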
We introduce a new framework for optimal routing and arbitrage in AMM-driven markets. This framework improves on the original best-practice convex optimization by restricting the search to the boundary of the optimal space. We can parameterize this boundary using a set of prices, and a potentially very high-dimensional optimization problem (2 optimization variables per curve) gets reduced to a much lower-dimensional root-finding problem (1 optimization variable per token, regardless of the number of curves). Our reformulation is similar to the dual problem of a reformulation of the original convex problem. We show our reformulation of the problem is equivalent to the original formulation except in the case of infinitely concentrated liquidity, where we provide a suitable approximation. Our formulation performs far better than the original one in terms of speed - we obtain an improvement of up to 200x against Clarabel, the new CVXPY default solver - and robustness, especially on levered curves.
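To illustrate the price-parameterized reformulation on the simplest case, two-token arbitrage across fee-free constant-product pools (a sketch of the idea, not the paper's solver): each pool's optimal post-trade state is pinned down by a common marginal price $p$, and a single scalar root-find recovers the self-financing price instead of two optimization variables per curve.

```python
import numpy as np
from scipy.optimize import brentq

# reserves (x_i, y_i) of constant-product pools for the same token pair;
# the marginal price of X in units of Y is y_i / x_i
pools = [(100.0, 150.0), (80.0, 100.0), (120.0, 130.0)]

def post_trade(x, y, p):
    """Reserves after moving a fee-free constant-product pool to price p."""
    k = x * y
    return np.sqrt(k / p), np.sqrt(k * p)      # (x', y') with y'/x' = p, x'y' = k

def net_x_flow(p):
    """Total X the arbitrageur must inject across all pools at common price p."""
    return sum(post_trade(x, y, p)[0] - x for x, y in pools)

# self-financing in X: choose p so the net X flow is zero
# (one variable per token, rather than two per curve)
lo = min(y / x for x, y in pools)
hi = max(y / x for x, y in pools)
p_star = brentq(net_x_flow, lo, hi)

profit_y = sum(y - post_trade(x, y, p_star)[1] for x, y in pools)
print(f"equalized price {p_star:.4f}, arbitrage profit in Y {profit_y:.4f}")
```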
We introduce a simple, efficient and accurate nonnegativity-preserving numerical scheme for simulating the square-root process. The novel idea is to simulate the integrated square-root process first instead of the square-root process itself. Numerical experiments on realistic parameter sets, applied to the integrated process and the Heston model, display high precision with a very low number of time steps. As a bonus, our scheme yields the exact limiting Inverse Gaussian distributions of the integrated square-root process with a single time step in two scenarios: (i) for high mean-reversion and volatility-of-volatility regimes, regardless of maturity; and (ii) for long maturities, independent of the other parameters.
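In a standard parameterization, the square-root (CIR) process and its integral referred to above are (only the definitions are recalled here, not the paper's scheme)
\[
dV_t = \kappa(\theta - V_t)\,dt + \sigma\sqrt{V_t}\,dW_t,
\qquad
I_t = \int_0^t V_s\,ds,
\]
and in the Heston model the log price is conditionally Gaussian given $(V_T, I_T)$, which is why simulating the integrated process accurately is the key step.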
Decentralized finance (DeFi) has revolutionized the financial landscape, with protocols like Uniswap offering innovative automated market-making mechanisms. This article explores the development of a backtesting framework specifically tailored for concentrated liquidity market makers (CLMM). The focus is on leveraging the liquidity distribution, approximated using a parametric model, to estimate the rewards within liquidity pools. The article details the design, implementation, and insights derived from this novel approach to backtesting within the context of Uniswap V3. The developed backtester was successfully utilized to assess reward levels across several pools using historical data from 2023 (Uniswap v3 pools for pairs of altcoins, stablecoins, and USDC/ETH with different fee levels). Moreover, the error in modeling the level of rewards over the period under review was less than 1\% for each pool. This demonstrates the effectiveness of the backtester in quantifying liquidity pool rewards and its potential for estimating LPs' revenues as a share of the pool rewards, which is the focus of our next research. The backtester serves as a tool to simulate trading strategies and liquidity provision scenarios, providing a quantitative assessment of potential returns for liquidity providers (LPs). By incorporating statistical tools to mirror CLMM pool liquidity dynamics, this framework can be further leveraged for strategy enhancement and risk evaluation for LPs operating within decentralized exchanges.
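A toy version of the reward quantity such a backtester estimates, under the simplifying assumption that fees accrue pro-rata to liquidity active in the traded price range (real Uniswap v3 accounting is per tick; all numbers below are hypothetical):

```python
def estimate_lp_fees(swap_volume_in_range, fee_tier, lp_liquidity, total_liquidity):
    """Crude estimate of an LP's fee income in a concentrated-liquidity pool:
    pool fees on in-range volume, shared pro-rata by active liquidity."""
    pool_fees = swap_volume_in_range * fee_tier        # e.g. fee_tier = 0.0005
    return pool_fees * lp_liquidity / total_liquidity

# hypothetical inputs: $50m traded while the position was in range
print(estimate_lp_fees(50e6, 0.0005, lp_liquidity=2e5, total_liquidity=1e7))
```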
Researchers developed "loss-versus-rebalancing" (LVR), a novel metric quantifying adverse selection costs for Automated Market Maker liquidity providers, offering a continuous-time framework that robustly tracks real-world LP performance after hedging market risk. This work from Columbia University and the University of Chicago provides a superior benchmark for AMM analysis and guides future protocol design.
We introduce the wavelet scattering spectra which provide non-Gaussian models of time-series having stationary increments. A complex wavelet transform computes signal variations at each scale. Dependencies across scales are captured by the joint correlation across time and scales of wavelet coefficients and their modulus. This correlation matrix is nearly diagonalized by a second wavelet transform, which defines the scattering spectra. We show that this vector of moments characterizes a wide range of non-Gaussian properties of multi-scale processes. We prove that self-similar processes have scattering spectra which are scale invariant. This property can be tested statistically on a single realization and defines a class of wide-sense self-similar processes. We build maximum entropy models conditioned by scattering spectra coefficients, and generate new time-series with a microcanonical sampling algorithm. Applications are shown for highly non-Gaussian financial and turbulence time-series.
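Schematically, with a complex wavelet $\psi_j$ at scale $2^j$, the objects described above are of the form (a simplified rendering, not the authors' exact normalization)
\[
W_j x(t) = (x \star \psi_j)(t),
\qquad
C(\tau, j, j') = \mathbb{E}\big[\,|W_j x(t)|\;\overline{W_{j'} x(t+\tau)}\,\big],
\]
and the scattering spectra are obtained by applying a second wavelet transform in $\tau$ to such cross-scale correlations of wavelet coefficients and their moduli, which nearly diagonalizes the correlation matrix.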
Overnight rates, such as the SOFR (Secured Overnight Financing Rate) in the US, are central to the current reform of interest rate benchmarks. A striking feature of overnight rates is the presence of jumps and spikes occurring at predetermined dates due to monetary policy interventions and liquidity constraints. This corresponds to stochastic discontinuities (i.e., discontinuities occurring at ex-ante known points in time) in their dynamics. In this work, we propose a term structure modelling framework based on overnight rates and characterize absence of arbitrage in a generalised Heath-Jarrow-Morton (HJM) setting. We extend the classical short-rate approach to accommodate stochastic discontinuities, developing a tractable setup driven by affine semimartingales. In this context, we show that simple specifications make it possible to capture stylized facts of the jump behavior of overnight rates. In a Gaussian setting, we provide explicit valuation formulas for bonds and caplets. Furthermore, we investigate hedging in the sense of local risk-minimization when the underlying term structures feature stochastic discontinuities.
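Schematically, stochastic discontinuities mean the short rate can jump at ex-ante known dates $t_1 < t_2 < \dots$ (e.g. monetary policy meeting dates) on top of its continuous evolution; a stylized specification of this type is
\[
r_t = r_0 + \int_0^t b_s\,ds + \int_0^t \sigma_s\,dW_s + \sum_{i\,:\,t_i \le t} J_i,
\]
with the jump sizes $J_i$ revealed only at the deterministic times $t_i$. This display is illustrative; the framework above works in a generalised HJM setting driven by affine semimartingales.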
We provide proof that the coefficient of variation (CV) is a direct measure of risk, using an equation derived here for the first time. We also present a method to generate a stock CV based on return that strongly correlates with stock price performance. Consequently, we find that the price growth of stocks with low but positive CV is approximately exponential, which explains our finding that the total return of US domestic stocks within $0 \le CV \le 1$ between Dec 2008 and Dec 2018 averaged around 475% and outperformed the average total return of stocks with $CV > 1$ and $CV > 4$ by 144% and 2000%, respectively. From these observations, we posit that minimizing portfolio CV not only minimizes risk but also maximizes return. Minimizing risk by minimizing the standard deviation of return (volatility), as espoused by Modern Portfolio Theory, only resulted in a meager average total return of 15%, and the low-risk (low-volatility) portfolio outperformed the high-risk portfolio by only 25%. These observations suggest that CV is a more reliable measure of risk than volatility.
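For reference, the standard definition behind a return-based CV (the paper's specific construction and derived equation are its own contribution and are not reproduced here) is
\[
CV = \frac{\sigma_R}{\mu_R},
\]
the ratio of the standard deviation of returns to their mean, so that for a positive mean return, $CV \le 1$ means the average return is at least as large as its standard deviation.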
We explore the evolution of daily returns of four major US stock market indices during the technology crash of 2000 and the financial crisis of 2007-2009. Our methodology is based on topological data analysis (TDA). We use persistent homology to detect and quantify topological patterns that appear in multidimensional time series. Using a sliding window, we extract time-dependent point cloud data sets, to which we associate a topological space. We detect transient loops that appear in this space, and we measure their persistence. This is encoded in real-valued functions referred to as 'persistence landscapes'. We quantify the temporal changes in persistence landscapes via their $L^p$-norms. We test this procedure on multidimensional time series generated by various non-linear and non-equilibrium models. We find that, in the vicinity of financial meltdowns, the $L^p$-norms exhibit strong growth prior to the primary peak, which ascends during a crash. Remarkably, the average spectral density at low frequencies of the time series of $L^p$-norms of the persistence landscapes demonstrates a strong rising trend for 250 trading days prior to either the dotcom crash on 03/10/2000 or the Lehman bankruptcy on 09/15/2008. Our study suggests that TDA provides a new type of econometric analysis, which goes beyond the standard statistical measures. The method can be used to detect early warning signals of imminent market crashes. We believe that this approach can be used beyond the analysis of the financial time series presented here.
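For a persistence diagram with birth-death pairs $\{(b_i, d_i)\}$, the persistence landscape and the norms used above are defined in the standard way by
\[
\lambda_k(t) = k\text{-th largest value of } \min\big(t - b_i,\; d_i - t\big)_+,
\qquad
\|\lambda\|_p = \Big(\sum_{k\ge 1} \int |\lambda_k(t)|^p\,dt\Big)^{1/p},
\]
so a window whose point cloud contains persistent loops produces landscapes with large $L^p$-norms.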
Recognizing subtle historical patterns is central to modeling and forecasting problems in time series analysis. Here we introduce and develop a new approach to quantify deviations in the underlying hidden generators of observed data streams, resulting in a new efficiently computable universal metric for time series. The proposed metric is universal in the sense that we can compare and contrast data streams regardless of where and how they are generated, without any feature engineering step. The approach proposed in this paper is conceptually distinct from our previous work on data smashing, and vastly improves discrimination performance and computing speed. The core idea here is the generalization of the notion of KL divergence, often used to compare probability distributions, to a notion of divergence for time series. We call this the sequence likelihood (SL) divergence, which may be used to measure deviations within a well-defined class of discrete-valued stochastic processes. We devise efficient estimators of SL divergence from finite sample paths and subsequently formulate a universal metric useful for computing distance between time series produced by hidden stochastic generators.
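The classical object being generalized is the divergence rate between two discrete-valued processes, i.e. the per-symbol limit of the expected log-likelihood ratio (a background display; the SL divergence and its finite-sample estimators refine this idea):
\[
\mathcal{D}(P\,\|\,Q) = \lim_{n\to\infty} \frac{1}{n}\,\mathbb{E}_P\!\left[\log \frac{P(X_1,\dots,X_n)}{Q(X_1,\dots,X_n)}\right],
\]
whenever the limit exists for the class of processes considered.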
The problem of rapid and automated detection of distinct market regimes is a topic of great interest to financial mathematicians and practitioners alike. In this paper, we outline an unsupervised learning algorithm for clustering financial time-series into a suitable number of temporal segments (market regimes). As a special case of the above, we develop a robust algorithm that automates the process of classifying market regimes. The method is robust in the sense that it does not depend on modelling assumptions about the underlying time series, as our experiments with real datasets show. This method -- dubbed the Wasserstein $k$-means algorithm -- frames such a problem as one on the space of probability measures with finite $p^{\text{th}}$ moment, in terms of the $p$-Wasserstein distance between (empirical) distributions. We compare our WK-means approach with more traditional clustering algorithms by studying the so-called maximum mean discrepancy scores between and within clusters. In both cases it is shown that the WK-means algorithm vastly outperforms all considered competitor approaches. We demonstrate the performance of all approaches both in a controlled environment on synthetic data, and on real data.
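A minimal one-dimensional sketch of the Wasserstein $k$-means idea, treating each return window as an empirical distribution, computing $W_2$ via sorted samples and barycenters as averages of quantile functions; this illustrates the principle and is not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def wasserstein2(a, b):
    """W_2 between two empirical measures with the same number of atoms:
    the L2 distance between sorted samples (quantile functions)."""
    return np.sqrt(np.mean((np.sort(a) - np.sort(b)) ** 2))

def wk_means(windows, k, iters=50):
    """Lloyd-style k-means in Wasserstein space for equally sized 1-D windows;
    a cluster barycenter is the pointwise average of sorted samples."""
    centers = [np.sort(windows[i]) for i in rng.choice(len(windows), k, replace=False)]
    labels = np.zeros(len(windows), dtype=int)
    for _ in range(iters):
        labels = np.array([np.argmin([wasserstein2(w, c) for c in centers])
                           for w in windows])
        for j in range(k):
            members = windows[labels == j]
            if len(members):                       # keep old center if a cluster empties
                centers[j] = np.mean(np.sort(members, axis=1), axis=0)
    return labels

# toy "regimes": calm vs volatile daily-return windows of equal length
calm = [rng.normal(0.0, 0.01, 250) for _ in range(30)]
wild = [rng.normal(0.0, 0.03, 250) for _ in range(30)]
print(wk_means(np.array(calm + wild), k=2))        # windows grouped by regime
```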
The rapid changes in the finance industry due to the increasing amount of data have revolutionized the techniques on data processing and data analysis and brought new theoretical and computational challenges. In contrast to classical stochastic control theory and other analytical approaches for solving financial decision-making problems that heavily rely on model assumptions, new developments from reinforcement learning (RL) are able to make full use of the large amount of financial data with fewer model assumptions and to improve decisions in complex financial environments. This survey paper aims to review the recent developments and use of RL approaches in finance. We give an introduction to Markov decision processes, which is the setting for many of the commonly used RL approaches. Various algorithms are then introduced with a focus on value- and policy-based methods that do not require any model assumptions. Connections are made with neural networks to extend the framework to encompass deep RL algorithms. Our survey concludes by discussing the application of these RL algorithms in a variety of decision-making problems in finance, including optimal execution, portfolio optimization, option pricing and hedging, market making, smart order routing, and robo-advising.
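As a reminder of the model-free, value-based template many of the surveyed methods build on, the tabular Q-learning update for an MDP with reward $r$, discount factor $\gamma$ and step size $\alpha$ is
\[
Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha\Big[r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t)\Big],
\]
with deep RL replacing the table by a neural-network approximation of $Q$.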