Zhongtai Securities Institute for Financial Studies
The existing Fréchet regression is defined within a linear framework: the weight function in the Fréchet objective function is linear, and the resulting Fréchet regression function is identified as a linear model when the random object belongs to a Hilbert space. Even nonparametric and semiparametric Fréchet regressions, which are usually nonlinear, are handled in the existing methods by local linear (or local polynomial) techniques, so the resulting Fréchet regressions are (locally) linear as well. In this paper, we introduce a class of nonlinear Fréchet regressions. This framework can fit essentially nonlinear models in a general metric space and uniquely identify the nonlinear structure in a Hilbert space. In particular, its generalized linear form reduces to the standard linear Fréchet regression under a special choice of the weight function. Moreover, the generalized linear form is methodologically and computationally simple because the Euclidean variable and the metric-space element are completely separable. Favorable theoretical properties (e.g., estimation consistency and a representation theorem) of the nonlinear Fréchet regressions are established systematically. Comprehensive simulation studies and an analysis of human mortality data demonstrate that the new strategy clearly outperforms its competitors.
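For reference, the linear weight structure the abstract refers to is that of global Fréchet regression (Petersen and Müller, 2019), where the conditional Fréchet mean is defined through a linearly weighted objective; a minimal sketch, with $\mu=\mathbb{E}[X]$ and $\Sigma=\mathrm{Cov}(X)$:

$$ m_\oplus(x)=\operatorname*{arg\,min}_{\omega\in\Omega}\ \mathbb{E}\bigl[s(X,x)\,d^{2}(Y,\omega)\bigr],\qquad s(X,x)=1+(X-\mu)^{\top}\Sigma^{-1}(x-\mu). $$

A nonlinear Fréchet regression, in this reading, replaces the linear weight $s$ with a nonlinear one.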
In this paper, we investigate the well-posedness of quadratic backward stochastic differential equations driven by G-Brownian motion (referred to as G-BSDEs) with double mean reflections. By employing a representation of the solution via G-BMO martingale techniques, along with fixed point arguments, the Skorokhod problem, the backward Skorokhod problem, and the θ-method, we establish existence and uniqueness results for such G-BSDEs under both bounded and unbounded terminal conditions.
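For orientation, in the classical (linear expectation) setting a mean reflection in the sense of Briand, Elie, and Hu (2018) constrains the law of $Y$ rather than its paths; a minimal sketch of a single reflection, with $\ell$ a running loss function and $K$ a deterministic nondecreasing process:

$$ Y_t=\xi+\int_t^T f(s,Y_s,Z_s)\,ds-\int_t^T Z_s\,dB_s+K_T-K_t,\qquad \mathbb{E}[\ell(t,Y_t)]\ge 0,\qquad \int_0^T \mathbb{E}[\ell(t,Y_t)]\,dK_t=0. $$

Double mean reflection imposes two such running constraints, with $K$ of bounded variation; the paper transfers this picture to the G-expectation framework.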
In this paper, we define the squared G-Bessel process as the square of the modulus of a class of G-Brownian motions and establish that it is the unique solution to a stochastic differential equation. We then derive several path properties of the squared G-Bessel process, which hold in the stronger capacity sense. Furthermore, we provide upper and lower bounds for the Laplace transform of the squared G-Bessel process. Finally, we prove that the time-space transformed squared G-Bessel process is a G'-CIR process.
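For comparison, the classical squared Bessel process arises in exactly this way: if $B$ is a $d$-dimensional Brownian motion, then $X_t=|B_t|^2$ satisfies, by Itô's formula,

$$ dX_t = d\,dt + 2\sqrt{X_t}\,dW_t $$

for some one-dimensional Brownian motion $W$; the squared G-Bessel process is the analogue built from G-Brownian motion, with the generator G encoding volatility uncertainty.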
We are interested in renewable estimators and algorithms for nonparametric models with streaming data. In our method, the nonparametric function of interest is expressed through a functional depending on a weight function and a conditional distribution function (CDF). The CDF is estimated by renewable kernel estimation combined with function interpolation, based on which we propose the method of renewable weighted composite quantile regression (WCQR). We then fully exploit the model structure to obtain new selectors for the weight function, such that the WCQR achieves asymptotic unbiasedness when estimating specific functions in the model. We also propose practical bandwidth selectors for streaming data and find the optimal weight function minimizing the asymptotic variance. The asymptotic results show that our estimator is almost equivalent to the oracle estimator computed from the entire dataset at once. Moreover, our method enjoys adaptiveness to error distributions, robustness to outliers, and efficiency in both estimation and computation. Simulation studies and real data analyses further confirm our theoretical findings.
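To fix ideas, a weighted composite quantile regression combines several quantile levels $\tau_1<\dots<\tau_K$ in one criterion; a minimal sketch of the standard form, where the intercepts $b_k$ and weights $\omega_k\ge 0$ are generic notation rather than the paper's:

$$ \min_{g,\,b_1,\dots,b_K}\ \sum_{k=1}^{K}\omega_k\,\mathbb{E}\bigl[\rho_{\tau_k}\bigl(Y-g(X)-b_k\bigr)\bigr],\qquad \rho_\tau(u)=u\bigl(\tau-\mathbf{1}\{u<0\}\bigr). $$

The abstract's weight selectors choose the $\omega_k$ so that specific model functions are estimated with asymptotic unbiasedness and minimal asymptotic variance.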
Detecting a minor average treatment effect is a major challenge in large-scale applications, where even minimal improvements can have a significant economic impact. Traditional methods, which rely on normal-distribution-based or expanded statistics, often fail to identify such minor effects because they cannot handle small discrepancies with sufficient sensitivity. This work leverages a counterfactual outcome framework and proposes a maximum-probability-driven two-armed bandit (TAB) process that weights the mean volatility statistic and controls the Type I error. Permutation methods further enhance robustness and efficacy. The established strategic central limit theorem (SCLT) shows that our approach yields a more concentrated distribution under the null hypothesis and a less concentrated one under the alternative, greatly improving statistical power. Experimental results indicate a significant improvement in A/B testing, highlighting the potential to reduce experimental costs while maintaining high statistical power.
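The permutation step the abstract mentions can be illustrated generically; the sketch below is a plain two-sample permutation test for a small mean difference, not the paper's weighted mean-volatility statistic or its bandit allocation:

```python
import numpy as np

def permutation_pvalue(treat, control, n_perm=10_000, seed=0):
    """Two-sided permutation test for a difference in means.

    Illustrative only: the paper's TAB procedure replaces the plain
    mean difference with a weighted mean-volatility statistic.
    """
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([treat, control])
    n_t = len(treat)
    observed = treat.mean() - control.mean()
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # re-randomize the group labels
        diff = pooled[:n_t].mean() - pooled[n_t:].mean()
        hits += abs(diff) >= abs(observed)
    return (hits + 1) / (n_perm + 1)  # add-one avoids a zero p-value
```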
This article studies inverse reinforcement learning (IRL) for the stochastic linear-quadratic optimal control problem with two agents. A learner agent does not know the expert agent's cost function but imitates the expert's behavior by constructing an underlying cost function that yields the same optimal feedback control as the expert's. We first develop a model-based IRL algorithm consisting of a policy correction and a policy update drawn from policy iteration in reinforcement learning, together with a cost-function weight reconstruction based on inverse optimal control. Under this scheme, we then propose a model-free off-policy IRL algorithm that needs neither knowledge nor identification of the system and only collects the behavior data of the expert and learner agents once during the iteration process. Moreover, we prove the algorithm's convergence and stability and characterize the non-uniqueness of its solutions. Finally, a simulation example verifies the effectiveness of the proposed algorithm.
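To fix ideas, the policy-iteration backbone can be written in its discrete-time LQR form (a standard analogue, not the paper's exact continuous-time stochastic setting): given a stabilizing gain $K_j$, policy evaluation solves the Lyapunov equation

$$ P_j=(A-BK_j)^{\top}P_j(A-BK_j)+Q+K_j^{\top}RK_j, $$

and policy improvement updates

$$ K_{j+1}=(R+B^{\top}P_jB)^{-1}B^{\top}P_jA. $$

The inverse step runs this map in reverse: it searches for weights $(Q,R)$ under which the observed expert gain is a fixed point of the improvement update, which is also why the reconstructed cost is not unique.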
Using a vorticity and expansion-rate formulation of a class of compressible viscous flows with a heat source, we study Oberbeck-Boussinesq flows. To this end, we establish a new integral representation for solutions of parabolic equations subject to certain boundary conditions, which allows us to develop a random vortex method for certain compressible flows and to compute numerical solutions of their dynamical models. Numerical experiments are carried out which not only capture detailed Bénard convection but also provide additional information on the fluid density and the dynamics of the expansion rate of the flow.
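For orientation, the classical Oberbeck-Boussinesq approximation couples an incompressible Navier-Stokes flow to a temperature field $\theta$ through buoyancy (here with viscosity $\nu$, thermal diffusivity $\kappa$, expansion coefficient $\beta$, gravity $g$, and heat source $q$; the paper itself works with a compressible formulation via vorticity and expansion-rate):

$$ \nabla\cdot u=0,\qquad \partial_t u+(u\cdot\nabla)u=\nu\Delta u-\nabla p+\beta g\,\theta\,e_3,\qquad \partial_t\theta+(u\cdot\nabla)\theta=\kappa\Delta\theta+q. $$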
We develop a numerical method for the simulation of incompressible viscous flows by integrating the random vortex method with the core idea of Large Eddy Simulation (LES). Specifically, we utilize the filtering method in LES, interpreted as spatial averaging, along with the integral representation theorem for parabolic equations, to achieve a closure scheme which may be used for calculating solutions of Navier-Stokes equations. This approach circumvents the challenge of handling the non-locally integrable 3-dimensional integral kernel in the random vortex method and facilitates the computation of numerical solutions for flow systems via a Monte Carlo method. Numerical simulations are carried out for both laminar and turbulent flows, demonstrating the validity and effectiveness of the method.
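As background, the random vortex method in two dimensions rests on a stochastic representation of the vorticity $\omega=\nabla\times u$, which solves $\partial_t\omega+(u\cdot\nabla)\omega=\nu\Delta\omega$; a minimal sketch:

$$ dX_t^{\xi}=u(X_t^{\xi},t)\,dt+\sqrt{2\nu}\,dB_t,\quad X_0^{\xi}=\xi,\qquad u(x,t)=\int_{\mathbb{R}^2}\mathbb{E}\bigl[K(x-X_t^{\xi})\bigr]\,\omega_0(\xi)\,d\xi, $$

with the Biot-Savart kernel $K(x)=\frac{1}{2\pi}\frac{(-x_2,x_1)}{|x|^2}$. In three dimensions the analogous kernel fails to be integrable in the sense required by the method, which is the difficulty the LES-based closure above is designed to bypass.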
Dimension reduction and data quantization are two important methods for reducing data complexity. In this paper, we study the methodology of first reducing data dimension by random projection and then quantizing the projections to ternary or binary codes, which has been widely applied in classification. Usually, quantization seriously degrades classification accuracy because of high quantization errors. Interestingly, however, we observe that quantization can provide comparable and often superior accuracy when the data to be quantized are sparse features generated by common filters. Furthermore, this quantization property is maintained in random projections of sparse features, provided both the features and the random projection matrices are sufficiently sparse. We validate and analyze this intriguing property through extensive experiments.
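A minimal sketch of the pipeline, assuming a signed sparse projection in the style of Achlioptas/Li and a simple dead-zone ternary quantizer (the density and threshold below are illustrative, not the paper's settings):

```python
import numpy as np

def sparse_projection_matrix(d_in, d_out, density=0.1, seed=0):
    """Sparse signed random projection: most entries are zero."""
    rng = np.random.default_rng(seed)
    mask = rng.random((d_out, d_in)) < density
    signs = rng.choice([-1.0, 1.0], size=(d_out, d_in))
    return mask * signs / np.sqrt(density * d_in)

def ternary_quantize(z, tau):
    """Map each projection to {-1, 0, +1}; |z| <= tau collapses to 0."""
    return np.sign(z) * (np.abs(z) > tau)

# Usage: project sparse (here ReLU-thresholded) features, quantize, then
# feed the ternary codes to any downstream classifier.
X = np.maximum(np.random.default_rng(1).standard_normal((100, 512)), 0.0)
R = sparse_projection_matrix(512, 64)
codes = ternary_quantize(X @ R.T, tau=0.5)
```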
This paper investigates a class of multi-dimensional backward stochastic differential equations (BSDEs) with singular generators exhibiting diagonally quadratic growth and unbounded terminal conditions, thereby extending results in the literature. We present an example of such equations in optimal investment decisions.
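Here, diagonally quadratic growth is standardly understood as quadratic growth of the $i$-th generator in the $i$-th row of $z$ only; a commonly used form of the condition (our notation, not necessarily the paper's) is

$$ |f^{i}(t,\omega,y,z)|\ \le\ \alpha_t+\beta|y|+\tfrac{\gamma}{2}\,|z^{i}|^{2}, $$

which decouples the quadratic nonlinearity across components and makes multi-dimensional fixed-point arguments tractable.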
We investigate backward stochastic differential equations reflected at two barriers, with data from rank-based stochastic differential equations. More specifically, we focus on the solution of backward stochastic differential equations constrained to lie between two prescribed upper and lower boundary processes. We rigorously show that this solution provides a probabilistic representation of the viscosity solution of obstacle problems for the corresponding parabolic partial differential equations. As an application, the pricing problem of an American game option is studied.
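In the standard two-barrier formulation, the solution is a quadruple $(Y,Z,K^{+},K^{-})$ with

$$ Y_t=\xi+\int_t^T f(s,Y_s,Z_s)\,ds+(K_T^{+}-K_t^{+})-(K_T^{-}-K_t^{-})-\int_t^T Z_s\,dB_s, $$

$$ L_t\le Y_t\le U_t,\qquad \int_0^T(Y_t-L_t)\,dK_t^{+}=\int_0^T(U_t-Y_t)\,dK_t^{-}=0, $$

so $K^{\pm}$ push $Y$ off the lower barrier $L$ and upper barrier $U$ only when the constraint binds; in the game-option application the barriers encode the holder's and issuer's exercise payoffs.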
In recent years, stabilizing unknown dynamical systems has become a critical problem in control systems engineering. Addressing it for linear time-invariant (LTI) systems is an essential first step towards solving similar problems for more complex systems. In this paper, we develop a model-free reinforcement learning algorithm that computes stabilizing feedback gains for stochastic LTI systems with unknown system matrices. The algorithm solves a series of discounted stochastic linear quadratic (SLQ) optimal control problems via policy iteration (PI), with the discount factor gradually decreasing according to an explicit rule derived from an equivalent condition for verifying stabilizability. We prove that this method returns a stabilizer after finitely many steps. Finally, a numerical example illustrates the effectiveness of the proposed method.
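The role of the discount can be seen already in the deterministic continuous-time case (a heuristic sketch; the paper's stochastic setting and explicit rule are more delicate): for the cost $\int_0^\infty e^{-\gamma t}\,(x^{\top}Qx+u^{\top}Ru)\,dt$, a gain $K$ is admissible whenever

$$ \max_i\,\operatorname{Re}\lambda_i(A-BK)\ <\ \tfrac{\gamma}{2}, $$

so heavy discounting makes the problem solvable even for non-stabilizing gains, and driving the discount down tightens the stability margin until the returned gain genuinely stabilizes the system.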
The metric-adjusted skew information establishes a connection between the geometrical formulation of quantum statistics and the measures of quantum information. We study uncertainty relations in product and sum forms of the metric-adjusted skew information. We present lower bounds for product-form and sum-form uncertainty inequalities based on the metric-adjusted skew information via the operator representation of observables. Explicit examples are provided to support our claims.
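For concreteness, the best-known special cases of the metric-adjusted skew information are the Wigner-Yanase-Dyson family: for a state $\rho$ and observable $A$,

$$ I_{\rho}^{p}(A)=-\tfrac12\,\mathrm{Tr}\bigl([\rho^{p},A]\,[\rho^{1-p},A]\bigr),\qquad 0<p<1, $$

with $p=\tfrac12$ giving the Wigner-Yanase skew information $-\tfrac12\mathrm{Tr}\bigl([\sqrt{\rho},A]^{2}\bigr)$; each regular Morozova-Chentsov function defines one such metric-adjusted measure.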
Several well-established benchmark predictors exist for Value-at-Risk (VaR), a major instrument for financial risk management. Hybrid methods combining AR-GARCH filtering with skewed-t residuals and the extreme value theory-based approach are particularly recommended. This study introduces yet another VaR predictor, G-VaR, which follows a novel methodology. Inspired by the recent mathematical theory of sublinear expectation, G-VaR is built upon the concept of model uncertainty, which in the present case signifies that the inherent volatility of financial returns cannot be characterized by a single distribution but rather by infinitely many statistical distributions. By considering the worst scenario among these potential distributions, the G-VaR predictor is precisely identified. Extensive experiments on both the NASDAQ Composite Index and the S&P 500 Index demonstrate the excellent performance of the G-VaR predictor, which is superior to most existing benchmark VaR predictors.
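Schematically, the worst-scenario construction takes a family $\mathcal{P}$ of candidate return distributions and sets

$$ \mathrm{G\text{-}VaR}_{\alpha}(X)\;=\;\sup_{P\in\mathcal{P}}\mathrm{VaR}^{P}_{\alpha}(X). $$

For intuition (a simplified subfamily, not the full G-normal computation): if $\mathcal{P}$ contains the zero-mean normal laws with volatility $\sigma\in[\underline{\sigma},\overline{\sigma}]$, the worst case at a lower-tail level $\alpha<1/2$ is the maximal-volatility quantile $\overline{\sigma}\,|\Phi^{-1}(\alpha)|$.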
To achieve robustness of risk across different assets, risk parity investing rules, a particular state of risk contributions, have grown in popularity over the past few decades. To generalize the concept of risk contribution from the simple covariance-matrix case to the continuous-time case, in which the terminal variance of wealth is used as the risk measure, we characterize risk contributions and marginal risk contributions on various assets as predictable processes, using the Gateaux differential and the Doléans measure. The risk contributions we extend here retain the aggregation property, namely that total risk can be represented as the aggregation of contributions across assets and across $(t,\omega)$. Subsequently, as the inverse problem of allocating risk, we also explore the continuous-time risk budgeting problem of obtaining policies whose risk contributions coincide with pre-given risk budgets. These policies are solutions to stochastic convex optimizations parametrized by the pre-given risk budgets. Moreover, single-period risk budgeting policies are interpreted as projections of the continuous-time risk budgeting policies. On the application side, the volatility-managed portfolios of [Moreira and Muir, 2017] can be obtained by risk budgeting optimization; in line with previous findings, the continuous-time mean-variance allocation of [Zhou and Li, 2000] appears concentrated in terms of risk contribution.
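The covariance-matrix case being generalized is the classical Euler decomposition: with portfolio weights $w$, covariance $\Sigma$, and risk $\sigma(w)=\sqrt{w^{\top}\Sigma w}$,

$$ \mathrm{RC}_i \;=\; w_i\,\frac{\partial\sigma(w)}{\partial w_i}\;=\;\frac{w_i\,(\Sigma w)_i}{\sqrt{w^{\top}\Sigma w}},\qquad \sum_{i}\mathrm{RC}_i=\sigma(w), $$

and risk parity asks all $\mathrm{RC}_i$ to be equal; the paper's aggregation property is the continuous-time counterpart of this additivity.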
The uncertainty principle is one of the fundamental features of quantum mechanics and plays a vital role in quantum information processing. We study uncertainty relations based on metric-adjusted skew information for finite quantum observables. Motivated by the paper [Physical Review A 104, 052414 (2021)], we establish tighter uncertainty relations in terms of different norm inequalities. Naturally, we generalize the method to uncertainty relations of metric-adjusted skew information for quantum channels and unitary operators. As both the Wigner-Yanase-Dyson skew information and the quantum Fisher information are special cases of the metric-adjusted skew information corresponding to different Morozova-Chentsov functions, our results generalize some existing uncertainty relations. Detailed examples are given to illustrate the advantages of our methods.
In this paper, we first investigate the estimation of the empirical joint Laplace transform of the volatilities of two semimartingales over a fixed time interval [0, T], using overlapping increments of high-frequency data. The proposed estimator is robust to the presence of finite-variation jumps in the price processes. A functional central limit theorem for the proposed estimator is established. Compared with the estimator based on non-overlapping increments, the overlapping-increment estimator improves the asymptotic estimation efficiency. Moreover, we study the asymptotic theory of the estimator in a long-span setting and employ it to construct a feasible test for dependence between the volatilities. Finally, simulation and empirical studies demonstrate the performance of the proposed estimators.
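The one-dimensional building block is the realized Laplace transform of volatility (Todorov and Tauchen, 2012): with $n$ high-frequency increments $\Delta_i^n X$ of step $\Delta_n$ over $[0,T]$,

$$ \frac{1}{n}\sum_{i=1}^{n}\cos\!\Bigl(\sqrt{2u}\,\Delta_i^n X/\sqrt{\Delta_n}\Bigr)\;\xrightarrow{\ \mathbb{P}\ }\;\frac{1}{T}\int_0^T e^{-u\,\sigma_s^2}\,ds, $$

since $\mathbb{E}[\cos(aZ)]=e^{-a^2\sigma^2/2}$ for $Z\sim N(0,\sigma^2)$; the paper's joint, overlapping-increment estimator extends this construction to two semimartingales.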
In this paper, we study numerical methods for solving forward-backward stochastic differential equations driven by G-Brownian motion (G-FBSDEs), which correspond to fully nonlinear partial differential equations (PDEs). First, we give an approximate conditional G-expectation and obtain feasible methods to calculate the distribution of G-Brownian motion. On this basis, we propose efficient numerical schemes for G-FBSDEs. We rigorously analyze the errors of the proposed schemes and prove convergence results. Finally, several numerical experiments demonstrate the accuracy of our method.
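A schematic backward Euler step conveys the flavor (with $\hat{\mathbb{E}}_{t_i}$ the conditional G-expectation in place of the classical one; the paper's schemes, and the treatment of the decreasing G-martingale part, are more involved):

$$ Y_{t_i}\approx\hat{\mathbb{E}}_{t_i}\bigl[Y_{t_{i+1}}\bigr]+f\bigl(t_i,Y_{t_i},Z_{t_i}\bigr)\Delta t,\qquad Z_{t_i}\approx\frac{1}{\Delta t}\,\hat{\mathbb{E}}_{t_i}\bigl[Y_{t_{i+1}}\,\Delta B_{t_i}\bigr], $$

so the whole difficulty concentrates in evaluating $\hat{\mathbb{E}}_{t_i}$, which is exactly what the approximate conditional G-expectation is for.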
This paper proposes a new feature screening method, based on empirical likelihood, for the multi-response ultrahigh-dimensional linear model. Through a multivariate moment condition, the empirical-likelihood-induced ranking statistic can exploit the joint effect among responses and thus performs much better than methods that consider the responses individually. More importantly, through the use of empirical likelihood, the new method adapts to heterogeneity in the conditional variance of the random error. The sure screening property of the proposed method is proved, with the model size controlled within a reasonable scale. Additionally, the screening method is extended to a conditional version that can recover hidden predictors easily missed by the unconditional method. The corresponding theoretical properties are also provided. Finally, both numerical studies and real data analysis illustrate the effectiveness of the proposed methods.
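A plausible form of the marginal statistic, following marginal empirical likelihood screening (the exact construction in the paper may differ): for predictor $j$, form the multivariate moment $g_{ij}=X_{ij}\,\mathbf{Y}_i\in\mathbb{R}^{q}$ and rank predictors by the empirical likelihood ratio evaluated at zero,

$$ \ell_j=-2\log\max\Bigl\{\textstyle\prod_{i=1}^{n} n w_i\ :\ w_i\ge 0,\ \sum_{i} w_i=1,\ \sum_{i} w_i\,g_{ij}=0\Bigr\}, $$

keeping the predictors with the largest $\ell_j$; the empirical likelihood studentizes internally, which is one route to the adaptivity to heteroscedastic errors claimed above.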