econometrics
Difference-in-differences (DID) is one of the most popular tools used to evaluate causal effects of policy interventions. This paper extends the DID methodology to accommodate interval outcomes, which are often encountered in empirical studies using survey or administrative data. We point out that a naive application or extension of the conventional parallel trends assumption may yield uninformative or counterintuitive results, and present a suitable identification strategy, called parallel shifts, which exhibits desirable properties. The practical attractiveness of the proposed method is illustrated by revisiting an influential minimum wage study by Card and Krueger (1994).
There has been significant progress in Bayesian inference based on sparsity-inducing (e.g., spike-and-slab and horseshoe-type) priors for high-dimensional regression models. The resulting posteriors, however, in general do not possess desirable frequentist properties, and the credible sets thus cannot serve as valid confidence sets even asymptotically. We introduce a novel debiasing approach that corrects the bias for the entire Bayesian posterior distribution. We establish a new Bernstein-von Mises theorem that guarantees the frequentist validity of the debiased posterior. We demonstrate the practical performance of our proposal through Monte Carlo simulations and two empirical applications in economics.
We consider an adaptive experiment for treatment choice and design a minimax and Bayes optimal adaptive experiment with respect to regret. Given binary treatments, the experimenter's goal is to choose the treatment with the highest expected outcome through an adaptive experiment, in order to maximize welfare. We consider adaptive experiments that consist of two phases, the treatment allocation phase and the treatment choice phase. The experiment starts with the treatment allocation phase, where the experimenter allocates treatments to experimental subjects to gather observations. During this phase, the experimenter can adaptively update the allocation probabilities using the observations obtained in the experiment. After the allocation phase, the experimenter proceeds to the treatment choice phase, where one of the treatments is selected as the best. Within this procedure, we propose an adaptive experiment that splits the treatment allocation phase into two stages: we first estimate the standard deviations and then allocate each treatment proportionally to its standard deviation. We show that this experiment, often referred to as Neyman allocation, is minimax and Bayes optimal in the sense that its regret upper bounds exactly match the lower bounds that we derive. To show this optimality, we derive minimax and Bayes lower bounds for the regret using change-of-measure arguments. Then, we evaluate the corresponding upper bounds using the central limit theorem and large deviation bounds.
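A minimal sketch of the two-stage allocation described above, assuming two arms, an even pilot split, and a final choice by sample mean; the function names, the 50/50 pilot share, and the Gaussian usage example are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def two_stage_neyman_experiment(draw_outcome, T, pilot_frac=0.5, rng=None):
    """Two-stage adaptive experiment with Neyman allocation for two arms.

    draw_outcome(arm, size) returns outcomes for the given arm.
    Stage 1: split the pilot budget evenly to estimate each arm's std. dev.
    Stage 2: allocate the remaining budget proportionally to the estimated
    standard deviations, then choose the arm with the highest sample mean.
    """
    rng = np.random.default_rng(rng)
    n_pilot = int(pilot_frac * T)
    n1 = n_pilot // 2
    y = {0: list(draw_outcome(0, n1)), 1: list(draw_outcome(1, n_pilot - n1))}

    # Stage 1: estimate standard deviations from the pilot observations.
    s = np.array([np.std(y[0], ddof=1), np.std(y[1], ddof=1)])
    p1 = s[1] / (s[0] + s[1])  # Neyman share for arm 1

    # Stage 2: draw the remaining subjects with the Neyman proportions.
    arms = rng.binomial(1, p1, size=T - n_pilot)
    for a in arms:
        y[int(a)].append(draw_outcome(int(a), 1)[0])

    means = [np.mean(y[0]), np.mean(y[1])]
    return int(np.argmax(means)), means

# Example: arm 1 has a higher mean but also a higher variance.
choice, means = two_stage_neyman_experiment(
    lambda a, n: np.random.default_rng().normal(0.1 * a, 1.0 + a, size=n), T=2000)
print(choice, means)
```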
\citet{farrell2021deep} establish non-asymptotic high-probability bounds for general deep feedforward neural network (with rectified linear unit activation function) estimators, with \citet[Theorem 1]{farrell2021deep} achieving a suboptimal convergence rate for fully connected feedforward networks. The authors suggest that improved approximation of fully connected networks could yield sharper versions of \citet[Theorem 1]{farrell2021deep} without altering the theoretical framework. By deriving approximation bounds specifically for a narrower fully connected deep neural network, this note demonstrates that \citet[Theorem 1]{farrell2021deep} can be improved to achieve an optimal rate (up to a logarithmic factor). Furthermore, this note briefly shows that deep neural network estimators can mitigate the curse of dimensionality for functions with compositional structure and functions defined on manifolds.
We develop a theoretical framework for sample splitting in A/B testing environments, where data for each test are partitioned into two splits to measure methodological performance when the true impacts of tests are unobserved. We show that sample-split estimators are generally biased for full-sample performance but consistently estimate sample-split analogues of it. We derive their asymptotic distributions, construct valid confidence intervals, and characterize the bias-variance trade-offs underlying sample-split design choices. We validate our theoretical results through simulations and provide implementation guidance for A/B testing products seeking to evaluate new estimators and decision rules.
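As a rough illustration of the setup, the sketch below evaluates a candidate effect estimator across many tests by comparing it, on held-out halves, against a simple difference in means; the split share, the benchmark, and the squared-error summary are illustrative assumptions rather than the paper's estimands.

```python
import numpy as np

def sample_split_evaluation(tests, estimator, split_frac=0.5, seed=0):
    """Evaluate a candidate treatment-effect estimator across many A/B tests.

    Each test is a dict with arrays 'y' (outcomes) and 'd' (0/1 assignments).
    Split 1 feeds the candidate estimator; split 2 provides an independent
    difference-in-means benchmark, so the squared discrepancy estimates a
    sample-split analogue of performance when true effects are unobserved.
    """
    rng = np.random.default_rng(seed)
    errors = []
    for test in tests:
        n = len(test["y"])
        idx = rng.permutation(n)
        cut = int(split_frac * n)
        s1, s2 = idx[:cut], idx[cut:]
        est = estimator(test["y"][s1], test["d"][s1])
        y2, d2 = test["y"][s2], test["d"][s2]
        benchmark = y2[d2 == 1].mean() - y2[d2 == 0].mean()
        errors.append((est - benchmark) ** 2)
    return float(np.mean(errors))
```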
We propose a new formulation of the maximum score estimator that uses compositions of rectified linear unit (ReLU) functions, instead of indicator functions as in Manski (1975, 1985), to encode the sign alignment restrictions. Since the ReLU function is Lipschitz, our new ReLU-based maximum score criterion function is substantially easier to optimize using standard gradient-based optimization packages. We also show that our ReLU-based maximum score (RMS) estimator can be generalized to an umbrella framework defined by multi-index single-crossing (MISC) conditions, while the original maximum score estimator cannot be applied. We establish the $n^{-s/(2s+1)}$ convergence rate and asymptotic normality for the RMS estimator under order-$s$ Hölder smoothness. In addition, we propose an alternative estimator using a further reformulation of RMS as a special layer in a deep neural network (DNN) architecture, which allows the estimation procedure to be implemented via state-of-the-art software and hardware for DNN.
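To make the construction concrete, here is a hedged sketch of how a composition of ReLUs can stand in for the indicator in a Manski-type sign-alignment criterion; the scale parameter eps and this particular clipped-ramp composition are illustrative and need not coincide with the paper's RMS formulation.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def ramp_indicator(z, eps=0.1):
    """Piecewise-linear surrogate for 1[z >= 0], built from two ReLUs.

    relu(z/eps) - relu(z/eps - 1) equals 0 for z < 0, z/eps on [0, eps],
    and 1 for z > eps, so it approaches the indicator as eps shrinks.
    Unlike the indicator, it is Lipschitz and hence gradient-friendly.
    """
    return relu(z / eps) - relu(z / eps - 1.0)

def relu_max_score(beta, y, X, eps=0.1):
    """Smoothed maximum-score objective: average agreement between the signs
    (2y - 1) and the surrogate indicator of X @ beta (to be maximized)."""
    return np.mean((2 * y - 1) * ramp_indicator(X @ beta, eps))
```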
Timely assessment of current conditions is essential, especially for small, open economies such as Singapore, where external shocks transmit rapidly to domestic activity. We develop a real-time nowcasting framework for quarterly GDP growth using a high-dimensional panel of approximately 70 economic and financial indicators over 1990Q1-2023Q2. The analysis covers penalized regressions, dimensionality-reduction methods, ensemble learning algorithms, and neural architectures, benchmarked against a Random Walk, an AR(3), and a Dynamic Factor Model. The pipeline preserves temporal ordering through an expanding-window walk-forward design with Bayesian hyperparameter optimization, and uses moving block-bootstrap procedures both to construct prediction intervals and to obtain confidence bands for feature-importance measures. It adopts model-specific and XAI-based explainability tools. A Model Confidence Set procedure identifies statistically superior learners, which are then combined through simple, weighted, and exponentially weighted schemes; the resulting time-varying weights provide an interpretable representation of model contributions. Predictive ability is assessed via Giacomini-White tests. Empirical results show that penalized regressions, dimensionality-reduction models, and GRU networks consistently outperform all benchmarks, with RMSFE reductions of roughly 40-60%; aggregation delivers further gains. Feature-attribution methods highlight industrial production, external trade, and labor-market indicators as dominant drivers of Singapore's short-run growth dynamics.
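A minimal sketch of the expanding-window walk-forward design mentioned above; the model interface, the initial window length, and the RMSFE summary are illustrative, and the full pipeline (Bayesian hyperparameter optimization, block bootstrap, Model Confidence Set) is not reproduced here.

```python
import numpy as np

def expanding_window_nowcast(X, y, make_model, first_train_size):
    """Walk-forward nowcasting: at each quarter t, fit on all data up to t-1
    (expanding window, temporal order preserved) and predict quarter t."""
    preds, actuals = [], []
    for t in range(first_train_size, len(y)):
        model = make_model()
        model.fit(X[:t], y[:t])          # only information available before t
        preds.append(model.predict(X[t:t + 1])[0])
        actuals.append(y[t])
    preds, actuals = np.array(preds), np.array(actuals)
    rmsfe = np.sqrt(np.mean((preds - actuals) ** 2))
    return preds, rmsfe

# Usage with any scikit-learn style regressor, e.g.:
# from sklearn.linear_model import Lasso
# preds, rmsfe = expanding_window_nowcast(X, y, lambda: Lasso(alpha=0.1), 60)
```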
Foundation models, and in particular large language models, can generate highly informative responses, prompting growing interest in using these "synthetic" outputs as data in empirical research and decision-making. This paper introduces the idea of a foundation prior, under which model-generated outputs are treated not as real observations, but as draws from the prior predictive distribution induced by the foundation prior. As such, synthetic data reflect both the model's learned patterns and the user's subjective priors, expectations, and biases. We model the subjectivity of the generative process by making explicit the dependence of synthetic outputs on the user's anticipated data distribution, the prompt-engineering process, and the trust placed in the foundation model. We derive the foundation prior as an exponentially tilted, generalized Bayesian update of the user's primitive prior, where a trust parameter governs the weight assigned to synthetic data. We then show how synthetic data and the associated foundation prior can be incorporated into standard statistical and econometric workflows, and discuss their use in applications such as refining complex models, informing latent constructs, guiding experimental design, and augmenting random-coefficient and partially linear specifications. By treating generative outputs as structured, explicitly subjective priors rather than as empirical observations, the framework offers a principled way to harness foundation models in empirical work while avoiding the conflation of synthetic "facts" with real data.
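In symbols, one hedged reading of the update described above: with primitive prior $\pi_0$, synthetic output $\tilde{D}$ generated by the foundation model, and trust parameter $\tau \ge 0$ (notation ours, for illustration), the foundation prior takes the exponentially tilted, generalized-Bayes form
\[
\pi_{\mathrm{F}}(\theta \mid \tilde{D}) \;\propto\; \pi_0(\theta)\, p(\tilde{D} \mid \theta)^{\tau},
\]
so that $\tau = 0$ ignores the synthetic data entirely while larger values of $\tau$ place more weight on it.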
This article introduces a leave-one-out regression adjustment estimator (LOORA) for estimating average treatment effects in randomized controlled trials. The method removes the finite-sample bias of conventional regression adjustment and provides exact variance expressions for LOORA versions of the Horvitz-Thompson and difference-in-means estimators under simple and complete random assignment. Ridge regularization limits the influence of high-leverage observations, improving stability and precision in small samples. In large samples, LOORA attains the asymptotic efficiency of the regression-adjusted estimator characterized by Lin (2013, Annals of Applied Statistics), while remaining exactly unbiased. To construct confidence intervals, we rely on asymptotic variance estimates that treat the estimator as a two-step procedure, accounting for both the regression adjustment and the random assignment stages. Two within-subject experimental applications that provide realistic joint distributions of potential outcomes as ground truth show that LOORA eliminates substantial biases and achieves close-to-nominal confidence interval coverage.
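A hedged sketch of the leave-one-out idea for a Horvitz-Thompson version under simple random assignment with known assignment probability p; the pooled ridge adjustment regression and its exact form are illustrative choices and may differ from the paper's estimator.

```python
import numpy as np
from sklearn.linear_model import Ridge

def loo_regression_adjustment(y, d, X, p=0.5, alpha=1.0):
    """Leave-one-out regression-adjusted Horvitz-Thompson estimate of the ATE.

    For each unit i, a ridge regression of outcomes on covariates is fitted on
    all other units; the fitted value is subtracted from Y_i before applying
    Horvitz-Thompson weights with known assignment probability p. Because the
    adjustment never uses unit i's own data, it adds no finite-sample bias
    under simple random assignment.
    """
    n = len(y)
    adjusted = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        model = Ridge(alpha=alpha).fit(X[mask], y[mask])
        adjusted[i] = y[i] - model.predict(X[i:i + 1])[0]
    weights = d / p - (1 - d) / (1 - p)
    return float(np.mean(weights * adjusted))
```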
In random matrix theory, the spectral distribution of the covariance matrix has been well studied under the large dimensional asymptotic regime, when the dimensionality and the sample size tend to infinity at the same rate. However, most existing theories are built upon the assumption of independent and identically distributed samples, which may be violated in practice, for example by observations of continuous-time processes at discrete time points, namely high-frequency data. In this paper, we extend the classical spectral analysis of the covariance matrix in large dimensional random matrix theory to the spot volatility matrix by using high-frequency data. We establish the first-order limiting spectral distribution and obtain a second-order result, that is, the central limit theorem for linear spectral statistics. Moreover, we apply the results to design some feasible tests for the spot volatility matrix, including the identity and sphericity tests. Simulation studies justify the finite sample performance of the test statistics and verify our established theory.
A method called Cross-Validated Causal Inference (CVCI) combines experimental and observational datasets to derive robust and precise average treatment effect estimates. Researchers from UC Berkeley and Stanford developed this approach, which adaptively weights the contributions of each data source through cross-validation on experimental data.
Random Utility Models (RUMs) are a classical framework for modeling user preferences and play a key role in reward modeling for Reinforcement Learning from Human Feedback (RLHF). However, a crucial shortcoming of many of these techniques is the Independence of Irrelevant Alternatives (IIA) assumption, which collapses \emph{all} human preferences to a universal underlying utility function, yielding a coarse approximation of the range of human preferences. On the other hand, statistical and computational guarantees for models avoiding this assumption are scarce. In this paper, we investigate the statistical and computational challenges of learning a \emph{correlated} probit model, a fundamental RUM that avoids the IIA assumption. First, we establish that the classical data collection paradigm of pairwise preference data is \emph{fundamentally insufficient} to learn correlational information, explaining the lack of statistical and computational guarantees in this setting. Next, we demonstrate that \emph{best-of-three} preference data provably overcomes these shortcomings, and devise a statistically and computationally efficient estimator with near-optimal performance. These results highlight the benefits of higher-order preference data in learning correlated utilities, allowing for more fine-grained modeling of human preferences. Finally, we validate these theoretical guarantees on several real-world datasets, demonstrating improved personalization of human preferences.
This paper develops a sensitivity analysis framework that transfers the average total treatment effect (ATTE) from source data with a fully observed network to target data whose network is completely unknown. The ATTE represents the average social impact of a policy that assigns the treatment to every individual in the dataset. We postulate a covariate-shift type assumption that both source and target datasets share the same conditional mean outcome. However, because the target network is unobserved, this assumption alone is not sufficient to pin down the ATTE for the target data. To address this issue, we consider a sensitivity analysis based on the uncertainty of the target network's degree distribution, where the extent of uncertainty is measured by the Wasserstein distance from a given reference degree distribution. We then construct bounds on the target ATTE using a linear programming-based estimator. The limiting distribution of the bound estimator is derived via the functional delta method, and we develop a wild bootstrap approach to approximate the distribution. As an empirical illustration, we revisit the social network experiment on farmers' weather insurance adoption in China by Cai et al. (2015).
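To fix ideas, here is a hedged sketch of a linear-programming bound of the kind described above, under the simplifying assumption that the target quantity can be written as $\sum_k q_k\, g(k)$ for a degree-specific mean function $g$ learned from the source data, with the unknown target degree distribution $q$ constrained to a 1-Wasserstein ball of radius delta around a reference degree distribution; the CDF-based linearization of the Wasserstein constraint and the use of scipy.optimize.linprog are implementation choices, not the paper's estimator.

```python
import numpy as np
from scipy.optimize import linprog

def atte_bounds(support, g, ref_pmf, delta):
    """Bounds on sum_k q_k * g_k over degree distributions q within a
    1-Wasserstein ball of radius delta around the reference pmf.

    On an ordered finite support, W1(q, ref) = sum_k gap_k * |F_q(k) - F_ref(k)|,
    which is linearized with auxiliary variables t_k >= |F_q(k) - F_ref(k)|.
    Decision variables: x = [q_1..q_K, t_1..t_{K-1}], all nonnegative.
    """
    support = np.asarray(support, dtype=float)
    K = len(support)
    gaps = np.diff(support)                          # length K-1
    F_ref = np.cumsum(ref_pmf)[:-1]                  # reference CDF at first K-1 points
    L = np.tril(np.ones((K - 1, K)))                 # rows give partial sums F_q(k)

    # |F_q - F_ref| <= t as two inequality blocks, plus the Wasserstein budget.
    A_ub = np.block([
        [L,                -np.eye(K - 1)],
        [-L,               -np.eye(K - 1)],
        [np.zeros((1, K)), gaps.reshape(1, -1)],
    ])
    b_ub = np.concatenate([F_ref, -F_ref, [delta]])
    A_eq = np.concatenate([np.ones(K), np.zeros(K - 1)]).reshape(1, -1)

    out = []
    for sign in (+1, -1):                            # +1: lower bound, -1: upper bound
        c = np.concatenate([sign * np.asarray(g, dtype=float), np.zeros(K - 1)])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0])
        out.append(sign * res.fun)
    return tuple(out)                                # (lower, upper)
```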
The Difference-in-Differences (DiD) method is a fundamental tool for causal inference, yet its application is often complicated by missing data. Although recent work has developed robust DiD estimators for complex settings like staggered treatment adoption, these methods typically assume complete data and fail to address the critical challenge of outcomes that are missing at random (MAR) -- a common problem that invalidates standard estimators. We develop a rigorous framework, rooted in semiparametric theory, for identifying and efficiently estimating the Average Treatment Effect on the Treated (ATT) when either pre- or post-treatment (or both) outcomes are missing at random. We first establish nonparametric identification of the ATT under two minimal sets of sufficient conditions. For each, we derive the semiparametric efficiency bound, which provides a formal benchmark for asymptotic optimality. We then propose novel estimators that are asymptotically efficient, achieving this theoretical bound. A key feature of our estimators is their multiple robustness, which ensures consistency even if some nuisance function models are misspecified. We validate the properties of our estimators and showcase their broad applicability through an extensive simulation study.
Recursive decision trees have emerged as a leading methodology for heterogeneous causal treatment effect estimation and inference in experimental and observational settings. These procedures are fitted using the celebrated CART (Classification And Regression Tree) algorithm [Breiman et al., 1984], or custom variants thereof, and hence are believed to be "adaptive" to high-dimensional data, sparsity, or other specific features of the underlying data generating process. Athey and Imbens [2016] proposed several "honest" causal decision tree estimators, which have become the standard in both academia and industry. We study their estimators, and variants thereof, and establish lower bounds on their estimation error. We demonstrate that these popular heterogeneous treatment effect estimators cannot achieve a polynomial-in-$n$ convergence rate under basic conditions, where $n$ denotes the sample size. Contrary to common belief, honesty does not resolve these limitations and at best delivers negligible logarithmic improvements in sample size or dimension. As a result, these commonly used estimators can exhibit poor performance in practice, and even be inconsistent in some settings. Our theoretical insights are empirically validated through simulations.
A decision rule is epsilon-minimax if it is minimax up to an additive factor epsilon. We present an algorithm for provably obtaining epsilon-minimax solutions of statistical decision problems. We are interested in problems where the statistician chooses randomly among I decision rules. The minimax solution of these problems admits a convex programming representation over the (I-1)-simplex. Our suggested algorithm is a well-known mirror subgradient descent routine, designed to approximately solve the convex optimization problem that defines the minimax decision rule. This iterative routine is known in the computer science literature as the hedge algorithm and it is used in algorithmic game theory as a practical tool to find approximate solutions of two-person zero-sum games. We apply the suggested algorithm to different minimax problems in the econometrics literature. An empirical application to the problem of optimally selecting sites to maximize the external validity of an experimental policy evaluation illustrates the usefulness of the suggested procedure.
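A minimal sketch of the hedge (multiplicative-weights) routine referenced above, applied to a finite two-person zero-sum game in which the statistician mixes over I decision rules and nature best-responds; the risk matrix, step size, and iteration budget are illustrative, and a real decision problem would first have to be discretized into such a matrix.

```python
import numpy as np

def hedge_minimax(R, T=2000, eta=None):
    """Approximate minimax mixture over I decision rules via the hedge algorithm.

    R[i, j] is the risk of decision rule i in state of nature j (worst case over
    j is to be minimized). Each round, nature best-responds to the current
    mixture and the statistician updates weights multiplicatively; the averaged
    mixture is epsilon-minimax with epsilon on the order of sqrt(log I / T)
    for risks in [0, 1].
    """
    I, J = R.shape
    eta = eta if eta is not None else np.sqrt(np.log(I) / T)
    w = np.ones(I) / I
    avg = np.zeros(I)
    for _ in range(T):
        avg += w
        j = np.argmax(w @ R)            # nature's best response to the mixture
        w = w * np.exp(-eta * R[:, j])  # penalize rules that do badly against j
        w /= w.sum()
    return avg / T

# Example: a matching-pennies style risk matrix.
R = np.array([[1.0, 0.0], [0.0, 1.0]])
p = hedge_minimax(R)
print(p, (p @ R).max())   # mixture near (0.5, 0.5); worst-case risk near 0.5
```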
This paper introduces a novel concept of impulse response decomposition to disentangle the dynamic contributions of mediator variables in the transmission of structural shocks. We justify our decomposition by drawing on causal mediation analysis and demonstrating its equivalence to the average mediation effect. Our result establishes a formal link between Sims and Granger causality: Sims causality captures the total effect, while Granger causality corresponds to the mediation effect. We construct a dynamic mediation index that quantifies the evolving role of mediator variables in shock propagation. Applying our framework to studies of the transmission channels of US monetary policy, we find that investor sentiment explains approximately 60% of the peak aggregate output response in the three months following a policy shock, while expected default risk contributes negligibly across all horizons.
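In generic notation (ours, for illustration), such a decomposition splits the horizon-$h$ impulse response of the outcome to a structural shock into a component transmitted through the mediator and a direct remainder, with the dynamic mediation index given by the mediated share:
\[
\mathrm{IRF}_h \;=\; \mathrm{IRF}^{\mathrm{mediated}}_h + \mathrm{IRF}^{\mathrm{direct}}_h,
\qquad
\mathrm{MI}_h \;=\; \frac{\mathrm{IRF}^{\mathrm{mediated}}_h}{\mathrm{IRF}_h}.
\]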
We introduce prototype consistent model-free, dense precision matrix estimators that have broad application in economics. Using quadratic form concentration inequalities and novel algebraic characterizations of confounding dimension reductions, we are able to: (i) obtain non-asymptotic bounds for precision matrix estimation errors and also (ii) consistency in high dimensions; (iii) uncover the existence of an intrinsic signal-to-noise -- underlying dimensions tradeoff; and (iv) avoid exact population sparsity assumptions. In addition to its desirable theoretical properties, a thorough empirical study of the S&P 500 index shows that a tuning parameter-free special case of our general estimator exhibits a doubly ascending Sharpe Ratio pattern, thereby establishing a link with the famous double descent phenomenon dominantly present in recent statistical and machine learning literature.
In this study, I investigate the dynamic decision problem with a finite parameter space when the functional form of conditional expected rewards is misspecified. Traditional algorithms, such as Thompson Sampling, guarantee neither an $O(e^{-T})$ rate of posterior parameter concentration nor an $O(T^{-1})$ rate of average regret. However, under mild conditions, we can still achieve an exponential convergence rate of the parameter to a pseudo truth set, an extension of the pseudo truth parameter concept introduced by White (1982). I further characterize the necessary conditions for the convergence of the expected posterior within this pseudo-truth set. Simulations demonstrate that while the maximum a posteriori (MAP) estimate of the parameters fails to converge under misspecification, the algorithm's average regret remains relatively robust compared to the correctly specified case. These findings suggest opportunities to design simple yet robust algorithms that achieve desirable outcomes even in the presence of model misspecifications.
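For concreteness, a minimal sketch of Thompson Sampling over a finite parameter space of Bernoulli bandit models; the likelihood, the candidate grid, and the regret bookkeeping are illustrative and do not reproduce the paper's (possibly misspecified) reward specification.

```python
import numpy as np

def _normalize(log_w):
    w = np.exp(log_w - log_w.max())
    return w / w.sum()

def thompson_finite(true_means, candidate_models, T=5000, seed=0):
    """Thompson Sampling over a finite parameter space of Bernoulli bandit models.

    candidate_models[m][k] is the hypothesized mean reward of arm k under
    parameter m; true_means may lie outside the candidate set (misspecification).
    Each round samples a parameter from the exact posterior, pulls that
    parameter's best arm, and updates the posterior with the observed reward.
    """
    rng = np.random.default_rng(seed)
    M = len(candidate_models)
    log_post = np.zeros(M)                       # uniform prior over parameters
    regret, best = 0.0, max(true_means)
    for _ in range(T):
        m = rng.choice(M, p=_normalize(log_post))
        arm = int(np.argmax(candidate_models[m]))
        y = rng.binomial(1, true_means[arm])
        regret += best - true_means[arm]
        for j in range(M):                       # exact Bayes update on the grid
            p = np.clip(candidate_models[j][arm], 1e-6, 1 - 1e-6)
            log_post[j] += y * np.log(p) + (1 - y) * np.log(1 - p)
    return regret, _normalize(log_post)
```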
I show that ordinary least squares (OLS) predictions can be rewritten as the output of a restricted attention module, akin to those forming the backbone of large language models. This connection offers an alternative perspective on attention beyond the conventional information retrieval framework, making it more accessible to researchers and analysts with a background in traditional statistics. It falls into place when OLS is framed as a similarity-based method in a transformed regressor space, distinct from the standard view based on partial correlations. In fact, the OLS solution can be recast as the outcome of an alternative problem: minimizing squared prediction errors by optimizing the embedding space in which training and test vectors are compared via inner products. Rather than estimating coefficients directly, we equivalently learn optimal encoding and decoding operations for predictors. From this vantage point, OLS maps naturally onto the query-key-value structure of attention mechanisms. Building on this foundation, I discuss key elements of Transformer-style attention and draw connections to classic ideas from time series econometrics.
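The algebra behind this connection is easy to verify numerically: the OLS prediction at a test point is a weighted sum of training outcomes, with weights given by inner products between the test regressor (query) and the training regressors (keys) in the $(X'X)^{-1}$-transformed space, i.e. a linear, unnormalized query-key-value readout. The sketch below checks this identity; it illustrates the general idea rather than the paper's full Transformer mapping.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 5
X = rng.normal(size=(n, k))          # training regressors (keys/values source)
y = X @ rng.normal(size=k) + rng.normal(size=n)
x_test = rng.normal(size=k)          # test regressor (query)

# Standard OLS prediction.
beta = np.linalg.solve(X.T @ X, X.T @ y)
pred_ols = x_test @ beta

# Attention view: scores = similarity of the query to each key in the
# (X'X)^{-1}-transformed space; values are the training outcomes y.
W = np.linalg.inv(X.T @ X)
scores = X @ W @ x_test              # one attention weight per training point
pred_attention = scores @ y          # weighted "retrieval" of training outcomes

assert np.allclose(pred_ols, pred_attention)
```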