theoretical-economics
When customers must visit a seller to learn their valuation of its product, sellers potentially benefit from charging a lower price on the first visit and a higher price when a buyer returns. Armstrong and Zhou (2016) show that such price discrimination can arise in equilibrium when buyers learn a seller's pricing policy only upon visiting. We depart from this assumption by supposing that sellers commit to observable pricing policies that guide consumer search and that buyers can choose whom to visit first. We show that no seller engages in price discrimination in equilibrium.
Standard decision theory seeks conditions under which a preference relation can be compressed into a single real-valued function. However, when preferences are incomplete or intransitive, a single function fails to capture the agent's evaluative structure. Recent literature on multi-utility representations suggests that such preferences are better represented by families of functions. This paper provides a canonical and intrinsic geometric characterization of this family. We construct the \textit{ledger group} $U(P)$, a partially ordered group that faithfully encodes the native structure of the agent's preferences in terms of trade-offs. We show that the set of all admissible utility functions is precisely the \textit{dual cone} $U^*$ of this structure. This perspective shifts the focus of utility theory from the existence of a specific map to the geometry of the measurement space itself. We demonstrate the power of this framework by explicitly reconstructing the standard multi-attribute utility representation as the intersection of the abstract dual cone with a subspace of continuous functionals, and showing the impossibility of this for a set of lexicographic preferences.
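For orientation, the multi-utility representation invoked above is usually stated as follows (the notation $\succsim$ and $\mathcal{U}$ is assumed here, not necessarily the paper's): a preference relation $\succsim$ on a set $X$ is represented by a family $\mathcal{U}$ of real-valued functions when
\[
  x \succsim y \quad\Longleftrightarrow\quad u(x) \ge u(y) \ \text{ for every } u \in \mathcal{U}.
\]
Incompleteness then corresponds to pairs $x, y$ that some $u \in \mathcal{U}$ ranks one way while another $u' \in \mathcal{U}$ ranks the other way, which is why no single function suffices.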
This paper develops the Theory of Strategic Evolution, a general model for systems in which the population of players, strategies, and institutional rules evolve together. The theory extends replicator dynamics to settings with endogenous players, multi-level selection, innovation, constitutional change, and meta-governance. The central mathematical object is a Poiesis stack: a hierarchy of strategic layers linked by cross-level gain matrices. Under small-gain conditions, the system admits a global Lyapunov function and satisfies selection, tracking, and stochastic stability results at every finite depth. We prove that the class is closed under block extension, innovation events, heterogeneous utilities, continuous strategy spaces, and constitutional evolution. The closure theorem shows that no new dynamics arise at higher levels and that unrestricted self-modification cannot preserve Lyapunov structure. The theory unifies results from evolutionary game theory, institutional design, innovation dynamics, and constitutional political economy, providing a general mathematical model of long-run strategic adaptation.
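The baseline object the theory extends is standard single-population replicator dynamics. As a point of reference only (the payoff matrix, step size, and initial shares below are illustrative and not taken from the paper), a minimal sketch might look as follows:

```python
import numpy as np

def replicator_step(x, A, dt=0.01):
    """One Euler step of single-population replicator dynamics:
    dx_i/dt = x_i * ((A x)_i - x^T A x)."""
    fitness = A @ x          # payoff to each pure strategy against the current mix
    average = x @ fitness    # population-average payoff
    return x + dt * x * (fitness - average)

# Illustrative 2-strategy payoff matrix (cooperate vs. defect; defect dominates)
A = np.array([[3.0, 0.0],
              [5.0, 1.0]])

x = np.array([0.9, 0.1])     # initial strategy shares on the simplex
for _ in range(2000):
    x = replicator_step(x, A)
print(x)                     # shares converge toward the dominant (defect) strategy
```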
I relax the standard assumptions of transitivity and partition structure in economic models of information to formalize vague knowledge: non-transitive indistinguishability over states. I show that vague knowledge, while failing to partition the state space, remains informative by distinguishing some states from others. Moreover, it can only be faithfully expressed through vague communication with blurred boundaries. My results provide microfoundations for the prevalence of natural language communication and qualitative reasoning in the real world, where knowledge is often vague.
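A textbook illustration of such non-transitive indistinguishability (not necessarily the one used in the paper): take states $\omega \in \mathbb{R}$ and a perception threshold $\varepsilon > 0$, with
\[
  \omega \sim \omega' \quad\Longleftrightarrow\quad |\omega - \omega'| \le \varepsilon .
\]
The relation is reflexive and symmetric, and $0 \sim \varepsilon$ and $\varepsilon \sim 2\varepsilon$ hold while $0 \not\sim 2\varepsilon$, so indistinguishability fails to be transitive and its cells overlap rather than partition the state space.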
The usual definitions of algorithmic fairness focus on population-level statistics, such as demographic parity or equal opportunity. However, in many social or economic contexts, fairness is not perceived globally, but locally, through an individual's peer network and comparisons. We propose a theoretical model of perceived fairness networks, in which each individual's sense of discrimination depends on the local topology of interactions. We show that even if a decision rule satisfies standard criteria of fairness, perceived discrimination can persist or even increase in the presence of homophily or assortative mixing. We propose a formalism for the concept of fairness perception, linking network structure, local observation, and social perception. Analytical and simulation results highlight how network topology affects the divergence between objective fairness and perceived fairness, with implications for algorithmic governance and applications in finance and collaborative insurance.
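A minimal simulation sketch of one way such a divergence could be quantified; the random-graph model, the parity-satisfying decision rule, and the particular local statistic below are all assumptions made for illustration, not the paper's formalism:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p_in, p_out = 400, 0.08, 0.01             # homophily: within-group ties far more likely
group = rng.integers(0, 2, size=n)            # two groups of individuals
same = group[:, None] == group[None, :]
prob = np.where(same, p_in, p_out)
adj = np.triu(rng.random((n, n)) < prob, 1)
adj = adj | adj.T                              # symmetric adjacency, no self-loops

# A decision rule satisfying demographic parity globally: 30% positive rate in each group
decision = np.zeros(n, dtype=bool)
for g in (0, 1):
    idx = np.flatnonzero(group == g)
    decision[rng.choice(idx, size=int(0.3 * idx.size), replace=False)] = True

# Hypothetical local statistic: for each individual, the gap between the acceptance
# rate of their own group and of the other group *among their neighbours only*.
perceived_gap = np.full(n, np.nan)
for i in range(n):
    nbrs = np.flatnonzero(adj[i])
    own = nbrs[group[nbrs] == group[i]]
    other = nbrs[group[nbrs] != group[i]]
    if own.size and other.size:
        perceived_gap[i] = decision[own].mean() - decision[other].mean()

print(np.nanmean(np.abs(perceived_gap)))       # dispersion of locally perceived disparity
```

Under strong homophily, out-group neighbourhoods are small, so locally observed rates are noisy even though the global rates are equal by construction; the statistic above is one way to measure that gap.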
If people find it costly to evaluate the options available to them, their choices may not directly reveal their preferences. Yet, it is conceivable that a researcher can still learn about a population's preferences with careful experiment design. We formalize the researcher's problem in a model of robust mechanism design where it is costly for individuals to learn about how much they value a product. We characterize the statistics that the researcher can identify, and find that they are quite restricted. Finally, we apply our positive results to social choice and propose a way to combat uninformed voting.
Researchers from Peking University and the University of Pennsylvania rigorously demonstrate that reward models used in LLM alignment are statistically unlikely to capture diverse human preferences due to the prevalence of Condorcet cycles. They propose Nash Learning from Human Feedback (NLHF) as an alternative that naturally preserves preference diversity, and introduce Nash Rejection Sampling (Nash-RS), an efficient algorithm achieving a 60.55% win rate against a base model.
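As an illustration of the underlying issue (the responses and the preference profile below are hypothetical, not taken from the paper), pairwise majorities over a population can form a Condorcet cycle that no single scalar reward model can reproduce:

```python
from itertools import permutations

# Three hypothetical responses and a population split evenly across three rankings
rankings = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]

def majority_prefers(x, y):
    """True if a majority of rankings place x above y."""
    wins = sum(r.index(x) < r.index(y) for r in rankings)
    return wins > len(rankings) / 2

print(majority_prefers("A", "B"), majority_prefers("B", "C"), majority_prefers("C", "A"))
# -> True True True : a Condorcet cycle A > B > C > A.

# No ranking induced by scalar rewards r(A), r(B), r(C) matches these majorities,
# since that would require r(A) > r(B) > r(C) > r(A).
consistent = any(
    all(majority_prefers(x, y) == (order.index(x) < order.index(y))
        for x in "ABC" for y in "ABC" if x != y)
    for order in permutations("ABC")
)
print(consistent)  # False
```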
When comparing performance (of products, services, entities, etc.), multiple attributes are involved. This paper deals with a way of weighting these attributes when one is seeking an overall score. It presents an objective approach to generating the weights in a scoring formula, one that avoids personal judgement. The first step is to find the maximum possible score for each assessed entity. These upper-bound scores are found using Data Envelopment Analysis (DEA). In the second step, the weights in the scoring formula are found by regressing the unique DEA scores on the attribute data. Reasons for using least squares and avoiding other distance measures are given. The method is tested on data where the true scores and weights are known. The method enables the construction of an objective scoring formula generated from the data of all assessed entities and is, in that sense, democratic.
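A minimal sketch of the two-step pipeline described above, assuming one common "benefit-of-the-doubt" DEA formulation (attributes treated as outputs with a single unit input) and synthetic placeholder data; the paper's exact DEA model and regression specification may differ:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
Y = rng.uniform(1.0, 10.0, size=(20, 3))      # 20 entities, 3 positive attributes (synthetic)

def dea_score(Y, o):
    """Benefit-of-the-doubt DEA: max_w  w.y_o  s.t.  w.y_i <= 1 for all i,  w >= 0."""
    n, m = Y.shape
    res = linprog(c=-Y[o],                     # linprog minimises, so negate the objective
                  A_ub=Y, b_ub=np.ones(n),
                  bounds=[(0, None)] * m,
                  method="highs")
    return -res.fun                            # maximum attainable score for entity o

scores = np.array([dea_score(Y, o) for o in range(len(Y))])

# Step 2: least-squares regression of the DEA scores on the attributes (no intercept here)
weights, *_ = np.linalg.lstsq(Y, scores, rcond=None)
print(weights)                                 # common weights for the scoring formula
fitted = Y @ weights                           # objective overall scores for every entity
```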
We analyze the dynamic tradeoff between generating and disclosing evidence. Agents are tempted to delay investing in a new technology in order to learn from information generated by the experiences of others. This informational free-riding is collectively harmful as it slows down learning and innovation adoption. A welfare-maximizing designer can delay the disclosure of previously generated information in order to speed up adoption. The optimal policy transparently discloses bad news and delays good news. This finding resonates with regulation demanding that fatal breakdowns be reported promptly. The designer's intervention makes all agents better off.
We examine how monetary shocks propagate through a general-equilibrium model with sticky prices, in which firms' pricing strategies are interlinked and mutually reinforcing. In this dynamic equilibrium, firms' pricing choices are influenced by aggregate economic conditions, which are in turn affected by those choices. We approach this problem using a path-integral control method, which yields several insights. We establish the existence and uniqueness of the equilibrium and examine the impulse response function (IRF) of output following an economy-wide shock.
In robust decision-making under non-Bayesian uncertainty, different robust optimization criteria, such as maximin performance, minimax regret, and maximin ratio, have been proposed. In many problems, all three criteria are well-motivated and well-grounded from a decision-theoretic perspective, yet different criteria give different prescriptions. This paper initiates a systematic study of overfitting to robustness criteria. How good is a prescription derived from one criterion when evaluated against another criterion? Does there exist a prescription that performs well against all criteria of interest? We formalize and study these questions through the prototypical problem of robust pricing under various information structures, including support, moments, and percentiles of the distribution of values. We provide a unified analysis of three focal robust criteria across various information structures and evaluate the relative performance of mechanisms optimized for each criterion against the others. We find that mechanisms optimized for one criterion often perform poorly against other criteria, highlighting the risk of overfitting to a particular robustness criterion. Remarkably, we show it is possible to design mechanisms that achieve good performance across all three criteria simultaneously, suggesting that decision-makers need not compromise among criteria.
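A deliberately simplified illustration of how the three criteria can prescribe different prices: pricing against a single unknown value known only to lie in an interval $[a, b]$ (the paper's setting, with value distributions and richer information structures, is more general, and the numbers below are arbitrary):

```python
import numpy as np

a, b = 0.2, 1.0                                # known support of the (single) unknown value
prices = np.linspace(a, b, 801)
values = np.linspace(a, b, 801)

def revenue(p, v):
    return np.where(v >= p, p, 0.0)

R = revenue(prices[:, None], values[None, :])  # revenue over every (price, value) pair

maximin        = prices[np.argmax(R.min(axis=1))]
minimax_regret = prices[np.argmin((values[None, :] - R).max(axis=1))]
maximin_ratio  = prices[np.argmax((R / values[None, :]).min(axis=1))]

print(maximin, minimax_regret, maximin_ratio)
# With [a, b] = [0.2, 1]: maximin and maximin-ratio pick p = a = 0.2,
# while minimax regret picks p = 0.5 -- different criteria, different prescriptions.
```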
I extend the concept of absorptive capacity, used in the analysis of firms, to a framework applicable to the national level. First, employing confirmatory factor analyses on 47 variables, I build 13 composite factors crucial to measuring six national-level capacities: technological capacity, financial capacity, human capacity, infrastructural capacity, public policy capacity, and social capacity. My data cover most low- and middle-income economies (LMICs) eligible for the World Bank's International Development Association (IDA) support between 2005 and 2019. Second, I analyze the relationship between the estimated capacity factors and economic growth while controlling for some of the incoming flows from abroad and other confounders that might influence the relationship. Lastly, I conduct K-means cluster analysis and then analyze the results alongside regression estimates to glean patterns and classifications within the LMICs. Results indicate that enhancing infrastructure (ICT, energy, trade, and transport), financial (apparatus and environment), and public policy capacities is a prerequisite for attaining economic growth. Similarly, I find that improving human capital with specialized skills positively impacts economic growth. Finally, by providing a ranking of which capacity is empirically more important for economic growth, I offer suggestions to governments with limited budgets on how to make wise investments. Likewise, my findings inform international policy and monetary bodies on how they could better channel their funding in LMICs to achieve sustainable development goals and boost shared prosperity.
For an ascending correspondence $F\colon X\to 2^X$ with chain-complete values on a complete lattice $X$, we prove that the set of fixed points is a complete lattice. This strengthens Zhou's fixed point theorem. For chain-complete posets that are not necessarily lattices, we generalize the Abian-Brown and the Markowsky fixed point theorems from single-valued maps to multivalued correspondences. We provide an application in game theory.
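For orientation, "ascending" for a correspondence is commonly taken to mean monotone in Veinott's strong set order, stated here as background (the paper may use a variant): for subsets $A, B$ of a lattice $X$,
\[
  A \le_{v} B
  \quad\Longleftrightarrow\quad
  a \wedge b \in A \ \text{ and } \ a \vee b \in B
  \quad \text{for all } a \in A,\ b \in B ,
\]
and $F\colon X\to 2^X$ is ascending when $x \le y$ implies $F(x) \le_{v} F(y)$.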
Standard federated learning (FL) approaches are vulnerable to the free-rider dilemma: participating agents can contribute little or nothing yet receive a well-trained aggregated model. While prior mechanisms attempt to solve the free-rider dilemma, none have addressed the issue of truthfulness. In practice, adversarial agents can provide false information to the server in order to cheat their way out of contributing to federated training. In an effort to make free-riding-averse federated mechanisms truthful, and consequently less prone to breaking down in practice, we propose FACT. FACT is the first federated mechanism that: (1) eliminates federated free-riding by using a penalty system, (2) ensures agents provide truthful information by creating a competitive environment, and (3) encourages agent participation by offering better performance than training alone. Empirically, FACT avoids free-riding when agents are untruthful, and reduces agent loss by over 4x.
We study strategic information transmission in a hierarchical setting where information is transmitted through a chain of agents up to a decision maker whose action matters to every agent. This situation arises whenever an agent can communicate with the decision maker only through a chain of intermediaries, for example, an entry-level worker and the CEO in a firm, or an official at the bottom of the chain of command and the president in a government. Each agent can decide to conceal part or all of the information she receives. After proving that we can restrict attention to simple equilibria, in which the only player who conceals information is the first one, we provide a tractable recursive characterization of the equilibrium outcome and show that it can be inefficient. Interestingly, in the binary-action case, regardless of the number of intermediaries, only a few pivotal intermediaries determine the amount of information communicated to the decision maker. In this case, our results underscore the importance of choosing a pivotal vice president for maximizing the payoff of the CEO or president.
We show that the expectation of the $k^{\mathrm{th}}$-order statistic of an i.i.d. sample of size $n$ from a monotone reverse hazard rate (MRHR) distribution is convex in $n$ and that the expectation of the $(n-k+1)^{\mathrm{th}}$-order statistic from a monotone hazard rate (MHR) distribution is concave in $n$ for $n\ge k$. We apply this result to the analysis of independent private value auctions in which the auctioneer faces a convex cost of attracting bidders. In this setting, MHR valuation distributions lead to concavity of the auctioneer's objective. We extend this analysis to auctions with reserve values, in which concavity is assured for sufficiently small reserves or for a sufficiently large number of bidders.
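A quick numerical check of the concavity claim for one MHR distribution, the unit exponential, using the standard density of the $r^{\mathrm{th}}$-order statistic; the distribution and the choice $k=2$ are illustrative only:

```python
import math
from scipy.integrate import quad
from scipy.stats import expon   # exponential(1) has a monotone (constant) hazard rate

def expected_order_stat(n, r, dist=expon):
    """E[X_(r)], the r-th smallest of n i.i.d. draws, via the order-statistic density."""
    coef = r * math.comb(n, r)
    f = lambda x: coef * x * dist.cdf(x)**(r - 1) * dist.sf(x)**(n - r) * dist.pdf(x)
    val, _ = quad(f, 0.0, dist.ppf(1 - 1e-12))
    return val

k = 2
e = {n: expected_order_stat(n, n - k + 1) for n in range(k, k + 10)}
second_diff = [e[n + 2] - 2 * e[n + 1] + e[n] for n in range(k, k + 8)]
print(all(d <= 1e-8 for d in second_diff))   # non-positive second differences: concave in n
```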
This paper uncovers tight bounds on the number of preferences permissible in identified random utility models. We show that as the number of alternatives in a discrete choice model becomes large, the fraction of preferences admissible in an identified model rapidly tends to zero. We propose a novel sufficient condition ensuring identification, which is strictly weaker than some of those existing in the literature. While this sufficient condition reaches our upper bound, an example demonstrates that this condition is not necessary for identification. Using our new condition, we show that the classic ``Latin Square" example from social choice theory is identified from stochastic choice data.
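A minimal worked illustration (assuming stochastic choice data from the grand menu, an assumption made here for brevity) of why the mixing weights over the Latin-square rankings are pinned down by the data: the three rankings over $\{a,b,c\}$ are $abc$, $bca$, and $cab$ with weights $\mu_1, \mu_2, \mu_3$, and each ranking places a different alternative first, so
\[
  P(a \mid \{a,b,c\}) = \mu_1, \qquad
  P(b \mid \{a,b,c\}) = \mu_2, \qquad
  P(c \mid \{a,b,c\}) = \mu_3,
\]
and the weights are read off directly from the observed choice frequencies.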
We present novel monotone comparative statics results for steady-state behavior in a dynamic optimization environment with misspecified Bayesian learning. Building on \cite{ep21a}, we analyze a Bayesian learner whose prior is over parameterized transition models but is misspecified in the sense that the true process does not belong to this set. We characterize conditions that ensure monotonicity in the steady-state distribution over states, actions, and inferred models. Additionally, we provide a new monotonicity-based proof of steady-state existence, derive an upper bound on the cost of misspecification, and illustrate the applicability of our results to several environments of general interest.
We develop a full-fledged analysis of an algorithmic decision process that, in a multialternative choice problem, produces computable choice probabilities and expected decision times.