Paris School of Economics
We study a simple model of algorithmic collusion in which Q-learning algorithms are designed in a strategic fashion. We let players (\textit{designers}) choose their exploration policy simultaneously before letting their algorithms repeatedly play a prisoner's dilemma. We prove that, in equilibrium, collusive behavior is reached with positive probability. Our numerical simulations indicate symmetry of the equilibria and give insight into how they are affected by a parameter of interest. We also investigate general profiles of exploration policies. We characterize the behavior of the system for extreme profiles (fully greedy and fully explorative) and use numerical simulations and clustering methods to measure the likelihood of collusive behavior in general cases.
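To make the setup concrete, here is a minimal Python sketch of two Q-learning agents with designer-chosen exploration rates repeatedly playing a prisoner's dilemma. The payoff matrix, state definition (the rival's last action), learning rate, discount factor, and exploration values are illustrative assumptions, not the paper's calibration.

```python
import random

# Minimal sketch (not the paper's exact model): two Q-learning agents repeatedly
# play a prisoner's dilemma; each "designer" fixes an exploration rate up front.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
ACTIONS = ["C", "D"]

def run(eps1, eps2, periods=50_000, alpha=0.1, gamma=0.95, seed=0):
    rng = random.Random(seed)
    # State = opponent's previous action; Q[player][state][action]
    q = [{s: {a: 0.0 for a in ACTIONS} for s in ACTIONS} for _ in range(2)]
    state = ["C", "C"]          # each player's view: the rival's last action
    eps = [eps1, eps2]
    for _ in range(periods):
        acts = []
        for i in range(2):
            if rng.random() < eps[i]:                      # explore
                acts.append(rng.choice(ACTIONS))
            else:                                          # exploit
                acts.append(max(ACTIONS, key=lambda a: q[i][state[i]][a]))
        rewards = PAYOFFS[(acts[0], acts[1])]
        for i in range(2):
            nxt = acts[1 - i]                              # next state = rival's action
            best_next = max(q[i][nxt].values())
            q[i][state[i]][acts[i]] += alpha * (rewards[i] + gamma * best_next
                                                - q[i][state[i]][acts[i]])
            state[i] = nxt
    return acts                                            # last-period action profile

print(run(0.05, 0.05))   # ("C", "C") in the final period indicates collusive play
```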
Approximating time-varying unobserved heterogeneity by discrete types has become increasingly popular in economics. Yet, provably valid post-clustering inference for target parameters in models that do not impose an exact group structure is still lacking. This paper fills this gap in the leading case of a linear panel data model with nonseparable two-way unobserved heterogeneity. Building on insights from the double machine learning literature, we propose a simple inference procedure based on a bias-reducing moment. Asymptotic theory and simulations suggest excellent performance. In the fiscal policy application that we revisit, the new approach yields conclusions in line with economic theory.
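For readers unfamiliar with the double machine learning idea invoked here, the block below recalls the textbook Neyman-orthogonal (partialling-out) moment for a partially linear model. It only illustrates the bias-reducing principle; the paper's own moment for the two-way grouped-heterogeneity model is not reproduced here.

```latex
% Generic DML partialling-out moment (illustration only, not the paper's moment):
% model Y = \theta_0 D + g_0(X) + \varepsilon, with nuisances
% \ell_0(X) = E[Y \mid X] and m_0(X) = E[D \mid X].
\psi(W;\theta,\eta) \;=\; \bigl(Y - \ell(X) - \theta\,(D - m(X))\bigr)\,\bigl(D - m(X)\bigr),
\qquad \eta = (\ell, m).
% Neyman orthogonality: the derivative of E[\psi] with respect to \eta vanishes at
% (\theta_0,\eta_0), so first-order errors in estimating \ell and m do not bias \hat\theta.
```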
Researchers from the University of Liechtenstein, ETH Zurich, and Paris School of Economics developed an NLP pipeline to analyze over 10 million items from "first-hand" cybercriminal sources. Their work revealed that 20.48% of this content is relevant to Cyber Threat Intelligence, demonstrating distinct operational focuses across darknet websites, underground forums, and chat channels.
How does the progressive embrace of Large Language Models (LLMs) affect scientific peer reviewing? This multifaceted question is fundamental to the effectiveness -- as well as to the integrity -- of the scientific process. Recent evidence suggests that LLMs may have already been tacitly used in peer reviewing, e.g., at the 2024 International Conference on Learning Representations (ICLR). Furthermore, some editorial boards (including that of ICLR'25) have undertaken efforts to explicitly integrate LLMs into peer reviewing. To fully understand the utility and the implications of deploying LLMs for scientific reviewing, a comprehensive dataset is strongly desirable. Despite some previous research on this topic, such a dataset has been lacking so far. We fill this gap by presenting GenReview, the largest dataset of LLM-written reviews to date. Our dataset includes 81K reviews generated for all submissions to the 2018--2025 editions of ICLR by providing the LLM with three independent prompts: a negative, a positive, and a neutral one. GenReview is also linked to the respective papers and their original reviews, thereby enabling a broad range of investigations. To illustrate the value of GenReview, we explore a sample of intriguing research questions, namely: whether LLMs exhibit bias in reviewing (they do); whether LLM-written reviews can be automatically detected (so far, they can); whether LLMs can rigorously follow reviewing instructions (not always); and whether LLM-provided ratings align with decisions on paper acceptance or rejection (they do only for accepted papers). GenReview can be accessed at the following link: this https URL.
This paper examines the impact of temperature shocks on European Parliament elections. We combine high-resolution climate data with results from parliamentary elections between 1989 and 2019, aggregated at the NUTS-2 regional level. Exploiting exogenous variation in unusually warm and hot days during the months preceding elections, we identify the effect of short-run temperature shocks on voting behaviour. We find that temperature shocks reduce ideological polarisation and increase vote concentration, as voters consolidate around larger, more moderate parties. This aggregate pattern is explained by gains in support for liberal and, to a lesser extent, social democratic parties, while right-wing parties lose vote share. Consistent with a salience mechanism, complementary analysis of party manifestos shows greater emphasis on climate-related issues in warmer pre-electoral contexts. Overall, our findings indicate that climate shocks can shift party systems toward the centre and weaken political extremes.
This study explores the link between the capital share and income inequality over the past four decades across 56 countries. By calculating the capital share from national accounts alongside top income share data from the World Inequality Database, which is based on the Distributional National Accounts methodology, we ensure consistency between theory and measurement. Employing a structural econometric approach, we account for heterogeneous and time-varying transmission coefficients from the capital share to personal income inequality. Our findings reveal that a one percentage point (pp) increase in the capital share raises the income share of the top 5% by 0.17 pp on average. Advanced economies show a stable transmission coefficient with rising capital and labor income inequality, while emerging economies experience an increasing transmission coefficient alongside growing capital income inequality. In contrast, a third group exhibits a declining transmission coefficient and rising labor income inequality. Overall, changes in the capital share account for approximately 50% of the rise in income inequality, underscoring its pivotal role over the last four decades.
We study the cohesion within and the coalitions between political groups in the Eighth European Parliament (2014--2019) by analyzing two entirely different aspects of the behavior of the Members of the European Parliament (MEPs) in the policy-making processes. On one hand, we analyze their co-voting patterns and, on the other, their retweeting behavior. We make use of two diverse datasets in the analysis. The first one is the roll-call vote dataset, where cohesion is regarded as the tendency to co-vote within a group, and a coalition is formed when the members of several groups exhibit a high degree of co-voting agreement on a subject. The second dataset comes from Twitter; it captures the retweeting (i.e., endorsing) behavior of the MEPs and implies cohesion (retweets within the same group) and coalitions (retweets between groups) from a completely different perspective. We employ two different methodologies to analyze the cohesion and coalitions. The first one is based on Krippendorff's Alpha reliability, used to measure the agreement between raters in data-analysis scenarios, and the second one is based on Exponential Random Graph Models, often used in social-network analysis. We give general insights into the cohesion of political groups in the European Parliament, explore whether coalitions are formed in the same way for different policy areas, and examine to what degree the retweeting behavior of MEPs corresponds to their co-voting patterns. A novel and interesting aspect of our work is the relationship between the co-voting and retweeting patterns.
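As a toy illustration of what "cohesion as the tendency to co-vote" means, the following Python sketch computes average pairwise co-voting agreement within a group and across two groups. This is a simplified proxy, not Krippendorff's Alpha and not an Exponential Random Graph Model, and the vote data structure is hypothetical.

```python
from itertools import combinations

# Simplified proxy for cohesion and coalitions based on co-voting agreement.
# `votes[mep]` maps a roll-call id to a vote label such as "yes"/"no"/"abstain".
def pairwise_agreement(votes_a, votes_b):
    shared = set(votes_a) & set(votes_b)
    if not shared:
        return None
    return sum(votes_a[v] == votes_b[v] for v in shared) / len(shared)

def cohesion(group_votes):
    """Average agreement over all unordered pairs of MEPs within one group."""
    scores = [pairwise_agreement(a, b)
              for a, b in combinations(group_votes.values(), 2)]
    scores = [s for s in scores if s is not None]
    return sum(scores) / len(scores) if scores else float("nan")

def coalition(group_votes_1, group_votes_2):
    """Average agreement over all cross-group pairs of MEPs."""
    scores = [pairwise_agreement(a, b)
              for a in group_votes_1.values() for b in group_votes_2.values()]
    scores = [s for s in scores if s is not None]
    return sum(scores) / len(scores) if scores else float("nan")
```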
This paper presents a complete processing workflow for extracting information from French census lists from 1836 to 1936. These lists contain information about individuals living in France and their households. We aim at extracting all the information contained in these tables using automatic handwritten table recognition. At the end of the Socface project, in which our work is taking place, the extracted information will be redistributed to the departmental archives, and the nominative lists will be freely available to the public, allowing anyone to browse hundreds of millions of records. The extracted data will be used by demographers to analyze social change over time, significantly improving our understanding of French economic and social structures. For this project, we developed a complete processing workflow: large-scale data collection from French departmental archives, collaborative annotation of documents, training of handwritten table text and structure recognition models, and mass processing of millions of images. We present the tools we have developed to easily collect and process millions of pages. We also show that it is possible to process such a wide variety of tables with a single table recognition model that uses the image of the entire page to recognize information about individuals, categorize them and automatically group them into households. The entire process has been successfully used to process the documents of a departmental archive, representing more than 450,000 images.
We propose difference-in-differences (DID) estimators in designs where the treatment is continuously distributed in every period, as is often the case when one studies the effects of taxes, tariffs, or prices. We assume that between consecutive periods, the treatment of some units, the switchers, changes, while the treatment of other units, the stayers, remains constant. We show that under a parallel-trends assumption, weighted averages of the slopes of switchers' potential outcomes are nonparametrically identified by difference-in-differences estimands comparing the outcome evolutions of switchers and stayers with the same baseline treatment. Controlling for the baseline treatment ensures that our estimands remain valid if the treatment's effect changes over time. We highlight two possible ways of weighting switchers' slopes and discuss their respective advantages. For each weighted average of slopes, we propose a doubly robust, nonparametric, $\sqrt{n}$-consistent, and asymptotically normal estimator. We generalize our results to the instrumental-variable case. Finally, we apply our method to estimate the price elasticity of gasoline consumption.
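The two-period sketch below conveys the logic of the estimand: within cells of units sharing the same baseline treatment, the stayers' outcome evolution proxies the counterfactual trend, and the switchers' excess evolution is scaled by their treatment change. It is a didactic simplification (exact baseline cells, no doubly robust correction), and the column names are assumptions.

```python
import numpy as np
import pandas as pd

# Two-period illustration of the switchers-vs-stayers comparison (not the authors'
# doubly robust, nonparametric estimator). In practice the baseline treatment would
# be conditioned on nonparametrically rather than matched exactly.
def did_continuous(df):
    """df columns (assumed): id, d1, d2 (treatment in periods 1, 2), y1, y2 (outcomes)."""
    df = df.assign(dy=df.y2 - df.y1, dd=df.d2 - df.d1)
    slopes, weights = [], []
    for _, cell in df.groupby("d1"):          # units with the same baseline treatment
        stayers = cell[cell.dd == 0]
        switchers = cell[cell.dd != 0]
        if stayers.empty or switchers.empty:
            continue                          # a cell needs both groups to be usable
        trend = stayers.dy.mean()             # counterfactual evolution from stayers
        slopes.append(((switchers.dy - trend) / switchers.dd).mean())
        weights.append(len(switchers))
    return np.average(slopes, weights=weights)
```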
A number of macroeconomic theories, very popular in the 1980s, seem to have completely disappeared and been replaced by the dynamic stochastic general equilibrium (DSGE) approach. We argue that this replacement is due to a tacit agreement on a number of assumptions, previously seen as mutually exclusive, and not due to a settlement by 'nature'. Unlike econometrics and microeconomics, and despite massive progress in access to data and in the use of statistical software, macroeconomic theory does not appear to be a cumulative science so far. Observational equivalence of different models and the problem of identifying model parameters persist, as we highlight by examining two examples: one in growth theory and a second in testing inflation persistence.
Minimal balanced collections are a generalization of partitions of a finite set of n elements and have important applications in cooperative game theory and discrete mathematics. However, their number is not known beyond n = 4. In this paper, we investigate the problem of generating minimal balanced collections and implement the Peleg algorithm, which permits generating all minimal balanced collections up to n = 7. Second, we provide practical algorithms to check many properties of coalitions and games, based on minimal balanced collections, in a way that is faster than linear-programming-based methods. In particular, we construct an algorithm to check whether the core of a cooperative game is a stable set in the sense of von Neumann and Morgenstern. The algorithm implements a theorem according to which the core is a stable set if and only if a certain nested balancedness condition holds. The second level of this condition requires generalizing the notion of balanced collection to balanced sets.
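For reference, here is a sketch of the standard linear-programming balancedness check that such combinatorial methods are meant to outperform: a collection is balanced if and only if positive weights exist that sum to one over each player's coalitions.

```python
import numpy as np
from scipy.optimize import linprog

# LP-based balancedness check (the baseline the paper improves on). A collection B of
# coalitions of N = {0, ..., n-1} is balanced iff there exist weights lambda_S > 0 with
# sum over coalitions S containing i of lambda_S = 1 for every player i; it is minimal
# balanced iff, in addition, these weights are unique.
def is_balanced(n, coalitions):
    k = len(coalitions)
    # Incidence matrix: A[i, j] = 1 if player i belongs to coalition j.
    A = np.array([[int(i in S) for S in coalitions] for i in range(n)], dtype=float)
    # Variables: (lambda_1, ..., lambda_k, t). Maximize t subject to
    # A @ lambda = 1, lambda_j >= t, 0 <= t <= 1; balanced iff the optimal t is > 0.
    c = np.zeros(k + 1); c[-1] = -1.0                                     # minimize -t
    A_eq = np.hstack([A, np.zeros((n, 1))]); b_eq = np.ones(n)
    A_ub = np.hstack([-np.eye(k), np.ones((k, 1))]); b_ub = np.zeros(k)   # t - lambda_j <= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * k + [(0, 1)], method="highs")
    return res.success and -res.fun > 1e-9

# Example: for n = 3, {{0,1}, {0,2}, {1,2}} is minimal balanced with weights 1/2 each.
print(is_balanced(3, [{0, 1}, {0, 2}, {1, 2}]))   # True
```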
We propose and study a model of strategic network design and exploration in which the hider, subject to a budget constraint restricting the number of links, chooses a connected network and the location of an object. Meanwhile, the seeker, who observes neither the network nor the location of the object, chooses a network exploration strategy starting at a fixed node in the network. The network exploration follows the expanding search paradigm of Alpern and Lidbetter (2013). We obtain a Nash equilibrium and characterize equilibrium payoffs in the case where the linking budget allows for trees only. We also give an upper bound on the expected number of steps needed to find the hider in the case where the linking budget allows for at most one cycle in the network.
We consider multi-population Bayesian games with a large number of players. Each player aims at minimizing a cost function that depends on this player's own action, the distribution of players' actions in all populations, and an unknown state parameter. We study the nonatomic limit versions of these games and introduce the concept of Bayes correlated Wardrop equilibrium, which extends the concept of Bayes correlated equilibrium to nonatomic games. We prove that Bayes correlated Wardrop equilibria are limits of action flows induced by Bayes correlated equilibria of the game with a large finite set of small players. For nonatomic games with complete information admitting a convex potential, we prove that the sets of correlated and coarse correlated Wardrop equilibria coincide with the set of probability distributions over Wardrop equilibria, and that all equilibrium outcomes have the same costs. Two consequences follow. First, all flow distributions of (coarse) correlated equilibria in convex potential games with finitely many players converge to Wardrop equilibria when the weight of each player tends to zero. Second, for any sequence of flows satisfying a no-regret property, the empirical distribution converges to the set of distributions over Wardrop equilibria and the average cost converges to the unique Wardrop cost.
We investigate the potential of deliberation to create consensus among fully informed citizens. Our approach relies on two cognitive assumptions: i. citizens need a thinking frame (or perspective) to consider an issue; and ii. citizens cannot consider all relevant perspectives simultaneously: they are incompatible in the mind. These assumptions imply that opinions are intrinsically contextual. Formally, we capture contextuality in a simple quantum-like cognitive model. We consider a binary voting problem in which two citizens with incompatible thinking frames and initially opposite voting intentions deliberate under the guidance of a benevolent facilitator. We find that when citizens consider alternative perspectives, their opinion may change. When the citizens' perspectives are two-dimensional and maximally uncorrelated, the probability of consensus after two rounds of deliberation reaches 75%; this probability increases proportionally with the dimensionality (namely, the richness) of the perspectives. When dealing with a population of citizens, we also elaborate a novel rationale for working in subgroups. The contextuality approach delivers a number of insights. First, the diversity of perspectives is beneficial and even necessary for deliberation to overcome initial disagreement. Second, successful deliberation demands the active participation of citizens in terms of "putting themselves in the other's shoes". Third, well-designed procedures managed by a facilitator are necessary to secure an increased probability of consensus. A last insight is that the richness of citizens' thinking frames is beneficial, while the optimal strategy entails focusing deliberation on a properly reduced problem.
This paper investigates the identification, determinacy, and stability of ad hoc, "quasi-optimal", and optimal policy rules augmented with financial stability indicators (such as asset price deviations from their fundamental values) and minimizing the volatility of the policy interest rate, when the central bank precommits to financial stability. First, the parameters on financial stability indicators in ad hoc and quasi-optimal rules cannot be identified. For those rules, non-zero policy rule parameters on financial stability indicators are observationally equivalent to parameters set to zero in another rule, so that they are unable to inform monetary policy. Second, under controllability conditions, the parameters on financial stability indicators in optimal policy rules can all be identified, along with a bounded solution stabilizing an unstable economy as in Woodford (2003), with determinacy of the initial conditions of non-predetermined variables.
We study a stochastic model of anonymous influence with conformist and anti-conformist individuals. Each agent with a `yes' or `no' initial opinion on a certain issue can change his opinion due to social influence. We consider anonymous influence, which depends on the number of agents holding a certain opinion, but not on their identity. An individual is conformist/anti-conformist if his probability of saying `yes' increases/decreases with the number of `yes'-agents. We focus on three classes of aggregation rules (pure conformism, pure anti-conformism, and mixed aggregation rules) and examine two types of society (without, and with, mixed agents). For both types we provide a complete qualitative analysis of convergence, i.e., we identify all absorbing classes and the conditions for their occurrence. The pure case with infinitely many individuals is also studied. We show that, as expected, the presence of anti-conformists in a society brings polarization and instability: polarization into two groups, fuzzy polarization (i.e., with blurred frontiers), cycles, periodic classes, as well as more or less chaotic situations where at any time step the set of `yes'-agents can be any subset of the society. Surprisingly, the presence of anti-conformists may also lead to opinion reversal: a majority group of conformists with a stable opinion can evolve, by a cascade phenomenon, towards the opposite opinion and remain in this state.
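A toy simulation conveys the flavor of the model: each agent's probability of saying 'yes' depends only on the current number of 'yes'-agents, increasing for conformists and decreasing for anti-conformists. The specific response functions and parameters below are illustrative assumptions, not the paper's general aggregation rules.

```python
import random

# Toy simulation of anonymous influence: conformists say 'yes' with probability equal
# to the current share of 'yes'-agents; anti-conformists with its complement.
def simulate(n_conformists, n_anticonformists, steps=50, seed=1):
    rng = random.Random(seed)
    n = n_conformists + n_anticonformists
    opinions = [rng.random() < 0.5 for _ in range(n)]      # random initial 'yes'/'no'
    history = []
    for _ in range(steps):
        share_yes = sum(opinions) / n
        opinions = [rng.random() < (share_yes if i < n_conformists else 1.0 - share_yes)
                    for i in range(n)]
        history.append(sum(opinions))
    return history                                          # number of 'yes'-agents over time

print(simulate(80, 20))   # even a minority of anti-conformists prevents full consensus
```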
We study the optimal method for rationing scarce resources through a queue system. The designer controls agents' entry into a queue and their exit, their service priority -- or queueing discipline -- as well as their information about queue priorities, while providing them with the incentive to join the queue and, importantly, to stay in the queue, when recommended by the designer. Under a mild condition, the optimal mechanism induces agents to enter up to a certain queue length and never removes any agents from the queue; serves them according to a first-come-first-served (FCFS) rule; and provides them with no information throughout the process beyond the recommendations they receive. FCFS is also necessary for optimality in a rich domain. We identify a novel role for queueing disciplines in regulating agents' beliefs and their dynamic incentives and uncover a hitherto unrecognized virtue of FCFS in this regard.
We introduce the Coarse Payoff-Assessment Learning (CPAL) model, which captures reinforcement learning by boundedly rational decision-makers who focus on the aggregate outcomes of choosing among exogenously defined clusters of alternatives (similarity classes), rather than evaluating each alternative individually. Analyzing a smooth approximation of the model, we show that the learning dynamics exhibit steady-states corresponding to smooth Valuation Equilibria (Jehiel and Samet, 2007). We demonstrate the existence of multiple equilibria in decision trees with generic payoffs and establish the local asymptotic stability of pure equilibria when they occur. Conversely, when trivial choices featuring alternatives within the same similarity class yield sufficiently high payoffs, a unique mixed equilibrium emerges, characterized by indifferences between similarity classes, even under acute sensitivity to payoff differences. Finally, we prove that this unique mixed equilibrium is globally asymptotically stable under the CPAL dynamics.
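A minimal sketch of the learning dynamics, under simplifying assumptions: the decision-maker keeps one valuation per similarity class, picks a class via a logit rule (a smooth approximation of best response), and moves the chosen class's valuation toward the realized payoff. Names, payoffs, and parameters are hypothetical.

```python
import math
import random

# Illustrative coarse payoff-assessment learning step (not the paper's exact dynamics):
# one valuation per similarity class, logit choice, and a payoff-tracking update.
def cpal_step(valuations, available_classes, payoff_of, rng,
              step_size=0.05, sensitivity=5.0):
    # Logit choice over the similarity classes available at this decision node.
    weights = [math.exp(sensitivity * valuations[c]) for c in available_classes]
    r, chosen = rng.random() * sum(weights), available_classes[-1]
    for c, w in zip(available_classes, weights):
        if r < w:
            chosen = c
            break
        r -= w
    payoff = payoff_of(chosen)                       # realized payoff of the chosen class
    valuations[chosen] += step_size * (payoff - valuations[chosen])
    return chosen

# Usage with two similarity classes and hypothetical noisy payoffs.
rng = random.Random(0)
vals = {"A": 0.0, "B": 0.0}
for _ in range(2000):
    cpal_step(vals, ["A", "B"], lambda c: rng.gauss(1.0 if c == "A" else 0.5, 0.1), rng)
print(vals)   # the valuation of class "A" should settle near its higher mean payoff
```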
In this paper, we conduct a large-scale field experiment to investigate the manipulability of prediction markets. The main experiment involves randomly shocking prices across 817 separate markets; we then collect hourly price data to examine whether the effects of these shocks persist over time. We find that prediction markets can be manipulated: the effects of our trades are visible even 60 days after they have occurred. However, as predicted by our model, the effects of the manipulations fade somewhat over time. Markets with more traders, greater trading volume, and an external source of probability estimates are harder to manipulate.
People often face trade-offs between costs and benefits occurring at various points in time. The predominant discounting approach is to use the exponential form. Central to this approach is the discount rate, a single parameter that converts a future value into its present equivalent. However, there is no universally accepted discount rate, and its choice remains a matter of ongoing debate. This paper provides a robust solution for resolving conflicts over the discount rate: it recommends considering all candidate discount rates while assigning them varying degrees of importance. Moreover, a considerable number of economists support the view that future and present utilities should be given equal consideration. In response to this debate, we introduce a general criterion capable of accommodating situations where it is feasible not to discount future utilities. This criterion encompasses and extends various existing criteria in the literature.
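For concreteness, the block below states the standard exponential discounting formula with a single rate r that the paper generalizes; the proposed robust criterion itself is not reproduced here.

```latex
% Standard exponential discounting with a single rate r (the approach the paper
% generalizes): a payoff u_t received t periods ahead has present value
PV \;=\; \sum_{t=0}^{T} \frac{u_t}{(1+r)^{t}},
% so the trade-off between present and future hinges entirely on the one parameter r,
% which is precisely the source of the disagreement the proposed criterion addresses.
```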