Centro Ricerche Enrico Fermi
Accessibility is essential for designing inclusive urban systems. However, the attempt to capture the complexity of accessibility in a single universal metric has often limited its effective use in design, measurement, and governance across various fields. Building on the work of Levinson and Wu, we emphasise that accessibility consists of several key dimensions. Specifically, we introduce a conceptual framework that defines accessibility through three main dimensions: Proximity (which pertains to active, short-range accessibility to local services and amenities), Opportunity (which refers to quick access to relevant non-local resources, such as jobs or major cultural venues), and Value (which encompasses the overall quality and personal significance assigned to specific points of interest). While it is generally beneficial to improve accessibility, different users and contexts present unique trade-offs that make a one-size-fits-all solution neither practical nor desirable. Our framework establishes a foundation for a quantitative and integrative approach to modelling accessibility. It considers the complex interactions among its various dimensions and facilitates more systematic analysis, comparison, and decision-making across diverse contexts.
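To make the three dimensions concrete, here is a minimal, purely illustrative sketch of how a per-location (or per-user) accessibility record combining Proximity, Opportunity, and Value could be scored; the class name, score ranges, and weights are hypothetical and not taken from the framework itself.

```python
from dataclasses import dataclass

@dataclass
class AccessibilityProfile:
    """Toy record holding the three dimensions discussed above (illustrative only)."""
    proximity: float    # short-range, active accessibility to local amenities (e.g. 0-1)
    opportunity: float  # speed of access to relevant non-local resources (e.g. 0-1)
    value: float        # perceived quality / personal significance of reachable POIs (e.g. 0-1)

def combined_score(p: AccessibilityProfile, weights=(1.0, 1.0, 1.0)) -> float:
    """Weighted combination; the weights encode user- and context-specific trade-offs."""
    w_p, w_o, w_v = weights
    return (w_p * p.proximity + w_o * p.opportunity + w_v * p.value) / (w_p + w_o + w_v)

# Example: a resident who values local amenities more than city-wide opportunities.
print(combined_score(AccessibilityProfile(0.8, 0.4, 0.6), weights=(2.0, 0.5, 1.0)))
```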
Standard cosmological analyses, which rely on two-point statistics, fail to extract the full information content of the data, limiting our ability to constrain cosmological parameters with precision. Recent years have therefore seen a paradigm shift from analytical likelihood-based to simulation-based inference. However, such methods require a large number of costly simulations. We focus on full-field inference, considered the optimal form of inference. Our objective is to benchmark several ways of conducting full-field inference in order to gain insight into the number of simulations required by each method. We make a distinction between explicit and implicit full-field inference. Moreover, since a differentiable forward model is crucial for explicit full-field inference, we also discuss the advantages of this property for the implicit approach. We use the sbi_lens package, which provides a fast and differentiable log-normal forward model. This forward model enables us to compare explicit and implicit full-field inference with and without gradients. The former is achieved by sampling the forward model with the No-U-Turn Sampler; the latter first compresses the data into sufficient statistics and then uses the Neural Likelihood Estimation (NLE) algorithm and its gradient-augmented variant. We perform a full-field analysis on LSST Y10-like simulated weak lensing mass maps. We show that explicit and implicit full-field inference yield consistent constraints. Explicit inference requires 630 000 simulations with our particular sampler, corresponding to 400 independent samples. Implicit inference requires at most 101 000 simulations, split into 100 000 simulations to build the sufficient statistics (this number is not fine-tuned) and 1 000 simulations to perform the inference. Additionally, we show that our way of exploiting the gradients does not significantly help implicit inference.
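As a purely illustrative companion to the explicit approach described above, the following sketch runs NUTS over a toy model in which a latent field and an amplitude parameter are sampled jointly with numpyro; it is not the sbi_lens lognormal forward model, and every name and number in it is an assumption.

```python
# Toy sketch of explicit field-level inference with NUTS (NOT the sbi_lens forward model):
# jointly sample a latent Gaussian field and an amplitude parameter, conditioned on a
# noisy mock "mass map". Illustrative assumptions throughout.
import jax.numpy as jnp
import jax.random as jr
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS

N_PIX, NOISE_STD = 16, 0.1

def model(obs=None):
    amplitude = numpyro.sample("amplitude", dist.LogNormal(0.0, 0.5))  # stand-in for a cosmological amplitude
    latent = numpyro.sample("latent", dist.Normal(jnp.zeros(N_PIX * N_PIX), 1.0))
    field = amplitude * latent                                         # trivial "forward model"
    numpyro.sample("obs", dist.Normal(field, NOISE_STD), obs=obs)

# Generate a mock observation, then sample the joint posterior over field + amplitude.
mock = 1.3 * jr.normal(jr.PRNGKey(0), (N_PIX * N_PIX,)) + NOISE_STD * jr.normal(jr.PRNGKey(1), (N_PIX * N_PIX,))
mcmc = MCMC(NUTS(model), num_warmup=500, num_samples=1000)
mcmc.run(jr.PRNGKey(2), obs=mock)
print(mcmc.get_samples()["amplitude"].mean())
```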
Proximity-based cities have attracted much attention in recent years. The 15-minute city, in particular, heralded a new vision of cities where essential services must be easily accessible. Despite its undoubted merit in stimulating discussion on new organisations of cities, the 15-minute city cannot be applied everywhere, and its very definition raises a few concerns. Here, we tackle the feasibility and practicability of the '15-minute city' model in many cities worldwide. We provide a worldwide quantification of how close cities are to the ideal of the 15-minute city. To this end, we measure the accessibility times to resources and services, and we reveal strong heterogeneity of accessibility within and across cities, with a significant role played by local population densities. We provide an online platform (whatif.sonycsl.it/15mincity) to access and visualise accessibility scores for virtually all cities worldwide. The heterogeneity of accessibility within cities is one of the sources of inequality. We thus simulate how much a better redistribution of resources and services could reduce inequity, either by keeping the total amount of resources and services fixed or by allowing for virtually infinite resources. We highlight pronounced discrepancies among cities in the minimum number of additional services needed to comply with the 15-minute city concept. We conclude that the proximity-based paradigm must be generalised to work across a wide range of local population densities. Finally, socio-economic and cultural factors should be included to shift from time-based to value-based cities.
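As an illustration of the kind of accessibility-time measurement described above, here is a hedged sketch that computes walking minutes to the nearest point of interest per service category using great-circle distances; the walking speed, threshold, and function names are assumptions rather than the study's exact procedure.

```python
import math

WALK_SPEED_KMH = 4.5  # assumed average walking speed

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points."""
    R = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi, dlmb = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def minutes_to_nearest(origin, pois):
    """Walking minutes from `origin` to the closest point of interest in `pois`."""
    d = min(haversine_km(origin[0], origin[1], lat, lon) for lat, lon in pois)
    return 60.0 * d / WALK_SPEED_KMH

def is_15min_compliant(origin, pois_by_category, threshold_min=15.0):
    """A location complies if every service category is reachable within the threshold."""
    return all(minutes_to_nearest(origin, pois) <= threshold_min
               for pois in pois_by_category.values())
```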
We use the rotation curve from Gaia data release (DR) 3 to estimate the mass of the Milky Way. We consider an Einasto density profile to model the dark matter component. We extrapolate and obtain a dynamical mass $M=2.75^{+3.11}_{-0.48}\times 10^{11}\,M_\odot$ at $112$ kpc. This lower-mass Milky Way is consistent with the significantly declining rotation curve and can provide new insights into our Galaxy and its halo inhabitants.
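For reference, a sketch of the Einasto profile and the enclosed dynamical mass it implies, obtained by numerical integration; the functional form is standard, but the parameter values below are placeholders, not the fitted values of this work.

```python
import numpy as np
from scipy.integrate import quad

G = 4.30091e-6  # Newton's constant in kpc (km/s)^2 / Msun

def einasto_density(r, rho_s, r_s, alpha):
    """Einasto profile: rho(r) = rho_s * exp(-(2/alpha) * ((r/r_s)^alpha - 1))."""
    return rho_s * np.exp(-(2.0 / alpha) * ((r / r_s) ** alpha - 1.0))

def enclosed_mass(r, rho_s, r_s, alpha):
    """M(<r) = 4 pi * integral_0^r rho(s) s^2 ds, evaluated numerically."""
    integrand = lambda s: einasto_density(s, rho_s, r_s, alpha) * s ** 2
    return 4.0 * np.pi * quad(integrand, 0.0, r)[0]

def circular_velocity(r, rho_s, r_s, alpha):
    """v_c(r) = sqrt(G M(<r) / r), to be compared with the observed rotation curve."""
    return np.sqrt(G * enclosed_mass(r, rho_s, r_s, alpha) / r)

# Placeholder parameters (NOT the paper's fit), evaluated at 112 kpc.
print(enclosed_mass(112.0, rho_s=1.0e7, r_s=10.0, alpha=0.7))
```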
Defining and finalizing Mergers and Acquisitions (M&A) requires complex human skills, which makes it very hard to automatically find the best partner or predict which firms will make a deal. In this work, we propose the MASS algorithm, a purpose-built measure of similarity between companies, and we apply it to patenting activity data to forecast M&A deals. MASS is based on an extreme simplification of tree-based machine learning algorithms and naturally incorporates intuitive criteria for deals; as such, it is fully interpretable and explainable. By applying MASS to the Zephyr and Crunchbase datasets, we show that it outperforms LightGCN, a "black box" graph convolutional network algorithm. Conversely, when similar companies have disjoint patenting activities, LightGCN turns out to be the most effective algorithm. This study provides a simple and powerful tool to model and predict M&A deals, offering valuable insights to managers and practitioners for informed decision-making.
We introduce a novel proxy for firm linkages, Characteristic Vector Linkages (CVLs). We use this concept to estimate firm linkages, first through Euclidean similarity, and then by applying Quantum Cognition Machine Learning (QCML) to similarity learning. We demonstrate that both methods can be used to construct profitable momentum spillover trading strategies, but QCML similarity outperforms the simpler Euclidean similarity.
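The following sketch shows how a simple Euclidean similarity between firms' characteristic vectors could be turned into a momentum-spillover signal; it illustrates only the baseline Euclidean variant (not QCML), and all names and weighting choices are assumptions.

```python
import numpy as np

def euclidean_similarity(X):
    """Pairwise similarity from Euclidean distances between firms' characteristic vectors.

    X has shape (n_firms, n_characteristics); similarity = 1 / (1 + distance).
    """
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    sim = 1.0 / (1.0 + d)
    np.fill_diagonal(sim, 0.0)  # a firm is not its own peer
    return sim

def momentum_spillover_signal(sim, past_returns):
    """Similarity-weighted average of peers' past returns (the 'spillover' signal)."""
    weights = sim / sim.sum(axis=1, keepdims=True)
    return weights @ past_returns

# Toy usage: 4 firms, 3 characteristics, and their trailing returns.
X = np.random.default_rng(0).normal(size=(4, 3))
signal = momentum_spillover_signal(euclidean_similarity(X), np.array([0.02, -0.01, 0.03, 0.00]))
```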
Initial conditions in cosmological $N$-body simulations are typically generated by displacing particles from a regular cubic lattice using a correlated field derived from the linear power spectrum, often via the Zel'dovich approximation. While this procedure reproduces the target two-point statistics (e.g., the power spectrum or correlation function), it introduces subtle anisotropies due to the underlying lattice structure. These anisotropies, invisible to angle-averaged diagnostics, become evident through directional measures such as the Angular Distribution of Pairwise Distances. Analyzing two Cold Dark Matter simulations with varying resolutions, initial redshifts, and box sizes, we show that these anisotropies are not erased but are amplified by gravitational evolution. They seed filamentary structures that persist into the linear regime, remaining visible even at redshift $z = 0$. Our findings demonstrate that such features are numerical artifacts -- emerging from the anisotropic coupling between the displacement field and the lattice -- not genuine predictions of an isotropic cosmological model. These results underscore the importance of critically reassessing how initial conditions are constructed, particularly when probing the large-scale, quasi-linear regime of structure formation.
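A hedged sketch of a directional pair-orientation diagnostic in the spirit of the Angular Distribution of Pairwise Distances: for an isotropic particle distribution the orientation histogram is flat, while lattice-aligned anisotropies show up as peaks; the binning choices and names are assumptions, not the authors' exact estimator.

```python
import numpy as np

def pair_orientation_histogram(pos, box, r_min, r_max, n_bins=36, n_pairs=200_000, seed=0):
    """Histogram of azimuthal orientations of pair-separation vectors in a periodic box.

    For an isotropic particle distribution the histogram is flat (ratio ~ 1); lattice-induced
    anisotropies show up as excesses along preferred axes.
    """
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(pos), n_pairs)
    j = rng.integers(0, len(pos), n_pairs)
    d = pos[i] - pos[j]
    d -= box * np.round(d / box)                # minimum-image convention
    r = np.linalg.norm(d, axis=1)
    sel = (r > r_min) & (r < r_max) & (i != j)  # restrict to a separation shell
    phi = np.arctan2(d[sel, 1], d[sel, 0])      # azimuthal angle in the x-y plane
    counts, edges = np.histogram(phi, bins=n_bins, range=(-np.pi, np.pi))
    return counts / counts.mean(), edges
```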
In early 2021, the stock prices of GameStop, AMC, Nokia, and BlackBerry experienced dramatic increases, triggered by short-squeeze operations that have been largely attributed to Reddit's retail investors. Here, we shed light on the extent and timing of Reddit users' influence on the GameStop short squeeze. Using statistical analysis tools with high temporal resolution, we find that increasing Reddit discussions anticipated high trading volumes. This effect emerged abruptly a few weeks before the event but waned once the community gained widespread visibility through Twitter. Meanwhile, the collective investment of the community, quantified through posts of individual positions, closely mirrored the market capitalization of the stock. This evidence suggests a coordinated action of users in developing a shared financial strategy through social media, targeting GameStop first and other stocks afterward. Overall, our results provide novel insights into the role of Reddit users in the dynamics of the GameStop short squeeze.
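A minimal sketch of the kind of lead-lag analysis described above: the lagged correlation between a Reddit-activity series and trading volume, where high correlation at positive lags indicates that discussions anticipate volume. The estimator and variable names are assumptions, not the authors' exact tools.

```python
import numpy as np

def lagged_correlation(reddit_activity, trading_volume, max_lag=10):
    """Correlation between Reddit activity at time t and trading volume at time t + lag.

    Both inputs are equal-length 1D arrays; positive lags with high correlation
    indicate that Reddit discussions lead trading volume.
    """
    x = (reddit_activity - reddit_activity.mean()) / reddit_activity.std()
    y = (trading_volume - trading_volume.mean()) / trading_volume.std()
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            out[lag] = float(np.mean(x[: len(x) - lag] * y[lag:]))
        else:
            out[lag] = float(np.mean(x[-lag:] * y[: len(y) + lag]))
    return out
```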
The distribution of urban services reveals critical patterns of human activity and accessibility. Proximity to amenities like restaurants, banks, and hospitals can reduce access barriers, but these services are often unevenly distributed, exacerbating spatial inequalities and socioeconomic disparities. In this study, we present a novel accessibility measure based on the spatial distribution of Points of Interest (POIs) within cities. Using the radial distribution function from statistical physics, we analyze the dispersion of services across different urban zones, combining local and remote access to services. This approach allows us to identify a city's central core, intermediate areas or secondary cores, and its periphery. Comparing the areas that we find with the resident population distribution highlights clusters of urban services and helps uncover disparities in access to opportunities.
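For concreteness, a sketch of a two-dimensional radial distribution function computed from POI coordinates, normalized so that a uniformly random scatter gives g(r) close to 1; edge effects are ignored and the binning is an assumption.

```python
import numpy as np

def radial_distribution_2d(points, area, r_bins):
    """2D radial distribution function g(r) of POI coordinates (in metres).

    g(r) ~ 1 for a uniformly random (Poisson) scatter of the same density;
    g(r) > 1 signals clustering of services at separation r, g(r) < 1 signals
    depletion. Edge effects are ignored for simplicity.
    """
    n = len(points)
    density = n / area
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]             # unique pairs only
    counts, edges = np.histogram(d, bins=r_bins)
    shell_area = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)
    expected = 0.5 * n * density * shell_area  # expected pair counts if uniform
    return counts / expected, edges
```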
Doubly Special Relativity (DSR) models are characterized by the deformation of relativistic symmetries at the Planck scale and constitute one of the cornerstones of quantum gravity phenomenology research, due to the possibility of testing them with cosmological messengers. Some of their predictions manifest themselves as relative locality effects, implying that events local to an observer might not appear to be so for a distant one. In this work we focus on transverse relative locality models, where the delocalization occurs along the direction perpendicular to the one connecting two distant observers. We present the first generalization of these models to curved spacetime, constructing a transverse deformation of the de Sitter algebra in $(2+1)$D and investigating its phenomenological implications for particle propagation.
Pauli Exclusion Principle (PEP) violations induced by space-time non-commutativity, a class of universality for several models of Quantum Gravity, are investigated by the VIP-2 Lead experiment at the Gran Sasso underground National Laboratory of INFN. The VIP-2 Lead experimental bound on the non-commutative space-time scale $\Lambda$ excludes $\theta$-Poincaré far above the Planck scale for non-vanishing "electric-like" components of $\theta_{\mu \nu}$, and up to $6.9 \cdot 10^{-2}$ Planck scales if they are null. This new bound therefore represents the tightest one so far provided by atomic-transition tests. This result strongly motivates high-sensitivity underground X-ray measurements as critical tests of Quantum Gravity and of the microscopic structure of space-time.
We use algorithmic and network-based tools to build and analyze the bipartite network connecting jobs with the skills they require. We quantify and represent the relatedness between jobs and skills by using statistically validated networks. Using the fitness and complexity algorithm, we compute a skill-based complexity of jobs. This quantity is positively correlated with the average salary, abstraction, and non-routinarity level of jobs. Furthermore, coherent jobs - defined as the ones requiring closely related skills - have, on average, lower wages. We find that salaries may not always reflect the intrinsic value of a job, but rather other wage-setting dynamics that may not be directly related to its skill composition. Our results provide valuable information for policymakers, employers, and individuals to better understand the dynamics of the labor market and make informed decisions about their careers.
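A minimal sketch of the standard fitness-and-complexity iteration applied to a binary job-skill matrix, with the job fitness read as the skill-based complexity of jobs mentioned above; the normalization and iteration count follow the usual formulation and may differ in detail from the paper's implementation.

```python
import numpy as np

def fitness_complexity(M, n_iter=200, eps=1e-12):
    """Fitness-Complexity iteration on a binary bipartite matrix M (jobs x skills).

    Returns job fitness (interpreted here as the skill-based complexity of jobs)
    and skill complexity, each normalized to unit mean at every iteration.
    """
    n_jobs, n_skills = M.shape
    F = np.ones(n_jobs)
    Q = np.ones(n_skills)
    for _ in range(n_iter):
        F_new = M @ Q                                  # jobs: sum of their skills' complexity
        Q_new = 1.0 / (M.T @ (1.0 / (F + eps)) + eps)  # skills: penalized if required by low-fitness jobs
        F = F_new / F_new.mean()
        Q = Q_new / Q_new.mean()
    return F, Q
```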
The circular velocity curve traced by stars provides a direct means of investigating the potential and mass distribution of the Milky Way. Recent measurements of the Galaxy's rotation curve have revealed a significant decrease in velocity for galactic radii larger than approximately 15 kpc. While these determinations have primarily focused on the Galactic plane, the Gaia DR3 data also offer information about off-plane velocity components. By assuming the Milky Way is in a state of Jeans equilibrium, we derived the generalized rotation curve for radial distances spanning from 8.5 kpc to 25 kpc and vertical heights ranging from -2 kpc to 2 kpc. These measurements were employed to constrain the matter distribution using two distinct mass models. The first is the canonical NFW halo model, while the second, the dark matter disk (DMD) model, posits that dark matter is confined to the Galactic plane and follows the distribution of neutral hydrogen. The best-fitting NFW model yields a virial mass of $M_{\text{vir}} = (6.5 \pm 0.5) \times 10^{11} M_\odot$, whereas the DMD model indicates a total mass of $M_{\text{DMD}} = (1.7 \pm 0.2) \times 10^{11} M_\odot$. Our findings indicate that the DMD model generally provides a better fit than the NFW model to both the on-plane and off-plane behaviour of the generalized rotation curves at large radial distances. We emphasize that studying the generalized rotation curves at different vertical heights has the potential to provide better constraints on the geometrical properties of the dark matter distribution.
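For reference, the standard NFW relations entering such a fit (density profile, enclosed mass, and the resulting circular velocity); the parameters $\rho_s$ and $r_s$ are placeholders, and the DMD model, which ties dark matter to the neutral-hydrogen disk, is not reproduced here.

```latex
% Standard NFW relations (placeholder parameters rho_s, r_s; G is Newton's constant):
\rho_{\mathrm{NFW}}(r) = \frac{\rho_s}{(r/r_s)\,(1 + r/r_s)^2},
\qquad
M_{\mathrm{NFW}}(<r) = 4\pi \rho_s r_s^3 \left[\ln\left(1 + \frac{r}{r_s}\right) - \frac{r/r_s}{1 + r/r_s}\right],
\qquad
v_c(r) = \sqrt{\frac{G\,M_{\mathrm{NFW}}(<r)}{r}}.
```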
Traditionally, weak lensing cosmological surveys have been analyzed using summary statistics motivated by their analytically tractable likelihoods, or by their ability to access higher-order information, at the cost of requiring Simulation-Based Inference (SBI) approaches. While informative, these statistics are neither designed nor guaranteed to be statistically sufficient. With the rise of deep learning, it becomes possible to create summary statistics optimized to extract the full data information. We compare different neural summarization strategies proposed in the weak lensing literature, to assess which loss functions lead to theoretically optimal summary statistics for full-field inference. In doing so, we aim to provide guidelines and insights to the community to help guide future neural-based inference analyses. We design an experimental setup to isolate the impact of the loss function used to train neural networks. We have developed the sbi_lens JAX package, which implements an automatically differentiable lognormal wCDM LSST-Y10 weak lensing simulator. The explicit full-field posterior obtained using the Hamiltonian Monte Carlo sampler gives us a ground truth against which to compare different compression strategies. We provide theoretical insight into the loss functions used in the literature and show that some do not necessarily lead to sufficient statistics (e.g. Mean Square Error (MSE)), while those motivated by information theory (e.g. Variational Mutual Information Maximization (VMIM)) can. Our numerical experiments confirm these insights and show, in our simulated wCDM scenario, that the Figure of Merit (FoM) of an analysis using neural summaries optimized under VMIM achieves 100% of the reference $\Omega_c$ - $\sigma_8$ full-field FoM, while an analysis using neural summaries trained under MSE achieves only 81% of the same reference FoM.
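A hedged sketch contrasting the two training objectives discussed above: MSE regresses the parameters directly from the map, while a VMIM-style loss jointly trains the compressor and a conditional density estimator by maximizing the estimated log-posterior of the true parameters given the summary. PyTorch is used for brevity and a diagonal Gaussian stands in for the normalizing flow used in practice; both are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class Compressor(nn.Module):
    """Tiny CNN mapping a mass map to a low-dimensional summary t(x)."""
    def __init__(self, n_summaries=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, n_summaries))

    def forward(self, x):
        return self.net(x)

# MSE objective: regress theta from the map; not guaranteed to yield sufficient statistics.
def mse_loss(compressor, maps, theta):
    return ((compressor(maps) - theta) ** 2).mean()

# VMIM-style objective: maximize log q(theta | t(x)) under a conditional density estimator
# trained jointly with the compressor. Here q is a diagonal Gaussian standing in for the
# normalizing flow (an assumption made for brevity).
class GaussianPosterior(nn.Module):
    def __init__(self, n_summaries=2, n_params=2):
        super().__init__()
        self.head = nn.Linear(n_summaries, 2 * n_params)

    def log_prob(self, theta, summary):
        mu, log_sigma = self.head(summary).chunk(2, dim=-1)
        return torch.distributions.Normal(mu, log_sigma.exp()).log_prob(theta).sum(-1)

def vmim_loss(compressor, posterior, maps, theta):
    return -posterior.log_prob(theta, compressor(maps)).mean()
```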
In the context of quantum information, highly nonlinear regimes, such as those supporting solitons, are only marginally investigated. General methods for quantum solitons are lacking, although they can act as entanglement generators or as self-organized quantum processors. We develop a computational approach that uses a neural network as a variational ansatz for quantum solitons in an array of waveguides. By training the resulting phase-space quantum machine learning model, we find different soliton solutions as the number of particles and the interaction strength are varied. We consider Gaussian states, which enable measuring the degree of entanglement and sampling the probability distribution of many-particle events. We also determine the probability of generating particle pairs and unveil that soliton bound states emit correlated pairs. These results may have a role in boson sampling with nonlinear systems and in quantum processors for entangled nonlinear waves.
Large language models (LLMs) are increasingly deployed in collaborative tasks involving multiple agents, forming an "AI agent society" in which agents interact and influence one another. Whether such groups can spontaneously coordinate on arbitrary decisions without external influence - a hallmark of self-organized regulation in human societies - remains an open question. Here we investigate the stability of groups formed by AI agents by applying methods from complexity science and principles from behavioral sciences. We find that LLMs can spontaneously form cohesive groups, and that their opinion dynamics is governed by a majority force coefficient, which determines whether coordination is achievable. This majority force diminishes as group size increases, leading to a critical group size beyond which coordination becomes practically unattainable and stability is lost. Notably, this critical group size grows exponentially with the language capabilities of the models, and for the most advanced LLMs, it exceeds the typical size of informal human groups. Our findings highlight intrinsic limitations in the self-organization of AI agent societies and have implications for the design of collaborative AI systems where coordination is desired or could represent a threat.
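A purely illustrative toy model of majority-driven coordination, in which an agent adopts the majority opinion with a probability set by a tunable majority-force parameter; this is not the paper's protocol, and all parameter names and values are assumptions.

```python
import numpy as np

def simulate_majority_dynamics(n_agents=20, beta=2.0, n_steps=5000, seed=0):
    """Toy binary-opinion dynamics with a tunable majority force.

    At each step a randomly chosen agent adopts opinion +1 with probability
    sigmoid(beta * m), where m is the current average opinion (the majority signal)
    and beta plays the role of a majority-force coefficient. Large beta drives
    consensus; small beta (or a diluted majority signal) prevents it.
    """
    rng = np.random.default_rng(seed)
    opinions = rng.choice([-1, 1], size=n_agents)
    for _ in range(n_steps):
        i = rng.integers(n_agents)
        m = opinions.mean()
        p_plus = 1.0 / (1.0 + np.exp(-beta * m))
        opinions[i] = 1 if rng.random() < p_plus else -1
    return abs(opinions.mean())  # 1 = full consensus, ~0 = no coordination

print(simulate_majority_dynamics(beta=3.0))
```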
The field of complex photonics has garnered significant interest due to its rich physics and myriad applications spanning physics, engineering, biology, and medicine. However, a substantial portion of research focuses primarily on linear media. Over the years, optical nonlinearity, particularly the second order, denoted $\chi^{(2)}$, has been harnessed for diverse applications such as frequency conversion, three-wave mixing, material characterization, and bio-imaging. When $\chi^{(2)}$-nonlinearity combines with disorder, a new realm of physics emerges, which in the last 30 years has witnessed substantial progress in fundamental studies and futuristic applications. This review aims to explore fundamental concepts concerning $\chi^{(2)}$-nonlinear disordered media, chart the field's evolution, highlight current interests within the research community, and conclude with a future perspective.
Patents serve as valuable indicators of innovation and provide insights into the spaces of innovation and venture formation within geographic regions. In this study, we utilise patent data to examine the dynamics of innovation and venture formation in the biotech sector across the United Kingdom (UK). By analysing patents, we identify key regions that drive biotech innovation in the UK. Our findings highlight the crucial role of biotech incubators in facilitating knowledge exchange between scientific research and industry. However, we observe that the incubators themselves do not significantly contribute to the diversity of innovations, which might be due to the underlying effect of geographic proximity on the influence and impact of the patents. These insights contribute to our understanding of the historical development and future prospects of the biotech sector in the UK, emphasising the importance of promoting innovation diversity and fostering inclusive enterprise to achieve equitable economic growth.
Scientific literature has been growing exponentially for decades, with publications from the last twenty years now comprising 60% of all academic output. While the impact of information overload on news and social-media consumption is well-documented, its consequences on scientific progress remain understudied. Here, we investigate how this rapid expansion affects the circulation and exploitation of scientific ideas. Unlike other cultural domains, science is experiencing a decline in the proportion of highly influential papers and a slower turnover in its canons. This results in the disproportionate persistence of established works, a phenomenon we term the "gerontocratization of science". To test whether hypergrowth drives this trend, we develop a generative citation model that incorporates random discovery, cumulative advantage, and exponential growth of the scientific literature. Our findings reveal that as scientific output expands exponentially, gerontocratization emerges and intensifies, reducing the influence of new research. Recognizing and understanding this mechanism is crucial for developing targeted strategies to sustain intellectual dynamism and ensure a balanced and healthy renewal of scientific knowledge.
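A hedged sketch of a generative citation model with the three ingredients named above: exponential growth of the literature, cumulative advantage, and random discovery. The parameter values are illustrative, and attachment weights are updated once per time step for simplicity.

```python
import numpy as np

def citation_model(n_steps=30, n0=10, growth=1.1, cites_per_paper=5, p_random=0.2, seed=0):
    """Generative citation model with exponential growth of the literature.

    At each time step, round(n0 * growth**t) new papers appear. Each reference goes to a
    uniformly random existing paper with probability p_random ("random discovery") and,
    otherwise, to a paper chosen with probability proportional to citations + 1
    ("cumulative advantage"). Returns the citation count of every paper.
    """
    rng = np.random.default_rng(seed)
    citations = np.zeros(n0, dtype=int)
    for t in range(n_steps):
        n_new = int(round(n0 * growth ** t))
        n_existing = len(citations)
        weights = (citations + 1.0) / (citations + 1.0).sum()
        n_refs = n_new * cites_per_paper
        random_pick = rng.random(n_refs) < p_random
        targets = np.where(random_pick,
                           rng.integers(0, n_existing, n_refs),
                           rng.choice(n_existing, size=n_refs, p=weights))
        np.add.at(citations, targets, 1)
        citations = np.concatenate([citations, np.zeros(n_new, dtype=int)])
    return citations
```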
Recommendation algorithms play a central role in shaping the experience of users online. On the one hand, they help retrieve content that best suits users' tastes; on the other hand, they may give rise to the so-called "filter bubble" effect, favoring the rise of polarization. In the present paper we study how a user-user collaborative-filtering algorithm affects the behavior of a group of agents repeatedly exposed to it. By means of analytical and numerical techniques, we show how the stationary state of the system depends on the strength of the similarity and popularity biases, which quantify, respectively, the weight given to the most similar users and to the best-rated items. In particular, we derive a phase diagram of the model, in which we observe three distinct phases: disorder, consensus, and polarization. In the latter, users spontaneously split into different groups, each focused on a single item. We identify, at the boundary between the disorder and polarization phases, a region where recommendations are nontrivially personalized without leading to filter bubbles. Finally, we show that our model can reproduce the behavior of users in the online music platform this http URL. This work paves the way to a systematic analysis of recommendation algorithms by means of statistical physics methods and opens up the possibility of devising less polarizing recommendation algorithms.
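A minimal sketch of a user-user collaborative-filtering recommendation step with explicit similarity and popularity bias exponents, in the spirit of the model described above; the cosine similarity and the exact form of the biases are assumptions, not the paper's equations.

```python
import numpy as np

def recommend(ratings, user, beta_sim=1.0, beta_pop=1.0):
    """User-user collaborative filtering with tunable similarity and popularity biases.

    ratings: (n_users, n_items) matrix of past ratings (0 = unrated).
    beta_sim controls how strongly the most similar users dominate;
    beta_pop controls how strongly the best-rated (most popular) items dominate.
    Returns the index of the item recommended to `user`.
    """
    # Cosine similarity between the target user and all others, raised to beta_sim.
    norms = np.linalg.norm(ratings, axis=1) + 1e-12
    sims = (ratings @ ratings[user]) / (norms * norms[user])
    sims[user] = 0.0
    weights = np.clip(sims, 0, None) ** beta_sim

    # Popularity-biased predicted scores for each item.
    scores = (weights @ ratings) ** beta_pop
    scores[ratings[user] > 0] = -np.inf  # do not re-recommend already-rated items
    return int(np.argmax(scores))
```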