We introduce an efficient method to quantify nonstabilizerness in fermionic Gaussian states, overcoming the long-standing challenge posed by their extensive entanglement. Using a perfect sampling scheme based on an underlying determinantal point process, we compute the Stabilizer Rényi Entropies (SREs) for systems with hundreds of qubits. Benchmarking on random Gaussian states with and without particle conservation, we reveal an extensive leading behavior equal to that of Haar random states, with logarithmic subleading corrections. We support these findings with analytical calculations for a set of related quantities, the participation entropies in the computational (or Fock) basis, for which we derive an exact formula. We also investigate the time evolution of nonstabilizerness in a random unitary circuit with Gaussian gates, observing that it converges in a time that scales logarithmically with the system size. Applying the sampling algorithm to a two-dimensional free-fermionic topological model, we uncover a sharp transition in nonstabilizerness at the phase boundaries, highlighting the power of our approach in exploring different phases of quantum many-body systems, even in higher dimensions.
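As a point of reference for the quantity being computed (not the paper's determinantal sampling scheme, which is what makes hundreds of qubits tractable), the SRE of a small state can be evaluated by brute force directly from the standard definition $M_\alpha = (1-\alpha)^{-1}\log\sum_P \Xi_P^\alpha - \log 2^N$ with $\Xi_P = \langle\psi|P|\psi\rangle^2/2^N$; the sketch below assumes that definition.

```python
# Illustration (not the paper's sampling algorithm): brute-force evaluation of
# the Stabilizer Renyi Entropy for a small pure state, enumerating all 4^N
# Pauli strings. Cost is exponential, so this only serves as a reference.
import itertools
import numpy as np

PAULIS = [np.eye(2), np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]

def sre(psi, alpha=2):
    n = int(np.log2(psi.size))
    d = 2 ** n
    xi = []
    for combo in itertools.product(range(4), repeat=n):
        P = np.array([1.0])
        for k in combo:
            P = np.kron(P, PAULIS[k])
        exp_val = np.vdot(psi, P @ psi).real     # <psi|P|psi>, real for Hermitian P
        xi.append(exp_val ** 2 / d)
    xi = np.array(xi)
    return np.log(np.sum(xi ** alpha)) / (1 - alpha) - np.log(d)

# A stabilizer state (|00> + |11>)/sqrt(2) has zero magic ...
bell = np.zeros(4); bell[0] = bell[3] = 1 / np.sqrt(2)
print(sre(bell))     # ~0
# ... while the single-qubit T-state does not.
t = np.array([1, np.exp(1j * np.pi / 4)]) / np.sqrt(2)
print(sre(t))        # ~0.29
```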
Cosmic inflation provides a window to the highest energy densities accessible in nature, far beyond those achievable in any realistic terrestrial experiment. Theoretical insights into the inflationary era and its observational probes may therefore shed unique light on the physical laws underlying our universe. This white paper describes our current theoretical understanding of the inflationary era, with a focus on the statistical properties of primordial fluctuations. In particular, we survey observational targets for three important signatures of inflation: primordial gravitational waves, primordial non-Gaussianity and primordial features. With the requisite advancements in analysis techniques, the tremendous increase in the raw sensitivities of upcoming and planned surveys will translate to leaps in our understanding of the inflationary paradigm and could open new frontiers for cosmology and particle physics. The combination of future theoretical and observational developments therefore offers the potential for a dramatic discovery about the nature of cosmic acceleration in the very early universe and physics on the smallest scales.
Cosmic microwave background (CMB) photons are deflected by large-scale structure through gravitational lensing. This secondary effect introduces higher-order correlations in CMB anisotropies, which are used to reconstruct lensing deflections. This allows mapping of the integrated matter distribution along the line of sight, probing the growth of structure, and recovering an undistorted view of the last-scattering surface. Gravitational lensing has been measured by previous CMB experiments, with \textit{Planck}'s $42\,\sigma$ detection being the current best full-sky lensing map. We present an enhanced \textit{LiteBIRD} lensing map by extending the CMB multipole range and including the minimum-variance estimation, leading to a $49$ to $58\,\sigma$ detection over $80\,\%$ of the sky, depending on the final complexity of polarized Galactic emission. The combination of \textit{Planck} and \textit{LiteBIRD} will be the best full-sky lensing map in the 2030s, providing a $72$ to $78\,\sigma$ detection over $80\,\%$ of the sky, almost doubling \textit{Planck}'s sensitivity. Finally, we explore different applications of the lensing map, including cosmological parameter estimation using a lensing-only likelihood and internal delensing, showing that the combination of both experiments leads to improved constraints. The combination of \textit{Planck} + \textit{LiteBIRD} will improve the $S_8$ constraint by a factor of 2 compared to \textit{Planck} alone, and \textit{Planck} + \textit{LiteBIRD} internal delensing will improve \textit{LiteBIRD}'s tensor-to-scalar ratio constraint by $6\,\%$. We have tested the robustness of our results against foreground models of different complexity, showing that the improvements remain even for the most complex foregrounds.
We introduce a methodology to estimate non-stabilizerness or "magic", a key resource for quantum complexity, with Neural Quantum States (NQS). Our framework relies on two schemes based on Monte Carlo sampling to quantify non-stabilizerness via the Stabilizer Rényi Entropy (SRE) in arbitrary variational wave functions. When combined with NQS, this approach is effective for systems with strong correlations and in dimensions larger than one, unlike Tensor Network methods. Firstly, we study the magic content in an ensemble of random NQS, demonstrating that neural-network parametrizations of the wave function capture finite non-stabilizerness in addition to large entanglement. Secondly, we investigate the non-stabilizerness in the ground state of the $J_1$-$J_2$ Heisenberg model. In 1D, we find that the SRE vanishes at the Majumdar-Ghosh point $J_2 = J_1/2$, consistent with a stabilizer ground state. In 2D, a dip in the SRE is observed near maximum frustration around $J_2/J_1 \approx 0.6$, suggesting a Valence Bond Solid between the two antiferromagnetic phases.
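A minimal sketch of the ingredients such schemes rest on, with invented sizes and a single-spin-flip proposal: a complex-parameter RBM-style variational amplitude and Metropolis sampling of $|\psi(s)|^2$, the kind of sampling on top of which Monte Carlo SRE estimators are built.

```python
# Toy complex-parameter wave function log psi(s) = sum_j log cosh(b_j + W_j . s),
# sampled by single-spin-flip Metropolis over configurations s in {-1,+1}^N.
# Shapes and the proposal rule here are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
N, M = 10, 20                                    # visible spins, hidden units
W = (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))) * 0.1
b = (rng.normal(size=M) + 1j * rng.normal(size=M)) * 0.1

def log_psi(s):
    return np.sum(np.log(np.cosh(b + W @ s)))

def sample(n_samples, n_sweeps=50):
    s = rng.choice([-1.0, 1.0], size=N)
    samples = []
    for _ in range(n_samples):
        for _ in range(n_sweeps):
            i = rng.integers(N)
            s_new = s.copy(); s_new[i] *= -1
            # accept with probability |psi(s')|^2 / |psi(s)|^2
            if rng.random() < np.exp(2 * (log_psi(s_new) - log_psi(s)).real):
                s = s_new
        samples.append(s.copy())
    return np.array(samples)

configs = sample(100)    # Monte Carlo estimators are then averaged over these
```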
A Transformer-based neural network quantum state (NQS) precisely determines the ground-state phase diagram of the Shastry-Sutherland model, providing definitive evidence for an intermediate gapless quantum spin-liquid phase. This method achieved high accuracy, outperforming or matching prior numerical methods for this challenging frustrated magnet.
We study the behavior of magic as a bipartite correlation in the quantum Ising chain across its quantum phase transition, and at finite temperature. In order to quantify the magic of partitions rigorously, we formulate a hybrid scheme that combines stochastic sampling of reduced density matrices via quantum Monte Carlo with state-of-the-art estimators for the robustness of magic, a bona fide measure of magic for mixed states. This allows us to compute the mutual robustness of magic for partitions of up to 8 sites, embedded in a much larger system. We show how mutual robustness is directly related to critical behavior: at the critical point, it displays a power-law decay as a function of the distance between partitions, whose exponent is related to the partition size. Once finite temperature is included, mutual magic retains its low-temperature value up to an effective critical temperature, whose dependence on size is also algebraic. This suggests that magic, unlike entanglement, does not necessarily undergo sudden death.
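For concreteness, the robustness of magic is itself a linear program, $R(\rho) = \min \sum_i |q_i|$ subject to $\rho = \sum_i q_i \sigma_i$ over stabilizer states $\sigma_i$. A minimal single-qubit version (using SciPy, not the paper's estimators or its QMC sampling) decomposes a state over the six single-qubit stabilizer states.

```python
# Single-qubit robustness of magic as a small linear program over the six
# stabilizer states (the +/- eigenstates of X, Y, Z), in Bloch-vector form.
import numpy as np
from scipy.optimize import linprog

V = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
              [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)

def robustness(bloch):
    # Split q = qp - qm with qp, qm >= 0 and minimize sum(qp + qm).
    # Equality constraints: sum_i q_i = 1 (trace) and sum_i q_i V_i = bloch.
    A_eq = np.vstack([np.ones(6), V.T])          # shape (4, 6)
    A_eq = np.hstack([A_eq, -A_eq])              # variables (qp, qm)
    b_eq = np.concatenate([[1.0], bloch])
    res = linprog(c=np.ones(12), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun

print(robustness(np.array([0.0, 0.0, 1.0])))     # |0>: R = 1, no magic
h = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)       # "H-state", a magic state
print(robustness(h))                             # R = sqrt(2) ~ 1.414
```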
Large language models (LLMs) are increasingly impacting human society, particularly through textual information. Based on more than 30,000 papers and 1,000 presentations from machine learning conferences, we examined and compared the words used in writing and speaking, representing the first large-scale study of how LLMs influence the two main modes of verbal communication and expression within the same group of people. Our empirical results show that LLM-style words such as "significant" have been used more frequently in abstracts and oral presentations. The impact on speaking is beginning to emerge and is likely to grow in the future, calling attention to the implicit influence and ripple effect of LLMs on human society.
Quantum computers and simulators offer unparalleled capabilities for probing quantum many-body states, by obtaining snapshots of the many-body wave function via collective projective measurements. The probability distribution obtained by such snapshots (which are fundamentally limited to a negligible fraction of the Hilbert space) is of fundamental importance to determine the power of quantum computations. However, its relation to many-body collective properties is poorly understood. Here, we develop a theoretical framework to link quantum phases of matter to their snapshots, based on a combination of data complexity and network theory analyses. The first step in our scheme consists of applying Occam's razor principle to quantum sampling: given snapshots of a wave function, we identify a minimal-complexity measurement basis by analyzing the information compressibility of snapshots over different measurement bases. The second step consists of analyzing arbitrary correlations using network theory, building a wave-function network from the minimal-complexity basis data. This approach allows us to stochastically classify the output of quantum computers and simulations, with no assumptions on the underlying dynamics, and in a fully interpretable manner. We apply this method to quantum states of matter in one-dimensional translational invariant systems, where such classification is exhaustive, and where it reveals an interesting interplay between algorithmic and computational complexity for many-body states. Our framework is of immediate experimental relevance, and can be further extended both in terms of more advanced network mathematics, including discrete homology, as well as in terms of applications to physical phenomena, such as time-dependent dynamics and gauge theories.
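A toy version of both steps, under invented choices (random stand-in snapshots, compressed size as the complexity proxy, and a Hamming-distance edge rule for the wave-function network):

```python
# Schematic snapshot analysis: compressibility as a complexity proxy, then a
# network whose nodes are distinct measured bitstrings. The paper's basis
# selection and edge construction are more elaborate than this sketch.
import zlib
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
# Stand-in snapshots: bitstring samples, e.g. from a quantum simulator.
snapshots = rng.integers(0, 2, size=(200, 12))

# Step 1 (schematic): compressed size as a proxy for snapshot complexity;
# comparing this across measurement bases would select the minimal one.
print(len(zlib.compress(np.packbits(snapshots).tobytes())))

# Step 2: wave-function network from the snapshots.
nodes = np.unique(snapshots, axis=0)
G = nx.Graph()
G.add_nodes_from(range(len(nodes)))
cutoff = 3                                       # illustrative cutoff
for i in range(len(nodes)):
    for j in range(i + 1, len(nodes)):
        if np.sum(nodes[i] != nodes[j]) <= cutoff:
            G.add_edge(i, j)

# Network-theory diagnostics of the snapshot structure.
print(nx.density(G), nx.average_clustering(G))
```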
Foundation models are highly versatile neural-network architectures capable of processing different data types, such as text and images, and generalizing across various tasks like classification and generation. Inspired by this success, we propose Foundation Neural-Network Quantum States (FNQS) as an integrated paradigm for studying quantum many-body systems. FNQS leverage key principles of foundation models to define variational wave functions based on a single, versatile architecture that processes multimodal inputs, including spin configurations and Hamiltonian physical couplings. Unlike specialized architectures tailored for individual Hamiltonians, FNQS can generalize to physical Hamiltonians beyond those encountered during training, offering a unified framework adaptable to various quantum systems and tasks. FNQS enable the efficient estimation of quantities that are traditionally challenging or computationally intensive to calculate using conventional methods, particularly disorder-averaged observables. Furthermore, the fidelity susceptibility can be easily obtained to uncover quantum phase transitions without prior knowledge of order parameters. These pretrained models can be efficiently fine-tuned for specific quantum systems. The architectures trained in this paper are publicly available at this https URL, along with examples for implementing these neural networks in NetKet.
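A toy sketch of the defining idea, with an invented two-layer architecture: a single set of parameters maps the multimodal pair (spin configuration, Hamiltonian couplings) to a log-amplitude, so the same network serves a whole family of Hamiltonians rather than a single one.

```python
# Toy FNQS-style amplitude (architecture invented for illustration): the
# couplings enter as an input alongside the spin configuration.
import numpy as np

rng = np.random.default_rng(0)
N, C, H = 16, 2, 32                  # spins, number of couplings, hidden width
W1 = rng.normal(size=(H, N + C)) * 0.1
W2 = rng.normal(size=H) * 0.1

def log_amplitude(spins, couplings):
    x = np.concatenate([spins, couplings])       # multimodal input
    return W2 @ np.tanh(W1 @ x)

s = rng.choice([-1.0, 1.0], size=N)
print(log_amplitude(s, np.array([1.0, 0.5])))    # e.g. (J1, J2) = (1, 0.5)
```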
The Transformer architecture has become the state-of-the-art model for natural language processing tasks and, more recently, also for computer vision tasks, thus defining the Vision Transformer (ViT) architecture. The key feature is the ability to describe long-range correlations among the elements of the input sequences, through the so-called self-attention mechanism. Here, we propose an adaptation of the ViT architecture with complex parameters to define a new class of variational neural-network states for quantum many-body systems, the ViT wave function. We apply this idea to the one-dimensional $J_1$-$J_2$ Heisenberg model, demonstrating that a relatively simple parametrization achieves excellent results for both gapped and gapless phases. These accuracies are obtained with a relatively shallow architecture, with a single layer of self-attention, thus largely simplifying the original architecture. Still, the optimization of a deeper structure is possible and can be used for more challenging models, most notably highly frustrated systems in two dimensions. The success of the ViT wave function relies on mixing both local and global operations, thus enabling the study of large systems with high accuracy.
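A schematic, minimal rendering of such a wave function, with invented sizes and a simplified patching and output stage; the paper's attention mechanism and symmetrization differ in detail.

```python
# Sketch: one complex-valued "attention" layer over patches of the spin
# configuration, reduced to a scalar log psi. Not the paper's exact layer.
import numpy as np

rng = np.random.default_rng(0)
d = 8                                            # embedding dimension
def cplx(*shape):
    return (rng.normal(size=shape) + 1j * rng.normal(size=shape)) / np.sqrt(d)

Wq, Wk, Wv, W_emb = cplx(d, d), cplx(d, d), cplx(d, d), cplx(d, 2)

def log_psi(spins, patch=2):
    x = spins.reshape(-1, patch) @ W_emb.T       # embed patches: (n_patch, d)
    q, k, v = x @ Wq.T, x @ Wk.T, x @ Wv.T
    a = np.exp(q @ k.T / np.sqrt(d))             # complex weights (schematic)
    y = (a / a.sum(axis=1, keepdims=True)) @ v   # long-range mixing of patches
    return np.sum(y)                             # global scalar output

print(log_psi(rng.choice([-1.0, 1.0], size=16)))
```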
With a statistical analysis of arXiv paper abstracts, we report a marked drop in the frequency of several words previously identified as overused by ChatGPT, such as "delve", starting soon after they were pointed out in early 2024. The frequency of certain other words favored by ChatGPT, such as "significant", has instead kept increasing. These phenomena suggest that some authors of academic papers have adapted their use of large language models (LLMs), for example, by selecting outputs or applying modifications to the LLM-generated content. Such coevolution and cooperation of humans and LLMs thus introduce additional challenges to the detection of machine-generated text in real-world scenarios. Estimating the impact of LLMs on academic writing by examining word frequency remains feasible, and more attention should be paid to words that were already frequently employed, including those that have decreased in frequency due to LLMs' disfavor.
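The underlying measurement is simple to state; a minimal sketch (with a stand-in corpus in place of the arXiv abstracts) counts, per year, the fraction of abstracts containing a target word:

```python
# Word-frequency trend measurement over a corpus keyed by year. The `corpus`
# below is a two-document stand-in for real arXiv abstracts.
import re

def word_frequency(abstracts_by_year, word):
    freq = {}
    for year, texts in abstracts_by_year.items():
        pattern = re.compile(rf"\b{word}\b", re.IGNORECASE)
        hits = sum(1 for t in texts if pattern.search(t))
        freq[year] = hits / len(texts)
    return freq

corpus = {2023: ["We delve into ...", "A study of ..."],
          2024: ["We examine ...", "Results are significant ..."]}
print(word_frequency(corpus, "delve"))   # {2023: 0.5, 2024: 0.0}
```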
We present a measurement of the $B$-mode polarization power spectrum of the cosmic microwave background (CMB) using data taken from July 2014 to December 2016 with the POLARBEAR experiment. The CMB power spectra are measured using observations at 150 GHz with an instantaneous array sensitivity of $\mathrm{NET}_\mathrm{array}=23\,\mu\mathrm{K}\sqrt{\mathrm{s}}$ on a 670 square degree patch of sky centered at (RA, Dec) = $(+0^\mathrm{h}12^\mathrm{m}0^\mathrm{s}, -59^\circ18^\prime)$. A continuously rotating half-wave plate is used to modulate polarization and to suppress low-frequency noise. We achieve $32\,\mu\mathrm{K}$-arcmin effective polarization map noise with a knee in sensitivity at $\ell = 90$, where the inflationary gravitational wave signal is expected to peak. The measured $B$-mode power spectrum is consistent with a $\Lambda$CDM lensing and single dust component foreground model over a range of multipoles $50 \leq \ell \leq 600$. The data disfavor zero $C_\ell^{BB}$ at $2.2\sigma$ using this $\ell$ range of POLARBEAR data alone. We cross-correlate our data with Planck high frequency maps and find the low-$\ell$ $B$-mode power in the combined dataset to be consistent with thermal dust emission. We place an upper limit on the tensor-to-scalar ratio $r < 0.90$ at the 95% confidence level after marginalizing over foregrounds.
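As a back-of-the-envelope check of the quoted noise properties, an idealized model (not the POLARBEAR pipeline) converts the $32\,\mu\mathrm{K}$-arcmin white level to a flat $N_\ell$ and attaches a $1/\ell$ knee at $\ell = 90$:

```python
# Idealized polarization noise spectrum from a white level plus a 1/ell knee.
# The knee slope alpha = 1 is an illustrative assumption.
import numpy as np

delta_p = 32.0                                   # uK-arcmin
arcmin = np.pi / (180.0 * 60.0)                  # radians per arcmin
n_white = (delta_p * arcmin) ** 2                # uK^2 sr, flat spectrum

ell = np.arange(2, 601)
ell_knee, alpha = 90.0, 1.0
n_ell = n_white * (1.0 + (ell_knee / ell) ** alpha)
print(n_white, n_ell[ell == 90])                 # noise doubles at the knee
```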
In this paper, we present a thorough analysis of the impact of Large Language Models (LLMs) on Wikipedia, examining the evolution of Wikipedia through existing data and using simulations to explore potential risks. We begin by analyzing page views and article content to study Wikipedia's recent changes and assess the impact of LLMs. Subsequently, we evaluate how LLMs affect various Natural Language Processing (NLP) tasks related to Wikipedia, including machine translation and retrieval-augmented generation (RAG). Our findings and simulation results reveal that Wikipedia articles have been influenced by LLMs, with an impact of approximately 1%-2% in certain categories. If the machine translation benchmark based on Wikipedia is influenced by LLMs, the scores of the models may become inflated, and the comparative results among models might shift as well. Moreover, the effectiveness of RAG might decrease if the knowledge base becomes polluted by LLM-generated content. While LLMs have not yet fully changed Wikipedia's language and knowledge structures, we believe that our empirical findings signal the need for careful consideration of potential future risks.
Feature selection is essential in the analysis of molecular systems and many other fields, but several uncertainties remain: What is the optimal number of features for a simplified, interpretable model that retains essential information? How should features with different units be aligned, and how should their relative importance be weighted? Here, we introduce the Differentiable Information Imbalance (DII), an automated method to rank information content between sets of features. Using distances in a ground truth feature space, DII identifies a low-dimensional subset of features that best preserves these relationships. Each feature is scaled by a weight, which is optimized by minimizing the DII through gradient descent. This allows simultaneously performing unit alignment and relative importance scaling, while preserving interpretability. DII can also produce sparse solutions and determine the optimal size of the reduced feature space. We demonstrate the usefulness of this approach on two benchmark molecular problems: (1) identifying collective variables that describe conformations of a biomolecule, and (2) selecting features for training a machine-learning force field. These results show the potential of DII in addressing feature selection challenges and optimizing dimensionality in various applications. The method is available in the Python library DADApy.
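For orientation, below is a sketch of the plain (non-differentiable) Information Imbalance that the DII smooths into a gradient-friendly objective; the definition follows the published Information Imbalance, $\Delta(A\to B) = \tfrac{2}{N}\langle r^B \,|\, r^A = 1\rangle$, while the data here are stand-ins.

```python
# Information Imbalance between two feature spaces: the average rank, in space
# B, of each point's nearest neighbor in space A, rescaled by 2/N. Values near
# 0 mean A predicts neighborhoods in B; values near 1 mean it does not.
import numpy as np
from scipy.spatial.distance import cdist

def information_imbalance(XA, XB):
    N = len(XA)
    dA, dB = cdist(XA, XA), cdist(XB, XB)
    np.fill_diagonal(dA, np.inf)
    nn_A = np.argmin(dA, axis=1)                           # NN in space A
    ranks_B = np.argsort(np.argsort(dB, axis=1), axis=1)   # rank 0 = self
    r = ranks_B[np.arange(N), nn_A]                        # its rank in B
    return 2.0 * r.mean() / N

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
print(information_imbalance(X[:, :2], X))                  # informative subset: small
print(information_imbalance(rng.normal(size=(300, 2)), X)) # unrelated features: ~1
```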
Free fermionic Gaussian, a.k.a. matchgate, random circuits exhibit atypical behavior compared to generic interacting systems. They produce anomalously slow entanglement growth, characterized by diffusive scaling $S(t) \sim \sqrt{t}$, and evolve into volume-law entangled states at late times, $S \sim N$, which are highly unstable to measurements. Here, we investigate how doping such circuits with non-Gaussian resources (gates) restores the entanglement structures of typical dynamics. We demonstrate that ballistic entanglement growth $S(t) \sim t$ is recovered after injecting an extensive total amount of non-Gaussian gates, also restoring Kardar-Parisi-Zhang fluctuations. When the evolution is perturbed with measurements, we uncover a measurement-induced phase transition between an area-law and a power-law entangled phase, $S \sim N^\alpha$, with $\alpha$ controlled by the doping. A genuine volume-law entangled phase is recovered only when non-Gaussian gates are injected at an extensive rate. Our findings bridge the dynamics of free and interacting fermionic systems, identifying non-Gaussianity as a key resource driving the emergence of non-integrable behavior.
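The Gaussian bookkeeping such circuits allow is standard free-fermion technology (not specific to this paper): a Gaussian state is fully characterized by its two-point correlation matrix $C_{ij} = \langle c_i^\dagger c_j\rangle$, and subsystem entanglement follows from the eigenvalues of its restriction.

```python
# Entanglement entropy of a free-fermion Gaussian state from its correlation
# matrix; demonstrated on the ground state of an open hopping chain.
import numpy as np

def entanglement_entropy(C, subsystem):
    # Eigenvalues of the restricted correlation matrix are mode occupations.
    nu = np.linalg.eigvalsh(C[np.ix_(subsystem, subsystem)])
    nu = nu[(nu > 1e-12) & (nu < 1 - 1e-12)]
    return -np.sum(nu * np.log(nu) + (1 - nu) * np.log(1 - nu))

# Half-filled ground state of nearest-neighbor hopping on an open chain.
N = 40
H = -(np.eye(N, k=1) + np.eye(N, k=-1))
eps, U = np.linalg.eigh(H)
C = U[:, eps < 0] @ U[:, eps < 0].conj().T       # C_ij = <c_i^dag c_j>
print(entanglement_entropy(C, list(range(N // 2))))  # grows ~ log N (critical chain)
```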
Researchers from Huazhong University of Science and Technology, École normale supérieure, and SISSA empirically investigate how Large Language Models (LLMs) influence and transform human-written code style. The study finds that LLM-preferred naming conventions are increasingly adopted in real-world GitHub repositories, although the overall impact on code complexity and maintainability shows no clear large-scale trend.
The upcoming Simons Observatory Small Aperture Telescopes aim at achieving a constraint on the primordial tensor-to-scalar ratio $r$ at the level of $\sigma(r=0)\lesssim 0.003$, observing the polarized CMB in the presence of partial sky coverage, cosmic variance, inhomogeneous non-white noise, and Galactic foregrounds. We present three different analysis pipelines able to constrain $r$ given the latest available instrument performance, and compare their predictions on a set of sky simulations that allow us to explore a number of Galactic foreground models and elements of instrumental noise, relevant for the Simons Observatory. The three pipelines employ different combinations of parametric and non-parametric component separation at the map and power spectrum levels, and use $B$-mode purification to estimate the CMB $B$-mode power spectrum. We applied them to a common set of simulated realistic frequency maps, and compared and validated them with focus on their ability to extract robust constraints on the tensor-to-scalar ratio $r$. We evaluated their performance in terms of bias and statistical uncertainty on this parameter. In most of the scenarios the three methodologies achieve similar performance. Nevertheless, several simulations with complex foreground signals lead to a $>2\sigma$ bias on $r$ if analyzed with the default versions of these pipelines, highlighting the need for more sophisticated pipeline components that marginalize over foreground residuals. We show two such extensions, using power-spectrum-based and map-based methods, that are able to fully reduce the bias on $r$ below the statistical uncertainties in all foreground models explored, at a moderate cost in terms of $\sigma(r)$.
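A schematic Fisher estimate of $\sigma(r)$ under a Gaussian likelihood, far simpler than the paper's pipelines and with placeholder spectra (real templates would come from a Boltzmann code such as CAMB or CLASS):

```python
# Idealized Fisher forecast: sigma(r)^-2 = sum_ell fsky (2 ell + 1)/2 *
# (dC_ell/dr / C_ell^tot)^2, with dC/dr given by the r = 1 tensor template.
import numpy as np

def sigma_r(ell, cl_gw_r1, cl_tot, fsky=0.1):
    fisher = np.sum(fsky * (2 * ell + 1) / 2 * (cl_gw_r1 / cl_tot) ** 2)
    return 1.0 / np.sqrt(fisher)

ell = np.arange(30, 300)
cl_gw_r1 = 1e-3 / ell ** 2           # placeholder tensor B-mode template at r = 1
cl_lens = 2e-4 / ell                 # placeholder lensing B-modes
cl_noise = 5e-6 * (1 + 90 / ell)     # placeholder noise with a low-ell knee
print(sigma_r(ell, cl_gw_r1, cl_lens + cl_noise))
```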
Information propagation characterizes how input correlations evolve across layers in deep neural networks. This framework has been well studied using mean-field theory, which assumes infinitely wide networks. However, these assumptions break down for practical, finite-size networks. In this work, we study information propagation in randomly initialized neural networks with finite width and reveal that the boundary between ordered and chaotic regimes exhibits a fractal structure. This shows the fundamental complexity of neural network dynamics, in a setting that is independent of input data and optimization. To extend this analysis beyond multilayer perceptrons, we leverage recently introduced Fourier-based structured transforms, and show that information propagation in convolutional neural networks also follows the same behavior. Our investigation highlights the importance of finite network depth with respect to the tradeoff between separation and robustness.
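The infinite-width baseline this work departs from can be reproduced in a few lines: iterating the mean-field variance map for a random tanh network and evaluating $\chi_1$, whose value 1 marks the order-to-chaos boundary (Poole et al., 2016); the quadrature order and parameter values below are arbitrary choices.

```python
# Mean-field recursion for a random tanh network: fixed point of the variance
# map q -> sigma_w^2 E[tanh(sqrt(q) z)^2] + sigma_b^2, then the stability
# coefficient chi_1 = sigma_w^2 E[tanh'(sqrt(q) z)^2]. Gaussian expectations
# use probabilists' Gauss-Hermite quadrature.
import numpy as np

z, w = np.polynomial.hermite_e.hermegauss(80)    # weight exp(-z^2/2)
w = w / np.sqrt(2 * np.pi)                       # so that E[f] = sum w f(z)

def chi1(sigma_w, sigma_b, n_iter=200):
    q = 1.0
    for _ in range(n_iter):                      # iterate to the fixed point
        q = sigma_w**2 * np.sum(w * np.tanh(np.sqrt(q) * z) ** 2) + sigma_b**2
    sech2 = 1.0 / np.cosh(np.sqrt(q) * z) ** 2   # tanh'(x) = sech(x)^2
    return sigma_w**2 * np.sum(w * sech2 ** 2)

# chi_1 < 1: ordered (inputs collapse); chi_1 > 1: chaotic (inputs separate).
print(chi1(1.0, 0.1))                            # < 1, ordered
print(chi1(2.5, 0.1))                            # > 1, chaotic
```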
Properties of twist grain boundaries (TGB), long known structurally but not tribologically, are simulated under sliding and load, with Au(111) as our test case. The load-free TGB moiré is smooth and superlubric at incommensurate twists. Strikingly, load provokes a first-order structural transformation, where the highest-energy moiré nodes are removed -- an Aubry-type transition for which we provide a Landau theory and a twist-load phase diagram. Upon frictional sliding, the transformation causes a superlubric-to-locked transition, with a huge friction jump and irreversible plastic flow. The predicted phenomena are robust, being recovered also in a Lennard-Jones lattice TGB, and are not exclusive to gold or to metals.
The interplay between electron correlation and nuclear quantum effects makes our understanding of elemental hydrogen a formidable challenge. Here, we present the phase diagram of hydrogen and deuterium at low temperatures and high pressures ($P > 300$ GPa), obtained by accounting for highly accurate electronic and nuclear enthalpies. We evaluated internal electronic energies by diffusion quantum Monte Carlo, while nuclear quantum motion and anharmonicity have been included by the stochastic self-consistent harmonic approximation. Our results show that the long-sought atomic metallic hydrogen, predicted to host room-temperature superconductivity, forms at $577\pm 10$ GPa ($640\pm 14$ GPa in deuterium). Indeed, anharmonicity pushes the stability of this phase towards pressures much larger than previous theoretical estimates or attained experimental values. Before atomization, molecular hydrogen transforms from the conductive phase III to another metallic structure that is still molecular (phase VI) at $422\pm 40$ GPa ($442\pm 30$ GPa in deuterium). We predict clear-cut signatures in optical spectroscopy and DC conductivity that can be used experimentally to distinguish between the two structural transitions. According to our findings, the experimental evidence of metallic hydrogen has so far been limited to molecular phases.