An empirical study evaluated the impact of the temperature parameter on Large Language Model creativity, revealing that higher temperatures yield only a weak positive correlation with perceived novelty (β=0.308) while moderately decreasing coherence (β=0.240). The research further demonstrated no statistically significant effect of temperature on story typicality or cohesion, questioning its role as a primary creativity control.
Memory Gym presents a suite of 2D partially observable environments, namely Mortar Mayhem, Mystery Path, and Searing Spotlights, designed to benchmark memory capabilities in decision-making agents. These environments, originally with finite tasks, are expanded into innovative, endless formats, mirroring the escalating challenges of cumulative memory games such as "I packed my bag". This progression in task design shifts the focus from merely assessing sample efficiency to also probing the levels of memory effectiveness in dynamic, prolonged scenarios. To address the gap in available memory-based Deep Reinforcement Learning baselines, we introduce an implementation within the open-source CleanRL library that integrates Transformer-XL (TrXL) with Proximal Policy Optimization. This approach utilizes TrXL as a form of episodic memory, employing a sliding window technique. Our comparative study between the Gated Recurrent Unit (GRU) and TrXL reveals varied performances across our finite and endless tasks. TrXL, on the finite environments, demonstrates superior effectiveness over GRU, but only when utilizing an auxiliary loss to reconstruct observations. Notably, GRU makes a remarkable resurgence in all endless tasks, consistently outperforming TrXL by significant margins. Website and Source Code: this https URL
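The TrXL-as-episodic-memory idea described above can be illustrated with a minimal sketch of a sliding-window activation cache (hypothetical toy code, not the CleanRL implementation; the class and parameter names are invented):

```python
from collections import deque
import numpy as np

class SlidingWindowMemory:
    """Toy episodic memory: keep only the last `window` hidden states so an
    attention layer can attend over recent history, loosely in the spirit of
    TrXL-style cached activations."""
    def __init__(self, window, dim):
        self.dim = dim
        self.buffer = deque(maxlen=window)  # old states are evicted automatically

    def add(self, hidden_state):
        self.buffer.append(np.asarray(hidden_state, dtype=float))

    def context(self):
        # Returns a (t, dim) array of the most recent states, with t <= window.
        if not self.buffer:
            return np.zeros((0, self.dim))
        return np.stack(self.buffer)

mem = SlidingWindowMemory(window=4, dim=2)
for t in range(6):
    mem.add([t, t + 1])
ctx = mem.context()
print(ctx.shape)  # (4, 2): only the last 4 steps are retained
```

The fixed-size buffer is what makes the memory usable in endless tasks: the context passed to the network stays bounded no matter how long the episode runs.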
Distributed quantum information processing seeks to overcome the scalability limitations of monolithic quantum devices by interconnecting multiple quantum processing nodes via classical and quantum communication. This approach extends the capabilities of individual devices, enabling access to larger problem instances and novel algorithmic techniques. Beyond increasing qubit counts, it also enables qualitatively new capabilities, such as joint measurements on multiple copies of high-dimensional quantum states. The distinction between single-copy and multi-copy access reveals important differences in task complexity and helps identify which computational problems stand to benefit from distributed quantum resources. At the same time, it highlights trade-offs between classical and quantum communication models and the practical challenges involved in realizing them experimentally. In this review, we contextualize recent developments by surveying the theoretical foundations of distributed quantum protocols and examining the experimental platforms and algorithmic applications that realize them in practice.
Quantum Hall edge channels partition electric charge over N chiral (uni-directional) modes. Intermode scattering leads to partition noise, observed in graphene p-n junctions. While inelastic scattering suppresses this noise by averaging out fluctuations, we show that pure (quasi-elastic) dephasing may enhance the partition noise. The noise power increases by up to 50% for two modes, with a general enhancement factor of 1+1/N in the strong-dephasing limit. This counterintuitive effect is explained in the framework of monitored quantum transport, arising from the self-averaging of quantum trajectories.
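The quoted enhancement factor 1 + 1/N can be checked directly; for two modes it reproduces the 50% noise increase stated above (a trivial arithmetic sketch, not the paper's derivation):

```python
def enhancement_factor(n_modes):
    """Strong-dephasing partition-noise enhancement factor 1 + 1/N
    quoted in the abstract."""
    return 1.0 + 1.0 / n_modes

print(enhancement_factor(2))  # 1.5, i.e. the quoted 50% increase for two modes
print(enhancement_factor(10))  # 1.1: the effect shrinks as the mode count grows
```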
The Gaia Galactic survey mission is designed and optimized to obtain astrometry, photometry, and spectroscopy of nearly two billion stars in our Galaxy. Yet as an all-sky multi-epoch survey, Gaia also observes several million extragalactic objects down to a magnitude of G~21 mag. Due to the nature of the Gaia onboard selection algorithms, these are mostly point-source-like objects. Using data provided by the satellite, we have identified quasar and galaxy candidates via supervised machine learning methods, and estimate their redshifts using the low resolution BP/RP spectra. We further characterise the surface brightness profiles of host galaxies of quasars and of galaxies from pre-defined input lists. Here we give an overview of the processing of extragalactic objects, describe the data products in Gaia DR3, and analyse their properties. Two integrated tables contain the main results for a high completeness, but low purity (50-70%), set of 6.6 million candidate quasars and 4.8 million candidate galaxies. We provide queries that select purer sub-samples of these containing 1.9 million probable quasars and 2.9 million probable galaxies (both 95% purity). We also use high quality BP/RP spectra of 43 thousand high probability quasars over the redshift range 0.05-4.36 to construct a composite quasar spectrum spanning restframe wavelengths from 72-100 nm.
The ESA Euclid mission will measure the photometric redshifts of billions of galaxies in order to provide an accurate 3D view of the Universe at optical and near-infrared wavelengths. Photometric redshifts are determined by the PHZ processing function on the basis of the multi-wavelength photometry of Euclid and ground-based observations. In this paper, we describe the PHZ processing used for the Euclid Quick Data Release, the output products, and their validation. The PHZ pipeline is responsible for the following main tasks: source classification into star, galaxy, and QSO classes based on photometric colours; determination of photometric redshifts and of physical properties of galaxies. The classification is able to provide a star sample with a high level of purity, a highly complete galaxy sample, and reliable probabilities of belonging to those classes. The identification of QSOs is more problematic: photometric information seems to be insufficient to accurately separate QSOs from galaxies. The performance of the pipeline in the determination of photometric redshifts has been tested using the COSMOS2020 catalogue and a large sample of spectroscopic redshifts. The results are in line with expectations: the precision of the estimates is compatible with Euclid requirements, while, as expected, a bias correction is needed to achieve the accuracy level required for the cosmological probes. Finally, the pipeline provides reliable estimates of the physical properties of galaxies, in good agreement with findings from the COSMOS2020 catalogue, except for an unrealistically large fraction of very young galaxies with very high specific star-formation rates. The application of appropriate priors is, however, sufficient to obtain reliable physical properties for those problematic objects. We present several areas for improvement for future Euclid data releases.
Continuous-variable quantum systems are foundational to quantum computation, communication, and sensing. While traditional representations using wave functions or density matrices are often impractical, the tomographic picture of quantum mechanics provides an accessible alternative by associating quantum states with classical probability distribution functions called tomograms. Despite its advantages, including compatibility with classical statistical methods, the tomographic method remains underutilized due to a lack of robust estimation techniques. This work addresses this gap by introducing a non-parametric \emph{kernel quantum state estimation} (KQSE) framework for reconstructing quantum states and their trace characteristics from noisy data, without prior knowledge of the state. In contrast to existing methods, KQSE yields estimates of the density matrix in various bases, as well as trace quantities such as purity, higher moments, overlap, and trace distance, with a near-optimal convergence rate of $\tilde{O}(T^{-1})$, where $T$ is the total number of measurements. KQSE is robust for multimodal, non-Gaussian states, making it particularly well suited for characterizing states essential for quantum science.
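The core idea of estimating a classical distribution (a tomogram) from noisy samples can be sketched with a plain Gaussian kernel density estimator (illustrative only; the KQSE estimator and its convergence guarantees are more elaborate, and the sample model below is invented):

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, size=2000)  # stand-in for homodyne-style quadrature data

def kde(x, data, bandwidth=0.3):
    """Gaussian kernel density estimate evaluated at the points x."""
    u = (x[:, None] - data[None, :]) / bandwidth
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * bandwidth * np.sqrt(2 * np.pi))

# The estimated density should integrate to ~1 over the support of the data.
xs = np.linspace(-5, 5, 401)
mass = float(kde(xs, samples).sum() * (xs[1] - xs[0]))
print(mass)  # close to 1: the kernel estimate is normalized
```

A full tomographic reconstruction would estimate such distributions for many quadrature angles and invert them into a density matrix; the sketch only shows the non-parametric estimation step.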
The fidelity of operations on a solid-state quantum processor is ultimately bounded by decoherence effects induced by a fluctuating environment. Characterizing environmental fluctuations is challenging because the acquisition time of experimental protocols limits the precision with which the environment can be measured and may obscure the detailed structure of these fluctuations. Here we present a real-time Bayesian method for estimating the relaxation rate of a qubit, leveraging a classical controller with an integrated field-programmable gate array (FPGA). Using our FPGA-powered Bayesian method, we adaptively and continuously track the relaxation-time fluctuations of two fixed-frequency superconducting transmon qubits, which exhibit average relaxation times of approximately 0.17 ms and occasionally exceed 0.5 ms. Our technique allows for the estimation of these relaxation times in a few milliseconds, more than two orders of magnitude faster than previous nonadaptive methods, and allows us to observe fluctuations up to 5 times the qubit's average relaxation rates on significantly shorter timescales than previously reported. Our statistical analysis reveals that these fluctuations occur on much faster timescales than previously understood, with two-level-system switching rates reaching up to 10 Hz. Our work offers an appealing solution for rapid relaxation-rate characterization in device screening and for improved understanding of fast relaxation dynamics.
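The adaptive Bayesian tracking described above can be sketched, under simplifying assumptions, as a grid-based posterior update over candidate relaxation rates, where each shot reports whether the qubit is still excited after a wait time (toy code; the fixed wait time, grid, and rates are invented, and the FPGA protocol in the paper adapts the delay rather than fixing it):

```python
import numpy as np

rng = np.random.default_rng(0)

# Grid of candidate relaxation rates Gamma (in 1/ms); the "true" rate is used
# only to simulate measurement outcomes.
gamma_grid = np.linspace(0.1, 20.0, 400)
gamma_true = 1 / 0.17  # ~0.17 ms average T1, as quoted in the abstract
posterior = np.ones_like(gamma_grid) / gamma_grid.size

for _ in range(200):
    t_wait = 0.1  # ms; survival probability of the excited state is exp(-Gamma * t)
    outcome = rng.random() < np.exp(-gamma_true * t_wait)  # True = still excited
    likelihood = np.exp(-gamma_grid * t_wait) if outcome else 1 - np.exp(-gamma_grid * t_wait)
    posterior *= likelihood
    posterior /= posterior.sum()  # renormalize after each Bayes update

gamma_est = float((gamma_grid * posterior).sum())  # posterior-mean rate
print(f"estimated T1 ~ {1 / gamma_est:.3f} ms")
```

Each shot costs roughly one wait time, which is why a Bayesian update per shot can converge in milliseconds rather than requiring full relaxation curves.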
The FAIR Universe HiggsML Uncertainty Challenge focused on measuring the physical properties of elementary particles with imperfect simulators. Participants were required to compute and report confidence intervals for a parameter of interest regarding the Higgs boson while accounting for various systematic (epistemic) uncertainties. The dataset is a tabular dataset of 28 features and 280 million instances. Each instance represents a simulated proton-proton collision as observed at CERN's Large Hadron Collider in Geneva, Switzerland. The features of these simulations were chosen to capture key characteristics of different types of particles. These include primary attributes, such as the energy and three-dimensional momentum of the particles, as well as derived attributes, which are calculated from the primary ones using domain-specific knowledge. Additionally, a label feature designates each instance's type of proton-proton collision, distinguishing the Higgs boson events of interest from three background sources. As outlined in this paper, the permanent release of the dataset allows long-term benchmarking of new techniques. The leading submissions, including Contrastive Normalising Flows and Density Ratios estimation through classification, are described. Our challenge has brought together the physics and machine learning communities to advance our understanding and methodologies in handling systematic uncertainties within AI techniques.
Symmetry-protected topological phases cannot be described by any local order parameter and are beyond the conventional symmetry-breaking paradigm for understanding quantum matter. They are characterized by topological boundary states robust against perturbations that respect the protecting symmetry. In a clean system without disorder, these edge modes typically only occur for the ground states of systems with a bulk energy gap and would not survive at finite temperatures due to mobile thermal excitations. Here, we report the observation of a distinct type of topological edge modes, which are protected by emergent symmetries and persist even up to infinite temperature, with an array of 100 programmable superconducting qubits. In particular, through digital quantum simulation of the dynamics of a one-dimensional disorder-free "cluster" Hamiltonian, we observe robust long-lived topological edge modes over up to 30 cycles at a wide range of temperatures. By monitoring the propagation of thermal excitations, we show that despite the free mobility of these excitations, their interactions with the edge modes are substantially suppressed in the dimerized regime due to an emergent $U(1)\times U(1)$ symmetry, resulting in an unusually prolonged lifetime of the topological edge modes even at infinite temperature. In addition, we exploit these topological edge modes as logical qubits and prepare a logical Bell state, which exhibits persistent coherence in the dimerized and off-resonant regime, despite the system being disorder-free and far from its ground state. Our results establish a viable digital simulation approach to experimentally exploring a variety of finite-temperature topological phases and demonstrate a potential route to construct long-lived robust boundary qubits that survive to infinite temperature in disorder-free systems.
The quantum brachistochrone problem addresses the fundamental challenge of achieving the quantum speed limit in applications aiming to realize a given unitary operation in a quantum system. Specifically, it concerns optimizing the transformation of quantum states through controlled Hamiltonians, which form a small subset of the space of the system's observables. Here we introduce a broad family of completely integrable brachistochrone protocols, which arise from a judicious choice of the control Hamiltonian subset. Furthermore, we demonstrate how the inherent stability of the completely integrable protocols makes them numerically tractable, and therefore practicable, as opposed to their non-integrable counterparts.
We introduce Equivariant Neural Eikonal Solvers, a novel framework that integrates Equivariant Neural Fields (ENFs) with Neural Eikonal Solvers. Our approach employs a single neural field where a unified shared backbone is conditioned on signal-specific latent variables - represented as point clouds in a Lie group - to model diverse Eikonal solutions. The ENF integration ensures equivariant mapping from these latent representations to the solution field, delivering three key benefits: enhanced representation efficiency through weight-sharing, robust geometric grounding, and solution steerability. This steerability allows transformations applied to the latent point cloud to induce predictable, geometrically meaningful modifications in the resulting Eikonal solution. By coupling these steerable representations with Physics-Informed Neural Networks (PINNs), our framework accurately models Eikonal travel-time solutions while generalizing to arbitrary Riemannian manifolds with regular group actions. This includes homogeneous spaces such as Euclidean, position-orientation, spherical, and hyperbolic manifolds. We validate our approach through applications in seismic travel-time modeling of 2D, 3D, and spherical benchmark datasets. Experimental results demonstrate superior performance, scalability, adaptability, and user controllability compared to existing Neural Operator-based Eikonal solver methods.
Neural quantum states (NQS) provide flexible wavefunction parameterizations for numerical studies of quantum many-body physics. While inspired by deep learning, it remains unclear to what extent NQS share characteristics with neural networks used for standard machine learning tasks. We demonstrate that NQS exhibit the double descent phenomenon, a key feature of modern deep learning, where generalization worsens as network size increases before improving again in an overparameterized regime. Notably, we find the second descent to occur only for network sizes much larger than the Hilbert space dimension, indicating that NQS typically operate in an underparameterized regime, where increasing network size can degrade generalization. Our analysis reveals that the optimal network size in this regime depends on the number of unique training samples, highlighting the importance of sampling strategies. These findings suggest the need for symmetry-aware, physics-informed architecture design, rather than directly adopting machine learning heuristics.
The Euclid mission of the European Space Agency will deliver weak gravitational lensing and galaxy clustering surveys that can be used to constrain the standard cosmological model and extensions thereof. We present forecasts from the combination of these surveys on the sensitivity to cosmological parameters including the summed neutrino mass $M_\nu$ and the effective number of relativistic species $N_{\rm eff}$ in the standard $\Lambda$CDM scenario and in a scenario with dynamical dark energy ($w_0 w_a$CDM). We compare the accuracy of different algorithms predicting the nonlinear matter power spectrum for such models. We then validate several pipelines for Fisher matrix and MCMC forecasts, using different theory codes, algorithms for numerical derivatives, and assumptions concerning the non-linear cut-off scale. The Euclid primary probes alone will reach a sensitivity of $\sigma(M_\nu)=56\,$meV in the $\Lambda$CDM+$M_\nu$ model, whereas the combination with CMB data from Planck is expected to achieve $\sigma(M_\nu)=23\,$meV and raise the evidence for a non-zero neutrino mass to at least the $2.6\sigma$ level. This can be pushed to a $4\sigma$ detection if future CMB data from LiteBIRD and CMB Stage-IV are included. In combination with Planck, Euclid will also deliver tight constraints on $\Delta N_{\rm eff}<0.144$ (95% CL) in the $\Lambda$CDM+$M_\nu$+$N_{\rm eff}$ model, or $\Delta N_{\rm eff}<0.063$ when future CMB data are included. When floating $(w_0, w_a)$, we find that the sensitivity to $N_{\rm eff}$ remains stable, while that to $M_\nu$ degrades at most by a factor of 2. This work illustrates the complementarity between the Euclid spectroscopic and imaging/photometric surveys and between Euclid and CMB constraints. Euclid will have great potential for measuring the neutrino mass and excluding well-motivated scenarios with additional relativistic particles.
When metals are magnetized, emulsions phase separate, or galaxies cluster, domain walls and patterns form and irremediably coarsen over time. Such coarsening is universally driven by diffusive relaxation toward equilibrium. Here, we discover an inertial counterpart - wave coarsening - in active elastic media, where vibrations emerge and spontaneously grow in wavelength, period, and amplitude, before a globally synchronized state called a time crystal forms. We observe wave coarsening in one- and two-dimensional solids and capture its dynamical scaling. We further arrest the process by breaking momentum conservation and reveal a far-from-equilibrium nonlinear analogue to chiral topological edge modes. Our work unveils the crucial role of symmetries in the formation of time crystals and opens avenues for the control of nonlinear vibrations in active materials.
We investigate the performance of error mitigation via measurement of conserved symmetries on near-term devices. We present two protocols to measure conserved symmetries during the bulk of an experiment, and develop a zero-cost post-processing protocol which is equivalent to a variant of the quantum subspace expansion. We develop methods for inserting global and local symmetries into quantum algorithms, and for adjusting natural symmetries of the problem to boost their mitigation against different error channels. We demonstrate these techniques on two- and four-qubit simulations of the hydrogen molecule (using a classical density-matrix simulator), finding up to an order of magnitude reduction of the error in obtaining the ground-state dissociation curve.
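Symmetry-based postselection, the simplest form of the mitigation discussed above, can be sketched as follows (a hypothetical two-qubit example; the state, error model, and rates are invented and are not the paper's protocols):

```python
import random

# Toy symmetry verification: the ideal state is the superposition of |01> and
# |10>, which lives in the odd Z-parity sector. A single bit-flip error moves a
# shot into the even-parity sector, so postselecting on odd parity discards
# exactly the corrupted shots in this simple error model.
random.seed(1)

shots = []
for _ in range(1000):
    bits = list(random.choice([(0, 1), (1, 0)]))  # sample the ideal outcomes
    if random.random() < 0.1:                     # 10% of shots get one bit flip
        bits[random.randrange(2)] ^= 1
    shots.append(tuple(bits))

kept = [s for s in shots if (s[0] + s[1]) % 2 == 1]  # enforce odd Z-parity
survival = len(kept) / len(shots)
print(survival)  # roughly 0.9: corrupted shots are detected and discarded
```

The trade-off visible even in this sketch is the sampling cost: mitigation comes from throwing shots away, which is why the zero-cost post-processing variant described in the abstract is attractive.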
Encoding quantum information in quantum states with disjoint wave-function support and noise-insensitive energies is the key idea behind qubit protection. While fully protected qubits are expected to offer exponential protection against both energy relaxation and pure dephasing, simpler circuits may grant partial protection with currently achievable parameters. Here, we study a fluxonium circuit in which the wave-functions are engineered to minimize their overlap while benefiting from a first-order-insensitive flux sweet spot. Taking advantage of a large superinductance ($L\sim 1~\mu\mathrm{H}$), our circuit incorporates a resonant tunneling mechanism at zero external flux that couples states with the same fluxon parity, thus enabling bifluxon tunneling. The states $|0\rangle$ and $|1\rangle$ are encoded in wave-functions with parities 0 and 1, respectively, ensuring a minimal form of protection against relaxation. Two-tone spectroscopy reveals the energy level structure of the circuit and the presence of $4\pi$ quantum-phase slips between different potential wells corresponding to $m=\pm 1$ fluxons, which can be precisely described by a simple fluxonium Hamiltonian or by an effective bifluxon Hamiltonian. Despite suboptimal fabrication, the measured relaxation ($T_1 = 177\pm 3~\mu\mathrm{s}$) and dephasing ($T_2^E = 75\pm 5~\mu\mathrm{s}$) times not only demonstrate the relevance of our approach but also open an alternative direction towards quantum computing using partially protected fluxonium qubits.
This is the third in a series of three papers in which we study a lattice gas subject to Kawasaki dynamics at inverse temperature $\beta>0$ in a large finite box $\Lambda_\beta \subset \mathbb{Z}^2$ whose volume depends on $\beta$. Each pair of neighbouring particles has a negative binding energy $-U<0$, while each particle has a positive activation energy $\Delta>0$. The initial configuration is drawn from the grand-canonical ensemble restricted to the set of configurations where all the droplets are subcritical. Our goal is to describe, in the metastable regime $\Delta \in (U,2U)$ and in the limit as $\beta\to\infty$, how and when the system nucleates, i.e., creates a critical droplet somewhere in $\Lambda_\beta$ that subsequently grows by absorbing particles from the surrounding gas. In the first paper we showed that subcritical droplets behave as quasi-random walks. In the second paper we used the results in the first paper to analyse how subcritical droplets form and dissolve on multiple space-time scales when the volume is moderately large, namely, $|\Lambda_\beta| = \mathrm{e}^{\theta\beta}$ with $\Delta < \theta < 2\Delta-U$. In the present paper we consider the setting where the volume is very large, namely, $|\Lambda_\beta| = \mathrm{e}^{\Theta\beta}$ with $\Delta < \Theta < \Gamma-(2\Delta-U)$, where $\Gamma$ is the energy of the critical droplet in the local model with fixed volume, and use the results in the first two papers to identify the nucleation time and the tube of typical trajectories towards nucleation. We will see that in a very large volume critical droplets appear more or less independently in boxes of moderate volume, a phenomenon referred to as homogeneous nucleation. One of the key ingredients in the proof is an estimate showing that no information can travel between these boxes on relevant time scales.
Symmetric quantum states are fascinating objects. They correspond to multipartite systems that remain invariant under particle permutations. This symmetry is reflected in their compact mathematical characterisation but also in their unique physical properties: they exhibit genuine multipartite entanglement and notable robustness against noise and perturbations. These features make such states particularly well-suited for a wide range of quantum information tasks. Here, we provide a pedagogic analysis of the mathematical structure and relevant physical properties of this class of states. Beyond the theoretical framework, robust tools for certifying and verifying the properties of symmetric states in experimental settings are essential. In this regard, we explore how standard techniques -- such as quantum state tomography, Bell tests, and entanglement witnesses -- can be specifically adapted for symmetric systems. Next, we provide an up-to-date overview of the most relevant applications in which these states outperform other classes of states in specific tasks. Specifically, we address their central role in quantum metrology, highlight their use in quantum error correction codes, and examine their contributions to computation and communication tasks. Finally, we present the current state-of-the-art in their experimental generation, ranging from systems of cold atoms to implementations via quantum algorithms. We also review the most significant results obtained in the different experimental realizations. Despite the notable progress made in recent years with regard to the characterisation and application of symmetric quantum states, several intriguing questions remain open. We conclude this review by discussing some of these open problems and outlining promising directions for future research.
Collective epithelial migration relies on topological rearrangements of the intercellular junctions, which allow cells to intercalate without losing confluency. In silico studies have provided a clear indication that this process could occur via a two-step phase transition, where a hierarchy of topological excitations progressively transforms an epithelial layer from a crystalline solid to an isotropic liquid, via an intermediate hexatic liquid crystal phase. Yet, the fundamental mechanism behind this process and its implications for collective cell behavior are presently unknown. In this article, we show that the onset of collective cell migration in cell-resolved models of epithelial layers takes place via an activity-driven melting transition, characterized by an exponentially divergent correlation length across the solid/hexatic phase boundary. Using a combination of numerical simulations and Renormalization Group analysis, we show that the availability of topologically distinct rearrangements - known as T1 and T2 processes - and of a non-thermal route to melting renders the transition significantly more versatile and tunable than in two-dimensional passive matter. Specifically, the relative frequency of T1 and T2 processes and the "bare" stiffness of the cell layer affect the divergence of positional correlations within a well-defined spectrum of critical behaviors. Suppressing T1 processes changes the nature of the transition by preventing collective migration in favor of a cellular analog of surface sublimation.