I investigate spacetime singularities from the point of view of the wavefunction of the universe. In order to extend the classical notion of geodesic incompleteness, one has to include the proper time of an observer as a degree of freedom in the Wheeler-DeWitt equation. This leads to a Schrödinger equation along the observer worldline. Near the singularity, as in the classical BKL treatment, I ignore spatial gradients and effectively describe the spacetime around the worldline in the mini-superspace approximation. The problem then proves identical to the spherically symmetric scattering of a quantum particle off a central potential, and singularity avoidance is tantamount to unitary evolution for this system. Standard types of matter (dust, radiation) correspond to regular potentials and thus lead to a bounce. The most singular component, spatial anisotropy, is associated with a conserved charge and yields a negative inverse-square potential -- like standard angular momentum, but with opposite sign. This potential is critical, in that the unitarity of the evolution depends on the actual numerical factor in front of it, i.e., on the anisotropy charge.
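The criticality of the attractive inverse-square potential can be made concrete with a standard quantum-mechanics fact (a textbook result added here for illustration, not part of the abstract): the reduced radial operator and the Hardy inequality that controls it read

```latex
% Reduced radial Hamiltonian with an attractive inverse-square term:
H\,u(r) \;=\; -\,u''(r) \;-\; \frac{\lambda}{r^2}\,u(r),
\qquad u \in L^2(0,\infty).
% Hardy's inequality,
\int_0^\infty |u'(r)|^2\,\mathrm{d}r \;\ge\; \frac{1}{4}\int_0^\infty \frac{|u(r)|^2}{r^2}\,\mathrm{d}r,
% bounds H from below precisely when \lambda \le 1/4.
```

For $\lambda > 1/4$ the particle "falls to the center" and no preferred unitary evolution exists, which is why unitarity hinges on the numerical coefficient of the $1/r^2$ term -- identified in the abstract with the anisotropy charge.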
Hadronic vacuum polarization at low virtualities limits the precision of experimental tests of the standard model through several important physical observables. Here we compute that effect in two-flavor chiral perturbation theory to three loops. Among the master integrals that describe the amplitude, six are elliptic functions of the momentum. Of these, five are new to this work, although all can be related to the three-loop sunset integral. The renormalizability of the amplitude hinges on relations between the master integrals that were not previously known and that are not consequences of the integration-by-parts reduction. Our result is intended to serve as a starting point for phenomenological calculations, as well as for the computation of finite-volume corrections in lattice QCD.
We present a new analysis of cosmic dipole anisotropy using gamma-ray bursts (GRBs) as high-redshift standardizable candles. GRBs are ideal probes for testing the cosmological principle thanks to their high luminosity, wide redshift range, and nearly isotropic sky coverage. For the first time, we employ the luminosity-time (L-T) relation, known in the literature as the bidimensional X-ray Dainotti relation, corrected for redshift evolution, to standardize a sample of 176 long GRBs detected by \textit{Swift}. We test for dipolar modulations in the GRB Hubble diagram using both the Dipole Fit Method and a new approach introduced here, the Anisotropic Residual Analysis Method. Both methods yield consistent results: a dipole amplitude of Ad0.6±0.2A_d \simeq 0.6 \pm 0.2 pointing towards (RA, DEC) (134±30,36±21)\approx (134^\circ \pm 30^{\circ}, -36^\circ \pm 21^{\circ}) (equatorial coordinates). As shown in the Appendix, this corresponds to a boost velocity of the observer with respect to the GRB rest-frame in the antipodal direction from the dipole direction. Extensive isotropy tests and 20,000 Monte Carlo simulations confirm that the detected signal cannot be explained by chance alignments or by the angular distribution of the GRB sample. We also show how, by incorporating a dipole term, residual correlations are eliminated, showing that the dipole model provides a better fit than standard isotropic Λ\LambdaCDM.
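As a hypothetical illustration of a dipole fit of the kind described above (not the authors' code; the sky positions, noise level, and injected amplitude are all invented), a dipolar modulation of Hubble-diagram residuals, $r_i = A_d\,(\hat n_i \cdot \hat d)$, is linear in the 3-vector $A_d \hat d$ and can be recovered by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Mock sky: unit line-of-sight vectors for 176 hypothetical GRBs,
# drawn uniformly on the sphere.
n = 176
phi = rng.uniform(0.0, 2.0 * np.pi, n)
cost = rng.uniform(-1.0, 1.0, n)
sint = np.sqrt(1.0 - cost**2)
nhat = np.column_stack([sint * np.cos(phi), sint * np.sin(phi), cost])

# Inject a dipole of amplitude 0.6 along an arbitrary direction dhat,
# plus Gaussian scatter mimicking residual noise.
dhat = np.array([1.0, -1.0, 0.5])
dhat /= np.linalg.norm(dhat)
A_true = 0.6
res = A_true * (nhat @ dhat) + rng.normal(0.0, 0.05, n)

# Dipole fit: res_i = a . nhat_i with a = A_d * dhat, so a single
# linear least-squares solve returns amplitude and direction at once.
a, *_ = np.linalg.lstsq(nhat, res, rcond=None)
A_fit = np.linalg.norm(a)
dir_fit = a / A_fit
```

The same linearity is what makes the fit robust: no nonlinear search over the dipole direction is needed.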
The proper time of an observer can be introduced as a degree of freedom in quantum cosmology, additional to the existing fields. We review two arguments for using the Schrödinger equation to evolve the corresponding wavefunction. We restrict to solutions in which time acts as a component with negligible backreaction on the metric -- that is, it plays the role of a test field. We apply this idea to various minisuperspace models. In the semiclassical regime we recover expected results: the wavefunction peaks on the classical solution and, in models with a scalar field, the variance of $\zeta$ (a mini-superspace analogue of the comoving curvature perturbation) is conserved. Applied to the no-boundary wavefunction, our model recovers the bouncing behavior of classical global de Sitter space, with small corrections associated to the evolving variance of the wavefunction. Other bouncing solutions do not have any classical analogue. This is the case of a radiation dominated universe, which classically leads to a big-bang singularity but corresponds quantum mechanically to an $s$-wave scattering off a central potential of the form $-r^{-2/3}$. Like the hydrogen atom, this potential is famously made stable by the Heisenberg uncertainty principle. We study the unitary evolution of the wavepacket numerically. During the bounce, the uncertainty and the expectation value of the scale factor become comparable. By selecting a large initial variance, the bounce can be made arbitrarily smooth, with the mean value of the Hubble parameter correspondingly small.
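A minimal numerical sketch of the kind of unitary $s$-wave evolution mentioned above (hypothetical, not the authors' code; grid size, packet parameters, and units are invented) uses a Crank-Nicolson step, which is a Cayley transform of the Hamiltonian and therefore conserves the discrete norm exactly for Hermitian $H$:

```python
import numpy as np

# Radial grid; u(0) = u(L) = 0 is enforced by dropping the endpoints.
N, L = 400, 60.0
r = np.linspace(0.0, L, N + 2)[1:-1]
dr = r[1] - r[0]

# H = -(1/2) d^2/dr^2 - r^(-2/3) acting on the reduced s-wave
# wavefunction u(r), with a second-order finite-difference Laplacian.
lap = (np.diag(np.full(N - 1, 1.0), -1) - 2.0 * np.eye(N)
       + np.diag(np.full(N - 1, 1.0), 1)) / dr**2
H = -0.5 * lap + np.diag(-r**(-2.0 / 3.0))

# Ingoing Gaussian wavepacket, normalized on the grid.
u = np.exp(-(r - 40.0)**2 / 8.0 - 1j * 2.0 * r)
u /= np.sqrt(dr * np.sum(np.abs(u)**2))

# Crank-Nicolson: (1 + i dt H/2) u_new = (1 - i dt H/2) u_old.
dt = 0.01
step = np.linalg.solve(np.eye(N) + 0.5j * dt * H,
                       np.eye(N) - 0.5j * dt * H)
for _ in range(200):
    u = step @ u

norm = dr * np.sum(np.abs(u)**2)  # stays 1 up to round-off
```

Norm conservation here is the discrete counterpart of the unitarity (singularity avoidance) discussed in the abstract.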
In this work we investigate an inverse coefficient problem for the one-dimensional subdiffusion model, which involves a Caputo fractional derivative in time. The inverse problem is to determine two coefficients and multiple parameters (the fractional order and the length of the interval) from one pair of lateral Cauchy data. The lateral Cauchy data are given on disjoint sets in time with a single excitation, and the measurement is made on a time sequence located outside the support of the excitation. We prove two uniqueness results for different lateral Cauchy data. The analysis is based on the solution representation, the analyticity of the observation, and a refined version of inverse Sturm-Liouville theory due to Sini [35]. Our results heavily exploit the memory effect of fractional diffusion for the unique recovery of the coefficients in the model. Several numerical experiments are also presented to complement the analysis.
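The Caputo derivative underlying such subdiffusion models is commonly discretized with the standard L1 scheme. The sketch below (illustrative only, not tied to the paper's numerical experiments) checks the scheme against the closed form $D_t^\alpha t = t^{1-\alpha}/\Gamma(2-\alpha)$:

```python
import math
import numpy as np

def caputo_l1(u, dt, alpha):
    """L1 approximation of the Caputo derivative D_t^alpha u at t_n,
    given samples u[0..n] on a uniform grid of spacing dt, 0<alpha<1."""
    n = len(u) - 1
    k = np.arange(n)
    b = (k + 1.0)**(1.0 - alpha) - k**(1.0 - alpha)  # L1 weights
    diffs = (u[1:] - u[:-1])[::-1]                   # u_{n-k} - u_{n-k-1}
    return dt**(-alpha) / math.gamma(2.0 - alpha) * np.sum(b * diffs)

alpha, dt, n = 0.5, 0.01, 100
t = dt * np.arange(n + 1)
num = caputo_l1(t, dt, alpha)                        # u(t) = t
exact = t[-1]**(1.0 - alpha) / math.gamma(2.0 - alpha)
```

For the linear function $u(t)=t$ the L1 scheme is exact, which makes it a convenient sanity check before applying it to the full subdiffusion equation.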
We propose a setup to directly measure the anyonic statistical angle on a single edge of a fractional quantum Hall system, without requiring independent knowledge of non-universal parameters. We consider a Laughlin edge state bent into a closed loop geometry, where tunneling processes are controllably induced between the endpoints of the loop. To illustrate the underlying physical mechanism, we compute the time-dependent current generated by the injection of multiple anyons, and show that its behavior exhibits distinctive features governed by the anyonic statistical angle. The measured current reflects quantum interference effects due to the time-resolved braiding of anyons at the junction. To establish experimental relevance, we introduce a protocol where anyons are probabilistically injected upstream of the loop via a quantum point contact (QPC) source. Unlike in Fabry-Perot interferometers, where phase jumps occur spontaneously due to stochastic quasi-particle motion, here the phase jumps are deliberately induced by source injections. These events imprint measurable signatures in the cross-correlation noise, enabling a controlled statistical analysis of the braiding phase. We further show that, by varying the magnetic field while remaining within the same fractional quantum Hall plateau, the statistical angle can be extracted without relying on the knowledge of other non-universal system parameters. Our results provide a minimal and accessible platform for probing anyonic statistics using a single chiral edge.
We extend the scope of the dynamical theory of extreme values to cover phenomena that do not happen instantaneously, but evolve over a finite, albeit unknown at the outset, time interval. We consider complex dynamical systems composed of many individual subsystems linked by a network of interactions. As a specific example of the general theory, a neural-network model introduced to describe the electrical activity of the cerebral cortex is analyzed in detail: on the basis of this analysis we propose a novel definition of neuronal cascade, a physiological phenomenon of primary importance. We derive extreme value laws for the statistics of these cascades, both from the point of view of exceedances (which satisfy critical scaling theory) and of block maxima.
The mixing of neutral mesons is sensitive to some of the highest scales probed in laboratory experiments. In light of the planned LHCb Upgrade II, a possible upgrade of Belle II, and the broad interest in flavor physics in the tera-$Z$ phase of the proposed FCC-ee program, we study constraints on new physics contributions to $B_d$ and $B_s$ mixing which can be obtained in these benchmark scenarios. We explore the limitations of this program, and identify the measurement of $|V_{cb}|$ as one of the key ingredients in which progress beyond current expectations is necessary to maximize future sensitivity. We speculate on possible solutions to this bottleneck. Given the current tension with the standard model (SM) in semileptonic $B$ decays, we explore how its resolution may impact the search for new physics in mixing. Even if new physics has the same CKM and loop suppressions of flavor-changing processes as the SM, the sensitivity will reach 2 TeV, and it can be much higher if any SM suppressions are lifted. We illustrate the discovery potential of this program.
``Do Carroll particles move?'' The answer depends on the characteristics of the particle, such as its mass, spin, electric charge, and magnetic moment. A massive Carroll particle (closely related to fractons) does not move; its immobility follows from Carroll boost symmetry, which implies dipole conservation, but not conversely. A massless Carroll particle may propagate by following the Hall law, consistent with the partial breaking of the Carroll boost symmetry. The framework is extended to Carroll field theory. In $d=2$ space dimensions, the Carroll group has a two-fold central extension which allows us to generalize the dynamics to massive and massless particles, including anyons. The anyonic spin and magnetic moment combine with the doubly-extended structure parameterized by two Casimir invariants interpreted as an intrinsic magnetization and a non-commutativity parameter. The extended Carroll particle subjected to an electromagnetic background field moves following a generalized Hall law which includes a Zeeman force. This theory is illustrated by massless, uncharged anyons with doubly-centrally extended structure, which we call exotic photons; they move on the horizon of a black hole, giving rise to an anyonic spin-Hall effect.
We simulate $N_f=2+1$ QCD at the physical point combining open and periodic boundary conditions in a parallel tempering framework, following the original proposal by M. Hasenbusch for $2d$ $\mathrm{CP}^{N-1}$ models, which has recently been implemented and widely employed in $4d$ $\mathrm{SU}(N)$ pure Yang-Mills theories as well. We show that this algorithm achieves a sizable reduction of the auto-correlation time of the topological charge in dynamical-fermion simulations both at zero and finite temperature, allowing one to avoid topology freezing down to lattice spacings as fine as $a \sim 0.02$ fm. Therefore, this implementation of the Parallel Tempering on Boundary Conditions algorithm has the potential to substantially push forward the investigation of QCD vacuum properties by means of lattice simulations.
Tunneling processes offer a promising path for finding signatures of quantum gravity. While tunneling of geometry has long been recognized in the literature, few detailed analyses have been carried out. We investigate covariant Loop Quantum Gravity transitions in the holonomy representation, which naturally encodes the extrinsic curvature of boundary states. We study these amplitudes within the simple framework of the Ponzano-Regge spinfoam model for three-dimensional Euclidean quantum gravity. Having identified the geometries that dominate the spinfoam path integral in the classically forbidden regime when it is formulated in terms of dihedral angles as boundary data, we argue that they correspond to tunneling processes. Finally, we characterize these non-classical geometries and show that their contributions to the spinfoam amplitude are exponentially suppressed in the semiclassical limit via analytic continuation of the discrete gravity action. This work sheds light on quantum black-to-white-hole transitions, in particular clarifying the origin of the exponential suppression of various quantum amplitudes.
The operational formulations of quantum theory are strongly time-oriented. However, to the best of our knowledge, microscopic physics is time-symmetric. We address this tension by showing that the asymmetry of the operational formulations does not reflect a fundamental time-orientation of physics. Instead, it stems from built-in assumptions about the users of the theory. In particular, these formalisms are designed for predicting the future based on information about the past, and the main mathematical objects contain implicit assumptions about the past, but not about the future. The main asymmetry in quantum theory is the difference between knowns and unknowns.
In recent years, the anomalous magnetic moment of the muon has triggered a lot of activity in the lattice QCD community because a persistent tension of about $3.5\,\sigma$ is observed between the phenomenological estimate and the Brookhaven measurement. The current best phenomenological estimate has an uncertainty comparable to the experimental one, and the error is completely dominated by hadronic effects: the leading order hadronic vacuum polarization (HVP) contribution and the hadronic light-by-light (HLbL) scattering contribution. Both are accessible via lattice simulations, and a reduction of the error by a factor of 4 is required in view of the forthcoming experiments at Fermilab and J-PARC, whose results, expected in the next few years, should reduce the experimental uncertainty down to the level of $0.14$ ppm. In this article, I review the status of lattice calculations of those quantities, starting with the HVP. This contribution has now reached sub-percent precision and requires a careful understanding of all sources of systematic errors. The HLbL contribution, which is much smaller, still contributes significantly to the error. This contribution is more challenging to compute, but rapid progress has been made on the lattice in the last few years.
We study the effect of a non-Gaussian average over the random couplings in a complex version of the celebrated Sachdev-Ye-Kitaev (SYK) model. Using a Polchinski-like equation and the Gaussian universality of random tensors, we show that this non-Gaussian averaging leads to a modification of the variance of the Gaussian distribution of couplings at leading order in $N$. We then derive the form of the effective action to all orders. An explicit computation of the modification of the variance in the case of a quartic perturbation is performed for both the complex SYK model mentioned above and the SYK generalization proposed in D. Gross and V. Rosenhaus, JHEP 1702 (2017) 093.
This document describes and analyzes a system for secure and privacy-preserving proximity tracing at large scale. This system, referred to as DP3T, provides a technological foundation to help slow the spread of SARS-CoV-2 by simplifying and accelerating the process of notifying people who might have been exposed to the virus so that they can take appropriate measures to break its transmission chain. The system aims to minimise privacy and security risks for individuals and communities and guarantee the highest level of data protection. The goal of our proximity tracing system is to determine who has been in close physical proximity to a COVID-19 positive person and thus exposed to the virus, without revealing the contact's identity or where the contact occurred. To achieve this goal, users run a smartphone app that continually broadcasts an ephemeral, pseudo-random ID representing the user's phone and also records the pseudo-random IDs observed from smartphones in close proximity. When a patient is diagnosed with COVID-19, she can upload pseudo-random IDs previously broadcast from her phone to a central server. Prior to the upload, all data remains exclusively on the user's phone. Other users' apps can use data from the server to locally estimate whether the device's owner was exposed to the virus through close-range physical proximity to a COVID-19 positive person who has uploaded their data. In case the app detects a high risk, it will inform the user.
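A toy sketch of the ephemeral-ID mechanism described above (heavily simplified and hypothetical: the real DP3T design specifies daily key rotation, epoch scheduling, and specific PRF constructions that are not modeled here; all function names are invented):

```python
import hashlib
import secrets

def eph_ids(day_key: bytes, per_day: int = 4) -> list[bytes]:
    """Derive short ephemeral IDs from a secret day key (toy PRF)."""
    return [hashlib.sha256(day_key + bytes([i])).digest()[:16]
            for i in range(per_day)]

# Phone A broadcasts its ephemeral IDs; phone B, nearby, records them.
sk_a = secrets.token_bytes(32)
heard_by_b = set(eph_ids(sk_a))

# A is diagnosed and uploads the day key; B recomputes the IDs
# locally and checks for overlap with what it recorded -- the
# matching happens entirely on B's phone.
uploaded = sk_a
exposed = bool(heard_by_b & set(eph_ids(uploaded)))

# Negative control: a key from an unrelated phone matches nothing.
stranger = secrets.token_bytes(32)
not_exposed = bool(heard_by_b & set(eph_ids(stranger)))
```

The key privacy property illustrated here is that only the diagnosed user's own keys ever leave a phone, and exposure is computed locally from them.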
We apply the Distillation spatial smearing program to the extraction of the unpolarized isovector valence PDF of the nucleon. The improved volume sampling and control of excited states afforded by distillation lead to a dramatically improved determination of the requisite Ioffe-time pseudo-distribution (pITD). The impact of higher-twist effects is subsequently explored by extending the Wilson line length present in our non-local operators to one half the spatial extent of the lattice ensemble considered. The valence PDF is extracted by analyzing both the matched Ioffe-time distribution (ITD), as well as a direct matching of the pITD to the PDF. Through the development of a novel prescription to obtain the PDF from the pITD, we establish a concerning deviation of the pITD from the expected DGLAP evolution of the pseudo-PDF. The expected DGLAP evolution is recovered once a discretization term is introduced into the PDF extractions. Observation and correction of this discrepancy further highlights the utility of distillation in such structure studies.
We introduce a Hamiltonian framework tailored to degrees of freedom (DOF) of field theories that reside in suitable 3-dimensional open regions, and then apply it to the gravitational DOF of general relativity. Specifically, these DOF now refer to open regions of null infinity, and of black hole (and cosmological) horizons representing equilibrium situations. At null infinity the new Hamiltonian framework yields the well-known BMS fluxes and charges. By contrast, all fluxes vanish identically at black hole (and cosmological) horizons just as one would physically expect. In a companion paper we showed that, somewhat surprisingly, the geometry and symmetries of these two physical configurations descend from a common framework. This paper reinforces that theme: Very different physics emerges in the two cases from a common Hamiltonian framework because of the difference in the nature of degrees of freedom. Finally, we compare and contrast this Hamiltonian approach with those available in the literature.
We use the effective field theory of dark energy (EFT of DE) formalism to constrain dark energy models belonging to the Horndeski class with the recent Planck 2015 CMB data. The space of theories is spanned by a certain number of parameters determining the linear cosmological perturbations, while the expansion history is set to that of a standard $\Lambda$CDM model. We always demand that the theories be free of fatal instabilities. Additionally, we consider two optional conditions, namely that scalar and tensor perturbations propagate with subluminal speed. Such criteria severely restrict the allowed parameter space and are thus very effective in shaping the posteriors. As a result, we confirm that no theory performs better than $\Lambda$CDM when CMB data alone are analysed. Indeed, the healthy dark energy models considered here are not able to reproduce those phenomenological behaviours of the effective Newton constant and gravitational slip parameters that, according to previous studies, best fit the data.
The Einstein equations allow solutions containing closed timelike curves. These have generated much puzzlement and the suspicion that they could imply paradoxes. I show that the puzzlement and the paradoxes disappear if we carefully discuss the physics of irreversible phenomena in the context of these solutions.
We illustrate the mathematical theory of entropy production in repeated quantum measurement processes developed in a previous work by studying examples of quantum instruments displaying various interesting phenomena and singularities. We emphasize the role of the thermodynamic formalism, and give many examples of quantum instruments whose resulting probability measures on the space of infinite sequences of outcomes (shift space) do not have the (weak) Gibbs property. We also discuss physically relevant examples where the entropy production rate satisfies a large deviation principle but fails to obey the central limit theorem and the fluctuation-dissipation theorem. Throughout the analysis, we explore the connections with other, a priori unrelated topics like functions of Markov chains, hidden Markov models, matrix products and number theory.