Universidade do Minho
05 Dec 2024
In this work, we develop a symmetry-based classification of Chern phases in honeycomb photonic crystals, considering arbitrary nonreciprocal couplings compatible with energy conservation. Our analysis focuses on crystals formed through nonreciprocal perturbations of photonic graphene. These perturbations, which can have arbitrary spatial variations, are generally described by scalar and vector fields. Using a tight-binding model, we consider the most general nonreciprocal interactions, including gyromagnetic, pseudo-Tellegen, and moving medium responses, and examine how the corresponding nonreciprocal fields influence the crystal's topology. Our findings reveal that nonreciprocal interactions alone are insufficient to induce a topologically nontrivial phase. Instead, a nontrivial p6m component in the nonreciprocal fields is required to open a bandgap and achieve a non-zero Chern number. These results provide a symmetry-based roadmap for engineering photonic topological phases via nonreciprocal perturbations of photonic graphene, offering practical guidelines for designing topological phases in graphene-like photonic crystals.
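The abstract's tight-binding setting can be illustrated with the standard Haldane model, the textbook honeycomb tight-binding model in which time-reversal-breaking next-nearest-neighbour phases open a gap with a non-zero Chern number. The sketch below computes the Chern number with the Fukui-Hatsugai-Suzuki lattice method; the paper's photonic couplings are not reproduced, and `t1`, `t2`, `phi`, `M` are illustrative parameters placing the model in the Chern phase.

```python
import numpy as np

# Fukui-Hatsugai-Suzuki lattice computation of the Chern number for the
# Haldane honeycomb model (a tight-binding toy analog, not the paper's
# photonic crystal).  Illustrative parameters in the topological phase:
t1, t2, phi, M = 1.0, 0.1, np.pi / 2, 0.0

a1 = np.array([np.sqrt(3.0), 0.0])            # Bravais primitive vectors
a2 = np.array([np.sqrt(3.0) / 2, 1.5])
b = np.array([a1, -a2, a2 - a1])              # next-nearest-neighbour triad
g1 = np.array([2 * np.pi / np.sqrt(3.0), -2 * np.pi / 3])  # reciprocal basis
g2 = np.array([0.0, 4 * np.pi / 3])

def lower_state(k):
    """Lower-band eigenvector of the Bloch Hamiltonian (periodic gauge)."""
    f = t1 * (1 + np.exp(-1j * (k @ a1)) + np.exp(-1j * (k @ a2)))
    dz = M - 2 * t2 * np.sin(phi) * np.sum(np.sin(b @ k))
    H = np.array([[dz, f], [np.conj(f), -dz]])
    return np.linalg.eigh(H)[1][:, 0]

N = 30
u = np.empty((N, N, 2), dtype=complex)
for i in range(N):
    for j in range(N):
        u[i, j] = lower_state((i / N) * g1 + (j / N) * g2)

# Berry flux per plaquette from link variables; indices wrap around the
# Brillouin-zone torus, and the total flux is 2*pi times the Chern number.
flux = 0.0
for i in range(N):
    for j in range(N):
        ip, jp = (i + 1) % N, (j + 1) % N
        prod = (np.vdot(u[i, j], u[ip, j]) * np.vdot(u[ip, j], u[ip, jp])
                * np.vdot(u[ip, jp], u[i, jp]) * np.vdot(u[i, jp], u[i, j]))
        flux += np.angle(prod)
C = round(flux / (2 * np.pi))
print(C)   # |C| = 1 in the gapped phase (sign depends on conventions)
```

The link-variable construction makes the result an exact integer even on a coarse momentum grid, which is why this method is the standard numerical diagnostic for Chern phases.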
We use effective string theory (EST) to describe a toroidal 2d domain wall embedded in a 3d torus. In particular, we compute the free energy of the domain wall in an expansion in inverse powers of the area, up to the second non-universal order, which involves the Wilson coefficient $\gamma_3$. In order to test our predictions, we simulate the 3d Ising model with anti-periodic boundary conditions, using a two-step flat-histogram Monte Carlo method in an ensemble over the boundary coupling $J$ that delivers high-precision free energy data. The predictions from EST reproduce the lattice results with only two adjustable parameters: the string tension $1/\ell_s^2$ and $\gamma_3$. We find $\gamma_3/|\gamma_3^{\text{min}}| = -0.82(15)$, which is compatible with previous estimates.
The concept of prominence is familiar to signal engineers, topographers and mountaineers. We introduce Prominence $\mathcal{P}$ as a discriminator of gravitational wave (GW) signals. We treat black hole and neutron star binaries as astrophysical background sources, and show how $\mathcal{P}$ can be used to distinguish between GW spectra produced by first-order phase transitions, domain walls and cosmic strings, and combinations thereof. Prominence can also be used to discriminate between these and off-piste sources of GWs. The uncertainty in the measured energy density in GWs at Pulsar Timing Arrays needs to be smaller than $\sim 4\%$ for $\mathcal{P}$ to achieve discrimination at $3\sigma$. LISA and ET data are expected to have sufficiently small uncertainties that Prominence can play a central role in their analysis.
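The generic signal-processing notion of prominence the abstract borrows can be demonstrated on a synthetic two-peak spectrum with `scipy.signal`; this is only the topographic concept, not the paper's GW discriminator, and the spectrum is an invented stand-in.

```python
import numpy as np
from scipy.signal import find_peaks, peak_prominences

# Topographic prominence on a toy spectrum: two Gaussian bumps of
# heights 1.0 and 0.5 over a flat baseline.  Prominence measures how
# far each peak rises above the lowest contour enclosing it.
x = np.linspace(0, 10, 2001)
y = 1.0 * np.exp(-((x - 3) ** 2) / 0.1) + 0.5 * np.exp(-((x - 7) ** 2) / 0.1)

peaks, _ = find_peaks(y, prominence=0.05)
prom = peak_prominences(y, peaks)[0]
print(x[peaks], prom)   # peaks near x = 3 and x = 7, prominences ~ 1.0 and ~ 0.5
```

For well-separated peaks over a deep valley, each prominence reduces to the peak height above the valley floor, which is what makes it a natural discriminator between broad and sharply peaked spectra.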
We show that short-range interactions are irrelevant around gapless ground-state delocalization-localization transitions driven by quasiperiodicity in interacting fermionic chains. In the presence of interactions, these transitions separate Luttinger liquid and Anderson glass phases. Remarkably, close to criticality, we find that excitations become effectively non-interacting. By formulating a many-body generalization of a recently developed method to obtain single-particle localization phase diagrams, we carry out precise calculations of critical points between Luttinger liquid and Anderson glass phases and find that the correlation length critical exponent takes the value $\nu = 1.001 \pm 0.007$, compatible with $\nu = 1$, known exactly at the non-interacting critical point. We also show that other critical exponents, such as the dynamical exponent $z$ and a many-body analog of the fractal dimension, are compatible with the exponents obtained at the non-interacting critical point. Notably, we find that the transitions are accompanied by the emergence of a many-body generalization of previously found single-particle hidden dualities. Finally, we show that in the limit of vanishing interaction strength, all finite-range interactions are irrelevant at the non-interacting critical point.
Quasiperiodic moiré materials provide a new platform for realizing critical electronic states, yet a direct and experimentally practical method to characterize this criticality has been lacking. We show that a multifractal analysis of the local density of states (LDOS), accessible via scanning tunneling microscopy, offers an unambiguous signature of criticality from a single experimental sample. Applying this approach to a one-dimensional quasiperiodic model, a stringent test case due to its fractal energy spectrum, we find a clear distinction between the broad singularity spectra $f(\alpha)$ of critical states and the point-like spectra of extended states. We further demonstrate that these multifractal signatures remain robust over a wide range of energy broadenings relevant to experiments. Our results establish a model-independent, experimentally feasible framework for identifying and probing multifractality in the growing family of quasiperiodic and moiré materials.
The physical properties of a quantum many-body system can, in principle, be determined by diagonalizing the respective Hamiltonian, but the dimensions of its matrix representation scale exponentially with the number of degrees of freedom. Hence, only small systems that are described through simple models can be tackled via exact diagonalization. To overcome this limitation, numerical methods based on the renormalization group paradigm that restrict the quantum many-body problem to a manageable subspace of the exponentially large full Hilbert space have been put forth. A striking example is the density-matrix renormalization group (DMRG), which has become the reference numerical method to obtain the low-energy properties of one-dimensional quantum systems with short-range interactions. Here, we provide a pedagogical introduction to DMRG, presenting both its original formulation and its modern tensor-network-based version. This colloquium sets itself apart from previous contributions in two ways. First, didactic code implementations are provided to bridge the gap between conceptual and practical understanding. Second, a concise and self-contained introduction to the tensor network methods employed in the modern version of DMRG is given, thus allowing the reader to effortlessly cross the deep chasm between the two formulations of DMRG without having to explore the broad literature on tensor networks. We expect this pedagogical review to find wide readership amongst students and researchers who are taking their first steps in numerical simulations via DMRG.
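The exponential wall described in the opening sentences can be made concrete with a few lines of sparse exact diagonalization for the spin-1/2 Heisenberg chain; the $2^N$ Hilbert-space dimension is what makes this approach infeasible beyond a few tens of sites and motivates DMRG. A minimal sketch, not the colloquium's didactic code.

```python
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Sparse exact diagonalization of the spin-1/2 Heisenberg chain.
# The Hamiltonian acts on a 2**N-dimensional Hilbert space, so even
# N = 12 already means a 4096 x 4096 matrix.

sx = sp.csr_matrix([[0, 0.5], [0.5, 0]])
sy = sp.csr_matrix([[0, -0.5j], [0.5j, 0]])
sz = sp.csr_matrix([[0.5, 0], [0, -0.5]])
I2 = sp.identity(2, format="csr")

def site_op(op, i, N):
    """Embed a single-site operator at site i of an N-site chain."""
    out = op if i == 0 else I2
    for j in range(1, N):
        out = sp.kron(out, op if j == i else I2, format="csr")
    return out

def heisenberg(N):
    """H = sum_i S_i . S_{i+1} with open boundary conditions."""
    H = sp.csr_matrix((2 ** N, 2 ** N), dtype=complex)
    for i in range(N - 1):
        for op in (sx, sy, sz):
            H = H + site_op(op, i, N) @ site_op(op, i + 1, N)
    return H

N = 12
e0 = eigsh(heisenberg(N), k=1, which="SA")[0][0]
print(e0 / N)   # energy per site, close to the infinite-chain value 1/4 - ln 2
```

DMRG sidesteps the exponential growth by truncating to the most relevant reduced-density-matrix eigenstates, which is why it reaches hundreds of sites where the code above stalls near twenty.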
We demonstrate that Gaia's detection of stars on wide orbits around black holes opens a new observational window on dark matter structures -- such as scalar clouds and dark matter spikes -- predicted in a range of theoretical scenarios. Using precise radial velocity measurements of these systems, we derive state-of-the-art constraints on dark matter density profiles and particle masses in previously unexplored regions of parameter space. We also test the black hole hypothesis against the alternative of a boson star composed of light scalar fields.
Rendering on conventional computers is capable of generating realistic imagery, but the computational complexity of light transport algorithms is a limiting factor in image synthesis. Quantum computers have the potential to significantly improve rendering performance by reducing the underlying complexity of the algorithms behind light transport. This paper investigates hybrid quantum-classical algorithms for ray tracing, a core component of most rendering techniques. Through a practical implementation of quantum ray tracing in a 3D environment, we show that quantum approaches provide a quadratic improvement in query complexity compared to the equivalent classical approach. Based on domain-specific knowledge, we then propose algorithms that significantly reduce the computation required for quantum ray tracing by exploiting image-space coherence and a principled termination criterion for quantum searching. We show results both for Whitted-style ray tracing and for accelerating ray tracing operations when performing classical Monte Carlo integration for area lights and indirect illumination.
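The quadratic query-complexity gain cited above comes from Grover search, which can be verified with a small statevector simulation: finding one marked item (standing in for a ray-primitive intersection) among $N$ takes $\mathcal{O}(\sqrt{N})$ oracle queries instead of $\mathcal{O}(N)$. An illustrative sketch of the primitive, not the paper's ray tracer.

```python
import numpy as np

# Statevector simulation of Grover search over N = 1024 items with one
# marked element.  Classically, finding the marked item takes ~N/2
# queries on average; Grover needs ~(pi/4) * sqrt(N).

N, marked = 1024, 137
psi = np.full(N, 1 / np.sqrt(N))           # uniform superposition

iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
for _ in range(iterations):
    psi[marked] *= -1                      # oracle: flip the marked amplitude
    psi = 2 * psi.mean() - psi             # diffusion: inversion about the mean

p_success = psi[marked] ** 2
print(iterations, p_success)               # 25 oracle queries, success ~ 0.999
```

Running to exactly the optimal iteration count matters: overshooting rotates the state past the marked item, which is one reason principled termination criteria are needed in practice.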
Analysis of motion algorithms for autonomous systems operating under variable external conditions leads to the concept of parametrized topological complexity \cite{CFW}. In \cite{CFW}, \cite{CFW2} the parametrized topological complexity was computed in the case of the Fadell - Neuwirth bundle which is pertinent to algorithms of collision free motion of many autonomous systems in Rd{\Bbb R}^d avoiding collisions with multiple obstacles. The parametrized topological complexity of sphere bundles was studied in detail in \cite{FW}, \cite{FW2}, \cite{FP}. In this paper we make the next step by studying parametrized topological complexity of bundles of real projective spaces which arise as projectivisations of vector bundles. This leads us to new problems of algebraic topology involving theory of characteristic classes and geometric topology. We establish sharp upper bounds for parametrized topological complexity TC[p:EB]{\sf TC}[p:E\to B] improving the general upper bounds. We develop algebraic machinery for computing lower bounds for TC[p:EB]{\sf TC}[p:E\to B] based on the Stiefel - Whitney characteristic classes. Combining the lower and the upper bounds we compute explicitly many specific examples.
We experimentally demonstrate a testing strategy for boson samplers that is based on efficiently computable expressions for the output photon counting distributions binned over multiple optical modes. We apply this method to validate boson sampling experiments with three photons on a reconfigurable photonic chip, which implements a four-mode interferometer, analyzing 50 Haar-random unitary transformations while tuning photon distinguishability via controlled delays. We show that for high values of indistinguishability, the experiment accurately reproduces the ideal boson sampling binned-mode distributions, which exhibit variations that depend both on the specific interferometer implemented and on the choice of bin, confirming the usefulness of the method for diagnosing imperfections such as partial distinguishability or imperfect chip control. Finally, we analyze the behavior of Haar-averaged binned-mode distributions with partial distinguishability and demonstrate analytically that their variance is proportional to the average of the square of the photons' indistinguishability parameter. These findings highlight the central role of binning in boson sampling validation, offering a scalable and efficient framework for assessing multiphoton interference and experimental performance.
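An ideal binned-mode distribution of the kind used above can be computed directly from matrix permanents for the three-photon, four-mode setting the abstract describes. The sketch below uses an arbitrary Haar-random unitary, so the numbers are illustrative rather than the experiment's.

```python
import numpy as np
from itertools import permutations
from math import factorial

# Ideal binned photon-counting distribution for 3 photons in a 4-mode
# Haar-random interferometer, from the permanent formula
# P(out) = |Perm(U_ST)|^2 / (prod s_i! prod t_j!).

rng = np.random.default_rng(7)

def haar_unitary(m):
    z = (rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # fix column phases

def permanent(A):
    n = A.shape[0]
    return sum(np.prod([A[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

def output_probability(U, inp, out):
    rows = [i for i, s in enumerate(inp) for _ in range(s)]
    cols = [j for j, t in enumerate(out) for _ in range(t)]
    A = U[np.ix_(rows, cols)]
    norm = (np.prod([factorial(s) for s in inp])
            * np.prod([factorial(t) for t in out]))
    return abs(permanent(A)) ** 2 / norm

def occupations(m, n):
    """All ways to place n photons in m modes."""
    if m == 1:
        yield (n,)
        return
    for k in range(n + 1):
        for rest in occupations(m - 1, n - k):
            yield (k,) + rest

m, n = 4, 3
U = haar_unitary(m)
inp = (1, 1, 1, 0)

# bin modes {0, 1} vs {2, 3}: distribution of the count in the first bin
binned = np.zeros(n + 1)
total = 0.0
for out in occupations(m, n):
    p = output_probability(U, inp, out)
    total += p
    binned[out[0] + out[1]] += p
print(total, binned)
```

Binning compresses the 20 output configurations into 4 outcome classes, which is what keeps the test statistics efficiently computable as the photon number grows.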
Gravitational lensing of gravitational waves (GWs) provides a unique opportunity to study cosmology and astrophysics at multiple scales. Detecting microlensing signatures, in particular, requires efficient parameter estimation methods due to the high computational cost of traditional Bayesian inference. In this paper we explore the use of deep learning, namely Conditional Variational Autoencoders (CVAE), to estimate parameters of microlensed binary black hole (simulated) waveforms. We find that our CVAE model yields accurate parameter estimation and significant computational savings compared to Bayesian methods such as Bilby (inferences up to five orders of magnitude faster). Moreover, the incorporation of CVAE-generated priors into Bilby, based on the 95% confidence intervals of the CVAE posterior for the lensing parameters, reduces Bilby's average runtime by around 48% without any penalty on accuracy. Our results suggest that a CVAE model is a promising tool for future low-latency searches of lensed signals. Further applications to actual signals and integration with advanced pipelines could help extend the capabilities of GW observatories in detecting microlensing events.
We provide a comprehensive analysis of the phenomenology of axion-like particles (ALPs) produced in core-collapse supernovae (ccSNe) through interactions with electrons and muons, both of which have a non-negligible abundance in the SN plasma. We identify and calculate six significant ALP-production channels, two of which are loop-level processes involving photons. We then examine several observational constraints on the ALP-electron and ALP-muon parameter spaces. These include bounds from anomalous cooling, energy deposition, decay into photons, diffuse gamma rays, and the 511 keV line. Our results provide updated and robust constraints on ALP couplings to electrons and muons from an improved treatment of production and absorption processes. Furthermore, we quantify the uncertainties of the results by using three state-of-the-art supernova models based on two independent simulation codes, finding that constraints vary by factors of $\mathcal{O}(2{-}10)$.
Kirkwood-Dirac representations of quantum states are increasingly finding use in many areas within quantum theory. Usually, representations of this sort are only applied to provide a representation of quantum states (as complex functions over some set). We show how standard Kirkwood-Dirac representations can be extended to a fully compositional representation of all of quantum theory (including channels, measurements and so on), and prove that this extension satisfies the essential features of functoriality (namely, that the representation commutes with composition of channels), linearity, and quasistochasticity. Interestingly, the representation of a POVM element is uniquely picked out to be the collection of weak values for it relative to the bases defining the representation. We then prove that if one can find any Kirkwood-Dirac representation that is everywhere real and nonnegative for a given experimental scenario or fragment of quantum theory, then the scenario or fragment is consistent with the principle of generalized noncontextuality, a key notion of classicality in quantum foundations. We also show that the converse does not hold: even if one verifies that all Kirkwood-Dirac representations (as defined herein) of an experiment require negativity or imaginarity, one cannot generally conclude that the experiment witnesses contextuality.
Gravitational-wave approximants are essential for gravitational-wave astronomy, allowing coverage of the binary black hole parameter space for inference or matched filtering without costly numerical relativity (NR) simulations, but generally trading some accuracy for computational efficiency. To reduce this trade-off, NR surrogate models can be constructed by interpolating within the NR waveform space. We present a two-stage training approach for neural-network-based NR surrogate models. Initially trained on approximant-generated waveforms and then fine-tuned with NR data, these dual-stage artificial neural surrogate (\texttt{DANSur}) models offer rapid and competitively accurate waveform generation, producing millions of waveforms in under 20 ms on a GPU while keeping mean mismatches with NR around $10^{-4}$. Implemented in the \textsc{bilby} framework, the models can be used for parameter estimation tasks.
III-V semiconductor nanolight sources with deep-subwavelength dimensions ($<1~\mu$m) are essential for miniaturized photonic devices such as nanoLEDs and nanolasers. However, these nanoscale emitters suffer from substantial non-radiative recombination at room temperature, resulting in low efficiency and ultrashort lifetimes (<100 ps). Previous works have predominantly studied surface passivation of nanoLEDs under optical pumping conditions, while practical applications require electrically driven nanoLEDs. Here, we investigate the influence of surface passivation on the efficiency and high-speed modulation response of electrically pumped III-V GaAs/AlGaAs nanopillar array LEDs. Surface passivation was performed using ammonium sulphide chemical treatment followed by encapsulation with a 100 nm silicon nitride layer deposited via low-frequency plasma-enhanced chemical vapour deposition. Time-resolved electroluminescence measurements reveal differential carrier lifetimes ($\tau$) of ~0.61 ns for nanoarray LEDs with pillar diameters of ~440 nm, a record-long lifetime for electrically driven GaAs-based nanopillar arrays. Under low injection conditions, the devices exhibited carrier lifetimes of ~0.41 ns, indicating successful suppression of non-radiative effects and a low surface recombination velocity, ranging from $S \sim 0.7\times10^{4}$ cm/s to $2.7\times10^{4}$ cm/s. This points to a potentially high internal quantum efficiency, IQE $\sim 0.45$, for our nanoLEDs operating under very high injection conditions, limited only by Auger recombination and self-heating effects at high current density. These miniaturized nanoLEDs with high radiative recombination efficiency and sub-ns modulation response pave the way for optical data communications, energy-efficient optical interconnects, AR/VR displays, and neuromorphic computing applications.
Imperfect photon indistinguishability limits the performance of photonic quantum communication and computation. Distillation protocols, inspired by entanglement purification, enhance photon indistinguishability by leveraging quantum interference in linear optical circuits. In this work, we present a three-photon distillation protocol optimized to achieve the maximum visibility gain, which requires accounting for multi-photon effects such as collective photonic phases. We employ interferometers with the minimum number of modes, optimizing also over the protocol's success probability. The developed protocol is experimentally validated on a platform featuring a demultiplexed quantum dot source interfaced with a programmable eight-mode laser-written integrated photonic processor. We achieve indistinguishability distillation with limited photonic resources and for several multi-photon distinguishability scenarios. This work helps to strengthen the role of distillation as a practical tool for photon-based quantum technologies.
Codensity monads provide a universal method to generate complex monads from simple functors. Recently, a wide range of important monads in logic, denotational semantics, and probabilistic computation, such as several incarnations of the ultrafilter monad, the Vietoris monad, and the Giry monad, have been presented as codensity monads, using complex arguments. We propose a unifying categorical approach to codensity presentations of monads, based on the idea of relating the presenting functor to a dense functor via a suitable duality between categories. We prove a general presentation result applying to every such situation and demonstrate that most codensity presentations known in the literature emerge from this strikingly simple duality-based setup, drastically alleviating the complexity of their proofs and in many cases completely reducing them to standard duality results. Additionally, we derive a number of novel codensity presentations using our framework, including the first non-trivial codensity presentations for the filter monads on sets and topological spaces, the lower Vietoris monad on topological spaces, and the expectation monad on sets.
Constraining Beyond the Standard Model theories usually involves scanning highly multi-dimensional parameter spaces and checking observable predictions against experimental bounds and theoretical constraints. This task is often time-consuming and computationally expensive, especially when the model is severely constrained, leading to very low random sampling efficiency. In this work we tackle this challenge using Artificial Intelligence and Machine Learning search algorithms developed for black-box optimisation problems. Using the cMSSM and the pMSSM parameter spaces, we consider both the Higgs mass and the dark matter relic density constraints to study sampling efficiency and parameter space coverage. We find that our methodology produces orders-of-magnitude improvements in sampling efficiency while reasonably covering the parameter space.
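The efficiency gap between naive random scanning and a guided scan can be reproduced on a toy constrained parameter space: accept points where a smooth function of the parameters falls in a narrow band. The function, band, and Metropolis walk below are illustrative stand-ins, not the paper's observables or its AI/ML black-box optimisers.

```python
import numpy as np

# Uniform random scanning vs a guided (Metropolis) scan of a toy
# constrained space: valid points satisfy |f(x) - target| < tol.

rng = np.random.default_rng(0)
dim, target, tol = 4, 10.0, 0.1

def f(x):
    return np.sum(x ** 2)

# baseline: uniform random sampling in the box [-5, 5]^dim
box = rng.uniform(-5, 5, size=(20000, dim))
rate_random = (np.abs(np.sum(box ** 2, axis=1) - target) < tol).mean()

# guided scan: Metropolis walk targeting exp(-|f(x) - target| / T),
# which concentrates samples on the constraint surface
T, step, n_burn, n_keep = 0.02, 0.2, 5000, 15000
x = np.zeros(dim)
s = abs(f(x) - target)
hits = 0
for i in range(n_burn + n_keep):
    prop = x + step * rng.normal(size=dim)
    sp = abs(f(prop) - target)
    if sp <= s or rng.random() < np.exp(-(sp - s) / T):
        x, s = prop, sp
    if i >= n_burn:
        hits += s < tol
rate_guided = hits / n_keep
print(rate_random, rate_guided)   # the guided scan hits the band far more often
```

The tighter the band, the faster the random-scan hit rate collapses while the guided scan's stays high, which is the regime where learned search strategies pay off.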
CRDTs are distributed data types that make eventual consistency of a distributed object possible and non-ad-hoc. Specifically, state-based CRDTs ensure convergence by disseminating the entire state, which may be large, and merging it into other replicas, whereas operation-based CRDTs disseminate operations (i.e., small states) assuming an exactly-once reliable dissemination layer. We introduce Delta State Conflict-Free Replicated Datatypes (δ-CRDTs), which achieve the best of both worlds: small messages with an incremental nature, as in operation-based CRDTs, disseminated over unreliable communication channels, as in traditional state-based CRDTs. This is achieved by defining δ-mutators that return a delta-state, typically much smaller than the full state, which is joined to both the local and remote states. We introduce the δ-CRDT framework and explain it by establishing a correspondence to current state-based CRDTs. In addition, we present an anti-entropy algorithm that ensures causal consistency, and we introduce two δ-CRDT specifications of well-known replicated datatypes.
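The δ-mutator idea described above can be sketched with the simplest state-based CRDT, a grow-only set: the mutator returns a small delta that is joined (set union here) into both the local and remote replicas. A simplified illustration, not the paper's full framework or its causal-consistency anti-entropy algorithm.

```python
# Minimal grow-only-set delta CRDT: add() is a delta-mutator that
# returns only the increment, and join() merges deltas via set union.

class GSetReplica:
    def __init__(self):
        self.state = set()

    def add(self, element):
        """Delta-mutator: returns the delta, already joined locally."""
        delta = {element} - self.state
        self.state |= delta
        return delta

    def join(self, delta):
        """Join is set union: idempotent, commutative and associative,
        so deltas tolerate duplication and reordering in transit."""
        self.state |= delta

a, b = GSetReplica(), GSetReplica()
d1 = a.add("x")
d2 = a.add("y")
# deltas may arrive out of order and duplicated; convergence still holds
b.join(d2)
b.join(d1)
b.join(d1)
print(a.state == b.state)   # True
```

Because the join is a lattice merge, shipping deltas over an unreliable channel is safe without exactly-once delivery, which is exactly the property operation-based CRDTs lack.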
In classical thermodynamics, heat must spontaneously flow from hot to cold systems. In quantum thermodynamics, the same law applies when considering multipartite product thermal states evolving unitarily. If initial correlations are present, anomalous heat flow can happen, temporarily making cold thermal states colder and hot thermal states hotter. Such an effect can arise from entanglement, but also from classical randomness, hence lacking a direct connection with nonclassicality. In this work, we introduce scenarios where anomalous heat flow \emph{does} have a direct link to nonclassicality, defined as the failure of noncontextual models to explain experimental data. We start by extending known noncontextuality inequalities to a setup where sequential transformations are considered. We then show a class of quantum prepare-transform-measure protocols, characterized by time intervals $(0,\tau_c)$ for a given critical time $\tau_c$, where anomalous heat flow happens only if a noncontextuality inequality is violated. We also analyze a recent experiment from Micadei et al. [Nat. Commun. 10, 2456 (2019)] and find the critical time $\tau_c$ based on their experimental parameters. We conclude by investigating heat flow in the evolution of two-qutrit systems, showing that our findings are not an artifact of using two-qubit systems.
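Anomalous heat flow from initial correlations can be reproduced numerically with two resonant qubits in local thermal states plus a coherence term in the $\{|01\rangle, |10\rangle\}$ block, evolved under a partial-swap interaction, in the spirit of the Micadei et al. setup cited above. The temperatures, coupling angle, and coherence amplitude below are illustrative choices, not the experimental parameters.

```python
import numpy as np

# Two-qubit heat-flow demo: without correlations the hot qubit loses
# energy (normal flow); with an initial coherence in the one-excitation
# block it can gain energy instead (anomalous flow).

w = 1.0                                    # qubit energy splitting

def p_exc(beta):                           # thermal excited-state population
    return np.exp(-beta * w) / (1 + np.exp(-beta * w))

b_hot, b_cold = 0.5, 2.0
pA, pB = p_exc(b_hot), p_exc(b_cold)       # qubit A is hot, B is cold

def energy_flow(gamma, gt=0.2):
    """Change in the hot qubit's energy after a partial swap of angle gt;
    gamma is the imaginary coherence in the |01>,|10> block."""
    # density matrix in the basis {|00>, |01>, |10>, |11>}
    rho = np.diag([(1 - pA) * (1 - pB), (1 - pA) * pB,
                   pA * (1 - pB), pA * pB]).astype(complex)
    rho[1, 2], rho[2, 1] = 1j * gamma, -1j * gamma
    assert np.linalg.eigvalsh(rho).min() > -1e-12   # valid quantum state
    c, s = np.cos(gt), np.sin(gt)
    U = np.eye(4, dtype=complex)
    U[1:3, 1:3] = [[c, -1j * s], [-1j * s, c]]     # exchange interaction
    rho_t = U @ rho @ U.conj().T
    # hot qubit's excited-state population sits in |10> and |11>
    return w * (rho_t[2, 2] + rho_t[3, 3] - rho[2, 2] - rho[3, 3]).real

gamma_max = np.sqrt((1 - pA) * pB * pA * (1 - pB))  # positivity bound
print(energy_flow(0.0), energy_flow(gamma_max))
# uncorrelated: negative (hot loses energy); correlated: positive (hot gains)
```

The coherence amplitude is bounded by positivity of the state, so the reversal window is finite, consistent with the critical-time picture described in the abstract.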