Sydney Quantum Academy
We construct several explicit instances of quantum Tanner codes, a class of asymptotically good quantum low-density parity check (qLDPC) codes. The codes are constructed using dihedral groups and random pairs of classical codes and exhibit high encoding rates, relative distances, and pseudo-thresholds. Using the BP+OSD decoder, we demonstrate good performance in the phenomenological and circuit-level noise settings, comparable to the surface code with similar distances. Finally, we conduct an analysis of the space-time overhead incurred by these codes.
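As a rough illustration of the BP+OSD decoding workflow mentioned above, the following sketch uses the open-source `ldpc` package (an assumption; the paper does not specify its software stack) on a toy classical parity-check matrix rather than the paper's dihedral-group quantum Tanner codes:

```python
# Minimal sketch of BP+OSD decoding, assuming the `ldpc` package
# (pip install ldpc). The parity-check matrix H is a toy placeholder,
# not the quantum Tanner code constructed in the paper.
import numpy as np
from ldpc import bposd_decoder

H = np.array([[1, 1, 0, 1, 0, 0],      # toy classical parity-check matrix
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

decoder = bposd_decoder(
    H,
    error_rate=0.05,      # physical error probability fed to BP
    max_iter=20,          # BP iterations before OSD post-processing
    bp_method="product_sum",
    osd_method="osd_cs",  # combination-sweep OSD
    osd_order=4,
)

error = np.array([0, 1, 0, 0, 0, 0])   # sample error pattern
syndrome = H @ error % 2
correction = decoder.decode(syndrome)
print("residual:", (correction + error) % 2)  # all-zero vector => success
```

For a CSS quantum code, the same decoder would be run separately on the X-type and Z-type parity checks.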
Graph Neural Networks (GNNs) are powerful machine learning models that excel at analyzing structured data represented as graphs, demonstrating remarkable performance in applications like social network analysis and recommendation systems. However, classical GNNs face scalability challenges when dealing with large-scale graphs. This paper proposes frameworks for implementing GNNs on quantum computers to potentially address these challenges. We devise quantum algorithms corresponding to the three fundamental types of classical GNNs: Graph Convolutional Networks, Graph Attention Networks, and Message-Passing GNNs. A complexity analysis of our quantum implementation of the Simplified Graph Convolutional (SGC) Network shows potential quantum advantages over its classical counterpart, with significant improvements in time and space complexity. The two complexities can be traded off against each other: when optimizing for minimal circuit depth, our quantum SGC achieves time complexity logarithmic in the input sizes (albeit at the cost of linear space complexity); when optimizing for minimal qubit usage, it exhibits space complexity logarithmic in the input sizes, an exponential reduction compared to classical SGCs, while still maintaining better time complexity. These results suggest our quantum GNN frameworks could efficiently process large-scale graphs. This work paves the way for implementing more advanced Graph Neural Network models on quantum computers, opening new possibilities in quantum machine learning for analyzing graph-structured data.
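For context, here is a minimal NumPy sketch of the classical SGC forward pass that the quantum algorithm is benchmarked against; the propagation rule softmax(S^K X W) is the standard SGC formulation, while the graph and data below are illustrative:

```python
# Classical Simplified Graph Convolution (Wu et al., 2019):
# logits = softmax(S^K X W), S = D^{-1/2} (A + I) D^{-1/2}.
import numpy as np

def sgc_forward(A, X, W, K=2):
    A_tilde = A + np.eye(A.shape[0])           # add self-loops
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)
    S = D_inv_sqrt @ A_tilde @ D_inv_sqrt      # normalized adjacency
    H = X
    for _ in range(K):                         # K-hop feature smoothing
        H = S @ H
    logits = H @ W                             # single linear layer
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)    # row-wise softmax

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # toy graph
X = np.random.default_rng(0).normal(size=(3, 4))   # node features
W = np.random.default_rng(1).normal(size=(4, 2))   # trained weights
print(sgc_forward(A, X, W))
```

The classical cost scales with the number of edges and the feature dimension per hop; it is this time/space footprint that the quantum implementation claims to compress.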
Optical Schrödinger cat states are non-Gaussian states with applications in quantum technologies, such as for building error-correcting states in quantum computing. Yet the efficient generation of high-fidelity optical Schrödinger cat states is an outstanding problem in quantum optics. Here, we propose using squeezed superpositions of zero and two photons, $|\theta\rangle = \cos(\theta/2)|0\rangle + \sin(\theta/2)|2\rangle$, as ingredients for protocols to efficiently generate high-fidelity cat states. We present a protocol using linear optics with success probability $P\gtrsim 50\%$ that can generate cat states of size $|\alpha|^2=5$ with fidelity $F>0.99$. The protocol relies only on detecting single photons and is remarkably tolerant of loss, with $2\%$ detection loss still achieving $F>0.98$ for cats with $|\alpha|^2=5$. We also show that squeezed $\theta$ states are ideal candidates for nonlinear photon subtraction using a two-level system, with near-deterministic success probability and fidelity $F>0.98$ for cat states of size $|\alpha|^2=5$. Schemes for generating $\theta$ states using quantum emitters are also presented. Our protocols can be implemented with current state-of-the-art quantum optics experiments.
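A minimal QuTiP sketch of the ingredient state, assuming illustrative values for the superposition angle and squeezing (the paper's optimized parameters are not reproduced here):

```python
# Sketch (QuTiP): build the squeezed |theta> state from the abstract and
# compare it to an even cat state. The angle theta and squeezing r are
# illustrative guesses, not optimized protocol parameters.
import numpy as np
from qutip import basis, squeeze, coherent, fidelity

N = 40                                   # Fock-space truncation
theta = 2 * np.arctan(np.sqrt(2))        # illustrative superposition angle
psi = (np.cos(theta / 2) * basis(N, 0)
       + np.sin(theta / 2) * basis(N, 2)).unit()

r = 0.8                                  # illustrative squeezing strength
sq_theta = (squeeze(N, r) * psi).unit()  # squeezed |theta> resource state

alpha = np.sqrt(5)                       # cat size |alpha|^2 = 5
cat = (coherent(N, alpha) + coherent(N, -alpha)).unit()

print("overlap with even cat:", fidelity(sq_theta, cat))
```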
In spherical symmetry, solutions of the semiclassical Einstein equations belong to one of two possible classes. Both classes contain solutions that -- depending on the dynamic behavior of the horizon -- describe evaporating physical black holes or expanding white holes (trapped/anti-trapped regions that form in finite time according to a distant observer). These solutions are real-valued only if the null energy condition (NEC) is violated in the vicinity of the Schwarzschild sphere. We review their properties and describe the only consistent black hole formation scenario. While the curvature scalars are finite on the outer apparent/anti-trapping horizon, it is still a weakly singular surface. This singularity manifests itself as a mild firewall. Near the inner apparent horizon, the NEC is satisfied. Models of static regular black holes are known to be unstable, but dynamic models of regular black holes are severely constrained by self-consistency requirements, and their stability requires further investigation.
The information loss paradox is widely regarded as one of the biggest open problems in theoretical physics. Several classical and quantum features must be present to enable its formulation. First, an event horizon is needed to justify the objective status of tracing out degrees of freedom inside the black hole. Second, evaporation must be completed (or nearly completed) in finite time according to a distant observer, and thus the formation of the black hole should also occur in finite time. In spherical symmetry these requirements constrain the possible metrics strongly enough to obtain a unique black hole formation scenario and match their parameters with the semiclassical results. However, the two principal generalizations of surface gravity, the quantity that determines the Hawking temperature, do not agree with each other on the dynamical background. Neither can correspond to the emission of nearly-thermal radiation. We infer from this that the information loss problem cannot be consistently posed in its standard form.
We revisit the calculation of the Casimir effect from the perspective of scale-limited resolutions of quantum fields. We use the continuous wavelet transform to introduce a scale degree of freedom and then restrict it to simulate either an observational or a fundamental limitation of resolution. The Casimir force is derived in this setting for a free complex massless scalar field between two infinite plates with both Dirichlet and periodic boundary conditions. The dependence of the force on the choice of wavelet and the size of the scale cutoff is discussed extensively for several example wavelets.
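For reference, the cutoff-free textbook benchmark that a resolution-limited calculation should recover as the scale cutoff is removed (the standard result for a complex massless scalar with Dirichlet plates, not the paper's wavelet-regularized expression):

```latex
% Standard benchmark: for a free complex massless scalar between
% Dirichlet plates separated by a, the energy and attractive force
% per unit area are (twice the real-scalar result)
\begin{align}
  \frac{E}{A} &= -\frac{\pi^2 \hbar c}{720\, a^3}, &
  \frac{F}{A} &= -\frac{\partial}{\partial a}\frac{E}{A}
               = -\frac{\pi^2 \hbar c}{240\, a^4}.
\end{align}
```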
Contextuality is a key characteristic that separates quantum from classical phenomena and an important tool in understanding the potential advantage of quantum computation. However, when assessing the quantum resources available for quantum information processing, there is no formalism to determine whether a set of states can exhibit contextuality, or whether such proofs of contextuality indicate anything about the resourcefulness of that set. Introducing a well-motivated notion of what it means for a set of states to be contextual, we establish a relationship between the contextuality and antidistinguishability of sets of states. We go beyond the traditional notions of contextuality and antidistinguishability and treat both properties as resources, demonstrating that the degree of contextuality within a set of states has a direct connection to its level of antidistinguishability. If a set of states is contextual, then it must be weakly antidistinguishable, and vice versa. However, maximal contextuality emerges as a stronger property than traditional antidistinguishability.
Digital quantum simulation (DQS) is one of the most promising paths to achieving the first useful real-world applications of quantum processors. Yet even assuming rapid progress in device engineering and the development of fault-tolerant quantum processors, algorithmic resource optimisation will long remain crucial to exploiting their full power. Currently, Trotterisation provides state-of-the-art resource scaling, and recent theoretical studies of Trotterised Ising models suggest that even better performance than expected may be possible up to a distinct breakdown threshold in empirical performance. Here, we study multiple paradigmatic DQS models with experimentally realisable Trotterisations and provide evidence for the universality of a range of Trotterisation performance behaviours, including not only the threshold but also new features in the pre-threshold regime that is most important for practical applications. In each model, we observe a distinct Trotterisation threshold shared across widely varying performance signatures; we further show that an onset of quantum chaotic dynamics causes the performance breakdown and is directly induced by digitisation errors. In the important pre-threshold regime, we identify distinct regimes displaying qualitatively different quasiperiodic performance behaviours, and show the analytic behaviour of properly defined operational Trotter errors. Our results rely crucially on diverse new analytical tools and provide a previously missing unified picture of Trotterisation behaviour across local observables, the global quantum state, and the full Trotterised unitary. This work provides new insights and tools for addressing important questions about the performance and underlying theoretical principles of sufficiently complex Trotterisation-based DQS, which will help in extracting maximum simulation power from future quantum processors.
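A self-contained sketch of the basic object under study: first-order Trotterisation of a small transverse-field Ising model, with the error measured in spectral norm. Model size, couplings, and evolution time are illustrative, not the paper's benchmarks:

```python
# Sketch (NumPy/SciPy): first-order Trotterisation of a 3-qubit
# transverse-field Ising model, comparing the Trotterised propagator
# against the exact one as the step count n grows.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op(single, site, n=3):
    mats = [single if k == site else I2 for k in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

J, h, t = 1.0, 0.8, 2.0
HZZ = sum(op(Z, k) @ op(Z, k + 1) for k in range(2))  # Ising couplings
HX = sum(op(X, k) for k in range(3))                  # transverse field
U_exact = expm(-1j * t * (J * HZZ + h * HX))

for n in (4, 16, 64):
    dt = t / n
    U_step = expm(-1j * dt * J * HZZ) @ expm(-1j * dt * h * HX)
    U_trot = np.linalg.matrix_power(U_step, n)
    err = np.linalg.norm(U_trot - U_exact, 2)         # spectral-norm error
    print(f"n={n:3d}  Trotter error {err:.3e}")       # ~O(t^2/n) scaling
```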
Precision optical filters are key components for current and future photonic technologies. Here, we demonstrate a low-loss spectral filter consisting of an ultrasteep bandpass feature with a maximum gradient of $(90.6\pm0.7)$ dB/GHz, centred within a notch filter with $(128\pm6)$ dB of suppression. The filter consists of a fiber Bragg grating with multiple $\pi$-phase discontinuities inscribed into a single-mode photosensitive fiber. The measured performance closely matches the simulated spectrum calculated from the design parameters, indicating a high degree of confidence in the repeatability and manufacture of such devices. These filters show great promise for applications reliant on high-frequency-resolution noise suppression, such as quantum networking, and highlight the versatility, efficiency, and extreme suppression offered by high-performance fiber Bragg grating devices.
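A sketch of the standard piecewise-uniform transfer-matrix model for a grating with $\pi$-phase discontinuities, in the spirit of the simulations mentioned above; the coupling strength, segment lengths, and detuning grid are illustrative, not the fabricated device's design parameters:

```python
# Sketch (NumPy): coupled-mode transfer-matrix model of a fiber Bragg
# grating with a pi-phase defect. A narrow transmission window opens
# at zero detuning inside the stop band.
import numpy as np

def segment(kappa, delta, L):
    g = np.sqrt(kappa**2 - delta**2 + 0j)       # complex for |delta|>kappa
    c, s = np.cosh(g * L), np.sinh(g * L)
    return np.array([[c - 1j * (delta / g) * s, -1j * (kappa / g) * s],
                     [1j * (kappa / g) * s,      c + 1j * (delta / g) * s]])

def phase_shift(phi):
    return np.diag([np.exp(-1j * phi / 2), np.exp(1j * phi / 2)])

kappa, Lseg = 300.0, 5e-3                        # 1/m coupling, 5 mm sections
deltas = np.linspace(-1500, 1500, 2001)          # detuning from Bragg (1/m)
T_dB = []
for d in deltas:
    F = segment(kappa, d, Lseg)
    F = F @ phase_shift(np.pi) @ segment(kappa, d, Lseg)  # one pi defect
    t = 1 / F[0, 0]                              # transmission amplitude
    T_dB.append(20 * np.log10(abs(t)))
print(f"min transmission: {min(T_dB):.1f} dB")   # stop-band suppression
```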
We introduce a family of fidelities, termed generalized fidelities, which are based on the Riemannian geometry of the Bures-Wasserstein manifold. We show that this family generalizes standard quantum fidelities such as the Uhlmann, Holevo, and Matsumoto fidelities, and demonstrate that it satisfies analogues of their celebrated properties. The generalized fidelity arises naturally from a generalized Bures distance, the distance obtained by linearizing the Bures-Wasserstein manifold. We prove various invariance and covariance properties of the generalized fidelity as the point of linearization moves along geodesic-related paths. We also provide a block-matrix characterization and prove an Uhlmann-like theorem, as well as provide further extensions to the multivariate setting and to quantum R\'enyi divergences, generalizing the Petz, sandwiched, reverse-sandwiched, and geometric R\'enyi divergences of order $\alpha$.
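For reference, the standard Uhlmann fidelity and Bures distance that are being generalized (textbook definitions, not the paper's generalized family):

```latex
% Uhlmann fidelity and the Bures distance it induces:
\begin{align}
  F(\rho,\sigma) &= \left(\operatorname{tr}\sqrt{\sqrt{\rho}\,\sigma\sqrt{\rho}}\right)^{2}, &
  d_{B}(\rho,\sigma)^{2} &= 2\left(1-\sqrt{F(\rho,\sigma)}\right).
\end{align}
```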
For distant observers black holes are trapped spacetime domains bounded by apparent horizons. We review properties of the near-horizon geometry emphasizing the consequences of two common implicit assumptions of semiclassical physics. The first is a consequence of the cosmic censorship conjecture, namely that curvature scalars are finite at apparent horizons. The second is that horizons form in finite asymptotic time (i.e. according to distant observers), a property implicitly assumed in conventional descriptions of black hole formation and evaporation. Taking these as the only requirements within the semiclassical framework, we find that in spherical symmetry only two classes of dynamic solutions are admissible, both describing evaporating black holes and expanding white holes. We review their properties and present the implications. The null energy condition is violated in the vicinity of the outer horizon and satisfied in the vicinity of the inner apparent/anti-trapping horizon. Apparent and anti-trapping horizons are timelike surfaces of intermediately singular behavior, which manifests itself in negative energy density firewalls. These and other properties are also present in axially symmetric solutions. Different generalizations of surface gravity to dynamic spacetimes are discordant and do not match the semiclassical results. We conclude by discussing signatures of these models and implications for the identification of observed ultra-compact objects.
We propose a general scheme to investigate photon triplet generation (PTG) via third-order spontaneous parametric downconversion (TOSPDC) in $\chi^{(3)}$ nonlinear structures. Our approach leverages the quantum-classical correspondence between TOSPDC and its reverse classical process, three-wave sum-frequency generation (TSFG), to efficiently estimate the PTG rate. We apply this framework to nonlinear metasurfaces supporting quasi-bound states in the continuum (qBICs) in the optical range. From numerical analysis of non-collinear TSFG with degenerate input waves at qBIC wavelengths, we predict wavelength-tunable three-photon emission with spatio-angular correlations. These findings establish a novel method for modelling TOSPDC and highlight the potential of nonlinear resonant metasurfaces as compact free-space photon triplet sources with quantum state control.
We improve Gaussian Boson Sampling (GBS) circuits by integrating the unitary averaging (UA) protocol, previously demonstrated to protect unknown Gaussian states from phase errors [Phys. Rev. A 110, 032622]. Our work extends the applicability of UA to mitigating arbitrary interferometric noise, including beam-splitter and phase-shifter imperfections. Through comprehensive numerical analysis, we demonstrate that UA consistently achieves higher fidelity and success probability than unprotected circuits, establishing its robustness in noisy conditions. Remarkably, this enhancement is maintained across varying numbers of modes relative to the noise. We further derive a power-law formula predicting performance gains in large-scale systems, including 100-mode and 216-mode configurations. A detailed step-by-step algorithm for implementing the UA protocol is also provided, offering a practical roadmap for advancing near-term quantum technologies.
Understanding multi-photon interactions in non-equilibrium quantum systems is an outstanding challenge in quantum optics. In this work, we develop an analytical and diagrammatic framework to explore three-photon interactions in atomic ensembles weakly coupled to a one-dimensional waveguide. Taking advantage of the weak coupling, we use our diagrammatic framework to perform perturbation theory and calculate the leading-order contributions to the three-photon wavefunction, which would otherwise be intractable. We then compute the outgoing photon wavefunction of a resonantly driven atomic ensemble, with photon-photon interactions truncated at three photons. Our formulation not only captures the individual transmission of photons but also isolates the connected S-matrix elements that embody genuine photon-photon correlations. Through detailed analysis, we obtain analytic expressions for the connected third-order correlation function and the third-order electric-field-quadrature cumulant, which reveal non-Gaussian signatures emerging from the interplay of two- and three-photon processes. We also calculate the optical depth at which non-Gaussian photon states can be observed. Numerical simulations based on a cascaded master equation validate our analytical predictions on a small-scale system. These results provide a formalism to further explore non-equilibrium quantum optics in atomic ensembles and extend it to the regime of non-Gaussian photon transport.
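One common convention for the connected third-order correlation under a stationary drive, shown here for orientation (the paper's precise definitions may differ):

```latex
% Connected part of the normalized third-order correlation function,
% which vanishes whenever three-photon correlations factorize into
% two-photon and single-photon contributions:
\begin{equation}
  g_c^{(3)}(\tau_1,\tau_2) = g^{(3)}(\tau_1,\tau_2)
    - \left[\, g^{(2)}(\tau_1) + g^{(2)}(\tau_2)
             + g^{(2)}(\tau_2-\tau_1) \,\right] + 2 .
\end{equation}
```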
We propose and experimentally demonstrate a novel detection method that significantly improves the precision of real-time measurement of the three-dimensional displacement of a levitated dipolar scatterer. Our technique relies on the spatial mode decomposition of the light scattered by the levitated object, allowing us to simultaneously and selectively extract the position information of all translational degrees of freedom with minimal losses. To this end, we collect all the light back-scattered from a levitated nanoparticle using a parabolic mirror and couple it into a spatial mode sorter. The sorter effectively demultiplexes the information content of the scattered electric field, resulting in each of the nanoparticle's translational degrees of freedom being selectively encoded in the amplitude of orthogonal optical modes. We report measurement efficiencies of $(\eta_{\mathrm{tot}}^{x}, \eta_{\mathrm{tot}}^{y}, \eta_{\mathrm{tot}}^{z}) = (0.14, 0.16, 0.32)$, each exceeding $1/9$, which should enable reaching the 3D motional quantum ground state of a levitated optomechanical system. Further, we believe this technique opens up the possibility of implementing coherent feedback control of a levitated nanoparticle.
A significant hurdle for quantum information processing using bosonic systems is stochastic phase errors, which occur as photons propagate through a channel. These errors reduce the purity of states passing through the channel, thereby reducing the channel's capacity. We present a scheme of passive linear-optical unitary averaging for protecting unknown Gaussian states transmitted through an optical channel. The scheme reduces the effect of phase noise on purity, squeezing, and entanglement, thereby enhancing the channel via a probabilistic error-correcting protocol. The scheme is robust to loss and typically succeeds with high probability. We provide both numerical simulations and analytical approximations tailored to experimentally relevant parameters, showing improvements achievable with current and practical technology. We also show the asymptotic behaviour of the protocol, highlighting both its current and future relevance.
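A toy QuTiP model of the averaging principle: the postselected averaged channel is approximated as the normalized mean of N random phase unitaries acting on squeezed vacuum. This illustrates why averaging suppresses phase noise; it is not the paper's full beam-splitter construction or its success-probability analysis:

```python
# Toy model (QuTiP): compare the fidelity of a squeezed vacuum after a
# single random dephasing channel vs the normalized mean of N such
# channels (an idealization of the postselected averaged channel).
import numpy as np
from qutip import squeeze, basis, num, fidelity

Nfock, r, sigma, N = 30, 0.5, 0.3, 4
rng = np.random.default_rng(42)
ideal = (squeeze(Nfock, r) * basis(Nfock, 0)).unit()
n_op = num(Nfock)

fid_single, fid_avg = [], []
for _ in range(200):                       # Monte Carlo over noise draws
    phis = rng.normal(0, sigma, N)
    Us = [(1j * phi * n_op).expm() for phi in phis]
    fid_single.append(fidelity(ideal, (Us[0] * ideal).unit()))
    avg = sum(U * ideal for U in Us) / N   # postselected averaged channel
    fid_avg.append(fidelity(ideal, avg.unit()))

print(f"single noisy channel: {np.mean(fid_single):.4f}")
print(f"averaged over N={N}:  {np.mean(fid_avg):.4f}")   # closer to 1
```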
Fidelity is arguably the most popular figure of merit in quantum sciences. However, many of its properties are still unknown. In this work, we resolve the open problem of maximizing average fidelity over arbitrary finite ensembles of quantum states and derive new upper bounds. We first construct a semidefinite program whose optimal value is the maximum average fidelity and then derive fixed-point algorithms that converge to the optimal state. The fixed-point algorithms outperform the semidefinite program in terms of numerical runtime. We also derive expressions for near-optimal states that are easier to compute and upper and lower bounds for maximum average fidelity that are exact when all the states in the ensemble commute. Finally, we discuss how our results solve some open problems in Bayesian quantum tomography.
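A minimal CVXPY sketch of one way to cast the problem as a semidefinite program, using the standard block-matrix characterization of root fidelity; this illustrates the SDP approach, while the paper's exact program, fixed-point algorithms, and bounds are not reproduced:

```python
# Sketch (CVXPY): maximize the average *root* fidelity over a qubit
# ensemble via sqrt(F)(rho, sigma) = max Re tr X
# subject to [[rho, X], [X^dag, sigma]] >= 0.
import numpy as np
import cvxpy as cp

rhos = [np.array([[1, 0], [0, 0]], dtype=complex),          # |0><0|
        np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)]  # |+><+|
probs = [0.5, 0.5]
d = 2

sigma = cp.Variable((d, d), hermitian=True)   # optimal "average" state
Xs = [cp.Variable((d, d), complex=True) for _ in rhos]

constraints = [sigma >> 0, cp.trace(sigma) == 1]
for rho, X in zip(rhos, Xs):
    constraints.append(cp.bmat([[rho, X], [X.H, sigma]]) >> 0)

objective = cp.Maximize(sum(p * cp.real(cp.trace(X))
                            for p, X in zip(probs, Xs)))
prob = cp.Problem(objective, constraints)
prob.solve()
print("max average root fidelity:", prob.value)
print("optimal state:\n", np.round(sigma.value, 4))
```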
Black holes play a pivotal role in the foundations of physics, but there is an alarming discrepancy between what is considered to be a black hole in observational astronomy and theoretical studies. Despite claims to the contrary, we argue that identifying the observed astrophysical black hole candidates as genuine black holes is not justified based on the currently available observational data, and elaborate on the necessary evidence required to support such a remarkable claim. In addition, we investigate whether the predictions of semiclassical gravity are equally compatible with competing theoretical models, and find that semiclassical arguments favor horizonless configurations.
We develop a scattering theory formalism and use it to predict that a resonantly driven atomic ensemble weakly coupled to an optical mode can generate light with non-Gaussian correlations. Our approach -- based on a perturbative diagrammatic expansion of multi-photon interactions -- shows that photon-photon interaction mediated by the emitters causes the transmitted light to have a non-vanishing connected third-order correlation function $g_c^{(3)}$. We explain the temporal pattern of $g_c^{(3)}$ using the interaction processes in our diagrammatic expansion. A quantitative comparison with cascaded master equation simulations for small ensembles with optical depth $\mathrm{OD}\leq 2$ confirms that the perturbative results remain accurate across experimentally relevant optical depths and for drive strengths large enough to make the predicted non-Gaussian signatures detectable. We anticipate that state-of-the-art nanofibre-coupled atomic ensembles can experimentally demonstrate our predictions.
The choice between the Schrödinger and Heisenberg pictures can significantly impact the computational resources needed to solve a problem, even though they are equivalent formulations of quantum mechanics. Here we present a method for analysing bosonic quantum circuits based on the Heisenberg picture that allows, under certain conditions, a useful factoring of the evolution into signal and noise contributions, much as is done for classical communication systems. We provide examples which suggest this approach may be particularly useful for analysing quantum computing systems based on Gottesman-Kitaev-Preskill (GKP) qubits.
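A toy NumPy sketch of the Heisenberg-picture bookkeeping this suggests: for Gaussian circuit elements, quadrature means transform as m -> S m and covariances as V -> S V S^T + N, cleanly separating a signal part S from a noise part N (illustrative only; the paper's GKP analysis is not reproduced):

```python
# Heisenberg-picture bookkeeping for a single-mode Gaussian circuit:
# symplectic matrices carry the "signal", additive terms carry the "noise".
import numpy as np

def squeezer(r):
    return np.diag([np.exp(-r), np.exp(r)])     # symplectic squeeze

def rotation(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s], [s, c]])          # phase-space rotation

def loss(eta):
    # pure loss: signal scaled by sqrt(eta), vacuum noise (1-eta) mixed in
    S = np.sqrt(eta) * np.eye(2)
    N = (1 - eta) * 0.5 * np.eye(2)             # vacuum units: V_vac = I/2
    return S, N

m = np.array([1.0, 0.0])                        # input mean (signal)
V = 0.5 * np.eye(2)                             # input vacuum covariance

S_total = rotation(0.3) @ squeezer(0.7)         # noiseless Gaussian unitary
m, V = S_total @ m, S_total @ V @ S_total.T

S_loss, N_loss = loss(eta=0.9)                  # noisy channel element
m, V = S_loss @ m, S_loss @ V @ S_loss.T + N_loss

print("output mean (signal):", m)
print("output covariance (signal+noise):\n", V)
```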