Universität Erlangen-Nürnberg
The Gemini-Monoceros X-ray enhancement is a rich field for studying diffuse X-ray emission and supernova remnants (SNRs). With the launch of eROSITA onboard the SRG platform in 2019, we are now able to study these sources in full. Many of the SNRs in the vicinity are suspected to be very old remnants, which are severely understudied in X-rays due to numerous observational challenges. In addition, the identification of new faint large SNRs might help to resolve the long-standing discrepancy between the observed and expected numbers of Galactic SNRs. We performed a detailed X-ray spectral analysis of the entire diffuse structure and a detailed background analysis of the vicinity. We also made use of multi-wavelength data to better understand the morphology and to constrain the distances to the different sources. We estimated the plasma properties of the sources and calculated a grid of model SNRs to determine the individual SNR properties. Most of the diffuse plasma of the Monogem Ring SNR is well described by a single non-equilibrium ionization (NEI) component with an average temperature of $kT = 0.14 \pm 0.03$ keV. We obtain an age of $\approx 1.2\cdot 10^5$ yr for the Monogem Ring, consistent with PSR B0656+14. In the south-east, we found evidence for a hotter second plasma component and a possible new SNR candidate at $\approx 300$ pc, with the new candidate having an age of $\approx 50,000$ yr. We were also able to improve on previous studies of the more distant Monoceros Loop and PKS 0646+06 SNRs. We obtained significantly higher temperatures than previous studies and, for PKS 0646+06, a much lower estimated age of the SNR. We also found a new SNR candidate, G190.4+12.5, which is most likely located at $D > 1.5$ kpc, expanding into a low-density medium at a large distance from the Galactic plane, with an estimated age of $40,000$-$60,000$ yr.
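The quoted ages are of the order set by standard Sedov-Taylor relations. As a rough illustration only (not the grid of model SNRs used in the analysis), one can invert the shock-temperature relation $T_s = (3/16)\,\mu m_{\mathrm{H}} v_s^2 / k_B$ together with the Sedov-phase relation $v_s = (2/5)R/t$ for an assumed remnant radius; the 66 pc radius below is an illustrative value for a large remnant at $\approx 300$ pc, not a measurement from the paper:

```python
import math

# Physical constants (cgs)
M_H = 1.6726e-24   # hydrogen mass, g
PC = 3.0857e18     # parsec, cm
YR = 3.156e7       # year, s
KEV = 1.602e-9     # keV in erg

def sedov_age(kT_keV, radius_pc, mu=0.6):
    """Sedov-phase age estimate from shock temperature and radius.

    Uses T_s = (3/16) * mu * m_H * v_s^2 / k_B (so k_B*T_s in erg equals
    kT_keV * KEV) and, in the Sedov phase, v_s = (2/5) * R / t.
    """
    v_shock = math.sqrt(16.0 * kT_keV * KEV / (3.0 * mu * M_H))  # cm/s
    t = 2.0 * radius_pc * PC / (5.0 * v_shock)                   # s
    return t / YR

# Illustrative numbers: kT = 0.14 keV from the fit; a 66 pc radius is an
# assumed value for a large nearby remnant, not a result of the paper.
age = sedov_age(0.14, 66.0)
print(f"Sedov age ~ {age:.2e} yr")  # order 10^5 yr
```

With these inputs the estimate lands at several times $10^4$ yr, the same order of magnitude as the $\approx 1.2\cdot 10^5$ yr quoted above; the detailed model grid accounts for effects this toy formula ignores.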
The IceCube neutrino telescope at the South Pole has measured the atmospheric muon neutrino spectrum as a function of zenith angle and energy in the approximate 320 GeV to 20 TeV range, to search for the oscillation signatures of light sterile neutrinos. No evidence for anomalous $\nu_\mu$ or $\bar{\nu}_\mu$ disappearance is observed in either of two independently developed analyses, each using one year of atmospheric neutrino data. New exclusion limits are placed on the parameter space of the 3+1 model, in which muon antineutrinos would experience a strong MSW-resonant oscillation. The exclusion limits extend to $\sin^2 2\theta_{24} \leq 0.02$ at $\Delta m^2 \sim 0.3\ \mathrm{eV}^2$ at the 90\% confidence level. The allowed region from global analysis of appearance experiments, including LSND and MiniBooNE, is excluded at approximately the 99\% confidence level for the global best fit value of $|U_{e4}|^2$.
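For intuition, the disappearance signature can be sketched with the two-flavor vacuum survival probability $P = 1 - \sin^2 2\theta_{24}\,\sin^2(1.27\,\Delta m^2 L/E)$. This toy formula omits the matter effects responsible for the MSW resonance that drives the actual antineutrino sensitivity, so it only illustrates the scale of the effect:

```python
import math

def survival_prob(sin2_2theta, dm2_ev2, L_km, E_GeV):
    """Two-flavor vacuum approximation to nu_mu survival:
    P = 1 - sin^2(2*theta) * sin^2(1.27 * dm2[eV^2] * L[km] / E[GeV]).
    Matter (MSW) effects are deliberately ignored here."""
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2_ev2 * L_km / E_GeV) ** 2

# Earth-diameter baseline (~12,700 km) at 1 TeV, for the benchmark point
# sin^2(2*theta_24) = 0.02, dm2 = 0.3 eV^2 excluded above
p = survival_prob(0.02, 0.3, 1.27e4, 1000.0)
print(f"P(nu_mu -> nu_mu) = {p:.4f}")
```

In vacuum the depletion stays at the percent level; it is the matter-resonant enhancement for antineutrinos that makes TeV atmospheric data sensitive to such small mixings.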
We study the ground-state phase diagram of the spin-$1/2$ Kitaev-Heisenberg model on the bilayer honeycomb lattice with large-scale tensor network calculations based on the infinite projected entangled pair state technique as well as high-order series expansions. We find that beyond various magnetically ordered phases, including ferromagnetic, zigzag, antiferromagnetic (AFM) and stripy states, two extended quantum spin liquid phases arise in the proximity of the Kitaev limit. While these ordered phases also appear in the monolayer Kitaev-Heisenberg model, our results further show that a valence bond solid state emerges in a relatively narrow range of parameter space between the AFM and stripy phases, which can be adiabatically connected to isolated Heisenberg dimers. Our results highlight the importance of considering interlayer interactions on the emergence of novel quantum phases in the bilayer Kitaev materials.
A bounded operator $T$ on a separable, complex Hilbert space is said to be odd symmetric if $I^*T^tI=T$ where $I$ is a real unitary satisfying $I^2=-1$ and $T^t$ denotes the transpose of $T$. It is proved that such an operator can always be factorized as $T=I^*A^tIA$ with some operator $A$. This generalizes a result of Hua and Siegel for matrices. As application it is proved that the set of odd symmetric Fredholm operators has two connected components labelled by a $Z_2$-index given by the parity of the dimension of the kernel of $T$. This recovers a result of Atiyah and Singer. Two examples of $Z_2$-valued index theorems are provided, one being a version of the Noether-Gohberg-Krein theorem with symmetries and the other an application to topological insulators.
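The defining relation and the factorization are easy to check numerically in finite (even) dimensions. A minimal sketch, with matrices standing in for bounded operators and the standard symplectic form as one choice of real unitary with $I^2=-1$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # dimension must be even for a real unitary with I^2 = -1 to exist

# Standard symplectic form: real, unitary, and I2 @ I2 = -identity
half = n // 2
I2 = np.block([[np.zeros((half, half)), -np.eye(half)],
               [np.eye(half), np.zeros((half, half))]])
assert np.allclose(I2 @ I2, -np.eye(n))

# Any T of the form I^* A^t I A is odd symmetric: I^* T^t I = T
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
T = I2.conj().T @ A.T @ I2 @ A
assert np.allclose(I2.conj().T @ T.T @ I2, T)
print("I^* T^t I = T holds for T = I^* A^t I A")
```

The nontrivial direction of the theorem is the converse: every odd symmetric $T$ admits such a factorization, which the finite-dimensional check above does not prove.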
Coronary CT angiography (CCTA) has established its role as a non-invasive modality for the diagnosis of coronary artery disease (CAD). The CAD-Reporting and Data System (CAD-RADS) has been developed to standardize communication and aid in decision making based on CCTA findings. The CAD-RADS score is determined by manual assessment of all coronary vessels and the grading of lesions within the coronary artery tree. We propose a bottom-up approach for fully-automated prediction of this score using deep learning operating on a segment-wise representation of the coronary arteries. The method relies solely on a prior fully-automated centerline extraction and segment labeling and predicts the segment-wise stenosis degree and the overall calcification grade as auxiliary tasks in a multi-task learning setup. We evaluate our approach on a data collection consisting of 2,867 patients. On the task of identifying patients with a CAD-RADS score indicating the need for further invasive investigation, our approach reaches an area under the curve (AUC) of 0.923, and an AUC of 0.914 for determining whether the patient suffers from CAD. This level of performance enables our approach to be used in a fully-automated screening setup or to assist diagnostic CCTA reading, especially due to its neural architecture design, which allows comprehensive predictions.
We present a general, systematic, and efficient method for decomposing any given exponential operator of bosonic mode operators, describing an arbitrary multi-mode Hamiltonian evolution, into a set of universal unitary gates. Although our approach is mainly oriented towards continuous-variable quantum computation, it may be used more generally whenever quantum states are to be transformed deterministically, e.g. in quantum control, discrete-variable quantum computation, or Hamiltonian simulation. We illustrate our scheme by presenting decompositions for various nonlinear Hamiltonians including quartic Kerr interactions. Finally, we conclude with two potential experiments utilizing offline-prepared optical cubic states and homodyne detections, in which quantum information is processed optically or in an atomic memory using quadratic light-atom interactions.
We study a variant of the dynamical optimal transport problem in which the energy to be minimised is modulated by the covariance matrix of the distribution. Such transport metrics arise naturally in mean-field limits of certain ensemble Kalman methods for solving inverse problems. We show that the transport problem splits into two coupled minimization problems: one for the evolution of the mean and covariance of the interpolating curve and one for its shape. The latter consists in minimising the usual Wasserstein length under the constraint of maintaining fixed mean and covariance along the interpolation. We analyse the geometry induced by this modulated transport distance on the space of probabilities as well as the dynamics of the associated gradient flows. These exhibit better convergence properties than the classical Wasserstein metric, with exponential convergence rates independent of the Gaussian target. On the level of the gradient flows, a similar splitting into the evolution of the moments and the shape of the distribution can be observed.
We present the publicly available model \textsc{reltrans} that calculates the light-crossing delays and energy shifts experienced by X-ray photons originally emitted close to the black hole when they reflect from the accretion disk and are scattered into our line-of-sight, accounting for all general relativistic effects. Our model is fast and flexible enough to be simultaneously fit to the observed energy-dependent cross-spectrum for a large range of Fourier frequencies, as well as to the time-averaged spectrum. This not only enables better geometric constraints than modelling only the relativistically broadened reflection features in the time-averaged spectrum, but also enables constraints on the mass of supermassive black holes in active galactic nuclei and stellar-mass black holes in X-ray binaries. We include a self-consistently calculated radial profile of the disk ionization parameter and properly account for the effect that the telescope response has on the predicted time lags. We find that a number of previous spectral analyses have measured artificially low source heights due to not accounting for the former effect and that timing analyses have been affected by the latter. In particular, the magnitude of the soft lags in active galactic nuclei may have been under-estimated, and the magnitude of lags attributed to thermal reverberation in X-ray binaries may have been over-estimated. We fit \textsc{reltrans} to the lag-energy spectrum of the Seyfert galaxy Mrk 335, resulting in a best fitting black hole mass that is smaller than previous optical reverberation measurements ($\sim 7$ million compared with $\sim 14$-$26$ million $M_\odot$).
X-ray reflection spectroscopy is a promising technique for testing general relativity in the strong field regime, as it can be used to test the Kerr black hole hypothesis. In this context, the parametrically deformed black hole metrics proposed by Konoplya, Rezzolla \& Zhidenko (Phys. Rev. D93, 064015, 2016) form an important class of non-Kerr black holes. We implement this class of black hole metrics in \textsc{relxill\_nk}, which is a framework we have developed for testing for non-Kerr black holes using X-ray reflection spectroscopy. We perform a qualitative analysis of the effect of the leading order strong-field deformation parameters on typical observables like the innermost stable circular orbits and the reflection spectra. We also present the first X-ray constraints on some of the deformation parameters of this metric, using \textit{Suzaku} data from the supermassive black hole in Ark~564, and compare them with those obtained (or expected) from other observational techniques like gravitational waves and black hole imaging.
We give an overview of the SImulation of X-ray TElescopes (SIXTE) software package, a generic, mission-independent Monte Carlo simulation toolkit for X-ray astronomical instrumentation. The package is based on a modular approach for the source definition, the description of the optics, and the detector type such that new missions can be easily implemented. The targets to be simulated are stored in a flexible input format called SIMPUT. Based on this source definition, a sample of photons is produced and then propagated through the optics. In order to model the detection process, the software toolkit contains modules for various detector types, ranging from proportional counters and Si-based detectors to more complex descriptions like transition edge sensor (TES) devices. The implementation of characteristic detector effects and a detailed modeling of the read-out process allow for representative simulations and therefore enable the analysis of characteristic features, such as pile-up, and their impact on observations. We present an overview of the implementation of SIXTE from the input source, the imaging, and the detection process, highlighting the modular approach taken by the SIXTE software package. In order to demonstrate the capabilities of the simulation software, we present a selection of representative applications, including the all-sky survey of eROSITA and a study of pile-up effects comparing the currently operating XMM-Newton with the planned Athena-WFI instrument. A simulation of a galaxy cluster with the Athena X-IFU shows the capability of SIXTE to predict the expected performance of an observation for a complex source with a spatially varying spectrum and our current knowledge of the future instrument.
Recently the problem of Unambiguous State Discrimination (USD) of mixed quantum states has attracted much attention. So far, bounds on the optimum success probability have been derived [1]. For two mixed states they are given in terms of the fidelity. Here we give tighter bounds as well as necessary and sufficient conditions for two mixed states to reach these bounds. Moreover we construct the corresponding optimal measurement strategies. With this result, we provide analytical solutions for unambiguous discrimination of a class of generic mixed states. This goes beyond known results which are all reducible to some pure state case. Additionally, we show that examples exist where the bounds cannot be reached.
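As a baseline for the mixed-state bounds discussed above, the known pure-state result (the Ivanovic-Dieks-Peres bound) gives, for two equally likely pure states, an optimal unambiguous-discrimination success probability of $1-|\langle\psi_1|\psi_2\rangle|$. A minimal numerical sketch of that baseline:

```python
import numpy as np

def usd_success_pure(psi1, psi2):
    """Optimal unambiguous-discrimination success probability for two
    equally likely, normalized pure states (Ivanovic-Dieks-Peres bound):
    P_opt = 1 - |<psi1|psi2>|."""
    overlap = abs(np.vdot(psi1, psi2))  # vdot conjugates the first arg
    return 1.0 - overlap

psi1 = np.array([1.0, 0.0])
theta = np.pi / 6
psi2 = np.array([np.cos(theta), np.sin(theta)])
print(f"P_opt = {usd_success_pure(psi1, psi2):.4f}")  # 1 - cos(pi/6) ~ 0.1340
```

The mixed-state bounds of the abstract are stated in terms of the fidelity instead of a simple overlap, and are reached only under the necessary and sufficient conditions derived there.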
The collective behavior of ensembles of atoms has been studied in-depth since the seminal paper of Dicke [R. H. Dicke, Phys. Rev. 93, 99 (1954)], where he demonstrated that a group of emitters in collective states is able to radiate with increased intensity and modified decay rates in particular directions, a phenomenon which he called superradiance. Here, we show that the fundamental setup of two atoms coupled to a single-mode cavity can distinctly exceed the free-space superradiant behavior, a phenomenon which we call hyperradiance. The effect is accompanied by strong quantum fluctuations and surprisingly arises for atoms radiating out-of-phase, a supposedly non-ideal condition where one expects subradiance. We are able to explain the onset of hyperradiance in a transparent way by a photon cascade taking place among manifolds of Dicke states with different photon numbers under particular out-of-phase coupling conditions. The theoretical results can be realized with current technology and thus should stimulate future experiments.
We study the \textit{entanglement contour} and \textit{partial entanglement entropy} (PEE) in quantum field theories in 3 and higher dimensions. The entanglement entropy is evaluated from a certain limit of the PEE with a geometric regulator. In the context of the \textit{entanglement contour}, we classify the geometric regulators and study how they differ from UV regulators. Furthermore, for spherical regions in conformal field theories (CFTs) we find the exact relation between the UV and geometric cutoffs, which clarifies some subtle points in the previous literature. We clarify a subtle point of the additive linear combination (ALC) proposal for PEE in higher dimensions: the subset entanglement entropies in the \textit{ALC proposal} should all be evaluated as a limit of the PEE while excluding a fixed class of local short-distance correlations. Unlike in two-dimensional configurations, naively plugging in the entanglement entropy calculated with a UV cutoff spoils the validity of the \textit{ALC proposal}. We derive the \textit{entanglement contour} function for spherical regions, annuli, and spherical shells in the vacuum state of general-dimensional CFTs on a hyperplane.
During the last few years, the phase diagram of the large N Gross-Neveu model in 1+1 dimensions at finite temperature and chemical potential has undergone a major revision. Here we present a streamlined account of this development, collecting the most important results. Quasi-one-dimensional condensed matter systems like conducting polymers provide real physical systems which can be approximately described by the Gross-Neveu model and have played some role in establishing its phase structure. The kink-antikink phase found at low temperatures is closely related to inhomogeneous superconductors in the Larkin-Ovchinnikov-Fulde-Ferrell phase. With the complete phase diagram at hand, the Gross-Neveu model can now serve as a firm testing ground for new algorithms and theoretical ideas.
We report nonclassical aspects of the collective behaviour of two atoms in a cavity by investigating the photon statistics and photon distribution in a very broad domain of parameters. Starting with the dynamics of two atoms radiating in phase into the cavity, we study the photon statistics for arbitrary interatomic phases as revealed by the second-order intensity correlation function at zero time $g^{(2)}(0)$ and the Mandel $Q$ parameter. We find that the light field can be tuned from antibunched to (super-)bunched as well as nonclassical to classical behaviour by merely modifying the atomic position. The highest nonclassicality in the sense of the smallest $Q$ parameter is found when spontaneous emission, cavity decay, coherent pumping, and atom-cavity coupling are of comparable magnitude. We introduce a quantum version of the negative binomial distribution with its parameters directly related to $Q$ and $g^{(2)}(0)$ and discuss its range of applicability. We also examine the Klyshko parameter which highlights the nonclassicality of the photon distribution.
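Both diagnostics can be computed directly from any photon-number distribution: $Q = (\langle(\Delta n)^2\rangle - \langle n\rangle)/\langle n\rangle$ and $g^{(2)}(0) = \langle n(n-1)\rangle/\langle n\rangle^2$, with $Q<0$ or $g^{(2)}(0)<1$ signalling nonclassical light. A minimal sketch with illustrative distributions, not the cavity model of the paper:

```python
import numpy as np

def photon_stats(p):
    """Mandel Q and g^(2)(0) from a photon-number distribution p[n]:
        Q = (<(dn)^2> - <n>) / <n>,   g2(0) = <n(n-1)> / <n>^2
    """
    n = np.arange(len(p))
    mean = float(np.sum(n * p))
    var = float(np.sum((n - mean) ** 2 * p))
    g2 = float(np.sum(n * (n - 1) * p)) / mean ** 2
    return (var - mean) / mean, g2

# Poissonian (coherent) light with mean photon number 2 (illustrative),
# built iteratively to avoid large factorials
mean_n = 2.0
pois = np.empty(60)
pois[0] = np.exp(-mean_n)
for k in range(1, 60):
    pois[k] = pois[k - 1] * mean_n / k

# Single-photon Fock state: maximally sub-Poissonian
fock1 = np.zeros(5)
fock1[1] = 1.0

print(photon_stats(pois))   # ~ (0.0, 1.0): Q = 0, g2(0) = 1
print(photon_stats(fock1))  # (-1.0, 0.0): antibunched, nonclassical
```

A coherent state sits exactly at the classical boundary ($Q=0$, $g^{(2)}(0)=1$), while the paper's two-atom cavity system can be steered on either side of it by the atomic positions.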
A systematic connection between QCD and nuclear few- and many-body properties in the form of the Effective Field Theory "without pions" is applied to $A\le 6$ nuclei to determine its range of applicability. We present results at next-to-leading order for the Tjon correlation and for a correlation between the singlet S-wave $^3$He-neutron scattering length and the triton binding energy. In the A=6 sector we performed leading-order calculations for the binding energy and the charge and matter radii of the halo nucleus $^6$He. Also at leading order, the doublet S-wave $^4$He-neutron phase shifts are compared with R-matrix data. These analyses provide evidence for a sufficiently fast convergence of the effective field theory; in particular, our results in $A\le 4$ predict an expansion parameter of about 1/3, and they converge to data within the predicted uncertainty band at this order. A properly adjusted three-body contact force, which we include together with the Coulomb interaction in all calculations, is found to correctly renormalize the pion-less theory at leading and next-to-leading order, i.e. the power counting does not require four-body forces at the respective orders.
The SU(2) flat connection on a 2D Riemann surface is shown to be related to generalized twisted geometry in 3D space with a cosmological constant. Various flat-connection quantities on the Riemann surface are mapped to geometrical quantities in discrete 3D space. We propose that the moduli space of SU(2) flat connections on the Riemann surface generalizes the phase space of twisted geometry in Loop Quantum Gravity to include the cosmological constant.
The joint JAXA/NASA ASTRO-H mission is the sixth in a series of highly successful X-ray missions developed by the Institute of Space and Astronautical Science (ISAS), with a planned launch in 2015. The ASTRO-H mission is equipped with a suite of sensitive instruments with the highest energy resolution ever achieved at E > 3 keV and a wide energy range spanning four decades in energy from soft X-rays to gamma-rays. The simultaneous broad band pass, coupled with the high spectral resolution of Delta E < 7 eV of the micro-calorimeter, will enable a wide variety of important science themes to be pursued. ASTRO-H is expected to provide breakthrough results in scientific areas as diverse as the large-scale structure of the Universe and its evolution, the behavior of matter in the gravitational strong field regime, the physical conditions in sites of cosmic-ray acceleration, and the distribution of dark matter in galaxy clusters at different redshifts.
In this thesis, we study the downlink multiuser scheduling and power allocation problem for systems with simultaneous wireless information and power transfer (SWIPT). In the first part of the thesis, we focus on multiuser scheduling. We design optimal scheduling algorithms that maximize the long-term average system throughput under different fairness requirements, such as proportional fairness and equal throughput fairness. In particular, the algorithm designs are formulated as non-convex optimization problems which take into account the minimum required average sum harvested energy in the system. The problems are solved by using convex optimization techniques, and the proposed optimization framework reveals the tradeoff between the long-term average system throughput and the sum harvested energy in multiuser systems with fairness constraints. Simulation results demonstrate that substantial performance gains can be achieved by the proposed optimization framework compared to existing suboptimal scheduling algorithms from the literature. In the second part of the thesis, we investigate the joint user scheduling and power allocation algorithm design for SWIPT systems. The algorithm design is formulated as a non-convex optimization problem which maximizes the achievable rate subject to a minimum required average power transfer. Subsequently, the non-convex optimization problem is reformulated using the big-M method, which allows it to be solved optimally. Furthermore, we show that joint power allocation and user scheduling is an efficient way to enlarge the feasible trade-off region for improving the system performance in terms of achievable data rate and harvested energy.
In the classical theory of electromagnetism, the permittivity and the permeability of free space are constants whose magnitudes do not seem to possess any deeper physical meaning. By replacing the free space of classical physics with the quantum notion of the vacuum, we speculate that the values of the aforementioned constants could arise from the polarization and magnetization of virtual pairs in vacuum. A classical dispersion model with parameters determined by quantum and particle physics is employed to estimate their values. We find the correct orders of magnitude. Additionally, our simple assumptions yield an independent estimate for the number of charged elementary particles based on the known values of the permittivity and the permeability, and for the volume of a virtual pair. Such interpretation would provide an intriguing connection between the celebrated theory of classical electromagnetism and the quantum theory in the weak field limit.