SOFIA Science Center, USRA
We introduce TensorFlow Quantum (TFQ), an open source library for the rapid prototyping of hybrid quantum-classical models for classical or quantum data. This framework offers high-level abstractions for the design and training of both discriminative and generative quantum models under TensorFlow and supports high-performance quantum circuit simulators. We provide an overview of the software architecture and building blocks through several examples and review the theory of hybrid quantum-classical neural networks. We illustrate TFQ functionalities via several basic applications including supervised learning for quantum classification, quantum control, simulating noisy quantum circuits, and quantum approximate optimization. Moreover, we demonstrate how one can apply TFQ to tackle advanced quantum learning tasks including meta-learning, layerwise learning, Hamiltonian learning, sampling thermal states, variational quantum eigensolvers, classification of quantum phase transitions, generative adversarial networks, and reinforcement learning. We hope this framework provides the necessary tools for the quantum computing and machine learning research communities to explore models of both natural and artificial quantum systems, and ultimately discover new quantum algorithms which could potentially yield a quantum advantage.
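The hybrid quantum-classical models TFQ targets are trained by a classical optimizer differentiating through quantum expectation values. A minimal pure-Python sketch of that loop, using the parameter-shift rule on a single-qubit toy expectation — the `expectation_z` function, learning rate, and iteration count are illustrative stand-ins, not the TFQ API:

```python
import math

# Toy "quantum" expectation: for a single qubit prepared with RX(theta),
# the expectation value of Z is cos(theta). On real hardware or in TFQ,
# a circuit execution would replace this closed-form function.
def expectation_z(theta):
    return math.cos(theta)

# Parameter-shift rule: the exact gradient of the expectation value with
# respect to theta, obtained from two extra circuit evaluations.
def parameter_shift_grad(theta, shift=math.pi / 2):
    return 0.5 * (expectation_z(theta + shift) - expectation_z(theta - shift))

# Classical gradient descent on the quantum expectation value, as an
# optimizer would do when training a variational circuit.
theta = 0.1
for _ in range(200):
    theta -= 0.2 * parameter_shift_grad(theta)
# theta converges to pi, where <Z> reaches its minimum of -1.
```

The same structure — a quantum evaluation wrapped in a classical training loop — underlies both the discriminative and generative models the library supports.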
The Gamow Explorer will use Gamma Ray Bursts (GRBs) to: 1) probe the high-redshift universe (z > 6), when the first stars were born, galaxies formed, and hydrogen was reionized; and 2) enable multi-messenger astrophysics by rapidly identifying electromagnetic (IR/optical/X-ray) counterparts to gravitational wave (GW) events. GRBs have been detected out to z ~ 9, and their afterglows are bright beacons lasting a few days that can be used to observe the spectral fingerprints of the host galaxy and intergalactic medium, mapping the period of reionization and early metal enrichment. The Gamow Explorer is optimized to quickly identify high-z events and trigger follow-up observations with the James Webb Space Telescope (JWST) and large ground-based telescopes. A wide-field-of-view Lobster Eye X-ray Telescope (LEXT) will search for GRBs and locate them with arc-minute precision. When a GRB is detected, the rapidly slewing spacecraft will point the five-channel Photo-z Infra-Red Telescope (PIRT) at it to identify high-redshift (z > 6) long GRBs within 100 s and send an alert within 1000 s of the GRB trigger. An L2 orbit provides > 95% observing efficiency, with pointing optimized for follow-up by JWST and ground observatories. The predicted Gamow Explorer high-z detection rate is > 10 times that of the Neil Gehrels Swift Observatory. The instrument and mission capabilities also enable rapid identification of short GRBs and their afterglows associated with GW events. The Gamow Explorer will be proposed to the 2021 NASA MIDEX call and, if approved, launched in 2028.
Too often, quantum computer scientists seek to create new algorithms out of whole cloth when there are extensive, well-optimized classical algorithms that can be generalized wholesale. At the same time, one may seek to maintain classical advantages in performance and runtime bounds while enabling potential quantum improvement. Hybrid quantum algorithms tap into this potential. Here we explore a class of hybrid quantum algorithms called Iterative Quantum Algorithms (IQAs) that are closely related to classical greedy or local search algorithms, employing a structure in which the quantum computer provides information that leads to a simplified problem for future iterations. Specifically, we extend these algorithms beyond past results, which considered primarily quadratic problems, to arbitrary k-local Hamiltonians, proposing a general framework that incorporates logical inference in a fundamental way. As an application, we develop a hybrid quantum version of the well-known classical Davis-Putnam-Logemann-Loveland (DPLL) algorithm for satisfiability problems, which embeds IQAs within a complete backtracking-based tree search framework. Our results also provide a general framework for handling problems with hard constraints in IQAs. We further exhibit limiting cases in which the algorithms reduce to classical algorithms, and provide evidence for regimes of quantum improvement.
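For reference, the classical DPLL loop being hybridized alternates unit propagation with branching and backtracking. A minimal sketch — the clause encoding and the naive variable-selection rule are illustrative choices, not the paper's construction, and the branching step is where an IQA could instead propose which variable to fix:

```python
# Minimal classical DPLL. Clauses are lists of nonzero ints; a negative
# literal -v means "variable v is False". Returns a satisfying
# assignment as a dict, or None if the formula is unsatisfiable.
def dpll(clauses, assignment=None):
    assignment = dict(assignment or {})
    changed = True
    while changed:                     # unit propagation to a fixed point
        changed = False
        simplified = []
        for clause in clauses:
            lits, satisfied = [], False
            for lit in clause:
                val = assignment.get(abs(lit))
                if val is None:
                    lits.append(lit)
                elif (lit > 0) == val:
                    satisfied = True
                    break
            if satisfied:
                continue
            if not lits:
                return None            # empty clause: conflict
            if len(lits) == 1:         # unit clause forces its literal
                assignment[abs(lits[0])] = lits[0] > 0
                changed = True
            else:
                simplified.append(lits)
        clauses = simplified
    if not clauses:
        return assignment              # all clauses satisfied
    # Branch on an unassigned variable; a quantum subroutine could
    # supply this choice in the hybrid version.
    var = abs(clauses[0][0])
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None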
In the rapidly expanding field of quantum computing, one key aspect of maintaining ongoing progress is ensuring that early-career scientists interested in the field receive appropriate guidance and opportunities to advance their work, and in turn that institutions and enterprises with a stake in quantum computing have access to a qualified pool of talent. Internship programs at the graduate level are the perfect vehicle to achieve this. In this paper, we review the trajectory of the USRA Feynman Quantum Academy Internship Program over the last 8 years, placing it in the context of the current push to prepare the quantum workforce of the future, and highlighting the caliber of the work it has produced.
The quantum approximate optimization algorithm (QAOA) is a general-purpose algorithm for combinatorial optimization. In this paper, we analyze the performance of the QAOA on a statistical estimation problem, namely, the spiked tensor model, which exhibits a statistical-computational gap classically. We prove that the weak recovery threshold of 1-step QAOA matches that of 1-step tensor power iteration. Additional heuristic calculations suggest that the weak recovery threshold of $p$-step QAOA matches that of $p$-step tensor power iteration when $p$ is a fixed constant. This further implies that multi-step QAOA with tensor unfolding could achieve, but not surpass, the classical computation threshold $\Theta(n^{(q-2)/4})$ for spiked $q$-tensors. Meanwhile, we characterize the asymptotic overlap distribution for $p$-step QAOA, finding an intriguing sine-Gaussian law verified through simulations. For some $p$ and $q$, the QAOA attains an overlap that is larger by a constant factor than the tensor power iteration overlap. Of independent interest, our proof techniques employ the Fourier transform to handle difficult combinatorial sums, a novel approach differing from prior QAOA analyses on spin-glass models without planted structure.
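The tensor power iteration used as the classical baseline above is easy to state concretely. A NumPy toy for $q=3$ — the problem size, signal strength, and deterministic warm start (fixed initial overlap 0.2) are illustrative choices to keep the sketch small and reproducible, not parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, snr = 50, 20.0                    # toy size; snr well above n**(1/4) ~ 2.7

# Spiked 3-tensor: Y = snr * v⊗v⊗v + noise / sqrt(n).
v = rng.standard_normal(n)
v /= np.linalg.norm(v)
Y = snr * np.einsum('i,j,k->ijk', v, v, v) \
    + rng.standard_normal((n, n, n)) / np.sqrt(n)

# One step of tensor power iteration: x <- Y(., x, x), then normalize.
def power_step(Y, x):
    x = np.einsum('ijk,j,k->i', Y, x, x)
    return x / np.linalg.norm(x)

# Warm start with overlap exactly 0.2, to keep the demo deterministic.
x = rng.standard_normal(n)
x -= (v @ x) * v                     # orthogonalize against the spike
x /= np.linalg.norm(x)
x = 0.2 * v + np.sqrt(1 - 0.2 ** 2) * x

for _ in range(15):
    x = power_step(Y, x)

overlap = abs(v @ x)                 # near 1 when the spike is recovered
```

Below the recovery threshold the signal term is drowned out by the noise term and the overlap stays near zero; the paper's analysis locates where that transition occurs for both power iteration and QAOA.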
Rotation curves of galaxies probe their total mass distributions, including dark matter. Dwarf galaxies are excellent systems to investigate the dark matter density distribution, as they tend to have larger fractions of dark matter compared to higher mass systems. The core-cusp problem describes the discrepancy found in the slope of the dark matter density profile in the centres of galaxies ($\beta^*$) between observations of dwarf galaxies (shallower cores) and dark matter-only simulations (steeper cusps). We investigate $\beta^*$ in six nearby spiral dwarf galaxies for which high-resolution CO $J=1{-}0$ data were obtained with ALMA. We derive rotation curves and decompose the mass profile of the dark matter using our CO rotation curves as a tracer of the total potential and 4.5 $\mu$m photometry to define the stellar mass distribution. We find $\langle\beta^*\rangle = 0.6$ with a standard deviation of $\pm0.1$ among the galaxies in this sample, in agreement with previous measurements in this mass range. The galaxies studied are on the high stellar mass end of dwarf galaxies and have cuspier profiles than lower mass dwarfs, in agreement with other observations. When the same definition of the slope is used, we observe steeper slopes than predicted by the FIRE and NIHAO simulations. This may signal that these relatively massive dwarfs underwent stronger gas inflows toward their centres than predicted by these simulations, that these simulations over-predict the frequency of accretion or feedback events, or that a combination of these or other effects are at work.
Randomized benchmarking (RB) is a powerful method for determining the error rate of experimental quantum gates. Traditional RB, however, is restricted to gate sets, such as the Clifford group, that form a unitary 2-design. The recently introduced character RB can benchmark more general gates using techniques from representation theory; up to now, however, this method has only been applied to "multiplicity-free" groups, a mathematical restriction on the irreducible representations that appear. In this paper, we extend the original character RB derivation to explicitly treat non-multiplicity-free groups, and derive several applications. First, we derive a rigorous version of the recently introduced subspace RB, which seeks to characterize a set of one- and two-qubit gates that are symmetric under SWAP. Second, we develop a new leakage RB protocol that applies to more general groups of gates. Finally, we derive a scalable RB protocol for the matchgate group, a group that, like the Clifford group, is non-universal but becomes universal with the addition of one further gate; this is one of the few examples of a scalable non-Clifford RB protocol. In all three cases, compared to existing theories, our method requires similar resources but either provides a more accurate estimate of gate fidelity or applies to a more general group of gates. In conclusion, we discuss the potential, and the challenges, of using non-multiplicity-free character RB to develop new classes of scalable RB protocols and methods of characterizing specific gates.
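Standard RB, which character RB generalizes, extracts an error rate from the exponential decay of sequence survival probability with sequence length, F(m) = A p^m + B. A toy sketch of that extraction — the depolarizing model, shot counts, and the ratio-based estimator (rather than a full least-squares fit of A, B, and p) are illustrative simplifications:

```python
import random

random.seed(1)

# Ideal RB decay curve under a depolarizing model: F(m) = A * p**m + B,
# with p the depolarizing parameter per gate.
A, B, p_true = 0.5, 0.5, 0.98

def survival(m, shots=200000):
    # Simulate finite-shot sampling noise around the ideal decay curve.
    f = A * p_true ** m + B
    hits = sum(random.random() < f for _ in range(shots))
    return hits / shots

# Estimate p from the decay between two sequence lengths. A full RB
# analysis would fit A, B, and p jointly over many lengths instead.
p_est = ((survival(20) - B) / (survival(10) - B)) ** (1 / 10)
```

The average gate fidelity then follows from p through a dimension-dependent relation; the representation-theoretic machinery of character RB exists precisely to recover clean single-exponential decays like this one for gate sets that are not 2-designs.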
Although aviation accidents are rare, safety incidents occur more frequently and require careful analysis to detect and mitigate risks in a timely manner. Analyzing safety incidents using operational data and producing event-based explanations is invaluable to airline companies as well as to governing organizations such as the Federal Aviation Administration (FAA) in the United States. However, this task is challenging because of the complexity involved in mining multi-dimensional heterogeneous time series data, the lack of time-step-wise annotation of events in a flight, and the lack of scalable tools to perform analysis over a large number of events. In this work, we propose a precursor mining algorithm that identifies events in the multi-dimensional time series that are correlated with the safety incident. Precursors are valuable for systems health and safety monitoring and for explaining and forecasting safety incidents. Current methods suffer from poor scalability to high-dimensional time series data and are inefficient at capturing temporal behavior. We propose an approach combining multiple-instance learning (MIL) and deep recurrent neural networks (DRNNs) to take advantage of MIL's ability to learn from weakly supervised data and a DRNN's ability to model temporal behavior. We describe the algorithm, the data, the intuition behind taking a MIL approach, and a comparative analysis of the proposed algorithm against baseline models. We also discuss the application to a real-world aviation safety problem using data from a commercial airline, the model's abilities and shortcomings, and possible deployment directions.
We review the prospects to build quantum processors based on superconducting transmons and radiofrequency cavities for testing applications in the NISQ era. We identify engineering opportunities and challenges for implementation of algorithms in simulation, combinatorial optimization, and quantum machine learning in qudit-based quantum computers.
We have entered a new era where integral-field spectroscopic surveys of galaxies are sufficiently large to adequately sample large-scale structure over a cosmologically significant volume. This was the primary design goal of the SAMI Galaxy Survey. Here, in Data Release 3 (DR3), we release data for the full sample of 3068 unique galaxies observed. This includes the SAMI cluster sample of 888 unique galaxies for the first time. For each galaxy, there are two primary spectral cubes covering the blue (370-570nm) and red (630-740nm) optical wavelength ranges at spectral resolving powers of R=1808 and R=4304, respectively. For each primary cube, we also provide three spatially binned spectral cubes and a set of standardized aperture spectra. For each galaxy, we include complete 2D maps from parameterized fitting to the emission-line and absorption-line spectral data. These maps provide information on the gas ionization and kinematics, stellar kinematics and populations, and more. All data are available online through Australian Astronomical Optics (AAO) Data Central.
We are now on a clear trajectory for improvements in exoplanet observations that will revolutionize our ability to characterize their atmospheric structure, composition, and circulation, from gas giants to rocky planets. However, exoplanet atmospheric models capable of interpreting the upcoming observations are often limited by insufficiencies in the laboratory and theoretical data that serve as critical inputs to atmospheric physical and chemical tools. Here we provide an up-to-date and condensed description of areas where laboratory and/or ab initio investigations could fill critical gaps in our ability to model exoplanet atmospheric opacities, clouds, and chemistry, building off a larger 2016 white paper, and endorsed by the NAS Exoplanet Science Strategy report. Now is the ideal time for progress in these areas, but this progress requires better access to, understanding of, and training in the production of spectroscopic data, as well as better insight into chemical reaction kinetics, both thermal and radiation-induced, over a broad range of temperatures. Given that most published efforts have emphasized relatively Earth-like conditions, we can expect significant and enlightening discoveries as emphasis moves to the exotic atmospheres of exoplanets.
The search for life in the universe is a major theme of astronomy and astrophysics for the next decade. Searches for technosignatures are complementary to searches for biosignatures, in that they offer an alternative path to discovery, and address the question of whether complex (i.e. technological) life exists elsewhere in the Galaxy. This approach has been endorsed in prior Decadal Reviews and National Academies reports, and yet the field still receives almost no federal support in the US. Because of this lack of support, searches for technosignatures, precisely the part of the search of greatest public interest, suffer from a very small pool of trained practitioners. A major source of this issue is institutional inertia at NASA, which avoids the topic as a result of decades-past political grandstanding, conflation of the effort with non-scientific topics such as UFOs, and confusion regarding the scope of the term "SETI." The Astro2020 Decadal Survey should address this issue by making the development of the field an explicit priority for the next decade. It should recommend that NASA and the NSF support training and curricular development in the field in a way that supports equity and diversity, and make explicit calls for proposals to fund searches for technosignatures.
We detect widespread [CII] 157.7 um emission from the inner 5 kpc of the active galaxy NGC 4258 with the SOFIA integral field spectrometer FIFI-LS. The emission is found associated with warm H2, distributed along and beyond the end of the southern jet, in a zone known to contain shock-excited optical filaments. It is also associated with soft X-ray hotspots, which are the counterparts of the `anomalous radio arms' of NGC 4258, and a 1 kpc-long filament on the minor axis of the galaxy which contains young star clusters. Palomar-CWI H-alpha integral field spectroscopy shows that the filament exhibits non-circular motions within NGC 4258. Many of the [CII] profiles are very broad, with the highest line width, 455 km/s, observed at the position of the southern jet bow-shock. Abnormally high ratios of L([CII])/L(FIR) and L([CII])/L(PAH7.7um) are found along and beyond the southern jet and in the X-ray hotspots. These are the same regions that exhibit unusually large intrinsic [CII] line widths. This suggests that the [CII] traces warm molecular gas in shocks and turbulence associated with the jet. We estimate that as much as 40% (3.8 x 10^39 erg/s) of the total [CII] luminosity from the inner 5 kpc of NGC 4258 arises in shocks and turbulence (< 1% of the bolometric luminosity of the active nucleus), the rest being consistent with [CII] excitation associated with star formation. We propose that the highly inclined jet is colliding with, and being deflected around, dense irregularities in a thick disk, leading to significant energy dissipation over a wide area of the galaxy.
The Large UV/Optical/Infrared Surveyor (LUVOIR) mission is one of four Decadal Survey Mission Concepts studied by NASA in preparation for the US National Academies' Astro2020 Decadal Survey. This observatory has the major goal of characterizing a wide range of exoplanets, including those that might be habitable -- or even inhabited. It would simultaneously enable a great leap forward in a broad range of astrophysics -- from the epoch of reionization, through galaxy formation and evolution, to star and planet formation. Powerful remote sensing observations of Solar System bodies will also be possible. This Interim Report on the LUVOIR study presents the scientific motivations and goals of the mission concept, the preliminary and partial engineering design, and technology development information.
We report the serendipitous discovery of an "Einstein Ring" in the optical band from Sloan Digital Sky Survey (SDSS) data, with four associated images of a background source. The lens galaxy appears to be a nearby dwarf spheroidal at a redshift of $0.0375\pm0.002$. The lensed quasar is at a redshift of $0.6842\pm0.0014$, and its multiple images are distributed almost $360^{\circ}$ around the lens, nearly along a ring of radius $\sim6.0''$. Single-component lens models require a galaxy mass of almost $10^{12}\,M_{\odot}$ within $6.0''$ of the lens center. With the available data we are unable to determine the exact positions, orientations, and fluxes of the quasar and the galaxy, though there appears to be evidence for a double or multiple merging image of the quasar. We have also detected strong radio and X-ray emission from this system, indicating that the ring system may be embedded in a group or cluster of galaxies. This unique ring, by virtue of the closeness of the lens galaxy, offers a possible probe of key issues such as the mass-to-light ratio of intrinsically faint galaxies and the existence of large-scale magnetic fields in elliptical galaxies.
The CONGEST and CONGEST-CLIQUE models have been carefully studied to represent situations where the communication bandwidth between processors in a network is severely limited: messages of only $O(\log n)$ bits each may be sent between processors in each round. The quantum versions of these models allow the processors instead to communicate and compute with quantum bits under the same bandwidth limitations. This leads to a natural research question: What problems can be solved more efficiently in these quantum models than in the classical ones? Building on existing work, we contribute to this question in two ways. First, we present two algorithms in the Quantum CONGEST-CLIQUE model of distributed computation that succeed with high probability: one for producing an approximately optimal Steiner tree, and one for producing an exact directed minimum spanning tree, each of which uses $\tilde{O}(n^{1/4})$ rounds of communication and $\tilde{O}(n^{9/4})$ messages, where $n$ is the number of nodes in the network. The algorithms thus achieve lower asymptotic round and message complexity than any known algorithms in the classical CONGEST-CLIQUE model. At a high level, we achieve these results by combining classical algorithmic frameworks with quantum subroutines; an existing framework for using a distributed version of Grover's search algorithm to accelerate triangle finding lies at the core of the asymptotic speedup. Second, we carefully characterize the constants and logarithmic factors involved in our algorithms and in related algorithms, which are otherwise commonly obscured by $\tilde{O}$ notation. This analysis shows that some improvements are needed to render both our and existing related quantum and classical algorithms practical, as their asymptotic speedups only help for very large values of $n$.
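The point about constants and logarithmic factors can be made concrete with a toy crossover calculation. All numbers here are hypothetical stand-ins, not the paper's measured constants: a quantum round count growing like $n^{1/4}$ but with large constant and polylog overhead only beats a clean classical $n^{1/3}$ at astronomically large $n$:

```python
import math

# Hypothetical round-count models, for illustration only. Classical
# CONGEST-CLIQUE ~ c_c * n**(1/3); quantum ~ c_q * n**(1/4) * log(n)**2
# with a much larger leading constant and polylog overhead.
def classical_rounds(n, c_c=1.0):
    return c_c * n ** (1 / 3)

def quantum_rounds(n, c_q=100.0):
    return c_q * n ** (1 / 4) * math.log(n) ** 2

# Double n until the asymptotically faster quantum model actually wins.
n = 2
while quantum_rounds(n) >= classical_rounds(n):
    n *= 2
# The crossover n is far beyond any realistic network size.
```

Under these (illustrative) constants the crossover lies beyond $10^{60}$ nodes, which is the sense in which hidden constants can render an asymptotic speedup impractical.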
We describe the simulation of dihedral gauge theories on digital quantum computers. The nonabelian discrete gauge group $D_N$ -- the dihedral group -- serves as an approximation to $U(1)\times\mathbb{Z}_2$ lattice gauge theory. In order to carry out such a lattice simulation, we detail the construction of efficient quantum circuits to realize basic primitives, including the nonabelian Fourier transform over $D_N$, the trace operation, and the group multiplication and inversion operations. For each case the required quantum resources scale linearly or as low-degree polynomials in $n=\log N$. We experimentally benchmark our gates on the Rigetti Aspen-9 quantum processor for the case of $D_4$. The fidelity of all $D_4$ gates was found to exceed $80\%$.
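The classical group arithmetic that the multiplication and inversion circuits implement coherently is simple to state. A sketch for $D_N$ in the standard presentation $\langle x, y \mid x^2 = y^N = 1,\; xyx = y^{-1}\rangle$ — the $(s, r)$ encoding below is one common convention, not necessarily the register layout used on the hardware:

```python
# Elements of the dihedral group D_N written as pairs (s, r): a
# reflection bit s in {0, 1} and a rotation r in Z_N, encoding x**s * y**r.
N = 4

def multiply(a, b, n=N):
    s1, r1 = a
    s2, r2 = b
    # Moving y**r1 past x conjugates the rotation: x * y**r * x = y**-r,
    # so the rotations add when s2 == 0 and subtract when s2 == 1.
    r = (r1 + r2) % n if s2 == 0 else (r2 - r1) % n
    return ((s1 + s2) % 2, r)

def inverse(a, n=N):
    s, r = a
    # Reflections are self-inverse: (x * y**r)**-1 = x * y**r.
    return (s, r % n) if s == 1 else (0, (-r) % n)

identity = (0, 0)
```

A quantum circuit for the multiplication primitive computes this same map reversibly on group-element registers; the $2N$ elements fit in $1 + \log_2 N$ qubits when $N$ is a power of two.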
21-cm HI4PI survey data are used to study the anomalous-velocity hydrogen gas associated with high-velocity cloud Complex M. These high-sensitivity, high-resolution, high-dynamic-range data show that many of the individual features, including MI, MIIa, and MIIb, are components of a long, arched filament that extends from about (l, b) = (105°, 53°) to (l, b) = (196°, 55°). Maps at different velocities, results from Gaussian analysis, and observations of associated high-energy emission make a compelling case that the MI cloud and the arched filament are physically interacting. If this is the case, we can use the distance to MI, 150 pc, as reported by Schmelz & Verschuur (2022), to set the distance to Complex M. The estimated mass of Complex M is then about 120 solar masses, and the energy implied by the observed line-of-sight velocity, -85 km/s, is 8.4 x 10^48 erg. Integrating over 4π steradians, the total energy for a spherically symmetric explosion is estimated to be 1.9 x 10^50 erg, well within the energy budget of a typical supernova.
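The quoted line-of-sight energy is a straightforward kinetic-energy estimate, which can be checked in CGS units (the solar-mass value and the rounding of the inputs are the only assumptions here; the agreement is to within that rounding):

```python
# Back-of-the-envelope check of the quoted energetics, in CGS units.
M_SUN = 1.989e33           # g
mass = 120 * M_SUN         # estimated mass of Complex M
v = 85e5                   # line-of-sight velocity, 85 km/s in cm/s

# Kinetic energy (1/2) m v^2: ~8.6e48 erg, consistent with the quoted
# 8.4e48 erg given rounding of the mass and velocity estimates.
kinetic = 0.5 * mass * v ** 2

# Ratio of the quoted spherically symmetric total to the line-of-sight
# value, i.e. the geometric correction factor implied by the abstract.
boost = 1.9e50 / 8.4e48    # roughly a factor of ~23
```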