The rapid advancement and deployment of AI systems have created an urgent need for standard safety-evaluation frameworks. This paper introduces AILuminate v1.0, the first comprehensive industry-standard benchmark for assessing AI-product risk and reliability. Its development employed an open process that included participants from multiple fields. The benchmark evaluates an AI system's resistance to prompts designed to elicit dangerous, illegal, or undesirable behavior in 12 hazard categories, including violent crimes, nonviolent crimes, sex-related crimes, child sexual exploitation, indiscriminate weapons, suicide and self-harm, intellectual property, privacy, defamation, hate, sexual content, and specialized advice (election, financial, health, legal). Our method incorporates a complete assessment standard, extensive prompt datasets, a novel evaluation framework, a grading and reporting system, and the technical as well as organizational infrastructure for long-term support and evolution. In particular, the benchmark employs an understandable five-tier grading scale (Poor to Excellent) and incorporates an innovative entropy-based system-response evaluation. In addition to unveiling the benchmark, this report also identifies limitations of our method and of building safety benchmarks generally, including evaluator uncertainty and the constraints of single-turn interactions. This work represents a crucial step toward establishing global standards for AI risk and reliability evaluation while acknowledging the need for continued development in areas such as multiturn interactions, multimodal understanding, coverage of additional languages, and emerging hazard categories. Our findings provide valuable insights for model developers, system integrators, and policymakers working to promote safer AI deployment.
The gravitational capture of a stellar-mass compact object (CO) by a supermassive black hole is a unique probe of gravity in the strong-field regime. Because of the large mass ratio, we call these sources extreme-mass ratio inspirals (EMRIs). In a similar manner, COs can be captured by intermediate-mass black holes in globular clusters or dwarf galaxies. The mass ratio in this case is lower, and hence we refer to the system as an intermediate-mass ratio inspiral (IMRI). Also, substellar objects such as brown dwarfs, with masses much smaller than the Sun's, can inspiral into supermassive black holes such as Sgr A* at our Galactic centre. In this case, the mass ratio is extremely large and, hence, we call the system an extremely large mass-ratio inspiral (XMRI). All of these sources of gravitational waves will provide us with a collection of snapshots of spacetime around a supermassive black hole, allowing a direct mapping of the warped spacetime around it: a live cartography of gravity in this extreme regime. E/I/XMRIs will be detected by future space-borne observatories such as LISA. No other probe has been conceived, planned, or even thought of that can deliver the science accessible with these inspirals. We discuss them from the viewpoint of relativistic astrophysics.
One of the main targets of the Laser Interferometer Space Antenna (LISA) is the detection of extreme mass-ratio inspirals (EMRIs) and extremely large mass-ratio inspirals (X-MRIs). Their orbits are expected to be highly eccentric and relativistic when entering the LISA band. Under these circumstances, the inspiral time-scale given by Peters' formula loses precision, and the shift of the last-stable orbit (LSO) caused by the massive black hole spin could influence the event-rate estimates. We re-derive EMRI and X-MRI event rates by implementing two different versions of a Kerr loss-cone angle that includes the shift in the LSO, and a corrected version of Peters' time-scale that accounts for eccentricity evolution, 1.5 post-Newtonian hereditary fluxes, and spin-orbit coupling. The main findings of our study are summarized as follows: (1) implementing a Kerr loss-cone changes the event rates by a factor ranging between 0.9 and 1.1; (2) the high-eccentricity limit of Peters' formula offers a reliable inspiral time-scale for EMRIs and X-MRIs, resulting in an event-rate estimate that deviates by a factor of about 0.9 to 3 when compared to event rates computed with the corrected version of Peters' time-scale and the usual loss-cone definition; (3) event-rate estimates for systems with a wide range of eccentricities should be revised. Peters' formula overestimates the inspiral rates of highly eccentric systems by a factor of about 8 to 30 compared to the corrected values. Moreover, for $e_0 \lesssim 0.8$, implementing the corrected version of Peters' formula is necessary to obtain accurate estimates.
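For orientation, the Peters (1964) inspiral time-scale referred to above is commonly quoted in the approximate form (a textbook result, not one of the corrections derived in this work)

\[ T_{\rm GW} \;\simeq\; \frac{5}{256}\,\frac{c^5\,a^4}{G^3\,m_1 m_2\,(m_1+m_2)}\,\bigl(1-e^2\bigr)^{7/2}, \]

where $a$ and $e$ are the semi-major axis and eccentricity of the orbit and $m_1$, $m_2$ the component masses; the corrections discussed above modify this estimate for highly eccentric, relativistic orbits around a spinning massive black hole.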
Tensor network (TN) methods, in particular the Matrix Product States (MPS) ansatz, have proven to be a useful tool in analyzing the properties of lattice gauge theories. They allow for very good precision, much better than standard Monte Carlo (MC) techniques for the models studied so far, owing to the possibility of reaching much smaller lattice spacings. The real reason for the interest in the TN approach, however, is its ability, demonstrated so far in several condensed matter models, to deal with theories that exhibit the notorious sign problem in MC simulations. This makes it a promising approach for dealing with the non-zero chemical potential in QCD and other lattice gauge theories, as well as with real-time simulations. In this paper, using matrix product operators, we extend our analysis of the Schwinger model at zero temperature to show the feasibility of this approach also at finite temperature. This is an important step on the way to dealing with the sign problem of QCD. We analyze in detail the chiral symmetry breaking in the massless and massive cases and show that the method works very well and gives good control over a broad range of temperatures, essentially from zero to infinite temperature.
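Schematically, and only as a reminder of the standard setup rather than a detail taken from this work, the finite-temperature calculation amounts to representing the Gibbs density operator as a matrix product operator built from small imaginary-time steps and evaluating thermal averages from it,

\[ \rho(\beta)\;\propto\;e^{-\beta H}\;\approx\;\bigl(e^{-\delta\beta\,H}\bigr)^{N_\beta},\quad \beta=N_\beta\,\delta\beta, \qquad \langle O\rangle_\beta=\frac{\mathrm{Tr}\bigl[e^{-\beta H}\,O\bigr]}{\mathrm{Tr}\,e^{-\beta H}}, \]

starting from the identity operator (the infinite-temperature state) and applying the imaginary-time steps to the MPO representation of $\rho$.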
The effect of repeatedly smearing SU(3) gauge configurations is investigated. Six gauge actions (Wilson, Symanzik, Iwasaki, DBW2, Beinlich-Karsch-Laermann, Langfeld; combined with a direct SU(3)-overrelaxation step) and three smearings (APE, HYP, EXP) are compared. The impact on large Wilson loops is monitored, confirming the signal-to-noise prediction by Lepage. The fat-link definition of the ``naive'' topological charge proves most useful on improved action ensembles.
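As a reference point (a standard definition, not specific to this study), one common parameterization of a single APE-smearing step replaces each link by a projected mixture of the original link and the sum of its surrounding staples,

\[ U_\mu(x)\;\to\;\mathrm{Proj}_{SU(3)}\Bigl[(1-\alpha)\,U_\mu(x)+\frac{\alpha}{6}\sum_{\nu\neq\mu}\bigl(U_\nu(x)\,U_\mu(x+\hat\nu)\,U_\nu^\dagger(x+\hat\mu)+U_\nu^\dagger(x-\hat\nu)\,U_\mu(x-\hat\nu)\,U_\nu(x-\hat\nu+\hat\mu)\bigr)\Bigr], \]

with smearing parameter $\alpha$ and a projection back to $SU(3)$ after each iteration; HYP and EXP (stout) smearing differ in how the staple sum is constructed and how the projection is carried out.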
We investigate the application of efficient recursive numerical integration strategies to models in lattice gauge theory from quantum field theory. Given the coupling structure of the physics problems and the group structure within lattice cubature rules for numerical integration, we show how to approach these problems efficiently by means of Fast Fourier Transform techniques. In particular, we consider applications to the quantum mechanical rotor and compact U(1) lattice gauge theory, where the physical dimensions are two and three. This proceedings article reviews our results presented in J. Comput. Phys. 443 (2021) 110527.
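As a minimal illustration of the FFT idea (not the authors' code, and assuming a nearest-neighbour rotor action with link weight $\exp[\beta\cos(\phi_{i+1}-\phi_i)]$): because the transfer kernel depends only on the angle difference, each recursive integration step is a circular convolution, so a single FFT of the kernel yields all eigenvalues of the transfer matrix and hence the partition function of a periodic chain.

```python
import numpy as np

# Sketch: partition function of a periodic chain of angles with assumed
# link weight exp(beta * cos(dphi)).  The transfer matrix K(phi, phi')
# depends only on phi - phi', so it is circulant: one FFT gives all of
# its eigenvalues (approximately 2*pi*I_n(beta)), and Z = Tr K^N is a
# sum of eigenvalue powers.
def rotor_partition_function(beta, n_links, n_grid=512):
    phi = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    dphi = 2.0 * np.pi / n_grid
    kernel = np.exp(beta * np.cos(phi))            # K(phi - phi'), assumed action
    eigenvalues = np.fft.fft(kernel).real * dphi   # eigenvalues of the circulant K
    return np.sum(eigenvalues ** n_links)          # Z = Tr K^N

print(rotor_partition_function(beta=1.0, n_links=16))
```

The gauge-theory applications in the paper involve group-valued link variables rather than a single angle per site, but the same convolution structure is what the lattice cubature rules and FFT techniques exploit.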
Variational quantum eigensolvers (VQEs) combine classical optimization with efficient cost-function evaluations on quantum computers. We propose a new approach to VQEs using the principles of measurement-based quantum computation. This strategy uses entangled resource states and local measurements. We present two measurement-based VQE schemes. The first introduces a new approach for constructing variational families. The second provides a translation of circuit-based to measurement-based schemes. Both schemes offer problem-specific advantages in terms of the required resources and coherence times.
In this paper we explore the scientific synergies between Athena and some of the key multi-messenger facilities that should be operative concurrently with Athena. These facilities include LIGO A+, Advanced Virgo+ and future detectors for ground-based observation of gravitational waves (GW), LISA for space-based observations of GW, IceCube and KM3NeT for neutrino observations, and CTA for very-high-energy observations. The science themes emerging from these synergies encompass pressing issues in astrophysics, cosmology and fundamental physics, such as: the central engine and jet physics in compact binary mergers, accretion processes and jet physics in Super-Massive Binary Black Holes (SMBBHs) and in compact stellar binaries, the equation of state of neutron stars, cosmic accelerators and the origin of Cosmic Rays (CRs), the origin of intermediate and high-Z elements in the Universe, the cosmic distance scale, and tests of General Relativity and the Standard Model. Observational strategies for implementing the identified science topics are also discussed. A significant fraction of the sources targeted by multi-messenger facilities are transient in nature. We have thus also discussed the synergy of Athena with wide-field high-energy facilities, taking THESEUS as a case study for transient discovery. This discussion covers all the Athena science goals that rely on follow-up observations of high-energy transients identified by external observatories, and also includes topics that are not based on multi-messenger observations, such as the search for missing baryons or the observation of early star populations and metal enrichment at the cosmic dawn with Gamma-Ray Bursts (GRBs).
The subjet multiplicity has been measured in neutral current e+p interactions at Q**2 > 125 GeV**2 with the ZEUS detector at HERA using an integrated luminosity of 38.6 pb-1. Jets were identified in the laboratory frame using the longitudinally invariant K_T cluster algorithm. The number of jet-like substructures within jets, known as the subjet multiplicity, is defined as the number of clusters resolved in a jet by reapplying the jet algorithm at a smaller resolution scale y_cut. Measurements of the mean subjet multiplicity, < n_sbj >, for jets with transverse energies E_T,jet >15 GeV are presented. Next-to-leading-order perturbative QCD calculations describe the measurements well. The value of alpha_s(M_Z), determined from < n_sbj > at y_cut=10**-2 for jets with 25 < E_T,jet < 71 GeV, is alpha_s (M_Z) = 0.1187 +/- 0.0017 (stat.) +0.0024 / -0.0009 (syst.) +0.0093 / -0.0076 (th.).
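For reference, the quantities involved are defined in the standard way (not specific to this measurement): the subjets of a jet at resolution y_cut are the clusters obtained by reapplying the cluster algorithm to the jet's constituents, stopping when every pair of clusters satisfies $d_{ij} > y_{\rm cut}\,(E_{T,\rm jet})^2$, and the mean subjet multiplicity is the average count over the jet sample,

\[ \langle n_{\rm sbj}\rangle(y_{\rm cut}) \;=\; \frac{1}{N_{\rm jets}}\sum_{i=1}^{N_{\rm jets}} n_{{\rm sbj},i}(y_{\rm cut}). \]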
More than 99% of the mass of the visible universe is made up of protons and neutrons. Both particles are much heavier than their quark and gluon constituents, and the Standard Model of particle physics should explain this difference. We present a full ab initio calculation of the masses of protons, neutrons and other light hadrons, using lattice quantum chromodynamics. Pion masses down to 190 megaelectronvolts are used to extrapolate to the physical point with lattice sizes of approximately four times the inverse pion mass. Three lattice spacings are used for a continuum extrapolation. Our results completely agree with experimental observations and represent a quantitative confirmation of this aspect of the Standard Model with fully controlled uncertainties.
We present a comprehensive investigation of light meson physics using maximally twisted mass fermions for two mass-degenerate quark flavours. By employing four values of the lattice spacing, spatial lattice extents ranging from 2.0 fm to 2.5 fm, and pseudoscalar masses in the range 280 MeV to 650 MeV, we control the major systematic effects of our calculation. This enables us to confront our data with chiral perturbation theory and extract low-energy constants of the effective chiral Lagrangian and derived quantities, such as the light quark mass, with high precision.
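For context, the NLO SU(2) chiral perturbation theory expressions typically fitted in such analyses read, in one common convention (a standard form quoted for orientation, not extracted from this paper),

\[ m_{\rm PS}^2 = \chi_\mu\Bigl[1+\xi\ln\frac{\chi_\mu}{\Lambda_3^2}\Bigr],\qquad f_{\rm PS} = f_0\Bigl[1-2\xi\ln\frac{\chi_\mu}{\Lambda_4^2}\Bigr],\qquad \chi_\mu = 2B_0\mu_q,\quad \xi=\frac{\chi_\mu}{(4\pi f_0)^2}, \]

where $\mu_q$ is the bare quark mass and the scales $\Lambda_{3,4}$ are related to the low-energy constants $\bar\ell_{3,4}$.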
In photo-sensor cameras of Cherenkov telescopes the light images from particle showers always contain background noise induced by photons of the night sky. An image-cleaning procedure is needed to reduce the contribution of those noise photons in further analysis stages. The conventional topological next-neighbor method lacks reconstruction efficiency for images with low light content and for image peripheries with low signal levels. We present here a simple optimization of the traditional next-neighbor image cleaning that exploits the limited time duration of shower flashes and the short time difference between neighboring image pixels. This method greatly reduces the noise contribution by applying dynamical cuts in the parameter space formed by the signal amplitude and the time difference between neighboring pixels.
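A minimal sketch of the idea (hypothetical function and parameter names; the actual cut shapes and values are those of the paper): a pixel is kept only if it and a neighboring pixel both exceed an amplitude threshold that tightens as their arrival-time difference grows.

```python
import numpy as np

def clean_image(amplitudes, times, neighbours, base_cut, time_scale):
    """Next-neighbor cleaning with a dynamical amplitude/time-difference cut.

    amplitudes, times : 1-D arrays, one entry per camera pixel
    neighbours        : list of neighboring-pixel index lists
    base_cut          : amplitude threshold for coincident neighbors (assumed)
    time_scale        : time constant controlling how fast the cut tightens (assumed)
    """
    keep = np.zeros(len(amplitudes), dtype=bool)
    for i, nbrs in enumerate(neighbours):
        for j in nbrs:
            dt = abs(times[i] - times[j])
            # Hypothetical cut shape: larger time differences demand larger signals.
            threshold = base_cut * (1.0 + dt / time_scale)
            if amplitudes[i] > threshold and amplitudes[j] > threshold:
                keep[i] = True
                break
    return keep
```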
The next-generation instrument for ground-based gamma-ray astronomy will be the Cherenkov Telescope Array (CTA), consisting of approximately 100 telescopes in three sizes, built on two sites, one each in the Northern and Southern Hemispheres. Up to 40 of these will be Medium Size Telescopes (MSTs), which will dominate the sensitivity in the core energy range. Since 2012, a full-size mechanical prototype of the modified 12 m Davies-Cotton design MST has been in operation in Berlin. This document describes the techniques which have been implemented to calibrate and optimise the mechanical and optical performance of the prototype, and gives the results of over three years of observations and measurements. Pointing calibration techniques are discussed, along with the development of a bending model and the calibration of the CCD cameras used for pointing measurements. Additionally, the alignment of mirror segments using the Bokeh method is shown.
The renormalization factor relating the bare to the renormalization group invariant quark masses is accurately calculated in quenched lattice QCD using a recursive finite-size technique. The result is presented in the form of a product of a universal factor times another factor, which depends on the details of the lattice theory but is easy to compute, since it does not involve any large scale differences. As a byproduct the Lambda-parameter of the theory is obtained with a total error of 8%.
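For reference, the renormalization group invariant quark mass mentioned here is conventionally defined through the exact solution of the renormalization group equations (a standard definition, quoted for orientation),

\[ M = \overline{m}(\mu)\,\bigl[2 b_0\,\bar g^2(\mu)\bigr]^{-d_0/(2b_0)}\,\exp\biggl\{-\int_0^{\bar g(\mu)}\!{\rm d}g\,\Bigl[\frac{\tau(g)}{\beta(g)}-\frac{d_0}{b_0\,g}\Bigr]\biggr\}, \]

where $\beta$ and $\tau$ are the renormalization group functions of the coupling and the mass, with leading coefficients $b_0$ and $d_0$; $M$ is scale and scheme independent, which is what makes the factorization described in the abstract possible.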
We simulate the critical behavior of the Ising model utilizing a thermal state prepared using quantum computing techniques. The preparation of the thermal state is based on the variational quantum imaginary time evolution (QITE) algorithm. The initial state of QITE is prepared as a classical product state, and we propose a systematic method to design the variational ansatz for QITE. We calculate the specific heat and susceptibility of the long-range interacting Ising model and observe indications of the Ising criticality already on a small lattice. We find that the results obtained with the quantum algorithm are in good agreement with those from exact diagonalization, both in the neighbourhood of the critical temperature and in the low-temperature region.
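The observables quoted here are presumably the standard thermal fluctuation relations (units with $k_B = 1$), stated only for orientation,

\[ C \;=\; \frac{\langle H^2\rangle_\beta - \langle H\rangle_\beta^2}{T^2}, \qquad \chi \;=\; \frac{\langle M^2\rangle_\beta - \langle M\rangle_\beta^2}{T}, \]

with $H$ the Hamiltonian, $M$ the total magnetization, and $\langle\cdot\rangle_\beta$ the expectation value in the prepared thermal state at temperature $T$.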
The breakdown voltage of silicon sensors is known to be affected by the ambient humidity. To understand the sensor's humidity sensitivity, Synopsys TCAD was used to simulate n-in-p sensors for different effective relative humidities. Photon emission of hot electrons was imaged with a microscope to locate breakdown in the edge-region of the sensor. The Top-Transient Current Technique was used to measure charge transport near the surface in the breakdown region of the sensor. Using the measurements and simulations, the evolution of the electric field with relative humidity and the carrier densities towards breakdown in the periphery of p-bulk silicon sensors are investigated.
A significant challenge facing photometric surveys for cosmological purposes is the need to produce reliable redshift estimates. The estimation of photometric redshifts (photo-zs) has been consolidated as the standard strategy to bypass the high production costs and incompleteness of spectroscopic redshift samples. Training-based photo-z methods require the preparation of a high-quality list of spectroscopic redshifts, which needs to be constantly updated. The photo-z training, validation, and estimation must be performed in a consistent and reproducible way in order to meet the scientific requirements. To this end, we developed an integrated web-based data interface that not only provides the framework to carry out the above steps in a systematic way, enabling easy testing and comparison of different algorithms, but also addresses the processing requirements by parallelizing the calculation in a way that is transparent to the user. This framework, called the Science Portal (hereafter Portal), was developed in the context of the Dark Energy Survey (DES) to facilitate scientific analysis. In this paper, we show how the Portal can provide a reliable environment to access vast data sets and supply validation algorithms and metrics, even in the case of multiple photo-z methods. It is possible to maintain the provenance between the steps of a chain of workflows while ensuring reproducibility of the results. We illustrate how the Portal can be used to provide photo-z estimates using the DES first-year (Y1A1) data. While the DES collaboration is still developing techniques to obtain more precise photo-zs, having a structured framework like the one presented here is critical for the systematic vetting of DES algorithmic improvements and the consistent production of photo-zs in future DES releases.
Most gravitational wave (GW) sources are moving relative to us. This motion is often closely related to the environment of the source and can thus provide crucial information about the formation of the source and its host. Recently, LIGO and Virgo detected for the first time the subdominant modes of GWs. We show that a motion of the center-of-mass of the source can affect these modes, where the effect is proportional to the velocity of the source. The effect on the GW modes in turn affects the overall frequency of the GW, thus leading to a phase shift. We study the impact of this effect on LIGO/Virgo detections and show that it is detectable for sources with high mass ratios and inclinations. This effect breaks the degeneracy between mass and Doppler shift in GW observations, and opens a new possibility of detecting the motion of a GW source even for constant velocities.
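To see why a constant velocity is normally unobservable, note that a uniform line-of-sight velocity $v_\parallel$ simply rescales all observed frequencies via the standard Doppler relation (stated here for context),

\[ f_{\rm obs} \;\simeq\; \Bigl(1-\frac{v_\parallel}{c}\Bigr) f_{\rm src}, \]

which in a waveform is degenerate with a rescaling of the source masses, $M \to M/(1-v_\parallel/c)$; the mode-dependent effect described above is what breaks this degeneracy.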
We present the high-energy neutrino Monte Carlo event generator ANIS (All Neutrino Interaction Simulation). The program provides a detailed and flexible neutrino event simulation for high-energy neutrino detectors, such as AMANDA, ANTARES or IceCube. It generates neutrinos of any flavor according to a specified flux and propagates them through the Earth. In a final step, neutrino interactions are simulated within a specified volume. All relevant standard model processes are implemented. We discuss strengths and limitations of the program.
An investigation of the hadronic final state in diffractive and non-diffractive deep-inelastic electron-proton scattering at HERA is presented, where diffractive data are selected experimentally by demanding a large gap in pseudorapidity around the proton remnant direction. The transverse energy flow in the hadronic final state is evaluated using a set of estimators which quantify its topological properties. Using available Monte Carlo QCD calculations, it is demonstrated that the final state in diffractive DIS exhibits the features expected if the interaction is interpreted as the scattering of an electron off a current quark with associated effects of perturbative QCD. A model in which deep-inelastic diffraction is taken to be the exchange of a pomeron with partonic structure is found to reproduce the measurements well. Models for deep-inelastic ep scattering, in which a sizeable diffractive contribution is present because of non-perturbative effects in the production of the hadronic final state, reproduce the general tendencies of the data but in all cases give a worse description.