Nuclear Research Center Negev
We present results from the Jefferson Lab E08-014 experiment, investigating short-range correlations (SRC) through measurements of absolute inclusive quasi-elastic cross sections and their ratios. This study utilized 3.356 GeV electrons scattered off targets including $^2$H, $^3$He, $^4$He, $^{12}$C, $^{40}$Ca, and $^{48}$Ca, at modest momentum transfers ($1.3 < Q^2 \leq 2$ GeV$^2$). Kinematics were selected to enhance the cross-section contribution from high-momentum nucleons originating from the strongly interacting, short-distance components of two-nucleon SRCs (2N-SRCs), known to exhibit a universal structure across both light and heavy nuclei. We analyzed the $A/^2$H ratio within the region dominated by 2N-SRCs to characterize the nuclear dependence of SRC contributions across various nuclei. Additionally, the $A/^3$He ratio was examined at kinematics sensitive to nucleons with even higher momentum, aiming to identify signals indicative of three-nucleon SRCs (3N-SRCs). The traditional analysis method in the expected 3N-SRC region ($x > 2$) did not yield a clear plateau; instead, the data diverged from the predicted 3N-SRC behavior as momentum transfer increased. However, when analyzed in terms of the struck nucleon's light-cone momentum, the data exhibited the opposite trend, progressively approaching the predicted 3N-SRC plateau. These observations suggest that future measurements at higher energies may facilitate a definitive isolation and identification of 3N-SRCs.
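For reference, the conventional variables behind these ratios (standard in the SRC literature, though not spelled out above) are the Bjorken scaling variable and the 2N-SRC scale factor extracted from the cross-section plateau:

\[
x = \frac{Q^2}{2 m_N \nu}, \qquad
a_2(A) = \frac{2}{A}\,\frac{\sigma_A(x,Q^2)}{\sigma_{^2\mathrm{H}}(x,Q^2)}
\quad \text{for } 1.5 \lesssim x < 2,
\]

where $\nu$ is the energy transfer and $m_N$ the nucleon mass; scattering at $x > 1$ ($x > 2$) is kinematically forbidden on a free nucleon (a free 2N pair), which is why these regions select 2N (3N) correlations.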
Next-generation neutrinoless double beta decay experiments aim for half-life sensitivities of $\sim 10^{27}$ yr, which requires suppressing backgrounds to <1 count/tonne/yr. For this, any extra background-rejection handle, beyond excellent energy resolution and the use of extremely radiopure materials, is of utmost importance. The NEXT experiment exploits differences in the spatial ionization patterns of double beta decay and single-electron events to discriminate signal from background. While the former display two dense ionization regions (Bragg peaks) at the opposite ends of the track, the latter typically have only one such feature. Thus, comparing the energies at the track extremes provides an additional rejection tool. The unique combination of topology-based background discrimination and excellent energy resolution (1% FWHM at the Q-value of the decay) is the distinguishing feature of NEXT. Previous studies demonstrated a topological background rejection factor of ~5 at 72% signal efficiency when reconstructing electron-positron pairs in the $^{208}$Tl 1.6 MeV double escape peak (with Compton events as background), recorded in the NEXT-White demonstrator at the Laboratorio Subterráneo de Canfranc. This was recently improved through the use of a deep convolutional neural network to yield a background rejection factor of ~10 with 65% signal efficiency. Here, we present a new reconstruction method, based on the Richardson-Lucy deconvolution algorithm, which allows reversing the blurring induced by electron diffusion and electroluminescence light production in the NEXT TPC. The new method yields highly refined 3D images of reconstructed events and, as a result, significantly improves the topological background discrimination. When applied to real-data 1.6 MeV $e^-e^+$ pairs, it leads to a background rejection factor of 27 at 57% signal efficiency.
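The Richardson-Lucy iteration itself is compact. Below is a minimal 2D sketch assuming a known, shift-invariant point-spread function (PSF); the actual NEXT implementation works on 3D hit maps with a PSF measured from data and its own stopping criterion.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=75, eps=1e-12):
    """Minimal Richardson-Lucy deconvolution of a 2D image.

    observed -- blurred (diffusion + EL light spread), non-negative image
    psf      -- normalized point-spread function (sums to 1)
    """
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)            # data / current model
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

The multiplicative update preserves non-negativity and conserves total intensity, which is what makes the algorithm well suited to sharpening ionization-density images.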
Robotic hands are an important tool for replacing humans in handling toxic or radioactive materials. However, such hands are usually highly expensive and, in many cases, cannot be re-used once contaminated. Some solutions cope with this challenge by 3D printing parts of a tendon-based hand. However, fabrication then requires additional assembly steps, so a novice user may have difficulty fabricating a replacement hand once the previous one is contaminated. We propose the Print-N-Grip (PNG) hand, a tendon-based underactuated mechanism able to adapt to the shape of objects. The hand is fabricated through one-shot 3D printing with no additional engineering effort, and can accommodate as many fingers as desired by the practitioner. Due to its low cost, the PNG hand can easily be detached from a universal base for disposal upon contamination, and replaced by a newly printed one. In addition, the PNG hand is scalable, such that one can effortlessly resize the computerized model and print it. We present the design of the PNG hand along with experiments that show its capabilities and high durability.
The $d_\chi$-STENCIL mechanism introduces a method for token-level privacy in Large Language Models by integrating contextual information from neighboring tokens with $d_\chi$-differential-privacy noise injection. This approach achieves a superior balance between maintaining LLM utility on downstream tasks and protecting user privacy, exhibiting lower reconstruction rates than prior methods while providing formal privacy guarantees.
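The abstract does not detail the mechanism, so the following is only a generic sketch of $d_\chi$-private token perturbation (multivariate-Laplace noise in embedding space followed by nearest-token projection); STENCIL's contribution, folding in neighboring-token context, would modify the vector that gets noised, which we mark as a hypothetical hook.

```python
import numpy as np

def dchi_noise(dim, epsilon, rng):
    """Noise with density proportional to exp(-epsilon * ||z||_2): the standard
    mechanism for d_chi (metric) differential privacy in embedding space."""
    direction = rng.normal(size=dim)
    direction /= np.linalg.norm(direction)
    radius = rng.gamma(shape=dim, scale=1.0 / epsilon)
    return radius * direction

def privatize(token_ids, emb, epsilon, rng, context_fn=None):
    """Replace each token by the vocabulary token nearest its noised vector.

    context_fn is a hypothetical hook standing in for STENCIL-style mixing of
    neighboring-token information into the vector before noising.
    """
    out = []
    for i, t in enumerate(token_ids):
        vec = emb[t] if context_fn is None else context_fn(token_ids, i, emb)
        noisy = vec + dchi_noise(emb.shape[1], epsilon, rng)
        out.append(int(np.argmin(np.linalg.norm(emb - noisy, axis=1))))
    return out
```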
We discuss recent advances in the development of cryogenic gaseous photomultipliers (GPM), for possible use in dark matter and other rare-event searches using noble-liquid targets. We present results from a 10 cm diameter GPM coupled to a dual-phase liquid xenon (LXe) TPC, demonstrating, for the first time, the feasibility of recording both primary ("S1") and secondary ("S2") scintillation signals. The detector comprised a triple Thick Gas Electron Multiplier (THGEM) structure with a cesium iodide photocathode on the first element; it was shown to operate stably at 180 K with gains above $10^5$, providing high single-photon detection efficiency even in the presence of large alpha-particle-induced S2 signals comprising thousands of photoelectrons. S1 scintillation signals were recorded with a time resolution of 1.2 ns (RMS). The energy resolution ($\sigma/E$) for S2 electroluminescence of 5.5 MeV alpha particles was ~9%, comparable to that obtained in the XENON100 TPC with PMTs. The results are discussed within the context of potential GPM deployment in future multi-ton noble-liquid detectors.
The use of electronic devices is increasing and has become predominant in most aspects of life. Surface Mount Technology (SMT) is the most common industrial method for manufacturing electronic devices, in which electrical components are mounted directly onto the surface of a Printed Circuit Board (PCB). Although the expansion of electronic devices affects our lives in a productive way, failures or defects in their manufacturing can be counterproductive and even harmful. It is therefore desirable, and sometimes crucial, to ensure zero-defect quality in electronic devices and their production. While traditional Image Processing (IP) techniques are not sufficient to produce a complete solution, other promising methods like Deep Learning (DL) are also challenging for PCB inspection, mainly because such methods require large, adequate datasets, which are often missing, unavailable, or outdated in the rapidly growing field of PCBs. Thus, PCB inspection is conventionally performed manually by human experts. Unsupervised Learning (UL) methods may potentially be suitable for PCB inspection, having learning capabilities on the one hand, while not relying on large datasets on the other. In this paper, we introduce ChangeChip, an automated and integrated change detection system for defect detection in PCBs, from soldering defects to missing or misaligned electronic elements, based on Computer Vision (CV) and UL. We achieve good-quality defect detection by applying unsupervised change detection between images of a golden PCB (reference) and the inspected PCB under various settings. In this work, we also present CD-PCB, a synthesized labeled dataset of 20 pairs of PCB images for evaluation of defect detection algorithms.
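As a flavor of what golden/inspected change detection involves, here is a deliberately simple sketch based on plain image differencing (assuming the two images are already registered and illumination-normalized; ChangeChip's actual pipeline relies on unsupervised learning rather than a fixed threshold):

```python
import cv2

def detect_changes(golden, inspected, min_area=25):
    """Return bounding boxes (x, y, w, h) of regions where the inspected PCB
    image differs from the golden reference. Both inputs are registered
    8-bit grayscale images of identical shape."""
    diff = cv2.absdiff(golden, inspected)
    diff = cv2.GaussianBlur(diff, (5, 5), 0)          # suppress sensor noise
    _, mask = cv2.threshold(diff, 0, 255,
                            cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    return [tuple(int(v) for v in stats[i, :4])
            for i in range(1, n)                      # label 0 is background
            if stats[i, cv2.CC_STAT_AREA] >= min_area]
```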
The microstructure of most engineering alloys contains inclusions and precipitates, which may affect the alloy's properties; it is therefore crucial to characterize them. In this work we focus on the development of a state-of-the-art artificial intelligence model for anomaly detection, named MLography, to automatically quantify the degree of anomaly of impurities in alloys. For this purpose, we introduce several anomaly detection measures, Spatial, Shape, and Area anomaly, each of which successfully detects the most anomalous objects according to its own objective, given that the impurities have already been labeled. The first two measures quantify the degree of anomaly of each object by how distant and large it is compared to its neighborhood, and by the abnormality of its own shape, respectively. The last measure combines the former two and highlights the most anomalous regions among all input images, for later (physical) examination. The performance of the model is presented and analyzed on a few representative cases. We stress that although the models presented here were developed for metallography analysis, most of them can be generalized to a wider set of problems in which anomaly detection of geometrical objects is desired. All models, as well as the dataset created for this work, are publicly available at: this https URL
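To make the spatial-anomaly idea concrete, here is an illustrative score (not the paper's exact formula) that grows when an impurity is both far from, and large relative to, its k nearest neighbors:

```python
import numpy as np
from scipy.spatial import cKDTree

def spatial_anomaly(centroids, areas, k=5):
    """Illustrative spatial-anomaly score for labeled impurities.

    centroids -- (N, 2) array of impurity centers
    areas     -- (N,) array of impurity areas
    Returns scores normalized to [0, 1]; higher = more anomalous.
    """
    tree = cKDTree(centroids)
    dists, idx = tree.query(centroids, k=k + 1)   # first hit is the point itself
    mean_dist = dists[:, 1:].mean(axis=1)         # how isolated the object is
    neighbor_area = areas[idx[:, 1:]].mean(axis=1)
    score = mean_dist * areas / (neighbor_area + 1e-12)
    return score / score.max()
```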
The detection of astrophysical Gamma-Ray Bursts (GRBs) has always been intertwined with the challenge of identifying the direction of the source. Angular localization better than a degree has to date been achieved only with heavy instruments on large satellites, with a limited field of view. The recent discovery of the association of GRBs with neutron star mergers gives new motivation for observing the entire $\gamma$-ray sky at once with high sensitivity and accurate directional capability. We present a novel $\gamma$-ray detector concept, which utilizes the mutual occultation between many small scintillators to reconstruct the GRB direction. We built an instrument with 90 (9 mm)$^3$ CsI scintillator cubes attached to silicon photomultipliers. Our laboratory prototype, tested with a 60 keV source, demonstrates an angular accuracy of a few degrees for $\sim$25 ph cm$^{-2}$ bursts. Simulations of realistic GRBs and background show that the achievable angular localization accuracy with a similar instrument occupying a 1 l volume is $<2^\circ$. The proposed concept can be easily scaled to fit into small satellites, as well as large missions.
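Schematically, direction reconstruction with such an instrument reduces to comparing the measured per-cube counts with a bank of expected occultation patterns. The sketch below, a Poisson maximum-likelihood grid search over hypothetical precomputed templates, illustrates the idea, not the prototype's actual pipeline.

```python
import numpy as np

def fit_direction(counts, templates, directions):
    """Grid-search ML direction fit.

    counts     -- (n_cubes,) measured counts per scintillator cube
    templates  -- (n_dirs, n_cubes) expected relative counts per candidate
                  direction (e.g., precomputed by Monte Carlo of the mutual
                  occultation between cubes)
    directions -- (n_dirs, ...) the candidate sky directions
    """
    amp = counts.sum() / templates.sum(axis=1)        # fit overall burst fluence
    expected = templates * amp[:, None]
    loglike = (counts * np.log(expected + 1e-12) - expected).sum(axis=1)
    return directions[np.argmax(loglike)]
```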
We report the first measurement of the (e,e'p) three-body breakup reaction cross sections in helium-3 ($^3$He) and tritium ($^3$H) at large momentum transfer ($\langle Q^2 \rangle \approx 1.9$ (GeV/c)$^2$) and $x_B > 1$ kinematics, where the cross section should be sensitive to quasielastic (QE) scattering from single nucleons. The data cover missing momenta $40 \le p_{miss} \le 500$ MeV/c that, in the QE limit with no rescattering, equal the initial momentum of the probed nucleon. The measured cross sections are compared with state-of-the-art ab-initio calculations. Overall good agreement, within $\pm 20\%$, is observed between data and calculations for the full $p_{miss}$ range for $^3$H and for $100 \le p_{miss} \le 350$ MeV/c for $^3$He. Including the effects of rescattering of the outgoing nucleon improves agreement with the data at $p_{miss} > 250$ MeV/c and suggests contributions from charge-exchange (SCX) rescattering. The isoscalar sum of $^3$He plus $^3$H, which is largely insensitive to SCX, is described by calculations to within the accuracy of the data over the entire $p_{miss}$ range. This validates current models of the ground state of the three-nucleon system up to very high initial nucleon momenta of 500 MeV/c.
Measurements of the EMC effect in the tritium and helium-3 mirror nuclei are reported. The data were obtained by the MARATHON Jefferson Lab experiment, which performed deep inelastic electron scattering from deuterium and the three-body nuclei, using a cryogenic gas target system and the High Resolution Spectrometers of the Hall A Facility of the Lab. The data cover the Bjorken $x$ range from 0.20 to 0.83, corresponding to a squared four-momentum transfer $Q^2$ range from 2.7 to 11.9 GeV$^2$, and to an invariant mass $W$ of the final hadronic state greater than 1.84 GeV/$c^2$. The tritium EMC effect measurement is the first of its kind. The MARATHON experimental results are compared to results from previous measurements by the DESY-HERMES and JLab-Hall C experiments, as well as with few-body theoretical predictions.
We present the results of the second commissioning phase of the short-focal-length area of the Apollon laser facility (located in Saclay, France), performed with the main laser beam (F1) scaled to a peak power of 2 PW. Under the conditions tested, this beam delivered on-target pulses of up to 45 J energy and 22 fs duration. Several diagnostics were fielded to assess the performance of the facility. The on-target focal spot and its spatial stability, as well as the secondary sources produced when irradiating solid targets, were characterized, with the goal of helping users design future experiments. The laser-target interaction was characterized, and the emission of energetic ions, X-rays, and neutrons was recorded, all showing good laser-to-target coupling efficiency. Moreover, we demonstrated the simultaneous fielding of F1 with the auxiliary 0.5 PW F2 beam of Apollon, enabling dual-beam operation. The present commissioning will be followed in 2025 by a further commissioning stage of F1 at the 8 PW level, en route to the final 10 PW goal.
There is an ever-present need for shared-memory parallelization schemes to exploit the full potential of multi-core architectures. The most common parallelization API addressing this need today is OpenMP. Nevertheless, writing parallel code manually is complex and effort-intensive. Thus, many deterministic source-to-source (S2S) compilers have emerged, intending to automate the process of translating serial code to parallel code. However, recent studies have shown that these compilers are impractical in many scenarios. In this work, we combine the latest advancements in the field of AI and natural language processing (NLP) with the vast amount of open-source code to address the problem of automatic parallelization. Specifically, we propose a novel approach, called OMPify, to detect and predict the OpenMP pragmas and shared-memory attributes in parallel code, given its serial version. OMPify is based on a Transformer model that leverages a graph-based representation of source code, exploiting the inherent structure of code. We evaluated our tool by predicting the parallelization pragmas and attributes of a large corpus of over 54,000 snippets of serial code written in C and C++ (Open-OMP-Plus). Our results demonstrate that OMPify outperforms existing approaches, the popular general-purpose ChatGPT and the targeted PragFormer model, in terms of F1 score and accuracy. Specifically, OMPify achieves up to 90% accuracy on commonly used OpenMP benchmarks such as NAS, SPEC, and PolyBench. Additionally, we performed an ablation study to assess the impact of different model components and present interesting insights derived from it. Lastly, we explored the potential of using data augmentation and curriculum learning techniques to improve the model's robustness and generalization capabilities.
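At inference time, the workflow is that of any code-classification Transformer. A hedged sketch using the Hugging Face API is shown below; the checkpoint name is hypothetical, and OMPify's released artifacts (and its graph-based input representation) may be packaged differently.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical checkpoint name, for illustration only.
ckpt = "example-org/ompify"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt)

serial_loop = """
for (int i = 0; i < n; i++)
    a[i] = b[i] * c[i];
"""

inputs = tokenizer(serial_loop, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print("worth parallelizing with '#pragma omp parallel for'?",
      bool(logits.argmax(dim=-1).item()))
```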
In high-performance computing (HPC), the demand for efficient parallel programming models has grown dramatically since the end of Dennard scaling and the subsequent move to multi-core CPUs. OpenMP stands out as a popular choice due to its simplicity and portability, offering a directive-driven approach to shared-memory parallel programming. Despite its wide adoption, however, there is a lack of comprehensive data on the actual usage of OpenMP constructs, hindering unbiased insights into its popularity and evolution. This paper presents a statistical analysis of OpenMP usage and adoption trends based on a novel and extensive database, HPCORPUS, compiled from GitHub repositories containing C, C++, and Fortran code. The results reveal that OpenMP is the dominant parallel programming model, accounting for 45% of all analyzed parallel APIs. Furthermore, it has demonstrated steady and continuous growth in popularity over the past decade. Analyzing specific OpenMP constructs, the study provides in-depth insights into their usage patterns and preferences across the three languages. Notably, we found that while OpenMP has a strong "common core" of constructs in common usage, with the rest of the API used far less, there are also new adoption trends, such as the simd and target directives for accelerated computing and task for irregular parallelism. Overall, this study sheds light on OpenMP's significance in HPC applications and provides valuable data for researchers and practitioners. It showcases OpenMP's versatility, evolving adoption, and relevance in contemporary parallel programming, underlining its continued role in HPC applications and beyond. These statistical insights are essential for making informed decisions about parallelization strategies and provide a foundation for further advancements in parallel programming models and techniques.
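The kind of counting behind such statistics is straightforward. Below is a minimal sketch for C/C++ sources; the real HPCORPUS pipeline also handles Fortran's !$omp sentinels, comments, and full clause lists.

```python
import re
from collections import Counter
from pathlib import Path

# The first word after '#pragma omp' approximates the construct
# (parallel, for, simd, target, task, ...).
PRAGMA = re.compile(r"#pragma\s+omp\s+(\w+)")

def count_constructs(root):
    counts = Counter()
    for path in Path(root).rglob("*"):
        if path.suffix not in {".c", ".h", ".cc", ".cpp", ".hpp"}:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        counts.update(m.group(1) for m in PRAGMA.finditer(text))
    return counts

print(count_constructs("repos/").most_common(10))
```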
It is widely appreciated, due to Bell's theorem, that quantum phenomena are inconsistent with local-realist models. In this context, locality refers to local causality, and there is thus an open possibility for reproducing the quantum predictions with models which internally violate the causal arrow of time, while otherwise adhering to the relevant locality condition. So far, this possibility has been demonstrated only at a toy-model level, and only for systems involving one or two spins (or photons). The present work extends one of these models to quantum correlations between three or more spins which are entangled in the Greenberger-Horne-Zeilinger state.
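For concreteness, the three-spin Greenberger-Horne-Zeilinger state and the perfect correlations any such model must reproduce are

\[
|\mathrm{GHZ}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|{\uparrow\uparrow\uparrow}\rangle + |{\downarrow\downarrow\downarrow}\rangle\bigr),
\qquad
\langle \sigma_x \sigma_x \sigma_x \rangle = +1, \quad
\langle \sigma_x \sigma_y \sigma_y \rangle =
\langle \sigma_y \sigma_x \sigma_y \rangle =
\langle \sigma_y \sigma_y \sigma_x \rangle = -1 .
\]

Any local assignment of definite values $m_x^{(i)}, m_y^{(i)} = \pm 1$ forces the product of the last three correlations to equal $m_x^{(1)} m_x^{(2)} m_x^{(3)}$ (each $m_y$ appears squared), implying $\langle \sigma_x \sigma_x \sigma_x \rangle = -1$, in direct contradiction with the quantum prediction of $+1$; models that internally violate the causal arrow of time evade this by not fixing the values independently of the future measurement settings.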
We study the phenomenon of radiation-driven shock waves using a semi-analytic model based on self-similar solutions of the radiative hydrodynamic problem. The relation between the hohlraum drive temperature $T_{\mathrm{Rad}}$ and the resulting ablative shock velocity $D_S$ is a well-known method for estimating the drive temperature. However, the various studies yield different scaling relations between $T_{\mathrm{Rad}}$ and $D_S$, based on different simulations. In [T. Shussman and S.I. Heizler, Phys. Plas., 22, 082109 (2015)] we derived full analytic solutions for the subsonic heat wave, which include both the ablation and the shock-wave regions. Using this self-similar approach, we derive here the $T_{\mathrm{Rad}}(D_S)$ relation for aluminium, using the detailed Hugoniot relations and including transport effects. With our semi-analytic model, we find a spread of $\approx 40$ eV in the $T_{\mathrm{Rad}}(D_S)$ curve, as a function of the duration and temporal shape of the temperature profile. Our model agrees with the various experiments and simulation data, explaining the difference between the various scaling relations that appear in the literature.
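For reference, the Rankine-Hugoniot jump conditions that tie the shock velocity $D_S$ to the post-shock state (upstream material at rest; subscript 0 ahead of the shock, 1 behind it) are

\begin{align*}
\rho_0 D_S &= \rho_1 (D_S - u_1) && \text{(mass)},\\
P_1 - P_0 &= \rho_0 D_S u_1 && \text{(momentum)},\\
e_1 - e_0 &= \tfrac{1}{2}(P_1 + P_0)\left(\frac{1}{\rho_0} - \frac{1}{\rho_1}\right) && \text{(energy)},
\end{align*}

which, closed with the aluminium equation of state and the ablation physics, yield the $T_{\mathrm{Rad}}(D_S)$ mapping discussed above.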
Magnetic confinement fusion reactors produce ash particles that must be removed for efficient operation. It is suggested to use autoresonance (a continuous phase-locking between anharmonic motion and a chirped drive) to remove the ash particles from a magnetic mirror, the simplest magnetic confinement configuration. An analogy to the driven pendulum is established via the guiding-center approximation. The full 3D dynamics is simulated for $\alpha$ particles (the byproduct of DT fusion), in agreement with the approximated 1D model. Monte Carlo simulations sampling the phase space of initial conditions are used to quantify the efficiency of the method. The DT fuel particles are outside the bandwidth of the chirped drive and therefore stay in the mirror for ongoing fusion. The method is also applicable to advanced, aneutronic reactors, such as p-$^{11}$B.
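In the guiding-center analogy referenced above, the dynamics reduce schematically to a driven pendulum with a swept drive frequency,

\[
\ddot{\theta} + \omega_0^2 \sin\theta = \epsilon \cos\phi_d(t), \qquad
\dot{\phi}_d(t) = \omega_d(t) = \omega_0 - \alpha t ,
\]

where capture into autoresonance, i.e., sustained phase-locked growth of the oscillation as the drive chirps, occurs only above the classic threshold $\epsilon_{\mathrm{th}} \propto \alpha^{3/4}$. Particles whose natural frequencies lie outside the swept band (here, the DT fuel) are never captured, which is what makes the extraction selective.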
Inclusive electron scattering at carefully chosen kinematics can isolate scattering from the high-momentum nucleons in short-range correlations (SRCs). SRCs are produced by the hard, short-distance interactions of nucleons in the nucleus, and because the two-nucleon (2N) SRCs arise from the same N-N interaction in all nuclei, the cross section in the SRC-dominated regime is identical up to an overall scaling factor. This scaling behavior has been used to identify SRC dominance and to measure the contribution of SRCs in a wide range of nuclei. We examine this scaling behavior over a range of momentum transfers using new data on $^2$H, $^3$H, and $^3$He, and find an expanded scaling region compared to heavy nuclei. Motivated by this improved scaling, we examine the $^3$H and $^3$He data in kinematics where three-nucleon SRCs may play an important role. The data for the largest struck-nucleon momenta are consistent with isolation of scattering from three-nucleon SRCs, and suggest that the very highest-momentum nucleons in $^3$He have a nearly isospin-independent momentum configuration.
The neutrino research program in the coming decades will require improved precision. A major source of uncertainty is the interaction of neutrinos with the nuclei that serve as targets in such experiments. Broadly speaking, this interaction often depends, e.g., for charged-current quasi-elastic scattering, on the combination of "nucleon physics", expressed by form factors, and "nuclear physics", expressed by a nuclear model. It is important to get a good handle on both. We present a fully analytic implementation of the Correlated Fermi Gas model for electron-nucleus and charged-current quasi-elastic neutrino-nucleus scattering. The implementation is used to compare separately form-factor and nuclear-model effects for both electron-carbon and neutrino-carbon scattering data.
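Schematically, the Correlated Fermi Gas model replaces the sharp Fermi-gas momentum distribution with a high-momentum tail (a sketch of the functional form; the normalization and the cutoff parameter $\lambda$ follow from the model's fits):

\[
n_{\mathrm{CFG}}(k) \;\propto\;
\begin{cases}
1, & k < k_F,\\
\left(k_F/k\right)^4, & k_F \le k \le \lambda k_F,\\
0, & k > \lambda k_F,
\end{cases}
\]

so that, unlike the free Fermi gas, a fixed fraction of nucleons carries momenta above $k_F$, mimicking the effect of short-range correlations on quasi-elastic cross sections.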
Using language models as a remote service entails sending private information to an untrusted provider. In addition, potential eavesdroppers can intercept the messages, thereby exposing the information. In this work, we explore the prospects of avoiding such data exposure at the level of text manipulation. We focus on text classification models, examining various token mapping and contextualized manipulation functions to see whether classifier accuracy can be maintained while keeping the original text unrecoverable. We find that although some token mapping functions are easy and straightforward to implement, they heavily degrade performance on the downstream task, and the text can be reconstructed by a sophisticated attacker. In comparison, the contextualized manipulation provides an improvement in performance.
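Two of the manipulation families examined are easy to sketch (illustrative implementations, not the paper's exact functions): a fixed secret permutation of the vocabulary, and a stochastic nearest-neighbor replacement in embedding space standing in for the contextualized manipulations.

```python
import numpy as np

def permutation_mapping(token_ids, vocab_size, seed=0):
    """Token mapping via a fixed secret permutation. Deterministic, so
    co-occurrence statistics survive -- the handle a sophisticated attacker
    can use to reconstruct the original text."""
    perm = np.random.default_rng(seed).permutation(vocab_size)
    return [int(perm[t]) for t in token_ids]

def neighbor_mapping(token_ids, emb, k=10, seed=None):
    """Swap each token for a random one of its k nearest neighbors in
    embedding space, trading exact recoverability for preserved
    task-relevant semantics."""
    rng = np.random.default_rng(seed)
    out = []
    for t in token_ids:
        dist = np.linalg.norm(emb - emb[t], axis=1)
        neighbors = np.argsort(dist)[1:k + 1]     # drop the token itself
        out.append(int(rng.choice(neighbors)))
    return out
```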
The need to immobilize low-level nuclear waste, in particular $^{137}$Cs-bearing waste, has led to a growing interest in geopolymer-based waste matrices, in addition to attempts to optimize cement matrix compositions for this specific application. Although the overall phase composition and structure of these matrices are well characterized, the binding sites of Cs in these materials have not been clearly identified. Recent studies have suggested that combining the sensitivity of solid-state Nuclear Magnetic Resonance (SSNMR) to the local atomic structure with other structural techniques provides insights into the mode of Cs binding and release. Density Functional Theory (DFT) can provide the connection between spectroscopic parameters and geometric properties. However, the reliability of DFT results depends strongly on the choice of a suitable exchange-correlation functional, which for $^{133}$Cs, the NMR surrogate for such studies, is not well established. In this work we benchmark various functionals on their performance in predicting the geometry of simple Cs compounds, their NMR quadrupolar coupling constants, and their chemical shift values, while prioritizing the ability to incorporate dispersion interactions and maintaining low computational cost. We examined Cs salts, Cs oxides, perovskites, caged materials, a borate glass, and a cesium fluoroscandate. While no single functional performs equally well for all parameters, the results show rev-vdW-DF2 and PBEsol+D3 to be the leading candidates for these systems, in particular with respect to geometry and chemical shifts, which are of high importance for Cs-immobilization matrices.
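For context, the benchmarked quantities are obtained from the DFT-computed electric field gradient (EFG) tensor and magnetic shielding via the standard relations

\[
C_Q = \frac{e\,Q\,V_{zz}}{h}, \qquad
\eta = \frac{V_{xx} - V_{yy}}{V_{zz}}, \qquad
\delta_{\mathrm{iso}} = \sigma_{\mathrm{ref}} - \sigma_{\mathrm{iso}},
\]

where $V_{zz}$ is the largest principal component of the EFG tensor ($|V_{zz}| \ge |V_{yy}| \ge |V_{xx}|$), $Q$ is the nuclear quadrupole moment of $^{133}$Cs, and $\sigma_{\mathrm{ref}}$ is the isotropic shielding of a reference compound.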