A paper reintroduces Eugenio Beltrami’s 19th-century “geometrical” method for integrating geodesic equations, applying it to derive general solutions for geodesics in Schwarzschild and Kerr spacetimes. This approach consistently reproduces known results while offering a purely geometric perspective that aligns with General Relativity’s foundational principles.
Given the increasing size and complexity of deep neural networks, their efficient execution has become increasingly pressing in the design of heterogeneous High-Performance Computing (HPC) and edge platforms, leading to a wide variety of proposals for specialized deep learning architectures and hardware accelerators. The design of such architectures and accelerators requires a multidisciplinary approach combining expertise from several areas, from machine learning to computer architecture, low-level hardware design, and approximate computing. Several methodologies and tools have been proposed to improve the process of designing accelerators for deep learning, aimed at maximizing parallelism and minimizing data movement to achieve high performance and energy efficiency. This paper critically reviews influential tools and design methodologies for Deep Learning accelerators, offering a wide perspective on this rapidly evolving field. This work complements surveys on architectures and accelerators by covering hardware-software co-design, automated synthesis, domain-specific compilers, design space exploration, modeling, and simulation, providing insights into technical challenges and open research directions.
In this work, we have grown ~100 nm thick pristine FeSe films by pulsed laser deposition. The films were structurally characterized with X-ray diffraction, and their surface morphology was checked through atomic force microscopy. Microwave measurements, performed with a dielectric loaded resonator tuned at a frequency of 8 GHz, allowed the characterization of the samples' surface resistance, in view of potential applications in microwave haloscopes for dark matter searches. Here, we report a comparison of the microwave properties of FeSe and Fe(Se,Te) thin films as the temperature is swept from 4 K to 20 K. By applying a constant static magnetic field of 12 T, it was also possible to discern the magnetic field resilience of the two samples. FeSe showed a larger critical temperature drift as the field is applied, while the Fe(Se,Te) response broadens remarkably less. A preliminary analysis of vortex pinning shows margins for optimizing pinning in FeSe.
We analyze an axisymmetric equilibrium of a plasma endowed with toroidal and poloidal velocity fields, with the aim of characterizing the influence of the global motion on the morphology of the magnetic confinement. We construct our configuration assuming that the poloidal velocity field is aligned with the poloidal magnetic field lines and, furthermore, we require that the plasma mass density depend on the magnetic flux function (or, equivalently, that the plasma fluid be incompressible). We then derive a Grad-Shafranov-like equation for such an equilibrium and apply it to tokamak-relevant situations, with particular reference to TCV-like profiles. The main result of the present study concerns the emergence, in configurations associated with a double-null profile, of a closed surface of null pressure encompassing the two X-points of the magnetic configuration. This scenario suggests the possible existence of a new regime of plasma equilibrium, corresponding to improved plasma confinement near the X-points and consequently reduced power transfer to the tokamak divertor.
Polarimetry and optical imaging techniques face challenges in photon-starved scenarios, where the low number of detected photons imposes a trade-off between image resolution, integration time, and sample sensitivity. Here we introduce a quantum-inspired method, functional classical shadows, for reconstructing a polarization profile in the low photon-flux regime. Our method harnesses correlations between neighbouring datapoints, based on the recent realisation that machine learning can estimate multiple physical quantities from a small number of non-identical samples. This is applied to the experimental reconstruction of polarization as a function of wavelength. Although the quantum formalism helps structure the problem, our approach suits arbitrary intensity regimes.
This work explores entropy analysis as a tool for probing information distribution within Transformer-based architectures. By quantifying token-level uncertainty and examining entropy patterns across different stages of processing, we investigate how information is managed and transformed within these models. As a case study, we apply the methodology to a GPT-based large language model, illustrating its potential to reveal insights into model behavior and internal representations. This approach may contribute to the development of interpretability and evaluation frameworks for Transformer-based models.
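The token-level uncertainty referred to above is, at its core, the Shannon entropy of the model's next-token distribution. As a minimal illustrative sketch (not the paper's actual pipeline), the function below computes that entropy from a vector of logits:

```python
import numpy as np

def token_entropy(logits):
    """Shannon entropy (in nats) of the next-token distribution
    implied by a vector of unnormalized logits."""
    z = logits - np.max(logits)            # stabilize the softmax
    p = np.exp(z) / np.sum(np.exp(z))      # softmax probabilities
    return float(-np.sum(p * np.log(p + 1e-12)))

# A uniform distribution over V tokens has the maximal entropy log(V);
# a sharply peaked distribution has entropy close to 0.
uniform = token_entropy(np.zeros(100))              # ~ log(100)
peaked = token_entropy(np.array([10.0, 0.0, 0.0]))  # close to 0
```

Tracking this quantity per token and per layer is one simple way to expose how concentrated or diffuse the model's internal predictions are at each stage.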
Inertial confinement fusion requires a constant search for the most effective materials for improving the efficiency of capsule compression and of the laser-to-target energy transfer. Foams could provide a solution to these problems, but they require further experimental and theoretical investigation. New 3D-printing technologies, such as two-photon polymerization, are opening a new era in the production of foams, allowing fine control of the material morphology. Detailed studies of their interaction with high-power lasers in regimes relevant to inertial confinement fusion are still very few in the literature, and more investigation is needed. In this work we present the results of an experimental campaign performed at the ABC laser facility at ENEA Centro Ricerche Frascati, where 3D-printed micro-structured materials were irradiated at high power. 3D simulations of the laser-target interaction performed with the FLASH code reveal strong scattering when the center of the focal spot is on the through hole of the structure. The time required for the laser to completely ablate the structure, obtained from the simulations, is in good agreement with the experimental measurement. Measurements of the reflected and transmitted laser light indicate that scattering occurred during the irradiation, in accordance with the simulations. Two-plasmon decay has also been found to be active during irradiation.
The 1888 paper by Salvatore Pincherle (Professor of Mathematics at the University of Bologna) on generalized hypergeometric functions is revisited. We point out the pioneering contribution of the Italian mathematician towards the Mellin-Barnes integrals, based on the duality principle between linear differential equations and linear difference equations with rational coefficients. By extending the original arguments used by Pincherle, we also show how to formally derive the linear differential equation and the Mellin-Barnes integral representation of the Meijer G-functions.
This study addresses the challenge of access point (AP) and user equipment (UE) association in cell-free massive MIMO networks. It introduces a deep learning algorithm leveraging Bidirectional Long Short-Term Memory cells and a hybrid probabilistic methodology for weight updating. This approach enhances scalability by adapting to variations in the number of UEs without requiring retraining. Additionally, the study presents a training methodology that improves scalability not only with respect to the number of UEs but also to the number of APs. Furthermore, a variant of the proposed AP-UE algorithm ensures robustness against pilot contamination effects, a critical issue arising from pilot reuse in channel estimation. Extensive numerical results validate the effectiveness and adaptability of the proposed methods, demonstrating their superiority over widely used heuristic alternatives.
This paper focuses on the limit application of judo throws, performed tactically at the first moment of contact, with some information that may be astonishing at first sight but is biomechanically grounded, and that is seldom applied, either because it appears to contradict sound common sense or because it lies outside the old oral judo tradition. To this end, we provide an appraisal of the concept of grips and its consequences in Olympic sport judo from a biomechanical perspective, deepening both the concept and the function of grips and defining the potential application of some throws without grips. Broadening this picture, we underline some specific throwing situations in which grips are not applied at all, or are applied in a non-conventional way. We first describe the problem from the theoretical point of view, and then seek practical applications, either original or already developed in high-level competitions. The provocative phrases "judo without grips" and "throws without grips" refer to the limit application of certain biomechanical expedients, grounded on two well-known physical principles: anticipating the attack in time, in Japanese Sen no Sen (already applied in real competitions), and exploiting one's own inertia at high attack speed to apply, in a totally original way, one of the two biomechanical tools used to throw the human body.
This paper presents a method for task allocation and trajectory generation in cooperative inspection missions using a fleet of multirotor drones, with a focus on wind turbine inspection. The approach generates safe, feasible flight paths that adhere to time-sensitive constraints and vehicle limitations by formulating an optimization problem based on Signal Temporal Logic (STL) specifications. An event-triggered replanning mechanism addresses unexpected events and delays, while a generalized robustness scoring method incorporates user preferences and minimizes task conflicts. The approach is validated through simulations in MATLAB and Gazebo, as well as field experiments in a mock-up scenario.
The background magnetic geometry at the edge of a tokamak plasma has to be designed to mitigate the particle and energy losses essentially due to turbulent transport. The Divertor-Tokamak-Test (DTT) facility under construction at ENEA Frascati will test several magnetic configurations and mitigation strategies, which are usually based on the realization of nontrivial topologies in which one or more X-points are present. In order to gain a clear understanding of turbulent transport near one of such X-points, we perform 3D electrostatic fluid simulations of the tokamak edge plasma for a DTT-like scenario. We outline: i) the resulting turbulent spectral features and their dependence on some model parameters (the background pressure gradients and diffusivity) and on the magnetic geometry, through a comparative analysis with the results of the companion paper [19]; ii) the connection between small-scale poloidal structures and toroidal asymmetries; iii) the formation of quiescent regions; iv) the crucial role of radial Dirichlet boundary conditions for the excitation of zonal flows that can screen the radial component of the magnetic geometry.
Modern cosmological research still thoroughly debates the discrepancy, ranging from 4 to 6$\sigma$, between local probes and Cosmic Microwave Background observations in measurements of the Hubble constant ($H_0$). In the current study, we examine this tension using Supernovae Ia (SNe Ia) data from the Pantheon, Pantheon+ (P+), Joint Lightcurve Analysis (JLA), and Dark Energy Survey (DES) catalogs, combined into the so-called Master Sample. The sample contains 3714 SNe Ia, which are divided into redshift-ordered bins. Three binning techniques are presented: equi-population, moving window (MW), and equi-spacing in $\log z$. We perform a Markov Chain Monte Carlo (MCMC) analysis for each bin to determine the $H_0$ value, estimating it within the standard flat $\Lambda$CDM and the $w_0 w_a$CDM models. These $H_0$ values are then fitted with the following phenomenological function: $\mathcal{H}_0(z) = \tilde{H}_0 / (1 + z)^\alpha$, where $\tilde{H}_0$ is a free parameter representing $\mathcal{H}_0(z)$ fitted at $z = 0$, and $\alpha$ is the evolutionary parameter. Our results indicate a decreasing trend characterized by $\alpha \sim 0.01$, whose consistency with zero ranges from $1\sigma$ in 5 cases to $3\sigma$ in 1 case and $> 3\sigma$ in 11 cases, across several samples and configurations. Such a trend in the SNe Ia catalogs could be due to redshift evolution of the astrophysical variables or to unveiled selection biases. Alternatively, intrinsic physics, possibly an $f(R)$ theory of gravity, could be responsible for this trend.
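The phenomenological function $\mathcal{H}_0(z) = \tilde{H}_0 / (1 + z)^\alpha$ is linear in log space, so both parameters can be recovered from binned $H_0$ values with an ordinary least-squares line fit. A minimal sketch on synthetic, purely illustrative numbers (not the paper's data):

```python
import numpy as np

# The binned H0 values are fitted with H0(z) = H0_tilde / (1 + z)**alpha.
# Taking logs gives a linear model, ln H0 = ln H0_tilde - alpha * ln(1 + z),
# so both parameters follow from a least-squares straight-line fit.
z = np.linspace(0.01, 1.0, 20)        # illustrative redshift bins
h0 = 73.0 / (1.0 + z) ** 0.01         # synthetic, noiseless H0 values
slope, intercept = np.polyfit(np.log1p(z), np.log(h0), 1)
alpha = -slope                        # evolutionary parameter
h0_tilde = float(np.exp(intercept))   # H0 extrapolated to z = 0
```

With real binned estimates one would instead weight each point by its MCMC uncertainty; the log-linear structure of the fit is unchanged.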
The use of operator methods of algebraic nature is shown to be a very powerful tool to deal with different forms of relativistic wave equations. The methods provide either exact or approximate solutions for various forms of differential equations, such as the relativistic Schrödinger, Klein-Gordon, and Dirac equations. We discuss the free-particle hypotheses and those relevant to particles subject to non-trivial potentials. In the latter case we show how the proposed method leads to easily implementable numerical algorithms.
The GERDA experiment at the Laboratori Nazionali del Gran Sasso (LNGS) searches for the neutrinoless double beta decay of 76-Ge. In view of the GERDA Phase II data collection, four new 228-Th radioactive sources for the calibration of the germanium detectors enriched in 76-Ge have been produced with a new technique, leading to a reduced neutron flux from (α,n) reactions. The gamma activities of the sources were determined with a total uncertainty of 4 percent using an ultra-low background HPGe detector operated underground at LNGS. The emitted neutron flux was determined using a low background LiI(Eu) detector and a 3-He counter at LNGS. In both cases, a reduction of about one order of magnitude with respect to commercially available 228-Th sources was obtained. Additionally, a dedicated leak test with a sensitivity to leaks down to 10 mBq was developed to investigate the tightness of the stainless steel capsules housing the sources after their use in a cryogenic environment.
Word representation is fundamental in NLP tasks, because encoding the semantic closeness between words is precisely what makes it possible to teach a machine to understand text. Despite the spread of word embedding concepts, achievements in linguistic contexts other than English are still few. In this work, by analysing the semantic capacity of the Word2Vec algorithm, an embedding for the Italian language is produced. Parameter settings such as the number of epochs, the size of the context window, and the number of negatively backpropagated samples are explored.
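For context, the context-window hyperparameter explored above controls which (center, context) pairs the skip-gram variant of Word2Vec trains on. A minimal sketch of that pair extraction on a made-up Italian sentence (illustrative only; actual training would typically use a library such as gensim):

```python
# Skip-gram (center, context) pair extraction for a given context window,
# the preprocessing step governed by the 'window' hyperparameter.
def skipgram_pairs(tokens, window):
    pairs = []
    for i, center in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

sentence = ["il", "gatto", "dorme", "sul", "divano"]
pairs = skipgram_pairs(sentence, window=2)
```

A wider window yields more pairs per sentence and captures broader, more topical similarity, while a narrow window emphasizes syntactic closeness; this is one of the trade-offs the parameter exploration probes.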
Riptide is a detector concept aiming to track fast neutrons. It is based on neutron-proton elastic collisions inside a plastic scintillator, where the neutron momentum can be measured by imaging the scintillation light. More specifically, by stereoscopically imaging the recoil proton tracks, the proposed apparatus provides neutron spectrometry capability and enables the online analysis of the specific energy loss along the track. In principle, the spatial and topological event reconstruction enables particle discrimination, which is a crucial property for neutron detectors. In this contribution, we report the advances on the Riptide detector concept. In particular, we have developed a Geant4 optical simulation to demonstrate the possibility of reconstructing with sufficient precision the tracks and the vertices of neutron interactions inside a plastic scintillator. To realistically model the optics of the scintillation detector, mono-energetic protons were generated inside a 6×6×6 cm³ cubic BC-408 scintillator, and the produced optical photons were propagated and then recorded on a scoring plane corresponding to the surfaces of the cube. The photons were then transported through an optical system to a 2×2 cm² photosensitive area with 1 megapixel. Moreover, we have developed two different analysis procedures to reconstruct 3D tracks: one based on data fitting and one on Principal Component Analysis. The main results of this study are presented, with a particular focus on the role of the optical system and the attainable spatial and energy resolution.
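The Principal Component Analysis procedure for 3D track reconstruction amounts to fitting a straight line to the reconstructed hit positions: the first principal axis of the point cloud estimates the track direction. A minimal sketch on synthetic points (illustrative only, not the detector's actual reconstruction code):

```python
import numpy as np

# Line fit via PCA/SVD: the first principal axis of the centred hit cloud
# estimates the proton track direction.
def fit_track(points):
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)  # rows of vt = principal axes
    return centroid, vt[0]                       # point on line, unit direction

# Synthetic hits along a known unit direction with small transverse smearing
rng = np.random.default_rng(0)
true_dir = np.array([1.0, 2.0, 2.0]) / 3.0
t = np.linspace(0.0, 1.0, 50)
hits = t[:, None] * true_dir + rng.normal(0.0, 0.01, size=(50, 3))
centroid, direction = fit_track(hits)
```

The ratio between the first and the remaining singular values also gives a simple straightness measure, useful when discriminating track-like from blob-like topologies.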
The Fusion Evaluated Nuclear Data Library (FENDL) is a comprehensive and validated collection of nuclear cross section data coordinated by the International Atomic Energy Agency (IAEA) Nuclear Data Section (NDS). FENDL assembles the best nuclear data for fusion applications selected from available nuclear data libraries and has been under development for decades. FENDL contains sub-libraries for incident neutron, proton, and deuteron cross sections, including general purpose and activation files used for particle transport and nuclide inventory calculations. We describe the history, the selection of evaluations for the various sub-libraries (neutron, proton, deuteron) with a focus on transport and reactor dosimetry applications, the processing of the nuclear data for application codes, and the development of the TENDL-2017 library, which is the currently recommended activation library for FENDL. We briefly describe the IAEA IRDFF library as the recommended library for fusion dosimetry applications. We also present work on validation of the neutron sub-library using a variety of fusion-relevant computational and experimental benchmarks. A variety of cross section libraries are used for the validation work, including FENDL-2.1, FENDL-3.1d, FENDL-3.2, ENDF/B-VIII.0, and JEFF-3.2, with the emphasis on the FENDL libraries. The results of the experimental validation showed that the performance of FENDL-3.2b is at least as good as, and in most cases better than, FENDL-2.1. Future work will consider improved evaluations developed by the International Nuclear Data Evaluation Network (INDEN). Additional work will be needed to investigate differences in gas production in structural materials. Covariance matrices need to be updated to support the development of fusion technology. Additional validation work for high-energy neutrons, protons and deuterons, and for the activation library, will be needed.
We demonstrate analytically that, in toroidal plasmas, scattering by drift wave turbulence could lead to appreciable damping of toroidal Alfvén eigenmodes via generation of short-wavelength, electron Landau damped kinetic Alfvén waves. A corresponding analytic expression for the damping rate is derived, and found to be typically comparable to the linear drive by energetic particles. The implications of this novel mechanism for the transport and heating processes in burning plasmas are also discussed.
We analyse an n-dimensional Generalized Uncertainty Principle (GUP) quantization framework, characterized by the non-commutative nature of the configurational variables. First, we identify a set of states which are maximally localized only along a single direction, at the expense of being less localized in all the other ones. Subsequently, in order to recover information about localization on the whole configuration space, we use the only state of the theory which exhibits maximal localization simultaneously in every direction to construct a satisfactory quasi-position representation, by virtue of a suitable translational operator. The resulting quantum framework is then applied to model the dynamics of the Bianchi I cosmology. The corresponding Wheeler-DeWitt equation is reduced to a Schrödinger dynamics for the two anisotropy degrees of freedom, using a WKB representation for the volume-like variable of the Universe, in accordance with the Vilenkin scenario. The main result of our cosmological implementation is that a wave packet peaked at some point of the configuration space, represented in the quasi-position variables, favours exactly the initial configuration as the most probable one for a relatively long time, compared with the ordinary quantum theory. This preference arises from the distinct behavioral dynamics exhibited by wave packets in the two quantum theories.