Università degli Studi dell’Aquila
Being predominant in digital entertainment for decades, video games have been recognized as valuable software artifacts by the software engineering (SE) community only recently. Such an acknowledgment has unveiled several research opportunities, ranging from empirical studies to the application of AI techniques for classification tasks. In this respect, several curated game datasets have been released for research purposes, even though the collected data are insufficient to support the application of advanced models or to enable interdisciplinary studies. Moreover, most of them are limited to PC games, thus excluding popular gaming platforms, e.g., PlayStation, Xbox, and Nintendo. In this paper, we propose PlayMyData, a curated dataset composed of 99,864 multi-platform games gathered from the IGDB website. By exploiting a dedicated API, we collect relevant metadata for each game, e.g., description, genre, rating, gameplay video URLs, and screenshots. Furthermore, we enrich PlayMyData with the time needed to complete each game by mining the HLTB website. To the best of our knowledge, this is the most comprehensive dataset in the domain that can be used to support different automated tasks in SE. More importantly, PlayMyData can be used to foster cross-domain investigations built on top of the provided multimedia data.
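As an illustration of the kind of metadata collection described above, the minimal Python sketch below queries the IGDB v4 REST API for game records. The endpoint, Apicalypse field names, and Twitch OAuth credentials are assumptions based on the public IGDB documentation, not the exact pipeline used to build PlayMyData, and running it requires valid credentials.

```python
# Minimal sketch of querying the IGDB v4 API for game metadata.
# Assumes a Twitch client ID and OAuth bearer token (see the IGDB docs);
# the field names below are illustrative and may differ from the actual
# PlayMyData collection pipeline.
import requests

IGDB_GAMES_URL = "https://api.igdb.com/v4/games"
CLIENT_ID = "your-twitch-client-id"       # placeholder
ACCESS_TOKEN = "your-oauth-bearer-token"  # placeholder

def fetch_games(limit=50, offset=0):
    headers = {
        "Client-ID": CLIENT_ID,
        "Authorization": f"Bearer {ACCESS_TOKEN}",
    }
    # Apicalypse query: select the metadata fields of interest.
    body = (
        "fields name, summary, genres.name, rating, platforms.name, "
        "videos.video_id, screenshots.url; "
        f"limit {limit}; offset {offset};"
    )
    resp = requests.post(IGDB_GAMES_URL, headers=headers, data=body)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for game in fetch_games(limit=10):
        print(game.get("name"), "-", game.get("rating"))
```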
New gauge bosons coupled to heavy neutral leptons (HNLs) are simple and well-motivated extensions of the Standard Model. In searches for HNLs in proton fixed-target experiments, we find that a large population of gauge bosons ($Z^\prime$) produced by proton bremsstrahlung may decay to HNLs, leading to a significant improvement in existing bounds in the $(m_{HNL}, U_{\alpha})$ plane, where $U_\alpha$ represents the mixing between the HNL and the active neutrinos of flavor $\alpha$. We study this possibility in fixed-target experiments with 8 GeV proton beams, including SBND, MicroBooNE, and ICARUS, as well as DUNE and DarkQuest at 120 GeV. We find that the projected sensitivities to additional $Z^\prime$-mediated HNL production can bring the seesaw mechanism of neutrino masses within a broadened experimental reach.
We investigate properties of the (conditional) law of the solution to SDEs driven by fractional Brownian noise with a singular, possibly distributional, drift. Our results on the law are twofold: i) we quantify the spatial regularity of the law, while keeping track of integrability in time, and ii) we prove that it has a density with Gaussian tails. Then the former result is used to establish novel results on existence and uniqueness of solutions to McKean-Vlasov equations of convolutional type.
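For concreteness, a McKean-Vlasov equation of convolutional type in this setting can be written, under the assumptions on the drift and on the Hurst parameter made in the paper (not restated here), as
\[
dX_t = (b * \mu_t)(X_t)\,dt + dB^H_t, \qquad \mu_t = \mathrm{Law}(X_t),
\]
where $B^H$ is a fractional Brownian motion with Hurst parameter $H$ and $b$ is the singular, possibly distributional, drift, so that the convolution $b * \mu_t$ constitutes the mean-field interaction term.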
Model-Driven Engineering (MDE) has seen significant advancements with the integration of Machine Learning (ML) and Deep Learning (DL) techniques. Building upon the groundwork of previous investigations, our study provides a concise overview of current Large Language Model (LLM) applications in MDE, emphasizing their role in automating tasks like model repository classification and developing advanced recommender systems. The paper also outlines the technical considerations for seamlessly integrating LLMs in MDE, offering a practical guide for researchers and practitioners. Looking forward, the paper proposes a focused research agenda for the future interplay of LLMs and MDE, identifying key challenges and opportunities. This concise roadmap envisions the deployment of LLM techniques to enhance the management, exploration, and evolution of modeling ecosystems. By offering a compact exploration of LLMs in MDE, this paper contributes to the ongoing evolution of MDE practices, providing a forward-looking perspective on the transformative role of Large Language Models in software engineering and model-driven practices.
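As a purely illustrative example of one task mentioned above (model repository classification with an LLM), the sketch below sends a model's textual serialization to an LLM with a zero-shot classification prompt. The label set is invented for the example, and the LLM is injected as a generic prompt-in/text-out callable standing in for whatever chat-completion client one uses; this is a generic sketch, not the pipeline discussed in the paper.

```python
# Illustrative zero-shot classification of a model repository artifact with an LLM.
# The candidate labels are invented for the example, and `llm` is any
# prompt-in/text-out callable (a placeholder for a real chat-completion client).
from typing import Callable, List

CANDIDATE_LABELS: List[str] = ["class diagram", "state machine", "petri net", "other"]

def build_prompt(model_text: str, labels: List[str]) -> str:
    return (
        "Classify the following software model into exactly one of these "
        f"categories: {', '.join(labels)}. Answer with the label only.\n\n"
        f"Model:\n{model_text}\n"
    )

def classify_model(model_text: str, llm: Callable[[str], str]) -> str:
    answer = llm(build_prompt(model_text, CANDIDATE_LABELS)).strip().lower()
    return answer if answer in CANDIDATE_LABELS else "other"

if __name__ == "__main__":
    # Stub LLM for demonstration; replace with a real client call.
    stub_llm = lambda prompt: "state machine"
    print(classify_model("stateMachine Door { state Open; state Closed; }", stub_llm))
```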
This survey is a chapter of a forthcoming book. This chapter recalls the classical formulation of the Div-Curl lemma along with its proof, and presents some possible generalizations in the fractional setting, within the framework of the Riesz fractional gradient and divergence introduced by Shieh and Spector (2015) and further developed by Comi and Stefani (2019).
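For the reader's convenience, one standard formulation of the classical lemma (which may differ in minor details from the one adopted in the chapter) is the following: if $u_k \rightharpoonup u$ and $v_k \rightharpoonup v$ weakly in $L^2(\Omega;\mathbb{R}^n)$, with $(\operatorname{div} u_k)_k$ precompact in $H^{-1}(\Omega)$ and $(\operatorname{curl} v_k)_k$ precompact in $H^{-1}(\Omega;\mathbb{R}^{n\times n})$, then
\[
u_k \cdot v_k \longrightarrow u \cdot v \quad \text{in } \mathcal{D}'(\Omega).
\]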
Cosmic-ray physics in the GeV-to-TeV energy range has entered a precision era thanks to recent data from space-based experiments. However, the poor knowledge of nuclear reactions, in particular for the production of antimatter and secondary nuclei, limits the information that can be extracted from these data, such as source properties, transport in the Galaxy and indirect searches for particle dark matter. The Cross-Section for Cosmic Rays at CERN workshop series has addressed the challenges encountered in the interpretation of high-precision cosmic-ray data, with the goal of strengthening emergent synergies and taking advantage of the complementarity and know-how in different communities, from theoretical and experimental astroparticle physics to high-energy and nuclear physics. In this paper, we present the outcomes of the third edition of the workshop, which took place in 2024. We review the current state of cosmic-ray experiments and their perspectives, and provide a detailed road map to close the most urgent gaps in cross-section data, in order to progress efficiently on the many open physics cases motivated in the paper. Finally, with the aim of being as exhaustive as possible, this report touches on several other fields -- such as cosmogenic studies, space radiation protection and hadrontherapy -- where overlapping and specific new cross-section measurements, as well as nuclear code improvement and benchmarking efforts, are also needed. We also briefly highlight further synergies between astroparticle and high-energy physics on the question of cross-sections.
Let $G=(V,E)$ be an $n$-node non-negatively real-weighted undirected graph. In this paper we show how to enrich a {\em single-source shortest-path tree} (SPT) of $G$ with a \emph{sparse} set of \emph{auxiliary} edges selected from $E$, in order to create a structure which effectively tolerates a \emph{path failure} in the SPT. This consists of a simultaneous fault of a set $F$ of at most $f$ adjacent edges along a shortest path emanating from the source, and it is recognized as one of the most frequent disruptions in an SPT. We show that, for any integer parameter $k \geq 1$, it is possible to provide a very sparse (i.e., of size $O(kn\cdot f^{1+1/k})$) auxiliary structure that carefully approximates (i.e., within a stretch factor of $(2k-1)(2|F|+1)$) the true shortest paths from the source during the lifetime of the failure. Moreover, we show that our construction can be further refined to get a stretch factor of $3$ and a size of $O(n \log n)$ for the special case $f=2$, and that it can be converted into a very efficient \emph{approximate-distance sensitivity oracle}, which allows one to quickly (even in optimal time, if $k=1$) reconstruct the shortest paths (w.r.t. our structure) from the source after a path failure, thus permitting the needed rerouting operations to be performed promptly. Our structure compares favorably with previously known solutions, as we discuss in the paper, and it is also very effective in practice, as we assess through a large set of experiments.
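To fix ideas on the problem setup (not on the construction of the paper), the sketch below builds a single-source shortest-path tree with Dijkstra's algorithm and, after a path failure, naively recomputes shortest paths on the surviving subgraph. This naive recomputation is exactly the expensive baseline that the sparse auxiliary structure and sensitivity oracle described above are designed to avoid.

```python
# Problem-setup sketch only: SPT construction and naive recovery after a path
# failure (a set F of adjacent edges on a source-emanating shortest path).
# This is NOT the sparse structure or the oracle of the paper, just the baseline.
import heapq

def dijkstra(n, adj, src):
    """adj[u] = list of (v, w). Returns (dist, parent) of the SPT rooted at src."""
    dist = [float("inf")] * n
    parent = [-1] * n
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                parent[v] = u
                heapq.heappush(pq, (d + w, v))
    return dist, parent

def distances_after_failure(n, adj, src, failed_edges):
    """Naive baseline: rerun Dijkstra on G minus the failed path edges."""
    failed = {frozenset(e) for e in failed_edges}
    pruned = [[(v, w) for v, w in adj[u] if frozenset((u, v)) not in failed]
              for u in range(n)]
    return dijkstra(n, pruned, src)[0]

if __name__ == "__main__":
    # Tiny example: path 0-1-2-3 plus a backup edge (0, 3).
    adj = [[(1, 1.0), (3, 10.0)], [(0, 1.0), (2, 1.0)],
           [(1, 1.0), (3, 1.0)], [(2, 1.0), (0, 10.0)]]
    print(dijkstra(4, adj, 0)[0])
    print(distances_after_failure(4, adj, 0, failed_edges=[(1, 2), (2, 3)]))
```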
We perform an updated global analysis of the known and unknown parameters of the standard $3\nu$ framework as of 2025. The known oscillation parameters include three mixing angles $(\theta_{12},\,\theta_{23},\,\theta_{13})$ and two squared mass gaps, chosen as $\delta m^2=m^2_2-m^2_1>0$ and $\Delta m^2=m^2_3-{\textstyle\frac{1}{2}}(m^2_1+m^2_2)$, where $\alpha=\mathrm{sign}(\Delta m^2)$ distinguishes normal ordering (NO, $\alpha=+1$) from inverted ordering (IO, $\alpha=-1$). With respect to our previous 2021 update, the combination of oscillation data leads to appreciably reduced uncertainties for $\theta_{23}$, $\theta_{13}$ and $|\Delta m^2|$. In particular, $|\Delta m^2|$ is the first $3\nu$ parameter to enter the domain of subpercent precision (0.8\% at $1\sigma$). We underline some issues about systematics that might affect this error estimate. Concerning oscillation unknowns, we find a relatively weak preference for NO versus IO (at $2.2\sigma$), for CP violation versus conservation in NO ($1.3\sigma$) and for the first $\theta_{23}$ octant versus the second in NO ($1.1\sigma$). We discuss the status and qualitative prospects of the mass ordering hint in the plane $(\delta m^2,\,\Delta m^2_{ee})$, where $\Delta m^2_{ee}=|\Delta m^2|+{\textstyle\frac{1}{2}}\alpha(\cos^2\theta_{12}-\sin^2\theta_{12})\delta m^2$, to be measured by the JUNO experiment with subpercent precision. We also discuss upper bounds on nonoscillation observables. We report $m_\beta<0.50$~eV and $m_{\beta\beta}<0.086$~eV ($2\sigma$). Concerning the sum of neutrino masses $\Sigma$, we discuss representative combinations of data, with or without augmenting the $\Lambda$CDM model with extra parameters accounting for possible systematics or new physics. The resulting $2\sigma$ upper limits are roughly spread around the bound $\Sigma < 0.2$~eV within a factor of three. [Abridged]
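For the reader's convenience, the two conventional splittings follow directly from the definitions above:
\[
\Delta m^2_{31} = m^2_3 - m^2_1 = \Delta m^2 + {\textstyle\frac{1}{2}}\,\delta m^2, \qquad
\Delta m^2_{32} = m^2_3 - m^2_2 = \Delta m^2 - {\textstyle\frac{1}{2}}\,\delta m^2 .
\]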
In this paper we analyse the spectro-photometric properties of the Type II supernova SN~2024bch, which exploded in NGC~3206 at a distance of $19.9\,\rm{Mpc}$. Its early spectra are characterised by narrow high-ionisation emission lines, often interpreted as signatures of ongoing interaction between rapidly expanding ejecta and a confined dense circumstellar medium. However, we provide a model for the bolometric light curve of the transient that does not require energy sources other than radioactive decays and H recombination. Our model can reproduce the bolometric light curve of SN~2024bch adopting an ejected mass of $M_{bulk}\simeq5\,M_\odot$ surrounded by an extended envelope of only $0.2\,M_\odot$ with an outer radius $R_{env}=7.0\times10^{13}\,\rm{cm}$. An accurate modelling focused on the radioactive part of the light curve, which accounts for incomplete $\gamma$-ray trapping, gives a $^{56}\rm{Ni}$ mass of $0.048\,M_\odot$. We propose that the narrow lines are powered by Bowen fluorescence induced by scattering of He~II Ly$\alpha$ photons, resulting in the emission of high-ionisation resonance lines. Simple light-travel-time calculations based on the maximum phase of the narrow emission lines place the inner radius of the H-rich, un-shocked shell at $\simeq4.4\times10^{15}\,\rm{cm}$, compatible with the absence of ejecta-CSM interaction during the first weeks of evolution. Possible signatures of interaction appear only $\sim69\,\rm{days}$ after explosion, although the resulting conversion of kinetic energy into radiation does not seem to contribute significantly to the total luminosity of the transient.
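For reference, a commonly used schematic parametrization of the radioactive tail with incomplete $\gamma$-ray trapping (not necessarily the exact prescription adopted in the modelling above) is
\[
L(t) \simeq M_{\rm Ni}\left[\epsilon_{\rm Ni}\,e^{-t/\tau_{\rm Ni}} + \epsilon_{\rm Co}\,e^{-t/\tau_{\rm Co}}\right]\left(1 - e^{-(T_0/t)^2}\right),
\]
where $\epsilon_{\rm Ni}$ and $\epsilon_{\rm Co}$ are the specific heating rates associated with the $^{56}$Ni and $^{56}$Co decays, $\tau_{\rm Ni}$ and $\tau_{\rm Co}$ their decay timescales, and $T_0$ the characteristic $\gamma$-ray trapping timescale, so that full trapping is recovered for $t \ll T_0$.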
Model predictive control (MPC) can provide significant energy cost savings in building operations in the form of energy-efficient control with better occupant comfort, lower peak demand charges, and risk-free participation in demand response. However, the engineering effort required to obtain physics-based models of buildings is considered to be the biggest bottleneck in making MPC scalable to real buildings. In this paper, we propose a data-driven control algorithm based on neural networks to reduce this cost of model identification. Our approach does not require building domain expertise or retrofitting of existing heating and cooling systems. We validate our learning and control algorithms on a two-story building with ten independently controlled zones, located in Italy. We learn dynamical models of energy consumption and zone temperatures with high accuracy and demonstrate energy savings and better occupant comfort compared to the default system controller.
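To make the control pipeline concrete, the sketch below shows one generic way to use a learned dynamics model inside an MPC loop via random-shooting optimization. The model, horizon, actuator bounds, and cost are illustrative placeholders and do not reproduce the learning and control algorithms of the paper.

```python
# Generic sketch of MPC with a learned dynamics model via random shooting.
# `dynamics_model(state, action) -> next_state` stands for any learned predictor
# (e.g., a neural network); horizon, bounds, and cost are illustrative placeholders.
import numpy as np

def random_shooting_mpc(dynamics_model, cost_fn, state, horizon=12,
                        n_candidates=1000, action_low=0.0, action_high=1.0, rng=None):
    """Return the first action of the lowest-cost sampled action sequence."""
    rng = rng or np.random.default_rng()
    # Sample candidate action sequences uniformly within actuator bounds.
    candidates = rng.uniform(action_low, action_high, size=(n_candidates, horizon))
    best_cost, best_first_action = np.inf, None
    for seq in candidates:
        s, total = np.array(state, dtype=float), 0.0
        for a in seq:
            s = dynamics_model(s, a)        # predicted next state (e.g., zone temps)
            total += cost_fn(s, a)          # e.g., energy use + comfort violation
        if total < best_cost:
            best_cost, best_first_action = total, seq[0]
    return best_first_action

# Example usage with a toy linear "learned" model and a comfort-tracking cost.
if __name__ == "__main__":
    model = lambda s, a: 0.9 * s + 2.0 * a            # stand-in for a trained NN
    cost = lambda s, a: (s[0] - 22.0) ** 2 + 0.1 * a  # track 22 C, penalize energy
    print(random_shooting_mpc(model, cost, state=[18.0]))
```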
Consensus and Broadcast are two fundamental problems in distributed computing, whose solutions have several applications. Intuitively, Consensus should be no harder than Broadcast, and this can be rigorously established in several models. Can Consensus be easier than Broadcast? In models that allow noiseless communication, we prove a reduction of (a suitable variant of) Broadcast to binary Consensus that preserves the communication model and all complexity parameters such as randomness, number of rounds, communication per round, etc., at the cost of a loss in the success probability of the protocol. Using this reduction, we get, among other applications, the first logarithmic lower bound on the number of rounds needed to achieve Consensus in the uniform GOSSIP model on the complete graph. The lower bound is tight and, in this model, Consensus and Broadcast are equivalent. We then turn to distributed models with noisy communication channels that have been studied in the context of some bio-inspired systems. In such models, only one noisy bit is exchanged when a communication channel is established between two nodes, and so one cannot easily simulate a noiseless protocol by using error-correcting codes. An $\Omega(\epsilon^{-2} n)$ lower bound on the number of rounds needed for Broadcast is proved by Boczkowski et al. [PLOS Comp. Bio. 2018] in one such model (noisy uniform PULL, where $\epsilon$ is a parameter that measures the amount of noise). In this model, we prove a new $\Theta(\epsilon^{-2} n \log n)$ bound for Broadcast and a $\Theta(\epsilon^{-2} \log n)$ bound for binary Consensus, thus establishing an exponential gap between the number of rounds necessary for Consensus versus Broadcast.
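As an illustration of the communication model only (not of the protocols analyzed above), the toy simulation below implements binary Consensus dynamics under noisy uniform PULL: in each round every node pulls the current bit of a uniformly random node, and the received bit is flipped with probability $1/2-\epsilon$ (one common convention; the exact parametrization and the protocols of the paper may differ). Each node keeps a running tally of received bits and adopts its sign as its opinion.

```python
# Toy simulation of the noisy uniform PULL model with a naive majority-tally
# rule. Illustrates the communication model only; NOT the protocols of the paper.
import random

def simulate_noisy_pull_consensus(n=1000, eps=0.1, rounds=300,
                                  initial_ones=0.7, seed=0):
    rng = random.Random(seed)
    opinions = [1 if rng.random() < initial_ones else 0 for _ in range(n)]
    tallies = [0] * n
    for _ in range(rounds):
        snapshot = opinions[:]                # pulls read the current state
        for u in range(n):
            bit = snapshot[rng.randrange(n)]  # pull from a uniformly random node
            if rng.random() < 0.5 - eps:      # channel noise flips the bit
                bit ^= 1
            tallies[u] += 1 if bit else -1
            opinions[u] = 1 if tallies[u] > 0 else 0
    return sum(opinions) / n                  # fraction holding opinion 1

if __name__ == "__main__":
    # With a 70% initial majority for 1, the population should converge to 1.
    print(simulate_noisy_pull_consensus())
```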
A tree $\sigma$-spanner of a positively real-weighted $n$-vertex and $m$-edge undirected graph $G$ is a spanning tree $T$ of $G$ which approximately preserves (i.e., up to a multiplicative stretch factor $\sigma$) distances in $G$. Tree spanners with provably good stretch factors find applications in communication networks, distributed systems, and network design. However, finding an optimal or even a good tree spanner is a very hard computational task. Thus, if one has to face a transient edge failure in $T$, the overall effort that has to be afforded to rebuild a new tree spanner (i.e., computational costs, set-up of new links, updating of the routing tables, etc.) can be rather prohibitive. To circumvent this drawback, an effective alternative is that of associating with each tree edge a best possible (in terms of resulting stretch) swap edge -- a well-established approach in the literature for several other tree topologies. Correspondingly, the problem of computing all the best swap edges of a tree spanner is a challenging algorithmic problem, since solving it efficiently means exploiting the structure of shortest paths not only in $G$, but also in all the scenarios in which an edge of $T$ has failed. For this problem we provide a very efficient solution, running in $O(n^2 \log^4 n)$ time, which drastically improves (almost by a quadratic factor in $n$ in dense graphs!) on the previously known best result.
In this letter we report the singlet ground-state structure of the full carotenoid peridinin by means of variational Monte Carlo (VMC) calculations. The VMC relaxed geometry has an average bond length alternation of 0.1165(10) {\AA}, larger than the values obtained by DFT (PBE, B3LYP and CAM-B3LYP) and shorter than that calculated at the Hartree-Fock (HF) level. TDDFT and EOM-CCSD calculations on a reduced peridinin model confirm the dominant HOMO-LUMO contribution to the Bu+-like (S2) bright excited state. Many-Body Green's Function Theory (MBGFT) calculations of the vertical excitation energy of the Bu+-like state for the VMC structure (VMC/MBGFT) give an excitation energy of 2.62 eV, in agreement with experimental results in n-hexane (2.72 eV). The dependence of the excitation energy on the bond length alternation in the MBGFT and TDDFT calculations with different functionals is discussed.
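For clarity on the quantity compared across methods above, one common definition of the bond length alternation (BLA) along a conjugated backbone is the difference between the average formal single-bond and double-bond lengths; the snippet below computes it for a toy chain whose values are illustrative placeholders, not the peridinin geometry.

```python
# One common definition of bond length alternation (BLA): the difference between
# the mean lengths of the formal single and double bonds along the conjugated
# chain. The bond lengths below are illustrative placeholders, not the actual
# VMC-relaxed peridinin geometry.
def bond_length_alternation(bond_lengths):
    """bond_lengths: list ordered along the chain, alternating double/single bonds."""
    doubles = bond_lengths[0::2]   # formal double bonds
    singles = bond_lengths[1::2]   # formal single bonds
    return sum(singles) / len(singles) - sum(doubles) / len(doubles)

if __name__ == "__main__":
    toy_chain = [1.36, 1.44, 1.37, 1.45, 1.36, 1.46]  # angstrom, illustrative only
    print(f"BLA = {bond_length_alternation(toy_chain):.4f} A")
```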
We propose an ODE-based derivation for a generalized class of opinion formation models for both single and multiple species (followers, leaders, trolls). The approach is purely deterministic, and the evolution of each opinion is determined by the competition between two mechanisms: opinion diffusion and the compromise process. Such a deterministic approach allows us to recover in the limit an aggregation/(nonlinear) diffusion system of PDEs for the macroscopic opinion densities.
In this paper, we present the first measurement of a Gallium Arsenide crystal operated as a low-temperature calorimeter for direct Dark Matter (DM) searches within the DAREDEVIL (DARk-mattEr DEVIces for Low energy detection) project. In the quest for direct dark matter detection, innovative approaches to lower the detection threshold and explore the sub-GeV DM mass range have gained high relevance in the last decade. This study presents the pioneering use of Gallium Arsenide (GaAs) as a low-temperature calorimeter for probing the dark matter-electron interaction channel. Our experimental setup features a GaAs crystal at an ultralow temperature of 15 mK, coupled with a Neutron Transmutation Doped (NTD) thermal sensor for precise energy estimation. This configuration is the first step towards the detection of single electrons scattered by dark matter particles within the GaAs crystal, with the goal of significantly improving the sensitivity to low-mass dark matter candidates.
The prediction of wind speed is very important when dealing with the production of energy through wind turbines. In this paper, we present a new nonparametric model, based on semi-Markov chains, to predict wind speed. In particular, we use an indexed semi-Markov model that has been shown to reproduce accurately the statistical behavior of wind speed. The model is used to forecast wind speed one step ahead. In order to check the validity of the model we report, as indicators of goodness of fit, the root mean square error and the mean absolute error between real and predicted data. We also compare our forecasting results with those of a persistence model. Finally, we show an application of the model to the prediction of financial indicators such as the Internal Rate of Return, Duration and Convexity.
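The goodness indicators used above are straightforward to compute; the snippet below evaluates RMSE and MAE for a forecast against observed wind speeds and compares them with a persistence baseline, which simply repeats the last observed value. The series is illustrative, not the data of the paper.

```python
# RMSE and MAE of one-step-ahead forecasts versus a persistence baseline
# (the persistence model predicts the next value as the last observed one).
# The wind-speed series below is illustrative, not the data used in the paper.
import math

def rmse(actual, predicted):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mae(actual, predicted):
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

if __name__ == "__main__":
    observed = [5.1, 5.4, 4.9, 6.2, 6.0, 5.7, 6.5]        # m/s, illustrative
    model_forecast = [5.0, 5.3, 5.1, 5.9, 6.1, 5.8, 6.3]  # one-step-ahead forecasts
    persistence = observed[:-1]                           # predicts the previous value
    print("model:       RMSE=%.3f  MAE=%.3f" % (rmse(observed, model_forecast),
                                                mae(observed, model_forecast)))
    print("persistence: RMSE=%.3f  MAE=%.3f" % (rmse(observed[1:], persistence),
                                                mae(observed[1:], persistence)))
```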
The intersection of Quantum Chemistry and Quantum Computing has led to significant advancements in understanding the potential of using quantum devices for the efficient calculation of molecular energies. Simultaneously, this intersection is enhancing the comprehension of quantum chemical properties through the use of quantum computing and quantum information tools. This paper tackles a key question in this relationship: Is the nature of the orbital-wise electron correlations in wavefunctions of realistic prototypical cases classical or quantum? We delve into this inquiry with a comprehensive examination of molecular wavefunctions using Shannon and von Neumann entropies, alongside classical and quantum information theory. Our analysis reveals a notable distinction between classical and quantum mutual information in molecular systems when analyzed with Hartree-Fock canonical orbitals. However, this difference decreases dramatically, by approximately 100-fold, when Natural Orbitals are used as reference. This finding suggests that wavefunction correlations, when viewed through the appropriate orbital basis, are predominantly classical. This insight indicates that computational tasks in quantum chemistry could be significantly simplified by employing Natural Orbitals. Consequently, our study underscores the importance of using Natural Orbitals to accurately assess molecular wavefunction correlations and to avoid their overestimation. In summary, our results suggest a promising path for computational simplification in quantum chemistry, advocating for the wider adoption of Natural Orbitals and raising questions about the actual computational complexity of the multi-body problem in quantum chemistry.
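In standard notation (which may differ in detail from the exact quantities evaluated in the paper), the classical and quantum orbital-orbital mutual information contrasted above are built from Shannon and von Neumann entropies as
\[
I_{\rm cl}(i\!:\!j) = H(p_i) + H(p_j) - H(p_{ij}), \qquad
I_{\rm q}(i\!:\!j) = S(\rho_i) + S(\rho_j) - S(\rho_{ij}),
\]
where $p_i$ and $p_{ij}$ are the one- and two-orbital occupation distributions, $\rho_i$ and $\rho_{ij}$ the corresponding reduced density matrices, $H$ the Shannon entropy, and $S(\rho) = -\mathrm{Tr}\,\rho\ln\rho$ the von Neumann entropy.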
The Cherenkov Telescope Array (CTA), the new-generation ground-based observatory for $\gamma$-ray astronomy, provides unique capabilities to address significant open questions in astrophysics, cosmology, and fundamental physics. We study some of the salient areas of $\gamma$-ray cosmology that can be explored as part of the Key Science Projects of CTA, through simulated observations of active galactic nuclei (AGN) and of their relativistic jets. Observations of AGN with CTA will enable a measurement of $\gamma$-ray absorption on the extragalactic background light with a statistical uncertainty below 15% up to a redshift $z=2$, and will allow us to constrain or detect $\gamma$-ray halos up to intergalactic-magnetic-field strengths of at least 0.3 pG. Extragalactic observations with CTA also show promising potential to probe physics beyond the Standard Model. The best limits on Lorentz invariance violation from $\gamma$-ray astronomy will be improved by a factor of at least two to three. CTA will also probe the parameter space in which axion-like particles could constitute a significant fraction, if not all, of dark matter. We conclude by discussing the synergies between CTA and other upcoming facilities that will foster the growth of $\gamma$-ray cosmology.
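The absorption measurement mentioned above relies on the standard attenuation of the intrinsic AGN spectrum by the extragalactic background light,
\[
F_{\rm obs}(E, z) = F_{\rm int}(E)\, e^{-\tau_{\gamma\gamma}(E, z)},
\]
where $\tau_{\gamma\gamma}(E,z)$ is the pair-production optical depth for a $\gamma$ ray of observed energy $E$ emitted at redshift $z$.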
Recommender systems for software engineering (RSSEs) assist software engineers in dealing with a growing information overload when discerning alternative development solutions. While RSSEs are becoming more and more effective in suggesting handy recommendations, they tend to suffer from popularity bias, i.e., favoring items that are relevant mainly because several developers are using them. While this rewards artifacts that are likely more reliable and well-documented, it also means missing artifacts that are rarely used because they are very specific or more recent. This paper studies popularity bias in Third-Party Library (TPL) RSSEs. First, we investigate whether state-of-the-art research in RSSEs has already tackled the issue of popularity bias. Then, we quantitatively assess four existing TPL RSSEs, exploring their capability to deal with the recommendation of popular items. Finally, we propose a mechanism to defuse popularity bias in the recommendation list. The empirical study reveals that the issue of dealing with popularity in TPL RSSEs has not received adequate attention from the software engineering community. Among the surveyed works, only one starts investigating the issue, albeit with a low prediction performance.
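As an illustration of how popularity bias can be quantified and defused in a recommendation list, the snippet below computes the average popularity of the top-N recommended libraries and applies a simple popularity-penalized re-ranking. This is a generic illustration with invented example data, not the specific mechanism proposed in the paper.

```python
# Generic illustration of measuring and mitigating popularity bias in a TPL
# recommendation list: average popularity of the top-N items, and a simple
# re-ranking that discounts relevance by item popularity. This is NOT the
# specific defusing mechanism proposed in the paper.
def avg_popularity(recommended, popularity):
    """popularity[item] = fraction of projects using the library."""
    return sum(popularity[i] for i in recommended) / len(recommended)

def popularity_penalized_rerank(scored_items, popularity, alpha=0.5, top_n=10):
    """scored_items: dict item -> relevance score; alpha trades relevance vs. novelty."""
    adjusted = {i: s - alpha * popularity.get(i, 0.0) for i, s in scored_items.items()}
    return sorted(adjusted, key=adjusted.get, reverse=True)[:top_n]

if __name__ == "__main__":
    pop = {"junit": 0.9, "slf4j": 0.8, "guava": 0.7, "vavr": 0.05, "jool": 0.02}
    scores = {"junit": 0.95, "slf4j": 0.90, "guava": 0.88, "vavr": 0.80, "jool": 0.78}
    top = popularity_penalized_rerank(scores, pop, alpha=0.5, top_n=3)
    print("re-ranked:", top, " avg popularity:", round(avg_popularity(top, pop), 3))
```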
This editorial presents the various forms of open access, discusses their pros and cons from the perspective of the Journal of Object Technology and its editors-in-chief, and illustrates how JOT implements a platinum open access model. The regular reader will also notice that this editorial features a new template for the journal that will be used from now on.