Accurately tracking the global distribution and evolution of precipitation is essential for both research and operational meteorology. Satellite observations remain the only means of achieving consistent, global-scale precipitation monitoring. While machine learning has long been applied to satellite-based precipitation retrieval, the absence of a standardized benchmark dataset has hindered fair comparisons between methods and limited progress in algorithm development. To address this gap, the International Precipitation Working Group has developed SatRain, the first AI-ready benchmark dataset for satellite-based detection and estimation of rain, snow, graupel, and hail. SatRain includes multi-sensor satellite observations representative of the major platforms currently used in precipitation remote sensing, paired with high-quality reference estimates from ground-based radars corrected using rain gauge measurements. It offers a standardized evaluation protocol to enable robust and reproducible comparisons across machine learning approaches. In addition to supporting algorithm evaluation, the diversity of sensors and inclusion of time-resolved geostationary observations make SatRain a valuable foundation for developing next-generation AI models to deliver more accurate, detailed, and globally consistent precipitation estimates.
Gliomas are the most common primary brain malignancies, with different degrees of aggressiveness, variable prognosis and various heterogeneous histologic sub-regions, i.e., peritumoral edematous/invaded tissue, necrotic core, active and non-enhancing core. This intrinsic heterogeneity is also portrayed in their radio-phenotype, as their sub-regions are depicted by varying intensity profiles disseminated across multi-parametric magnetic resonance imaging (mpMRI) scans, reflecting varying biological properties. Their heterogeneous shape, extent, and location are some of the factors that make these tumors difficult to resect, and in some cases inoperable. The amount of resected tumor is a factor also considered in longitudinal scans, when evaluating the apparent tumor for potential diagnosis of progression. Furthermore, there is mounting evidence that accurate segmentation of the various tumor sub-regions can offer the basis for quantitative image analysis towards prediction of patient overall survival. This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018. Specifically, we focus on i) evaluating segmentations of the various glioma sub-regions in pre-operative mpMRI scans, ii) assessing potential tumor progression by virtue of longitudinal growth of tumor sub-regions, beyond use of the RECIST/RANO criteria, and iii) predicting the overall survival from pre-operative mpMRI scans of patients that underwent gross total resection. Finally, we investigate the challenge of identifying the best ML algorithms for each of these tasks, considering that apart from being diverse on each instance of the challenge, the multi-institutional mpMRI BraTS dataset has also been a continuously evolving/growing dataset.
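To make the segmentation-evaluation task concrete, here is a minimal sketch of the Dice similarity coefficient, the standard overlap metric used to score glioma sub-region segmentations in the BraTS evaluations; the toy binary masks are illustrative stand-ins, not BraTS data.

```python
# Minimal sketch of the Dice similarity coefficient, the standard overlap
# metric for scoring glioma sub-region segmentations. Toy masks only.
import numpy as np

def dice(pred, ref, eps=1e-8):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks of equal shape."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    return 2.0 * np.logical_and(pred, ref).sum() / (pred.sum() + ref.sum() + eps)

pred = np.zeros((64, 64)); pred[20:40, 20:40] = 1   # predicted tumor core
ref = np.zeros((64, 64)); ref[25:45, 22:42] = 1     # reference annotation
print(f"Dice: {dice(pred, ref):.3f}")
```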
The application of Riemannian geometry to the decoding of brain-computer interfaces (BCIs) has swiftly garnered attention because of its straightforwardness, precision, and resilience, along with its aptitude for transfer learning, as demonstrated by significant achievements in global BCI competitions. This paper presents a comprehensive review of recent advancements in the integration of deep learning with Riemannian geometry to enhance EEG signal decoding in BCIs. Our review updates the findings since the last major review in 2017, comparing modern approaches that utilize deep learning to improve the handling of the non-Euclidean data structures inherent in EEG signals. We discuss how these approaches not only tackle the traditional challenges of noise sensitivity, non-stationarity, and lengthy calibration times but also introduce novel classification frameworks and signal processing techniques to significantly mitigate these limitations. Furthermore, we identify current shortcomings and propose future research directions in manifold learning and Riemannian-based classification, focusing on practical implementations and theoretical expansions such as feature tracking on manifolds, multitask learning, feature extraction, and transfer learning. This review aims to bridge the gap between theoretical research and practical, real-world applications, making sophisticated mathematical approaches accessible and actionable for BCI enhancements.
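As background for the reviewed methods, the following is a minimal from-scratch sketch of the classical Riemannian EEG pipeline they build on: per-trial covariance estimation, a geometric (Karcher) mean, and tangent-space feature vectors that any Euclidean classifier can consume. The synthetic trials and all shapes are illustrative assumptions, not the API of any specific BCI toolbox.

```python
# Classical Riemannian EEG pipeline: covariances -> geometric mean ->
# tangent-space features. Synthetic data and shapes are illustrative.
import numpy as np
from scipy.linalg import sqrtm, logm, expm, inv

def spd_covariances(trials):
    """Sample covariance per trial; trials: (n_trials, n_channels, n_times)."""
    return np.array([np.cov(t) for t in trials])

def riemannian_mean(covs, n_iter=20):
    """Fixed-point iteration for the geometric (Karcher) mean of SPD matrices."""
    mean = covs.mean(axis=0)                      # warm start: Euclidean mean
    for _ in range(n_iter):
        m_sqrt = np.real(sqrtm(mean))
        m_isqrt = inv(m_sqrt)
        # average the log-mapped matrices in the tangent space at `mean`
        t = np.mean([np.real(logm(m_isqrt @ c @ m_isqrt)) for c in covs], axis=0)
        mean = m_sqrt @ np.real(expm(t)) @ m_sqrt
    return mean

def tangent_features(covs, mean):
    """Vectorized upper-triangular log-maps at the mean (tangent-space features)."""
    m_isqrt = inv(np.real(sqrtm(mean)))
    iu = np.triu_indices(covs.shape[1])
    return np.array([np.real(logm(m_isqrt @ c @ m_isqrt))[iu] for c in covs])

rng = np.random.default_rng(0)
trials = rng.standard_normal((40, 8, 256))        # 40 trials, 8 channels
covs = spd_covariances(trials)
X = tangent_features(covs, riemannian_mean(covs)) # ready for any classifier
```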
We investigated the use of virtual technologies to digitise and enhance cultural heritage (CH), aligning with Open Science and FAIR principles. Through case studies in museums, we developed reproducible workflows, 3D models, and tools fostering accessibility, inclusivity, and sustainability of CH. Applications include interdisciplinary research, educational innovation, and CH preservation.
Many algorithms have been proposed in the last ten years for the discovery of dynamic communities. However, these methods are seldom compared with one another. In this article, we propose a generator of dynamic graphs with a planted, evolving community structure, as a benchmark to compare and evaluate such algorithms. Unlike previously proposed benchmarks, it can specify any desired evolving community structure through a descriptive language, and then generate the corresponding progressively evolving network. We empirically evaluate six existing algorithms for dynamic community detection in terms of instantaneous and longitudinal similarity with the planted ground truth, smoothness of dynamic partitions, and scalability. We notably observe different types of weaknesses depending on their approach to ensuring smoothness, namely Glitches, Oversimplification, and Identity loss. Although no method emerges as a clear winner, we observe clear differences between methods, and we identify the fastest ones as well as those yielding the smoothest or the most accurate solutions at each step.
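As an illustration of the instantaneous-similarity evaluation described above, the sketch below scores each snapshot's detected partition against the planted ground truth with normalized mutual information (NMI); the tiny partitions are toy stand-ins, and NMI is one common choice of similarity measure, not necessarily the exact one used in the article.

```python
# Instantaneous similarity of detected vs. planted partitions, per snapshot.
# Toy partitions; NMI is one common similarity choice for this evaluation.
from sklearn.metrics import normalized_mutual_info_score

def instantaneous_similarity(planted, detected):
    """planted/detected: lists of label arrays, one per snapshot (same node order)."""
    return [normalized_mutual_info_score(p, d) for p, d in zip(planted, detected)]

planted = [[0, 0, 1, 1], [0, 0, 1, 1], [0, 1, 1, 1]]
detected = [[0, 0, 1, 1], [0, 1, 1, 1], [0, 1, 1, 0]]
print(instantaneous_similarity(planted, detected))
```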
Prostate cancer is the most common cancer among US men. However, prostate imaging remains challenging despite advances in multi-parametric Magnetic Resonance Imaging (MRI), which provides both morphologic and functional information pertaining to pathological regions. Along with whole prostate gland segmentation, distinguishing between the Central Gland (CG) and Peripheral Zone (PZ) can guide differential diagnosis, since the frequency and severity of tumors differ in these regions; however, their boundary is often weak and fuzzy. This work presents a preliminary study on Deep Learning to automatically delineate the CG and PZ, aiming at evaluating the generalization ability of Convolutional Neural Networks (CNNs) on two multi-centric MRI prostate datasets. In particular, we compared three CNN-based architectures: SegNet, U-Net, and pix2pix. In this context, the segmentation performances achieved with and without pre-training were compared in 4-fold cross-validation. In general, U-Net outperforms the other methods, especially when training and testing are performed on multiple datasets.
Although convolutional neural networks (CNNs) have shown remarkable results in many vision tasks, they still struggle with simple yet challenging visual reasoning problems. Inspired by the recent success of the Transformer network in computer vision, in this paper we introduce the Recurrent Vision Transformer (RViT) model. Thanks to the impact of recurrent connections and spatial attention in reasoning tasks, this network achieves competitive results on the same-different visual reasoning problems from the SVRT dataset. The weight-sharing in both the spatial and depth dimensions regularizes the model, allowing it to learn with far fewer free parameters, using only 28k training samples. A comprehensive ablation study confirms the importance of a hybrid CNN + Transformer architecture and the role of the feedback connections, which iteratively refine the internal representation until a stable prediction is obtained. Ultimately, this study lays the basis for a deeper understanding of the role of attention and recurrent connections in solving visual abstract reasoning tasks.
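The following is a hedged sketch of a recurrent, weight-shared hybrid CNN + Transformer cell in the spirit of RViT: a small convolutional backbone tokenizes the image, and a single transformer encoder layer is applied repeatedly, refining a recurrent state. Layer sizes, the re-injection of the input tokens, and the number of iterations are illustrative assumptions, not the paper's exact architecture.

```python
# Recurrent, weight-shared CNN + Transformer cell (illustrative sizes).
import torch
import torch.nn as nn

class RecurrentViTSketch(nn.Module):
    def __init__(self, dim=64, n_heads=4, n_iters=4):
        super().__init__()
        self.backbone = nn.Conv2d(3, dim, kernel_size=8, stride=8)  # patchify
        self.cell = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads,
                                               batch_first=True)    # shared weights
        self.head = nn.Linear(dim, 2)        # same/different prediction
        self.n_iters = n_iters

    def forward(self, x):
        tokens = self.backbone(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        state = tokens
        for _ in range(self.n_iters):         # depth-wise weight sharing:
            state = self.cell(state + tokens)  # re-inject input, refine state
        return self.head(state.mean(dim=1))   # pooled prediction

model = RecurrentViTSketch()
logits = model(torch.randn(2, 3, 128, 128))   # two RGB images
```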
In this study, we propose extensions of fuzzy c-means (FCM) clustering to multi-view environments. First, we introduce an exponential multi-view FCM (E-MVFCM). E-MVFCM is a centralized MVC scheme that accounts for heat-kernel coefficients (H-KC) and weight factors. Second, we propose an exponential bi-level multi-view fuzzy c-means clustering (EB-MVFCM). Unlike E-MVFCM, EB-MVFCM computes the feature and weight factors automatically and simultaneously. Like E-MVFCM, EB-MVFCM presents explicit forms of the H-KC, simplifying the generation of the heat kernel $\mathcal{K}(t)$ in powers of the proper time $t$ during the clustering process. All the features used in this study, including the tools and functions of the proposed algorithms, will be made available at this https URL
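Since the papers' exact update rules are not reproduced here, the sketch below only illustrates the generic structure that weighted multi-view fuzzy c-means algorithms share: standard FCM membership updates against view-weighted distances, plus a view-weight update that favours views with lower within-cluster cost. The exponential/heat-kernel weighting of E-MVFCM and EB-MVFCM is not implemented.

```python
# Generic weighted multi-view fuzzy c-means skeleton (not the papers' exact
# E-MVFCM/EB-MVFCM update rules; the heat-kernel weighting is omitted).
import numpy as np

def mv_fcm(views, c=3, m=2.0, alpha=2.0, n_iter=50, seed=0):
    """views: list of (n_samples, d_k) arrays -> memberships U, view weights w."""
    rng = np.random.default_rng(seed)
    n = views[0].shape[0]
    U = rng.dirichlet(np.ones(c), size=n)           # fuzzy memberships (n, c)
    w = np.full(len(views), 1.0 / len(views))       # view weights
    for _ in range(n_iter):
        Um = U ** m
        centers = [(Um.T @ X) / Um.sum(axis=0)[:, None] for X in views]
        # squared distances to centroids, one (n, c) matrix per view
        dv = [((X[:, None, :] - C[None]) ** 2).sum(-1)
              for X, C in zip(views, centers)]
        D = sum(wk * d for wk, d in zip(w, dv)) + 1e-12
        U = (1.0 / D) ** (1.0 / (m - 1.0))          # standard FCM update...
        U /= U.sum(axis=1, keepdims=True)           # ...then normalize rows
        costs = np.array([(Um * d).sum() for d in dv])
        w = (1.0 / costs) ** (1.0 / (alpha - 1.0))  # cheap views get more weight
        w /= w.sum()
    return U, w

# Toy usage: two views of the same 200 samples.
rng = np.random.default_rng(1)
views = [rng.standard_normal((200, 5)), rng.standard_normal((200, 3))]
U, w = mv_fcm(views)
```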
This paper investigates the universal approximation capabilities of Hamiltonian Deep Neural Networks (HDNNs) that arise from the discretization of Hamiltonian Neural Ordinary Differential Equations. Recently, it has been shown that HDNNs enjoy, by design, non-vanishing gradients, which provide numerical stability during training. However, although HDNNs have demonstrated state-of-the-art performance in several applications, a comprehensive study quantifying their expressivity is missing. In this regard, we provide a universal approximation theorem for HDNNs and prove that a portion of the flow of HDNNs can approximate any continuous function over a compact domain arbitrarily well. This result provides a solid theoretical foundation for the practical use of HDNNs.
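For intuition, here is a hedged sketch of one common HDNN construction: a leapfrog-style discretization of Hamiltonian dynamics over split states (q, p), in which each layer is a symplectic-flavoured update. The exact parametrization in the paper may differ; this only illustrates the structure behind the non-vanishing-gradient property.

```python
# Leapfrog-style Hamiltonian block: each layer alternately updates p with the
# gradient of a potential in q, and q with the gradient of a potential in p.
# Sizes, step size, and the tanh potential are illustrative assumptions.
import torch
import torch.nn as nn

class HamiltonianBlock(nn.Module):
    def __init__(self, dim, n_layers=8, h=0.2):
        super().__init__()
        self.K = nn.ParameterList([nn.Parameter(torch.randn(dim, dim) * 0.1)
                                   for _ in range(n_layers)])
        self.b = nn.ParameterList([nn.Parameter(torch.zeros(dim))
                                   for _ in range(n_layers)])
        self.h = h

    def forward(self, q, p):
        for K, b in zip(self.K, self.b):
            # tanh(q K^T + b) K is the gradient of sum(logcosh(q K^T + b)) in q
            p = p - self.h * torch.tanh(q @ K.T + b) @ K
            q = q + self.h * torch.tanh(p @ K.T + b) @ K
        return q, p

block = HamiltonianBlock(dim=4)
q, p = torch.randn(16, 4), torch.zeros(16, 4)
q_out, p_out = block(q, p)
```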
Clustering of nodes in Bayesian Networks (BNs) and related graphical models such as Dynamic BNs (DBNs) has been demonstrated to enhance computational efficiency and improve model learning. Typically, it involves partitioning the underlying Directed Acyclic Graph (DAG) into cliques, or optimising for some cost or criterion. Computational cost is important since BN and DBN inference, such as estimating marginal distributions given evidence or updating model parameters, is NP-hard. The challenge is exacerbated by cost dependency, where inference outcomes, and hence clustering cost, depend on both the nodes within a cluster and the mapping of clusters that are connected by at least one arc. We propose an algorithm called Dependent Cluster MAPping (DCMAP), which is shown analytically, given an arbitrarily defined positive cost function, to find all optimal cluster mappings, and to do so with no more iterations than an equally informed algorithm. DCMAP is demonstrated on a complex-systems seagrass DBN, which has 25 nodes per time-slice, and captures biological, ecological and environmental dynamics and their interactions to predict the impact of dredging stressors on resilience and their cumulative effects over time. The algorithm is employed to find clusters that optimise the computational efficiency of inferring marginal distributions given evidence. For the 25-node (one time-slice) and 50-node (two time-slice) DBNs, the search space sizes were $9.91\times10^9$ and $1.51\times10^{21}$ possible cluster mappings, respectively, but the first optimal solution was found at iteration 856 (95% CI 852-866) and 1569 (95% CI 1566-1581), with costs that were 4% and 0.2% of the naive heuristic cost, respectively. Through optimal clustering, DCMAP opens up opportunities for research beyond improving computational efficiency, such as using clustering to minimise entropy in BN learning.
In situ interfacial rheology and numerical simulations are used to investigate microgel monolayers in a wide range of packing fractions, $\zeta_{2D}$. The heterogeneous particle compressibility determines two flow regimes characterized by distinct master curves. To mimic the microgel architecture and reproduce experiments, an interaction potential combining a soft shoulder with the Hertzian model is introduced. In contrast to bulk conditions, the elastic moduli vary non-monotonically with $\zeta_{2D}$ at the interface, confirming long-sought predictions of reentrant behavior for Hertzian-like systems.
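For reference, the standard Hertzian pair potential reads

$$
V_{\mathrm{H}}(r) = \varepsilon \left(1 - \frac{r}{\sigma}\right)^{5/2} \Theta\!\left(1 - \frac{r}{\sigma}\right),
$$

where $\sigma$ is the particle diameter, $\varepsilon$ sets the energy scale, and $\Theta$ is the Heaviside step function; the soft-shoulder term that the paper combines with this form is not reproduced here.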
In superconductivity, a surge of interest in enhancing $T_{\rm c}$ is ever mounting, with a recent focus on multi-band superconductivity. For $T_{\rm c}$ enhancements specific to two-band cases, especially around the Bardeen-Cooper-Schrieffer (BCS) to Bose-Einstein condensate (BEC) crossover considered here, we have to be careful about how quantum fluctuations affect the many-body states, i.e., particle-hole fluctuations suppressing the pairing for attractive interactions. Here we explore how to circumvent the suppression by examining multichannel pairing interactions in two-band systems. With the Gor'kov-Melik-Barkhudarov (GMB) formalism for particle-hole fluctuations in a continuous space, we look into the case of a deep dispersive band accompanied by an incipient heavy-mass (i.e., quasi-flat) band. We find that, while the GMB corrections usually suppress $T_{\rm c}$ significantly, this in fact competes with the enhanced pairing arising from the heavy band, with the trade-off leading to a peaked structure in $T_{\rm c}$ against the band-mass ratio when the heavy band is incipient. The system then plunges into a strong-coupling regime with the GMB screening vastly suppressed. This occurs prominently when the chemical potential approaches the bound state lurking just below the heavy band, which can be viewed as a Fano-Feshbach resonance, with its width governed by the pair-exchange interaction. The diagrammatic structure comprising particle-particle and particle-hole channels is heavily entangled, so that the emergent Fano-Feshbach resonance dominates all the channels, suggesting a universal feature in multiband superconductivity.
Generative Bayesian Computation (GBC) methods are developed to provide an efficient computational solution for maximum expected utility (MEU). We propose a density-free generative method based on quantiles that naturally calculates expected utility as a marginal of quantiles. Our approach uses a deep quantile neural estimator to directly estimate distributional utilities. Generative methods assume only the ability to simulate from the model and parameters and as such are likelihood-free. A large training dataset is generated from the parameters and output together with a base distribution. Our method has a number of computational advantages, primarily being density-free, with an efficient estimator of expected utility. A link with the dual theory of expected utility and risk taking is also discussed. To illustrate our methodology, we solve an optimal portfolio allocation problem with Bayesian learning and a power utility (a.k.a. the fractional Kelly criterion). Finally, we conclude with directions for future research.
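To make the density-free idea concrete, the sketch below fits a conditional quantile network with the pinball loss on simulated pairs and then reads expected utility off as the average of $U(Q(\tau))$ over a $\tau$-grid, using the identity $E[U(Y)] = \int_0^1 U(Q_Y(\tau))\,d\tau$. The toy simulator, network sizes, and the exponential utility are illustrative assumptions, not the paper's specification.

```python
# Quantile network trained with the pinball loss on simulated data; expected
# utility computed as a marginal over quantiles. Toy simulator and sizes.
import torch
import torch.nn as nn

def pinball(pred, target, tau):
    err = target - pred
    return torch.mean(torch.maximum(tau * err, (tau - 1) * err))

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))  # (x, tau) -> Q
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):                       # likelihood-free training loop
    x = torch.rand(256, 1)                     # e.g. a decision variable
    y = x * torch.randn(256, 1) + x ** 2       # toy simulator output
    tau = torch.rand(256, 1)
    loss = pinball(net(torch.cat([x, tau], 1)), y, tau)
    opt.zero_grad(); loss.backward(); opt.step()

def expected_utility(x0, utility, n_tau=200):
    """Expected utility at decision x0 as an average of U(Q(tau)) on a tau-grid."""
    tau = torch.linspace(0.005, 0.995, n_tau).unsqueeze(1)
    x = torch.full_like(tau, x0)
    with torch.no_grad():
        q = net(torch.cat([x, tau], 1))
    return utility(q).mean().item()

print(expected_utility(0.5, lambda y: -torch.exp(-y)))  # e.g. exponential utility
```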
Considering device-to-device (D2D) wireless links as a virtual extension of 5G (and beyond) cellular networks to deliver popular contents has been proposed as an interesting approach to reduce energy consumption, congestion, and bandwidth usage at the network edge. In the scenario of multiple users in a region independently requesting some popular content, there is major potential for reducing energy consumption by exploiting D2D communications. In this scenario, we consider the problem of selecting the maximum allowed transmission range (or, equivalently, the maximum transmit power) for the D2D links that support the content delivery process. We show that, for a given maximum allowed D2D energy consumption, a considerable reduction of the cellular infrastructure energy consumption can be achieved by selecting the maximum D2D transmission range as a function of content class parameters such as popularity and delay tolerance, compared to a uniform selection across different content classes. Specifically, we provide an analytical model that can be used to estimate the energy consumption (for small delay tolerance) and thus to set the optimal transmission range. We validate the model via simulations and study the energy gain that our approach can achieve. Our results show that the proposed approach to maximum D2D transmission range selection allows a reduction of the overall energy consumption in the range of 30% to 55%, compared to a selection of the maximum D2D transmission range that is oblivious to popularity and delay tolerance.
The current development trend of wireless communications aims at coping with the very stringent reliability and latency requirements posed by several emerging Internet of Things (IoT) application scenarios. Since the problem of realizing Ultra Reliable Low-Latency Communications (URLLC) is becoming more and more important, it has attracted the attention of researchers, and new efficient resource allocation algorithms are necessary. In this paper, we consider a challenging scenario where the available spectrum might be fragmented across non-adjacent portions of the band, and channels are differently affected by interference coming from surrounding networks. Furthermore, Channel State Information (CSI) is assumed to be unavailable, thus requiring an allocation of resources based only on topology information and channel statistics. To address this challenge in a dense smart factory scenario where devices periodically transmit their data to a common receiver, we present a novel resource allocation methodology based on a graph-theoretical approach originally designed to allocate mobility resources in on-demand, shared transportation. The proposed methodology is compared with two benchmark allocation strategies, showing its ability to increase spectral efficiency by as much as 50% with respect to the best-performing benchmark. Contrary to what happens in many resource allocation settings, this increase in spectral efficiency does not come at the expense of fairness, which is also increased compared to the benchmark algorithms.
The challenging applications envisioned for future Internet of Things networks make it urgent to develop fast and scalable resource allocation algorithms able to meet the stringent reliability and latency constraints typical of Ultra Reliable, Low Latency Communications (URLLC). However, there is an inherent tradeoff between complexity and performance to be addressed: sophisticated resource allocation methods providing optimized spectrum utilization are challenged by the scale of applications and the concomitant stringent latency constraints. Whether non-trivial resource allocation approaches can be successfully applied in large-scale network instances is still an open question that this paper aims to address. More specifically, we consider a scenario in which Channel State Information (CSI) is used to improve spectrum allocation in a radio environment that experiences channel time correlation. Channel correlation allows CSI to be used for a longer time before an update, thus lowering the overhead burden. Following this intuition, we propose a dynamic pilot transmission allocation scheme to adaptively tune the CSI age. We systematically analyze the improvement brought by this approach when applied to a sophisticated, recently introduced graph-based resource allocation method, which we extend here to account for CSI. The results show that, even in very dense networks and accounting for the higher computational time of the graph-based approach, this algorithm improves spectral efficiency by over 12% compared to a greedy heuristic, and that dynamic pilot transmission allocation can further boost its performance in terms of fairness, while concomitantly increasing spectral efficiency by a further 3-5%.
The increasing traffic demand in cellular networks has recently led to the investigation of new strategies to save precious resources like spectrum and energy. Direct device-to-device (D2D) communication becomes a promising solution if the two terminals are located in close proximity. In this case, D2D communications must coexist with cellular transmissions, so they must be carefully scheduled to avoid harmful interference. In this paper, we outline a novel framework encompassing channel allocation, mode selection, and power control for D2D communications. Power allocation is done in a distributed and cognitive fashion at the beginning of each time slot, based on local information, while channel/mode selection is performed in a centralized manner only at the beginning of an epoch, a time interval comprising a series of subsequent time slots. This hybrid approach guarantees an effective tradeoff between overhead and adaptivity. We analyze the distributed power allocation mechanism in depth, and we state a theorem that allows us to derive the optimal power allocation strategy and to compute the corresponding throughput. Extensive simulations confirm the benefits granted by our approach, when compared with state-of-the-art distributed schemes, in terms of throughput and fairness.
The ability of a honeybee swarm to select the best nest site plays a fundamental role in determining the future colony's fitness. To date, the nest-site selection process has mostly been modelled and theoretically analysed for the case of binary decisions. However, when the number of alternative nests is larger than two, the decision process dynamics qualitatively change. In this work, we extend previous analyses of a value-sensitive decision-making mechanism to a decision process among N nests. First, we present the decision-making dynamics in the symmetric case of N equal-quality nests. Then, we generalise our findings to a best-of-N decision scenario with one superior nest and N-1 inferior nests, previously studied empirically in bees and ants. Whereas previous binary models highlighted the crucial role of inhibitory stop-signalling, the key parameter in our new analysis is the relative time invested by swarm members in individual discovery and in signalling behaviours. Our new analysis reveals conflicting pressures on this ratio in symmetric and best-of-N decisions, which could be solved through a time-dependent signalling strategy. Additionally, our analysis suggests how ecological factors determining the density of suitable nest sites may have led to selective pressures for an optimal stable signalling ratio.
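For concreteness, here is a hedged sketch of a value-sensitive best-of-N commitment model of the kind analysed above: N committed populations plus an uncommitted one, with quality-dependent discovery and abandonment, and with recruitment and cross-inhibition scaled against discovery by a signalling ratio r. The specific rate laws are standard value-sensitive choices and may differ from the paper's equations.

```python
# Value-sensitive best-of-N commitment dynamics: discovery/abandonment vs.
# recruitment/cross-inhibition, scaled by a signalling ratio r. Illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def best_of_n_rhs(t, psi, v, r):
    """psi: commitment fractions for N nests; v: qualities; r: signalling weight."""
    psi_u = 1.0 - psi.sum()                    # uncommitted fraction
    gamma, alpha = v, 1.0 / v                  # discovery and abandonment rates
    rho, sigma = r * v, r * v                  # recruitment and cross-inhibition
    inhib = sigma @ psi - sigma * psi          # inhibition from all other options
    return gamma * psi_u - alpha * psi + rho * psi_u * psi - psi * inhib

v = np.array([2.0, 1.0, 1.0, 1.0])             # one superior, N-1 inferior nests
sol = solve_ivp(best_of_n_rhs, (0.0, 50.0), np.zeros(4), args=(v, 3.0),
                dense_output=True)
print(sol.y[:, -1])                             # final commitment to each nest
```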
Physics-informed neural networks (PINNs) often struggle with multi-scale PDEs featuring sharp gradients and nontrivial boundary conditions, as the physics residual and boundary enforcement compete during optimization. We present a dual-network framework that decomposes the solution as $u = u_{\text{D}} + u_{\text{B}}$, where $u_{\text{D}}$ (domain network) captures interior dynamics and $u_{\text{B}}$ (boundary network) handles near-boundary corrections. Both networks share a unified physics residual while being softly specialized via distance-weighted priors ($w_{\text{bd}} = \exp(-d/\tau)$) that are cosine-annealed during training. Boundary conditions are enforced through an augmented Lagrangian method, eliminating manual penalty tuning. Training proceeds in two phases: Phase 1 uses uniform collocation to establish network roles and stabilize boundary satisfaction; Phase 2 employs focused sampling (e.g., ring sampling near $\partial\Omega$) with annealed role weights to efficiently resolve localized features. We evaluate our model on four benchmarks: the 1D Fokker-Planck equation, the Laplace equation, the Poisson equation, and the 1D wave equation. Across the Laplace and Poisson benchmarks, our method reduces error by 36-90%, improves boundary satisfaction by 21-88%, and decreases MAE by 2.2-9.3× relative to a single-network PINN. Ablations isolate the contributions of (i) soft boundary-interior specialization, (ii) annealed role regularization, and (iii) the two-phase curriculum. The method is simple to implement, adds minimal computational overhead, and applies broadly to PDEs with sharp solutions and complex boundary data.
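A minimal sketch of the dual-network decomposition on a 1D Poisson problem ($-u'' = f$ on $[0,1]$, $u(0)=u(1)=0$) follows: a domain net and a boundary net share one physics residual, the role prior uses $w_{\text{bd}} = \exp(-d/\tau)$ (one plausible form of the soft specialization), and the boundary conditions enter through an augmented Lagrangian with periodic dual ascent. The problem choice, network sizes, role-prior form, and schedules are illustrative assumptions, not the paper's implementation.

```python
# Dual-network PINN sketch for -u'' = f on [0,1], u(0) = u(1) = 0.
import torch
import torch.nn as nn

def mlp():
    return nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                         nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))

u_D, u_B = mlp(), mlp()                          # domain and boundary networks
lam = torch.zeros(2)                             # multipliers for the two BCs
mu, tau = 10.0, 0.1                              # penalty weight, prior length scale
opt = torch.optim.Adam([*u_D.parameters(), *u_B.parameters()], lr=1e-3)
f = lambda x: (torch.pi ** 2) * torch.sin(torch.pi * x)   # exact u* = sin(pi x)
xb = torch.tensor([[0.0], [1.0]])                # boundary points

for step in range(3000):
    x = torch.rand(128, 1, requires_grad=True)   # uniform collocation (Phase-1 style)
    u = u_D(x) + u_B(x)                          # shared physics residual on the sum
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = ((-d2u - f(x)) ** 2).mean()

    d = torch.minimum(x, 1.0 - x).detach()       # distance to the boundary
    w_bd = torch.exp(-d / tau)                   # distance-weighted role prior:
    role = (w_bd * u_D(x) ** 2 + (1 - w_bd) * u_B(x) ** 2).mean()

    bc = (u_D(xb) + u_B(xb)).squeeze(1)          # boundary values, target 0
    aug = (lam * bc).sum() + 0.5 * mu * (bc ** 2).sum()   # augmented Lagrangian

    loss = residual + 0.1 * role + aug
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 200 == 199:
        lam = (lam + mu * bc).detach()           # dual ascent on the multipliers
```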
Ontology Design Patterns (ODPs) have become an established and recognised practice for guaranteeing good-quality ontology engineering. There are several ODP repositories where ODPs are shared, as well as ontology design methodologies recommending their reuse. Performing rigorous testing is also recommended for supporting ontology maintenance and for validating the resulting resource against its motivating requirements. Nevertheless, it is less than straightforward to find guidelines on how to apply such methodologies to the development of domain-specific knowledge graphs. ArCo is the knowledge graph of Italian Cultural Heritage and has been developed using eXtreme Design (XD), an ODP- and test-driven methodology. During its development, XD has been adapted to the needs of the CH domain: for example, requirements have been gathered from an open, diverse community of consumers, a new ODP has been defined, and many existing ones have been specialised to address specific CH requirements. This paper presents ArCo and describes how to apply XD to the development and validation of a CH knowledge graph, also detailing the (intellectual) process implemented for matching the encountered modelling problems to ODPs. Relevant contributions also include a novel web tool for supporting unit-testing of knowledge graphs, a rigorous evaluation of ArCo, and a discussion of methodological lessons learned during ArCo development.
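As an illustration of knowledge-graph unit testing in the XD spirit, the sketch below encodes a competency question as a SPARQL query and asserts that it returns results against the graph; the tiny inline dataset and namespace are toy assumptions, not ArCo's actual model or tooling.

```python
# Unit-testing a knowledge graph with a competency question as a SPARQL query.
# Toy data and namespace; not ArCo's actual ontology.
import rdflib

g = rdflib.Graph()
g.parse(data="""
@prefix ex: <http://example.org/> .
ex:venus a ex:CulturalProperty ; ex:hasAuthor ex:unknownSculptor .
""", format="turtle")

# Competency question: "Which cultural properties have a known author entity?"
q = """
PREFIX ex: <http://example.org/>
SELECT ?p ?a WHERE { ?p a ex:CulturalProperty ; ex:hasAuthor ?a . }
"""
rows = list(g.query(q))
assert len(rows) > 0, "competency question unmet: no authored cultural properties"
print(rows)
```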