The spectral properties, momentum dispersion, and broadening of bulk plasmonic excitations of 25 elemental metals are studied with first-principles calculations in the random-phase approximation. Spectral band structures are constructed from the resulting momentum- and frequency-dependent inverse dielectric function. We develop an effective analytical representation of the main collective excitations in the dielectric response by extending our earlier model based on multipole-Padé approximants (MPA) to incorporate both momentum and frequency dependence. With this tool, we identify plasmonic quasiparticle dispersions exhibiting complex features, including non-parabolic energy and intensity dispersions, discontinuities due to anisotropy, and overlapping effects that lead to band crossings and anti-crossings. We also find good agreement between computed results and available experiments in the optical limit. The results for elemental metals establish a reference point that can guide both fundamental studies and practical applications in plasmonics and spectroscopy.
This paper presents the initial results from our structured literature review on applications of Formal Methods (FM) to Robotic Autonomous Systems (RAS). We describe our structured survey methodology, including database selection, associated search strings, search filters, and collaborative review of the identified papers. We categorise and enumerate the FM approaches and formalisms that have been used for specification and verification of RAS. We investigate FM in the context of sub-symbolic AI-enabled RAS and examine how the use of FM in this field has evolved over time. This work complements a pre-existing survey in this area, and we examine how the research area has matured since that survey. Specifically, our survey demonstrates that some trends observed previously have persisted. It also identifies new trends that were not considered before, including a noticeable increase in the adoption of Formal Synthesis approaches as well as Probabilistic Verification Techniques.
EEG signals capture brain activity with high temporal and low spatial resolution, supporting applications such as neurological diagnosis, cognitive monitoring, and brain-computer interfaces. However, effective analysis is hindered by limited labeled data, high dimensionality, and the absence of scalable models that fully capture spatiotemporal dependencies. Existing self-supervised learning (SSL) methods often focus on either spatial or temporal features, leading to suboptimal representations. To address this, we propose EEG-VJEPA, a novel adaptation of the Video Joint Embedding Predictive Architecture (V-JEPA) for EEG classification. By treating EEG as video-like sequences, EEG-VJEPA learns semantically meaningful spatiotemporal representations using joint embeddings and adaptive masking. To our knowledge, this is the first work that exploits V-JEPA for EEG classification and explores the visual concepts learned by the model. Evaluations on the publicly available Temple University Hospital (TUH) Abnormal EEG dataset show that EEG-VJEPA outperforms existing state-of-the-art models in classification accuracy. Beyond accuracy, EEG-VJEPA captures physiologically relevant spatial and temporal signal patterns, offering interpretable embeddings that may support human-AI collaboration in diagnostic workflows. These findings position EEG-VJEPA as a promising framework for scalable, trustworthy EEG analysis in real-world clinical settings.
Low-rank pre-training and fine-tuning have recently emerged as promising techniques for reducing the computational and storage costs of large neural networks. Training low-rank parameterizations typically relies on conventional optimizers such as heavy ball momentum methods or Adam. In this work, we identify and analyze potential difficulties that these training methods encounter when used to train low-rank parameterizations of weights. In particular, we show that classical momentum methods can struggle to converge to a local optimum due to the geometry of the underlying optimization landscape. To address this, we introduce novel training strategies derived from dynamical low-rank approximation, which explicitly account for the underlying geometric structure. Our approach leverages and combines tools from dynamical low-rank approximation and momentum-based optimization to design optimizers that respect the intrinsic geometry of the parameter space. We validate our methods through numerical experiments, demonstrating faster convergence and stronger validation metrics at given parameter budgets.
Low-Rank Adaptation (LoRA) has become a widely used method for parameter-efficient fine-tuning of large-scale, pre-trained neural networks. However, LoRA and its extensions face several challenges, including the need for rank adaptivity, robustness, and computational efficiency during the fine-tuning process. We introduce GeoLoRA, a novel approach that addresses these limitations by leveraging dynamical low-rank approximation theory. GeoLoRA requires only a single backpropagation pass over the small-rank adapters, significantly reducing computational cost as compared to similar dynamical low-rank training methods and making it faster than popular baselines such as AdaLoRA. This allows GeoLoRA to efficiently adapt the allocated parameter budget across the model, achieving smaller low-rank adapters compared to heuristic methods like AdaLoRA and LoRA, while maintaining critical convergence, descent, and error-bound theoretical guarantees. The resulting method is not only more efficient but also more robust to varying hyperparameter settings. We demonstrate the effectiveness of GeoLoRA on several state-of-the-art benchmarks, showing that it outperforms existing methods in both accuracy and computational efficiency.
Computing the numerical solution to high-dimensional tensor differential equations can lead to prohibitive computational costs and memory requirements. To reduce the memory and computational footprint, dynamical low-rank approximation (DLRA) has proven to be a promising approach. DLRA represents the solution as a low-rank tensor factorization and evolves the resulting low-rank factors in time. A central challenge in DLRA is to find time integration schemes that are robust to the arising small singular values. A robust parallel basis update & Galerkin integrator, which simultaneously evolves all low-rank factors, has recently been derived for matrix differential equations. This work extends the parallel low-rank matrix integrator to Tucker tensors and general tree tensor networks, yielding an algorithm in which all bases and connecting tensors are evolved in parallel over a time step. We formulate the algorithm, provide a robust error bound, and demonstrate the efficiency of the new integrators for problems in quantum many-body physics, uncertainty quantification, and radiative transfer.
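For orientation, the basis update & Galerkin (BUG) idea can be sketched in a few lines of numpy for a matrix ODE dY/dt = F(Y). This is the sequential fixed-rank variant with single explicit Euler substeps; the toy right-hand side and all names are ours, and the integrator described above instead evolves all factors in parallel and extends to Tucker tensors and tree tensor networks.

```python
import numpy as np

def bug_step(U0, S0, V0, F, h):
    """One fixed-rank basis update & Galerkin (BUG) step for dY/dt = F(Y),
    with Y = U S V^T and U, V orthonormal. The K-, L-, and S-substeps are
    discretized here with a single explicit Euler step for brevity."""
    Y0 = U0 @ S0 @ V0.T
    # K-step: update the left basis.
    U1, _ = np.linalg.qr(U0 @ S0 + h * F(Y0) @ V0)
    # L-step: update the right basis.
    V1, _ = np.linalg.qr(V0 @ S0.T + h * F(Y0).T @ U0)
    # S-step (Galerkin): evolve the small core in the new bases.
    S = (U1.T @ U0) @ S0 @ (V1.T @ V0).T
    S1 = S + h * U1.T @ F(U1 @ S @ V1.T) @ V1
    return U1, S1, V1

# Toy matrix ODE dY/dt = A Y + Y B (a Lyapunov-type right-hand side).
rng = np.random.default_rng(0)
n, r = 20, 3
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))
B = -np.eye(n) + 0.01 * rng.standard_normal((n, n))
F = lambda Y: A @ Y + Y @ B

U0, _ = np.linalg.qr(rng.standard_normal((n, r)))
V0, _ = np.linalg.qr(rng.standard_normal((n, r)))
S0 = np.diag([1.0, 0.5, 0.25])
U1, S1, V1 = bug_step(U0, S0, V0, F, h=0.01)
```

Only the small r-by-r core and the two slim bases are stored and updated; the full n-by-n solution is never formed, which is the source of the memory savings.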
Human Activity Recognition (HAR) has become one of the leading research topics of the last decade. As sensing technologies have matured and their economic costs have declined, a host of novel applications, e.g., in healthcare, industry, sports, and daily life, have become popular. The design of HAR systems requires several time-consuming processing steps, such as data collection, annotation, and model training and optimization. In particular, data annotation represents the most labor-intensive and cumbersome step in HAR, since it requires extensive and detailed manual work from human annotators. Therefore, different methodologies for automating the annotation procedure in HAR have been proposed. The annotation problem occurs in different notions and scenarios, each of which requires an individual solution. In this paper, we provide the first systematic review of data annotation techniques for HAR. By grouping existing approaches into classes and providing a taxonomy, our goal is to support the decision on which techniques can be beneficially used in a given scenario.
We propose and analyse a procedure for using a standard activity-based neuron network model and firing data to compute the effective connection strengths between neurons in a network. We assume a Heaviside response function, that the external inputs are given and that the initial state of the neural activity is known. The associated forward operator for this problem, which maps given connection strengths to the time intervals of firing, is highly nonlinear. Nevertheless, it turns out that the inverse problem of determining the connection strengths can be solved in a rather transparent manner, only employing standard mathematical tools. In fact, it is sufficient to solve a system of decoupled ODEs, which yields a linear system of algebraic equations for determining the connection strengths. The nature of the inverse problem is investigated by studying some mathematical properties of the aforementioned linear system and by a series of numerical experiments. Finally, under an assumption preventing the effective contribution of the network to each neuron from staying at zero, we prove that the involved forward operator is continuous. Sufficient criteria on the external input ensuring that the needed assumption holds are also provided.
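A minimal numpy sketch of the idea, under an assumed model form; the dynamics below are our illustrative choice, not necessarily the paper's exact formulation:

```python
import numpy as np

# Assumed activity model with a Heaviside response H (our choice):
#   u_i'(t) = -u_i(t) + sum_j W_ij H(u_j(t) - theta) + I_i(t).
# Wherever the firing pattern s_j = H(u_j - theta) is known, the relation
#   u_i' + u_i - I_i = sum_j W_ij s_j
# is linear in the unknown connection strengths W_ij.

rng = np.random.default_rng(1)
n, theta, dt, steps = 4, 0.5, 1e-3, 5000
W_true = rng.uniform(-1.0, 1.0, (n, n))
I_ext = 0.6 * np.ones(n)                      # given external input

# Forward problem: simulate activity from a known initial state.
u = np.zeros((steps + 1, n))
u[0] = rng.uniform(0.0, 1.0, n)
for t in range(steps):
    s = (u[t] > theta).astype(float)          # Heaviside firing indicator
    u[t + 1] = u[t] + dt * (-u[t] + W_true @ s + I_ext)

# Inverse problem: least squares on the linear relation, with derivatives
# taken by finite differences of the observed activity.
du = (u[1:] - u[:-1]) / dt
S = (u[:-1] > theta).astype(float)            # observed firing states over time
rhs = du + u[:-1] - I_ext
W_rec = np.linalg.lstsq(S, rhs, rcond=None)[0].T
```

When the observed firing patterns span all n dimensions, `W_rec` coincides with `W_true`; otherwise least squares returns the minimum-norm solution consistent with the observed firing, reflecting the ill-posedness the abstract alludes to.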
Excitable tissue is fundamental to brain function, yet its study is complicated by extreme morphological complexity and the physiological processes governing its dynamics. Consequently, detailed computational modeling of this tissue represents a formidable task, requiring both efficient numerical methods and robust implementations. Meanwhile, efficient and robust methods for image segmentation and meshing are needed to provide realistic geometries for which numerical solutions are tractable. Here, we present a computational framework that models electrodiffusion in excitable cerebral tissue, together with realistic geometries generated from electron microscopy data. To demonstrate a possible application of the framework, we simulate electrodiffusive dynamics in cerebral tissue during neuronal activity. Our findings highlight the numerical and computational challenges associated with modeling and simulation of electrodiffusion and other multiphysics in dense reconstructions of cerebral tissue.
This paper presents a novel Inter Catchment Wastewater Transfer (ICWT) method for mitigating sewer overflow. The ICWT aims at balancing the spatial mismatch between sewer flow and the treatment capacity of the Wastewater Treatment Plant (WWTP) through collaborative operation of sewer system facilities. Using a hydraulic model, the effectiveness of ICWT is investigated in a sewer system in Drammen, Norway. Concerning whole-system performance, we found that the Søren Lemmich pump station plays a vital role in the ICWT framework. To enhance the operation of this pump station, it is imperative to construct a multi-step-ahead water level prediction model. Hence, one of the most promising artificial intelligence techniques, Long Short-Term Memory (LSTM), is employed to undertake this task. Experiments demonstrated that LSTM is superior to Gated Recurrent Unit (GRU), Recurrent Neural Network (RNN), Feed-forward Neural Network (FFNN) and Support Vector Regression (SVR).
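The gating mechanism behind LSTM's advantage over plain RNNs can be illustrated with a minimal numpy forward pass of a single cell; the weight shapes, names, and toy inputs below are ours, and the actual prediction model was presumably built in a standard deep-learning framework.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM cell step. W: (4H, D), U: (4H, H), b: (4H,).
    Gate order in the stacked weights: input, forget, cell, output."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0:H])          # input gate: how much new info to write
    f = sigmoid(z[H:2*H])        # forget gate: how much old state to keep
    g = np.tanh(z[2*H:3*H])      # candidate cell state
    o = sigmoid(z[3*H:4*H])      # output gate
    c = f * c_prev + i * g       # new cell state (long-term memory)
    h = o * np.tanh(c)           # new hidden state (short-term output)
    return h, c

# Run a short multivariate sequence (e.g. level, flow, rain) through the cell.
rng = np.random.default_rng(0)
D, H = 3, 8                      # input features, hidden size
W = 0.1 * rng.standard_normal((4 * H, D))
U = 0.1 * rng.standard_normal((4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for x in rng.standard_normal((20, D)):   # 20 time steps
    h, c = lstm_step(x, h, c, W, U, b)
```

The additive cell-state update `c = f * c_prev + i * g` is what lets gradients flow over many time steps, which matters for multi-step-ahead forecasting horizons.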
Paleoclimate proxy records from Greenland ice cores, archiving e.g. $\delta^{18}$O as a proxy for surface temperature, show that sudden climatic shifts called Dansgaard-Oeschger (DO) events occurred repeatedly during the last glacial interval. They comprised substantial warming of the Arctic region from cold to milder conditions. Concomitant abrupt changes in the dust concentrations of the same ice cores suggest that sudden reorganisations of the hemispheric-scale atmospheric circulation accompanied the warming events. Genuine bistability of the North Atlantic climate system is commonly hypothesised to explain the existence of stadial (cold) and interstadial (milder) periods in Greenland. However, the physical mechanisms that drove abrupt transitions from the stadial to the interstadial state, and more gradual yet still abrupt reverse transitions, remain debated. Here, we conduct a one-dimensional data-driven analysis of the Greenland temperature and atmospheric circulation proxies under the purview of stochastic processes. We use the Kramers-Moyal equation to estimate each proxy's drift and diffusion terms within a Markovian model framework. We then assess noise contributions beyond Gaussian white noise. The resulting stochastic differential equation (SDE) models feature a monostable drift for the Greenland temperature proxy and a bistable one for the atmospheric circulation proxy. Indicators of discontinuity in stochastic processes suggest including higher-order terms of the Kramers-Moyal equation when modelling the Greenland temperature proxy's evolution. This constitutes a qualitative difference in the characteristics of the two time series, which should be further investigated from the standpoint of climate dynamics.
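The drift and diffusion estimation can be sketched with binned conditional moments; a minimal numpy version, checked on a synthetic Ornstein-Uhlenbeck path (the binning choices and sample thresholds are ours, not the paper's):

```python
import numpy as np

def kramers_moyal(x, dt, n_bins=30, min_count=2000):
    """Estimate the first two Kramers-Moyal coefficients of a time series
    from binned conditional moments:
      D1(x) ~ <dx | x> / dt  (drift),   D2(x) ~ <dx^2 | x> / (2 dt)  (diffusion)."""
    dx = np.diff(x)
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    idx = np.clip(np.digitize(x[:-1], edges) - 1, 0, n_bins - 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    D1 = np.full(n_bins, np.nan)
    D2 = np.full(n_bins, np.nan)
    for b in range(n_bins):
        m = idx == b
        if m.sum() >= min_count:              # enough samples for stable moments
            D1[b] = dx[m].mean() / dt
            D2[b] = (dx[m] ** 2).mean() / (2 * dt)
    return centers, D1, D2

# Synthetic Ornstein-Uhlenbeck path, dx = -x dt + sqrt(2) dW:
# the true drift is -x (monostable) and the true diffusion is 1.
rng = np.random.default_rng(0)
dt, n = 1e-3, 400_000
x = np.empty(n)
x[0] = 0.0
sig = np.sqrt(2 * dt)
xi = rng.standard_normal(n - 1)
for t in range(n - 1):
    x[t + 1] = x[t] - x[t] * dt + sig * xi[t]

centers, D1, D2 = kramers_moyal(x, dt)
```

A monostable drift shows up as a single zero crossing of D1 with negative slope, as here; a bistable drift, like the one found for the circulation proxy, has three zero crossings.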
The nucleotides guanosine tetraphosphate (ppGpp) and guanosine pentaphosphate (pppGpp) bind to target proteins to promote bacterial survival (Corrigan et al. 2016). For instance, the binding of the nucleotides to RsgA, a GTPase, inhibits the hydrolysis of GTP. The dose response, previously taken to be curvilinear with respect to the logarithm of the inhibitor concentration, is instead much better (P<0.001 when the 6 experiments are combined) represented as multiphasic, with high to exceedingly high absolute r values for the straight lines, and with transitions in the form of non-contiguities (jumps). Profiles for the binding of radiolabeled nucleotides to HprT and Gmk, GTP synthesis enzymes, were similarly taken to be curvilinear with respect to the logarithm of the protein concentration. However, the profiles are again much better represented as multiphasic than as curvilinear (the P values range from 0.047 to <0.001 for each of the 8 experiments for binding of ppGpp and pppGpp to HprT). The binding of GTP to HprT and the binding of the three nucleotides to Gmk are also poorly represented by curvilinear profiles, but well represented by multiphasic profiles (straight and, in part, parallel lines).
Electric double layer (EDL) formation underlies the functioning of supercapacitors and several other electrochemical technologies. Here, we study how the EDL formation near two flat blocking electrodes separated by $2L$ is affected by beyond-mean-field Coulombic interactions, which can be substantial for electrolytes of high salt concentration or with multivalent ions. Our model combines the Nernst-Planck and Bazant-Storey-Kornyshev (BSK) equations; the latter is a modified Poisson equation with a correlation length $\ell_c$. In response to a voltage step, the system charges exponentially with a characteristic timescale $\tau$ that depends nonmonotonically on $\ell_c$. For small $\ell_c$, $\tau$ is given by the BSK capacitance times a dilute electrolyte's resistance, in line with [Zhao, Phys. Rev. E 84, 051504 (2011)]; here, $\tau$ decreases with increasing $\ell_c$. Increasing the correlation length beyond $\ell_c \approx L^{2/3}\lambda_D^{1/3}$, with $\lambda_D$ the Debye length, $\tau$ reaches a minimum, rises as $\tau \propto \lambda_D \ell_c / D$, and plateaus at $\tau = 4L^2/(\pi^2 D)$. Our results imply that strongly correlated, strongly confined electrolytes - ionic liquids in the surface force balance apparatus, say - move slower than predicted so far.
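The quoted scalings can be evaluated directly; a small sketch with illustrative numbers of our own choosing (order-one prefactors omitted):

```python
import numpy as np

# Illustrative numbers (ours): a micron-scale cell, nanometer Debye length.
L = 1e-6          # electrode half-separation (m)
lam_D = 1e-9      # Debye length (m)
D = 1e-9          # ionic diffusivity (m^2/s)

ell_star = L ** (2 / 3) * lam_D ** (1 / 3)    # crossover correlation length
tau_plateau = 4 * L ** 2 / (np.pi ** 2 * D)   # large-correlation-length plateau

def tau_intermediate(ell_c):
    """tau ~ lam_D * ell_c / D in the intermediate regime, capped at the plateau."""
    return min(lam_D * ell_c / D, tau_plateau)
```

For these values the crossover falls near `ell_star` of about 100 nm, and the plateau timescale is about 0.4 ms, i.e. the slow bulk-diffusion limit `4L^2/(pi^2 D)`.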
Climate change and rapid urbanization have led to more frequent and severe flooding, causing significant damage. The existing literature on flood risk encompasses a variety of dimensions, such as physical, economic, social, political, environmental, infrastructural, and managerial aspects. This paper aims to provide an extensive review of proposed conceptual frameworks and their components used in flood risk assessment. For this purpose, conceptual frameworks were first extracted to configure the components of flood risk, including hazard, vulnerability, exposure, resilience, and susceptibility. Subsequently, a comprehensive set of criteria addressing these risk components was identified from the literature. In this paper, the risk conceptual framework is defined by the intersection of vulnerability and hazard. Vulnerability, shaped by exposure and susceptibility, can be reduced by enhancing resilience, which includes coping and adaptive capacities. In total, 102 criteria/subcriteria were identified and classified into three hierarchical structures of hazard, susceptibility, and resilience. Finally, flood risk assessment methods were reviewed, with an emphasis on their applicability and characteristics. The review highlighted the strengths and limitations of various methods, providing a comprehensive overview of their suitability for different scenarios. The outcomes of this review could serve as a valuable reference for professionals involved in flood risk assessment, aiding in the identification of the most appropriate risk concepts, assessment criteria, and quantification methods based on the specific study area and data availability.
In the paper \textit{Preconditioning by inverting the {L}aplacian; an analysis of the eigenvalues. IMA Journal of Numerical Analysis 29, 1 (2009), 24--42}, Nielsen, Hackbusch and Tveito study the operator generated by using the inverse of the Laplacian as preconditioner for second order elliptic PDEs $\nabla \cdot (k(x) \nabla u) = f$. They prove that the range of $k(x)$ is contained in the spectrum of the preconditioned operator, provided that $k$ is continuous. Their rigorous analysis only addresses mappings defined on infinite dimensional spaces, but the numerical experiments in the paper suggest that a similar property holds in the discrete case. Motivated by this investigation, we analyze the eigenvalues of the matrix $\mathbf{L}^{-1}\mathbf{A}$, where $\mathbf{L}$ and $\mathbf{A}$ are the stiffness matrices associated with the Laplace operator and general second order elliptic operators, respectively. Without any assumption about the continuity of $k(x)$, we prove the existence of a one-to-one pairing between the eigenvalues of $\mathbf{L}^{-1}\mathbf{A}$ and the intervals determined by the images under $k(x)$ of the supports of the FE nodal basis functions. As a consequence, we can show that the nodal values of $k(x)$ yield accurate approximations of the eigenvalues of $\mathbf{L}^{-1}\mathbf{A}$. Our theoretical results are illuminated by several numerical experiments.
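The discrete statement can be illustrated in one dimension; a minimal numpy sketch with a piecewise-constant coefficient jumping from 1 to 3 (the mesh, the coefficient, and all names are our toy choices):

```python
import numpy as np

# 1D illustration on (0,1): FE stiffness matrices for -(k(x) u')' = f
# and -u'' = f with linear elements on a uniform mesh.
n = 199                                       # interior nodes
h = 1.0 / (n + 1)
x_mid = (np.arange(n + 1) + 0.5) * h          # element midpoints
k = np.where(x_mid < 0.5, 1.0, 3.0)           # discontinuous coefficient

def stiffness(ke):
    """Tridiagonal FE stiffness matrix for a per-element coefficient ke."""
    M = np.zeros((n, n))
    i = np.arange(n)
    M[i, i] = (ke[:-1] + ke[1:]) / h           # node i touches elements i, i+1
    j = np.arange(n - 1)
    M[j, j + 1] = M[j + 1, j] = -ke[1:-1] / h  # coupling through shared element
    return M

A = stiffness(k)
L = stiffness(np.ones(n + 1))

# Eigenvalues of L^{-1} A via the equivalent symmetric problem C^{-1} A C^{-T},
# with L = C C^T the Cholesky factorization.
C = np.linalg.cholesky(L)
M_sym = np.linalg.solve(C, np.linalg.solve(C, A).T).T
eigvals = np.sort(np.linalg.eigvalsh(0.5 * (M_sym + M_sym.T)))
```

The spectrum falls inside [1, 3], the range of k, with large clusters at the two nodal values of k and only a few eigenvalues in between (those paired with basis functions whose support straddles the jump), matching the pairing result.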
Despite the dangers associated with tropical cyclones and their rainfall, the origins of storm moisture remain unclear. Existing studies have focused on the region 40-400 km from the cyclone center. It is known that the rainfall within this area cannot be explained by local processes alone but requires imported moisture. Nonetheless, the dynamics of this imported moisture remain poorly understood. Here, considering a region extending up to three thousand kilometers from the storm center, we analyze precipitation, atmospheric moisture, and movement velocities for North Atlantic hurricanes. Our findings indicate that even over such large areas a hurricane's rainfall cannot be accounted for by concurrent evaporation. We propose instead that a hurricane consumes pre-existing atmospheric water vapor as it moves. The propagation velocity of the cyclone, i.e., the difference between its movement velocity and the mean velocity of the surrounding air (steering flow), determines the water vapor budget. Water vapor made available to the hurricane through its movement makes it self-sufficient at about 700 km from the hurricane center, obviating the need to concentrate moisture from greater distances. Such hurricanes leave a dry wake, in which rainfall is suppressed by up to 40 per cent compared with the long-term mean. The inner radius of this dry footprint approximately coincides with the radius of hurricane self-sufficiency with respect to water vapor. We discuss how Carnot efficiency considerations do not constrain the power of such open systems that deplete pre-existing moisture. Our findings emphasize the incompletely understood role of atmospheric moisture supplies, condensation, and precipitation in hurricane dynamics.
Feature selection is an essential step in data science pipelines to reduce the complexity associated with large datasets. While much research on this topic focuses on optimizing predictive performance, few studies investigate stability in the context of the feature selection process. In this study, we present the Repeated Elastic Net Technique (RENT) for Feature Selection. RENT uses an ensemble of generalized linear models with elastic net regularization, each trained on a distinct subset of the training data. The feature selection is based on three criteria evaluating the weight distributions of features across all elementary models, which leads to the selection of highly stable features that improve the robustness of the final model. Furthermore, unlike established feature selectors, RENT provides valuable information for model interpretation by identifying objects in the data that are difficult to predict during training. In our experiments, we benchmark RENT against six established feature selectors on eight multivariate datasets for binary classification and regression. In this comparison, RENT shows a well-balanced trade-off between predictive performance and stability. Finally, we underline the additional interpretational value of RENT with an exploratory post-hoc analysis of a healthcare dataset.
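Selection from weight distributions across an ensemble can be sketched as follows; the exact criteria forms and thresholds here are our illustrative guesses, applied to a synthetic coefficient matrix standing in for a trained elastic-net ensemble:

```python
import numpy as np

def rent_select(B, tau1=0.9, tau2=0.9, tau3=0.0):
    """Sketch of RENT-style selection from a (K models x p features)
    coefficient matrix B. Criteria (our formulations, for illustration):
      c1: fraction of models with a nonzero weight,
      c2: fraction agreeing with the majority sign (sign stability),
      c3: |mean| / std of the weights (a t-statistic-like score).
    A feature is kept when all three criteria pass their thresholds."""
    K = B.shape[0]
    c1 = np.mean(B != 0, axis=0)
    c2 = np.abs(np.sum(np.sign(B), axis=0)) / K
    std = B.std(axis=0)
    c3 = np.abs(B.mean(axis=0)) / np.where(std > 0, std, np.inf)
    return (c1 >= tau1) & (c2 >= tau2) & (c3 > tau3)

# Toy check: feature 0 is stable, feature 1 is unstable, feature 2 is never used.
rng = np.random.default_rng(0)
K = 100
B = np.column_stack([
    1.0 + 0.1 * rng.standard_normal(K),       # stable weight, consistent sign
    rng.choice([-1.0, 0.0, 1.0], size=K),     # sign flips and dropouts
    np.zeros(K),                              # always zero
])
selected = rent_select(B)
```

Only the first feature survives all three criteria: it is consistently nonzero, keeps one sign, and has a large mean relative to its spread across the elementary models.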
Organic molecular ferroelectrics, including organic proton-transfer ferroelectrics and antiferroelectrics, are potentially attractive in organic electronics and offer significant chemical tunability. Among these, acid-base proton transfer (PT) salts stand out due to their low coercive fields and the possibility of tuning their properties with different acid-base combinations. Using crystal structure prediction to combine small acid and base organic molecular species, we here predict three novel acid-base PT ferroelectric salts with higher polarization than existing materials. We also report two combinations that form antiferroelectric crystal structures. However, some combinations result in unfavorable packing or the formation of co-crystals or, in one case, a divalent salt. The protonation state is found to be strongly linked to the crystal structure, with cases where salt crystal structures are energetically as preferable as co-crystals with a different crystalline packing.
Selectively picking a target fruit surrounded by obstacles is one of the major challenges for fruit harvesting robots. Unlike traditional obstacle avoidance methods, this paper presents an active obstacle separation strategy that combines push and drag motions. The separation motion and trajectory are generated based on 3D visual perception of the obstacle information around the target. A linear push is used to clear the obstacles from the area below the target, while a zig-zag push that contains several linear motions is proposed to push aside denser obstacles. The zig-zag push can generate multi-directional pushes, and its side-to-side motion can break the static contact force between the target and obstacles, thus helping the gripper to reach the target in more complex situations. Moreover, we propose a novel drag operation to address the issue of mis-capturing obstacles located above the target, in which the gripper drags the target to a place with fewer obstacles and then pushes back to move the obstacles aside for further detachment. Furthermore, an image processing pipeline, consisting of color thresholding, object detection using deep learning, and point cloud operations, is developed to implement the proposed method on a harvesting robot. Field tests show that the proposed method improves picking performance substantially, enabling complex clusters of fruits to be harvested with a higher success rate than conventional methods.
The transition to a renewable energy system poses challenges for power grid operation and stability. Secondary control is key in restoring the power system to its reference following a disturbance. Underestimating the necessary control capacity may require emergency measures, such as load shedding. Hence, a solid understanding of the emerging risks and the driving factors of control is needed. In this contribution, we establish an explainable machine learning model for the activation of secondary control power in Germany. Training gradient-boosted trees, we obtain an accurate description of control activation. Using SHapley Additive exPlanations (SHAP) values, we investigate the dependency between control activation and external features such as the generation mix, forecasting errors, and electricity market data. Thereby, our analysis reveals drivers that lead to high reserve requirements in the German power system. Our transparent approach, utilizing open data and making machine learning models interpretable, opens new avenues for scientific discovery.