Roskilde University
We present a simple and scalable implementation of next-generation reservoir computing for modeling dynamical systems from time series data. Our approach uses a pseudorandom nonlinear projection of time-delay embedded input, allowing an arbitrary dimension of the feature space, thus providing a flexible alternative to the polynomial-based projections used in previous next-generation reservoir computing variants. We apply the method to benchmark tasks -- including attractor reconstruction and bifurcation diagram estimation -- using only partial and noisy observations. We also include an exploratory example of estimating asymptotic oscillation phases. The models remain stable over long rollouts and generalize beyond training data. This framework enables the precise control of system state and is well suited for surrogate modeling and digital twin applications.
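As a concrete illustration of the ingredients described above (time-delay embedding, a pseudorandom nonlinear projection of adjustable dimension, and a linear readout), here is a minimal NumPy sketch. The tanh feature map, the ridge-regression readout, and all parameter choices are assumptions made for this example, not the paper's implementation.

```python
# Minimal sketch (not the paper's code) of next-generation reservoir computing
# with a pseudorandom nonlinear feature map instead of polynomial features.
import numpy as np

def delay_embed(x, k):
    """Stack k consecutive observations into one feature vector per time step."""
    # x: (T, d) observed time series; returns (T - k + 1, k * d)
    T, d = x.shape
    return np.hstack([x[i:T - k + 1 + i] for i in range(k)])

def fit_ngrc(x, k=3, n_features=500, ridge=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    z = delay_embed(x, k)                      # time-delay embedding
    W = rng.normal(size=(z.shape[1], n_features)) / np.sqrt(z.shape[1])
    b = rng.uniform(-np.pi, np.pi, n_features)
    phi = np.tanh(z @ W + b)                   # pseudorandom nonlinear projection
    targets = x[k:]                            # one-step-ahead prediction targets
    phi = phi[:-1]
    # Ridge-regression readout (the only trained part of the model)
    A = phi.T @ phi + ridge * np.eye(n_features)
    W_out = np.linalg.solve(A, phi.T @ targets)
    return W, b, W_out

def rollout(x_init, W, b, W_out, k, steps):
    """Autonomously roll the model forward from the last k observations."""
    window = list(x_init[-k:])
    preds = []
    for _ in range(steps):
        z = np.hstack(window)
        x_next = np.tanh(z @ W + b) @ W_out
        preds.append(x_next)
        window = window[1:] + [x_next]
    return np.array(preds)
```

A model fitted this way can be rolled out autonomously for attractor reconstruction; the feature dimension (n_features here) is the freely chosen quantity the abstract refers to.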
Last-mile routing refers to the final step in a supply chain, delivering packages from a depot station to the homes of customers. At the level of a single van driver, the task is a traveling salesman problem. But the choice of route may be constrained by warehouse sorting operations, van-loading processes, driver preferences, and other considerations, rather than a straightforward minimization of tour length. We propose a simple and efficient penalty-based local-search algorithm for route optimization in the presence of such constraints, adopting a technique developed by Helsgaun to extend the LKH traveling salesman problem code to general vehicle-routing models. We apply his technique to handle combinations of constraints obtained from an analysis of historical routing data, enforcing properties that are desired in high-quality solutions. Our code is available under the open-source MIT license. An earlier version of the code received the $100,000 top prize in the Amazon Last Mile Routing Research Challenge organized in 2021.
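As an illustration of the penalty idea (not the competition code, and greatly simplified relative to LKH-style search), the sketch below augments tour length with a weighted penalty for violating a desired route property and runs plain 2-opt on the combined objective. The zone-consistency penalty and its weight are invented for this example.

```python
# Illustrative sketch of penalty-based local search for constrained routing.
# The constraint, penalty weight, and 2-opt move set are stand-ins for the
# Helsgaun-style penalties described in the paper.

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def penalty(tour, zone_of):
    """Example soft constraint: count adjacent stops that switch delivery zones.
    In practice the penalties would encode properties mined from historical routes."""
    return sum(1 for i in range(len(tour) - 1)
               if zone_of[tour[i]] != zone_of[tour[i + 1]])

def cost(tour, dist, zone_of, lam):
    return tour_length(tour, dist) + lam * penalty(tour, zone_of)

def two_opt(tour, dist, zone_of, lam=10.0):
    """Repeatedly reverse segments while the penalized objective improves.
    The depot is kept fixed at position 0."""
    best = list(tour)
    improved = True
    while improved:
        improved = False
        for i in range(1, len(best) - 1):
            for j in range(i + 1, len(best)):
                cand = best[:i] + best[i:j][::-1] + best[j:]
                if cost(cand, dist, zone_of, lam) < cost(best, dist, zone_of, lam) - 1e-9:
                    best, improved = cand, True
    return best
```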
Encoding geospatial objects is fundamental for geospatial artificial intelligence (GeoAI) applications, which leverage machine learning (ML) models to analyze spatial information. Common approaches transform each object into known formats, like image and text, for compatibility with ML models. However, this process often discards crucial spatial information, such as the object's position relative to the entire space, reducing downstream task effectiveness. Alternative encoding methods that preserve some spatial properties are often devised for specific data objects (e.g., point encoders), making them unsuitable for tasks that involve different data types (i.e., points, polylines, and polygons). To address this, we propose Poly2Vec, a polymorphic Fourier-based encoding approach that unifies the representation of geospatial objects, while preserving the essential spatial properties. Poly2Vec incorporates a learned fusion module that adaptively integrates the magnitude and phase of the Fourier transform for different tasks and geometries. We evaluate Poly2Vec on five diverse tasks, organized into two categories. The first empirically demonstrates that Poly2Vec consistently outperforms object-specific baselines in preserving three key spatial relationships: topology, direction, and distance. The second shows that integrating Poly2Vec into a state-of-the-art GeoAI workflow improves the performance in two popular tasks: population prediction and land use inference.
The maritime industry is undergoing a significant digital transformation (DT) to enhance efficiency and sustainability. This focused review investigates the current state of the literature on technostress and resistance to change among seafarers as they adapt to new digital technologies. By critically reviewing a focused selection of peer-reviewed articles, we identify the main themes and trends within maritime research on DT. Findings indicate that while mental health issues are a predominant concern, they have yet to be investigated in the context of introducing new technology into an industry that already puts seafarers under pressure. Additionally, change management is not addressed, and DT is limited to specific functionalities rather than embracing broad transformations of work practices.
The environmental impact of Artificial Intelligence (AI)-enabled systems is increasing rapidly, and software engineering plays a critical role in developing sustainable solutions. The "Greening AI with Software Engineering" CECAM-Lorentz workshop (no. 1358, 2025), funded by the Centre Européen de Calcul Atomique et Moléculaire and the Lorentz Center, provided an interdisciplinary forum for 29 participants, from practitioners to academics, to share knowledge, ideas, practices, and current results dedicated to advancing green software and AI research. The workshop was held February 3-7, 2025, in Lausanne, Switzerland. Through keynotes, flash talks, and collaborative discussions, participants identified and prioritized key challenges for the field. These included energy assessment and standardization, benchmarking practices, sustainability-aware architectures, runtime adaptation, empirical methodologies, and education. This report presents a research agenda emerging from the workshop, outlining open research directions and practical recommendations to guide the development of environmentally sustainable AI-enabled systems rooted in software engineering principles.
LightCTS introduces a lightweight framework for correlated time series forecasting, achieving accuracy comparable to state-of-the-art models while drastically reducing computational and storage overheads. The framework's novel architectural designs and operator modules make advanced forecasting feasible for deployment on resource-constrained edge devices.
In 1982, Feynman gave a keynote speech, \textit{Simulating Physics with Computers} (Int. J. Theor. Phys. {\bf 21}, 467 (1982)), in which he talked ``...about the possibility...that the computer will do exactly the same as nature''. The motivation was that ``...the physical world is quantum mechanical, and therefore the proper problem is the simulation of quantum physics''. Here, more than forty years later, I try to answer Feynman's question of whether it is possible to perform exact computer simulations. Many computer simulations are not exact: they contain mean-field approximations that violate the symmetry of the quantum dynamics expressed by Newton's third law, e.g. almost all astrophysical simulations of galaxy systems. After a review of computer simulations and of the problems of simulating real systems, I argue that Newton's discrete dynamics, which is used in almost all computer simulations and which is exact in the same sense as Newton's analytic dynamics, is the classical-limit path of Feynman's quantum paths. However, the physical world is not known exactly and is much more complex than any simulated system, and so far no real system has been simulated exactly. Hence, more than forty years later, and after hundreds of thousands of computer simulations of the dynamics of physical systems, the answer to Feynman's question is still negative. But although it is not possible to simulate the dynamics of any real system exactly, such simulations have been, and will continue to be, of great use in Natural Science.
We present TTCF4LAMMPS, a toolkit for performing non-equilibrium molecular dynamics (NEMD) simulations to study fluid behaviour at low shear rates using the LAMMPS software. By combining direct NEMD simulations and the transient-time correlation function (TTCF) technique, we study the behaviour of fluids over shear rates spanning 15 orders of magnitude. We present two examples consisting of simple monatomic systems: one containing a bulk liquid and another with a liquid layer confined between two solid walls. The small bulk system is suitable for testing on personal computers, while the larger confined system requires high-performance computing (HPC) resources. We demonstrate that the TTCF formalism can successfully detect the system response for arbitrarily weak external fields. We provide a brief mathematical explanation for this feature. Although we showcase the method for simple monatomic systems, TTCF can be readily extended to study more complex molecular fluids. Moreover, in addition to shear flows, the method can be extended to investigate elongational or mixed flows as well as thermal or electric fields. The reasonably high computational cost of the method is offset by the following two benefits: i) the cost is independent of the magnitude of the external field, and ii) the simulations can be made highly efficient on HPC architectures by exploiting the parallel design of LAMMPS. We expect the toolkit to be useful for computational researchers striving to study the nonequilibrium behaviour of fluids under experimentally accessible conditions.
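For orientation, the core identity behind this feature is the transient-time correlation function relation of nonequilibrium statistical mechanics (Evans-Morriss); the planar-shear form quoted below is the textbook expression, given here as background rather than taken from the toolkit's documentation.

```latex
% Textbook TTCF relation for planar shear at strain rate \dot{\gamma}
% (background, not from the TTCF4LAMMPS documentation):
\begin{equation}
  \langle B(t) \rangle = \langle B(0) \rangle
  + \int_0^{t} \langle \Omega(0)\, B(s) \rangle \, \mathrm{d}s,
  \qquad
  \Omega(0) = -\beta V \dot{\gamma}\, P_{xy}(0),
\end{equation}
```

with $\beta = 1/(k_B T)$, $V$ the system volume, and $P_{xy}$ the shear component of the pressure tensor. In the weak-field regime the field appears essentially only through the explicit factor $\dot{\gamma}$ in $\Omega(0)$, so the response is accumulated from an ensemble of equilibrium-sampled transients and is not buried in noise as $\dot{\gamma} \to 0$, unlike the directly averaged NEMD signal.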
This paper presents a set of general strategies for the analysis of structure in amorphous materials and a general approach to assessing the utility of a selected structural description. Measures of structural diversity and utility are defined and applied to two model glass forming binary atomic alloys. In addition, a new measure of incipient crystal-like organization is introduced, suitable for cases where the stable crystal is a compound structure.
Indoor venues accommodate many people who collectively form crowds. Such crowds in turn influence people's routing choices, e.g., people may prefer to avoid crowded rooms when walking from A to B. This paper studies two types of crowd-aware indoor path planning queries. The Indoor Crowd-Aware Fastest Path Query (FPQ) finds a path with the shortest travel time in the presence of crowds, whereas the Indoor Least Crowded Path Query (LCPQ) finds a path encountering the fewest objects en route. To process the queries, we design a unified framework with three major components. First, an indoor crowd model organizes indoor topology and captures object flows between rooms. Second, a time-evolving population estimator derives room populations for a future timestamp to support crowd-aware routing cost computations in query processing. Third, two exact and two approximate query processing algorithms process each type of query. All algorithms are based on graph traversal over the indoor crowd model and use the same search framework with different strategies of updating the populations during the search process. All proposals are evaluated experimentally on synthetic and real data. The experimental results demonstrate the efficiency and scalability of our framework and query processing algorithms.
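A minimal sketch of the FPQ flavor of crowd-aware routing: a time-dependent Dijkstra search in which the cost of entering a room grows with its estimated population at the arrival time. The graph layout, the congestion factor, and the populations callback are illustrative assumptions, not the paper's indoor crowd model or query algorithms.

```python
# Crowd-aware fastest-path sketch over an indoor topology graph.
import heapq

def crowd_aware_fastest_path(graph, populations, source, target, t0=0.0):
    """graph: {room: [(neighbor, base_travel_time), ...]}
    populations(room, t): estimated number of objects in `room` at time t.
    Entering a room is slowed in proportion to its estimated crowd."""
    dist = {source: t0}
    prev = {}
    heap = [(t0, source)]
    while heap:
        t, room = heapq.heappop(heap)
        if room == target:
            break
        if t > dist.get(room, float("inf")):
            continue
        for nxt, base in graph.get(room, []):
            # Simple congestion model: each estimated occupant adds a fixed delay factor.
            arrival = t + base * (1.0 + 0.1 * populations(nxt, t))
            if arrival < dist.get(nxt, float("inf")):
                dist[nxt] = arrival
                prev[nxt] = room
                heapq.heappush(heap, (arrival, nxt))
    # Reconstruct the path from the predecessor map.
    path, node = [], target
    while node in prev or node == source:
        path.append(node)
        if node == source:
            break
        node = prev[node]
    return path[::-1], dist.get(target, float("inf"))
```

An LCPQ variant would replace the time-based edge cost with the expected number of objects encountered, keeping the same search skeleton.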
Inference in dynamic probabilistic models is a complex task involving expensive operations. In particular, for Hidden Markov Models, the whole state space has to be enumerated for advancing in time. Even states with negligible probabilities are considered, resulting in computational inefficiency and increased noise due to the propagation of unlikely probability mass. We propose to denoise the future and speed up inference by using only the top-p states, i.e., the most probable states with accumulated probability p. We show that the error introduced by using only the top-p states is bounded by p and the so-called minimal mixing rate of the underlying model. Moreover, in our empirical evaluation, we show that we can expect speedups of at least an order of magnitude, while the error in terms of total variation distance is below 0.09.
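A minimal sketch of the idea for an HMM forward pass: after every prediction-and-update step, keep only the smallest set of states whose cumulative probability reaches p and renormalize, so subsequent computations touch far fewer states. The pruning rule and matrix conventions below are assumptions for the example, not the paper's algorithm.

```python
# Top-p pruned forward pass for a Hidden Markov Model (illustrative sketch).
import numpy as np

def top_p_mask(belief, p):
    """Keep only the smallest set of states covering probability mass p, renormalize."""
    belief = belief / belief.sum()
    order = np.argsort(belief)[::-1]          # states sorted by decreasing probability
    cum = np.cumsum(belief[order])
    keep = order[: np.searchsorted(cum, p) + 1]
    pruned = np.zeros_like(belief)
    pruned[keep] = belief[keep]
    return pruned / pruned.sum()

def forward_top_p(T, E, prior, observations, p=0.99):
    """T[i, j]: transition probability i -> j; E[j, o]: emission probability of o in state j."""
    belief = top_p_mask(prior * E[:, observations[0]], p)
    for o in observations[1:]:
        belief = belief @ T                   # advance in time (only nonzero states matter)
        belief *= E[:, o]                     # condition on the new observation
        belief = top_p_mask(belief, p)        # prune unlikely probability mass
    return belief
```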
RUMD is a general-purpose, high-performance molecular dynamics (MD) simulation package running on graphics processing units (GPUs). RUMD addresses the challenge of utilizing the many-core nature of modern GPU hardware when simulating small to medium system sizes (roughly from a few thousand up to a hundred thousand particles). It has a performance that is comparable to other GPU-MD codes at large system sizes and substantially better at smaller sizes. RUMD is open-source and consists of a library written in C++ and the CUDA extension to C, an easy-to-use Python interface, and a set of tools for set-up and post-simulation data analysis. The paper describes RUMD's main features, optimizations and performance benchmarks.
In a recent paper, Di Lisio et al. [J. Chem. Phys. {\bf 159}, 064505 (2023)] analyzed a series of temperature down-jumps using the single-parameter aging (SPA) ansatz combined with a specific assumption about density scaling in the out-of-equilibrium system and did not find a good prediction for the largest down-jumps. In this paper we show that SPA in its original form does work for all their data including large jumps of $\Delta T > 20$ K. Furthermore, we discuss different approaches to the extension of the density scaling concept to out-of-equilibrium systems.
Quantum gravity was born as that branch of modern theoretical physics that tries to unify its guiding principles, i.e., quantum mechanics and general relativity. Nowadays it is providing new insight into the unification of all fundamental interactions, while giving rise to new developments in modern mathematics. It is however unclear whether it will ever become a falsifiable physical theory, since it deals with Planck-scale physics. Reviewing a wide range of spectral geometry from index theory to spectral triples, we hope to dismiss the general opinion that the mere mathematical complexity of the unification programme will obstruct that programme.
We present an exact dimensional reduction for high-dimensional dynamical systems composed of N identical dynamical units governed by quasi-linear ordinary differential equations (ODEs) of order M. In these systems, each unit follows a linear differential equation whose coefficients depend nonlinearly on ensemble variables, such as a mean-field variable. We derive M+1 closed-form macroscopic equations of order M whose variables exactly capture the full microscopic dynamics and allow reconstruction of individual trajectories from the reduced system. Our approach enables low-dimensional analysis of collective behavior in coupled oscillator networks and provides computationally efficient exact representations of large-scale dynamics. We illustrate the theory with examples, highlighting new families of solvable models relevant to physics, biology and engineering that are now amenable to simplified analysis.
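A hedged illustration of the simplest case (first order, M = 1, with purely mean-field coupling; the notation u, v, alpha, beta is chosen here and is not taken from the paper): each unit obeys a linear ODE whose coefficients depend on the instantaneous mean field, so every trajectory is an affine function of its initial condition and the M + 1 = 2 macroscopic variables close on themselves.

```latex
% Illustrative M = 1 mean-field case (notation introduced here, not the paper's):
\begin{align}
  \dot{x}_j &= \alpha(m)\,x_j + \beta(m),
  & m(t) &= \frac{1}{N}\sum_{j=1}^{N} x_j(t), \qquad j = 1,\dots,N.
\end{align}
% Linearity implies x_j(t) = u(t)\,x_j(0) + v(t), hence m(t) = u(t)\,m(0) + v(t),
% and the reduced system consists of M + 1 = 2 first-order equations:
\begin{align}
  \dot{u} &= \alpha\bigl(u\,m(0) + v\bigr)\,u,  & u(0) &= 1,\\
  \dot{v} &= \alpha\bigl(u\,m(0) + v\bigr)\,v + \beta\bigl(u\,m(0) + v\bigr), & v(0) &= 0,
\end{align}
% from which every individual trajectory x_j(t) is recovered exactly.
```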
Sensor data streams occur widely in various real-time applications in the context of the Internet of Things (IoT). However, sensor data streams feature missing values due to factors such as sensor failures, communication errors, or depleted batteries. Missing values can compromise the quality of real-time analytics tasks and downstream applications. Existing imputation methods either make strong assumptions about streams or have low efficiency. In this study, we aim to accurately and efficiently impute missing values in data streams under only general assumptions about the streams, in order to benefit real-time applications more widely. First, we propose a message propagation imputation network (MPIN) that is able to recover the missing values of data instances in a time window. We give a theoretical analysis of why MPIN is effective. Second, we present a continuous imputation framework that consists of data update and model update mechanisms to enable MPIN to perform continuous imputation both effectively and efficiently. Extensive experiments on multiple real datasets show that MPIN can outperform the existing data imputers by wide margins and that the continuous imputation framework is efficient and accurate.
The rise of neurotechnologies, especially in combination with AI-based methods for brain data analytics, has led to concerns around the protection of mental privacy, mental integrity and cognitive liberty - often framed as 'neurorights' in ethical, legal and policy discussions. Several states are now looking at including 'neurorights' in their constitutional legal frameworks, and international institutions and organizations, such as UNESCO and the Council of Europe, are taking an active interest in developing international policy and governance guidelines on this issue. However, in many discussions of 'neurorights' the philosophical assumptions, ethical frames of reference and legal interpretation are either not made explicit or are in conflict with each other. The aim of this multidisciplinary work is to provide conceptual, ethical and legal foundations for a common, minimalist conceptual understanding of mental privacy, mental integrity and cognitive liberty, thereby facilitating scholarly, legal and policy discussions.
By combining interface-pinning simulations with numerical integration of the Clausius-Clapeyron equation we determine accurately the melting-line coexistence pressure and fluid/crystal densities of the Weeks-Chandler-Andersen (WCA) system covering four decades of temperature. The data are used for comparing the melting-line predictions of the Boltzmann, Andersen-Weeks-Chandler, Barker-Henderson, and Stillinger hard-sphere approximations. The Andersen-Weeks-Chandler and the Barker-Henderson theories give the most accurate predictions, and they both work excellently in the zero-temperature limit for which analytical expressions are derived here.
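For reference, the numerical-integration step can be as simple as the sketch below: a Runge-Kutta march of the Clausius-Clapeyron relation dp/dT = Δh / (T Δv) along the melting line, with the per-particle melting enthalpy and volume differences supplied by coexistence simulations. The function names and step scheme are choices made for this illustration, not the paper's code.

```python
# Trace a melting line by integrating the Clausius-Clapeyron relation
# dp/dT = Δs / Δv = Δh / (T Δv) with a fourth-order Runge-Kutta step.
# delta_h(T, p) and delta_v(T, p) are placeholders for melting enthalpy and
# volume differences obtained from coexistence (e.g. interface-pinning) data.
def trace_melting_line(T0, p0, T_end, n_steps, delta_h, delta_v):
    def dpdT(T, p):
        return delta_h(T, p) / (T * delta_v(T, p))
    h = (T_end - T0) / n_steps
    T, p = T0, p0
    line = [(T, p)]
    for _ in range(n_steps):
        k1 = dpdT(T, p)
        k2 = dpdT(T + 0.5 * h, p + 0.5 * h * k1)
        k3 = dpdT(T + 0.5 * h, p + 0.5 * h * k2)
        k4 = dpdT(T + h, p + h * k3)
        p += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        T += h
        line.append((T, p))
    return line
```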