In this paper, we consider the simulation-based analysis of temporal properties of hybrid systems, so-called falsification of requirements. We present a novel exploration-based algorithm for the falsification of black-box models of hybrid systems based on the Voronoi bias in the output space. This approach is inspired by a technique originally used in motion planning: rapidly exploring random trees. Instead of the commonly employed exploration based on input coverage, the proposed algorithm aims to cover all possible outputs directly. Compared to other state-of-the-art falsification tools, it also does not require robustness or other guidance metrics tied to the specific behavior being falsified. This allows our algorithm to falsify specifications for which robustness is not conclusive enough to guide the falsification procedure.
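The output-space exploration idea can be sketched in a few lines. This is a toy illustration under stated assumptions, not the paper's implementation: the black-box model, the parameter ranges, and the specification below are all hypothetical.

```python
import random

def simulate(u):
    """Hypothetical black-box model: maps an input parameter to an output point."""
    return (u, u * u - 2.0 * u)

def falsify(spec_violated, n_iters=1000, seed=0):
    """Voronoi-biased exploration in the OUTPUT space, RRT-style."""
    rng = random.Random(seed)
    inputs = [rng.uniform(-2, 2)]
    outputs = [simulate(inputs[0])]
    for _ in range(n_iters):
        # Sample a random target in the output space; the stored point
        # nearest to it is expanded.  Points with large Voronoi cells
        # (sparse output regions) are selected more often, which biases
        # exploration toward uncovered outputs rather than input coverage.
        target = (rng.uniform(-2, 2), rng.uniform(-3, 3))
        i = min(range(len(outputs)),
                key=lambda j: (outputs[j][0] - target[0]) ** 2
                            + (outputs[j][1] - target[1]) ** 2)
        # Perturb the input that produced the nearest output and simulate.
        u = inputs[i] + rng.gauss(0.0, 0.3)
        y = simulate(u)
        inputs.append(u)
        outputs.append(y)
        if spec_violated(y):
            return u, y  # counterexample found
    return None
```

No robustness metric appears anywhere in the loop: the only guidance is the geometry of the outputs seen so far.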
We propose a novel machine learning approach for forecasting the distribution of stock returns using a rich set of firm-level and market predictors. Our method combines a two-stage quantile neural network with spline interpolation to construct smooth, flexible cumulative distribution functions without relying on restrictive parametric assumptions. This allows accurate modelling of non-Gaussian features such as fat tails and asymmetries. Furthermore, we show how to derive other statistics from the forecasted return distribution such as mean, variance, skewness, and kurtosis. The derived mean and variance forecasts offer significantly improved out-of-sample performance compared to standard models. We demonstrate the robustness of the method in US and international markets.
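The moment-derivation step can be illustrated with a minimal sketch, assuming quantile forecasts are already available on a grid of levels; piecewise-linear interpolation stands in here for the paper's spline construction.

```python
def moments_from_quantiles(taus, q):
    """Approximate mean and variance from quantile forecasts by integrating
    the (piecewise-linear) quantile function Q:
        E[X]   = integral of Q(tau)   over (0, 1)
        E[X^2] = integral of Q(tau)^2 over (0, 1)
    Tail mass outside [taus[0], taus[-1]] is ignored, so the levels should
    span most of (0, 1)."""
    m1 = m2 = 0.0
    for (t0, q0), (t1, q1) in zip(zip(taus, q), zip(taus[1:], q[1:])):
        w = t1 - t0
        m1 += w * (q0 + q1) / 2.0                 # trapezoid rule on Q
        m2 += w * (q0*q0 + q0*q1 + q1*q1) / 3.0   # exact for a linear segment
    mean = m1
    var = m2 - m1 * m1
    return mean, var
```

Skewness and kurtosis follow the same pattern with integrals of higher powers of the quantile function.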
The world is abundant with diverse materials, each possessing unique surface appearances that play a crucial role in our daily perception and understanding of their properties. Despite advancements in technology enabling the capture and realistic reproduction of material appearances for visualization and quality control, the interoperability of material property information across various measurement representations and software platforms remains a complex challenge. A key to overcoming this challenge lies in the automatic identification of materials' perceptual features, enabling intuitive differentiation of properties stored in disparate material data representations. We reasoned that for many practical purposes, a compact representation of the perceptual appearance is more useful than an exhaustive physical description. This paper introduces a novel approach to material identification by encoding perceptual features obtained from dynamic visual stimuli. We conducted a psychophysical experiment to select and validate 16 particularly significant perceptual attributes obtained from videos of 347 materials. We then gathered attribute ratings from over twenty participants for each material, creating a 'material fingerprint' that encodes the unique perceptual properties of each material. Finally, we trained a multi-layer perceptron model to predict the relationship between statistical and deep learning image features and their corresponding perceptual properties. We demonstrate the model's performance in material retrieval and filtering according to individual attributes. This model represents a significant step towards simplifying the sharing and understanding of material properties in diverse digital environments regardless of their digital representation, enhancing both the accuracy and efficiency of material identification.
We propose a multi-agent epistemic logic capturing reasoning with degrees of plausibility that agents can assign to a given statement, with $1$ interpreted as "entirely plausible for the agent" and $0$ as "completely implausible" (i.e., the agent knows that the statement is false). We formalise such reasoning in an expansion of Gödel fuzzy logic with an involutive negation and multiple $\mathbf{S5}$-like modalities. As even single-modal Gödel logics are known to lack the finite model property w.r.t. their standard $[0,1]$-valued Kripke semantics, we provide an alternative semantics that allows for the finite model property. For this semantics, we construct a strongly terminating tableau calculus that allows us to produce finite counter-models of non-valid formulas. We then use the tableaux to show that the validity problem in our logic is $\mathsf{PSpace}$-complete when there are two or more agents, and $\mathsf{coNP}$-complete in the single-agent case.
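As a toy illustration of the underlying Gödel semantics (propositional fragment only; the paper's logic adds multi-agent modalities on top of this), truth values in [0,1] combine via min, max, the residuated implication, and the involutive negation 1 - x. The formula encoding below is an assumption made for the sketch.

```python
def goedel_eval(phi, v):
    """Evaluate a propositional Godel-logic formula with involutive negation
    over [0, 1].  Formulas are nested tuples; atoms are strings valued by v."""
    if isinstance(phi, str):
        return v[phi]
    op, *args = phi
    a = [goedel_eval(x, v) for x in args]
    if op == 'and':
        return min(a)                              # Godel t-norm
    if op == 'or':
        return max(a)
    if op == 'imp':
        return 1.0 if a[0] <= a[1] else a[1]       # residuum of min
    if op == 'neg':
        return 1.0 - a[0]                          # involutive negation
    raise ValueError(op)
```

Note that the implication jumps to 1 whenever the antecedent is at most as true as the consequent, which is exactly why the standard semantics behaves so differently from the classical Boolean case.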
We report a search for a magnetic monopole component of the cosmic-ray flux in a 95-day exposure of the NOvA experiment's Far Detector, a 14 kt segmented liquid scintillator detector designed primarily to observe GeV-scale electron neutrinos. No events consistent with monopoles were observed, setting an upper limit on the flux of $2\times 10^{-14}\,\mathrm{cm^{-2}\,s^{-1}\,sr^{-1}}$ at 90% C.L. for monopole speeds $6\times 10^{-4} < \beta < 5\times 10^{-3}$ and masses greater than $5\times 10^{8}$ GeV. Because of NOvA's small overburden of 3 meters water equivalent, this constraint covers a previously unexplored low-mass region.
The metric dimension of a graph measures how uniquely vertices may be identified using a set of landmark vertices. This concept is frequently used in the study of network architecture, location-based problems, and communication. Given a graph $G$, the metric dimension, denoted $\dim(G)$, is the minimum size of a resolving set: a subset of vertices such that for every pair of vertices in $G$, there exists a vertex in the resolving set whose shortest-path distances to the two vertices differ. This subset of vertices helps to uniquely determine the location of other vertices in the graph. A basis is a resolving set of least cardinality. Finding a basis is a problem with practical applications in network design, where it is important to efficiently locate and identify nodes based on a limited set of reference points. The Cartesian product of $P_m$ and $P_n$ is the grid network in network science. In this paper, we investigate two novel types of grids in network science: the Villarceau grid Type I and Type II. For each of these grid types, we determine the exact metric dimension.
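A brute-force sketch makes the definition concrete (exponential time, suitable only for small graphs; this is an illustration of the definition, not the paper's method for the Villarceau grids).

```python
from itertools import combinations

def metric_dimension(adj):
    """Brute-force metric dimension of a small graph given as an adjacency
    dict {v: set(neighbours)}.  Returns (dim, a minimum resolving set)."""
    def bfs(src):
        # Shortest-path distances from src by breadth-first search.
        dist = {src: 0}
        frontier = [src]
        while frontier:
            nxt = []
            for u in frontier:
                for w in adj[u]:
                    if w not in dist:
                        dist[w] = dist[u] + 1
                        nxt.append(w)
            frontier = nxt
        return dist

    d = {v: bfs(v) for v in adj}
    verts = list(adj)
    for k in range(1, len(verts) + 1):
        for S in combinations(verts, k):
            # S resolves the graph iff every vertex has a distinct
            # vector of distances to the landmarks in S.
            vectors = {tuple(d[s][v] for s in S) for v in verts}
            if len(vectors) == len(verts):
                return k, S
```

For instance, a path has metric dimension 1 (one endpoint resolves everything), while a 4-cycle needs two landmarks.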
We study a geometric facility location problem under imprecision. Given $n$ unit intervals on the real line, each with one of $k$ colors, the goal is to place one point in each interval such that the resulting \emph{minimum color-spanning interval} is as large as possible. A minimum color-spanning interval is an interval of minimum size that contains at least one point from a given interval of each color. We prove that if the input intervals are pairwise disjoint, the problem can be solved in $O(n)$ time, even for intervals of arbitrary length. For overlapping intervals, the problem becomes much more difficult. Nevertheless, we show that it can be solved in $O(n \log^2 n)$ time when $k=2$, by exploiting several structural properties of candidate solutions, combined with a number of advanced algorithmic techniques. Interestingly, this shows a sharp contrast with the 2-dimensional version of the problem, recently shown to be NP-hard.
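For fixed point placements, the minimum color-spanning interval itself can be computed with a standard sliding window; a minimal sketch follows (this is only the evaluation subroutine — the paper's difficulty lies in choosing the placements to maximize it).

```python
from collections import Counter

def min_color_spanning_interval(points):
    """Length of the smallest interval containing at least one point of
    every color.  points: list of (x, color) pairs."""
    pts = sorted(points)
    k = len({c for _, c in points})
    counts = Counter()
    best = float('inf')
    lo = 0
    for hi, (x, c) in enumerate(pts):
        counts[c] += 1
        # Shrink from the left while the window still spans all k colors.
        while len(counts) == k:
            best = min(best, x - pts[lo][0])
            left = pts[lo][1]
            counts[left] -= 1
            if counts[left] == 0:
                del counts[left]
            lo += 1
    return best
```

The maximization problem asks for placements (one per input interval) making this quantity as large as possible, which is where the structural analysis in the paper comes in.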
The flux of cosmic ray muons at the Earth's surface exhibits seasonal variations due to changes in the temperature of the atmosphere affecting the production and decay of mesons in the upper atmosphere. Using data collected by the NOvA Near Detector during 2018--2022, we studied the seasonal pattern in the multiple-muon event rate. The data confirm an anticorrelation between the multiple-muon event rate and effective atmospheric temperature, consistent across all the years of data. Previous analyses from MINOS and NOvA observed a similar anticorrelation but did not offer an explanation. We find that this anticorrelation is driven by altitude--geometry effects as the average muon production height changes with the season. This has been checked with the CORSIKA cosmic ray simulation package by varying atmospheric parameters, and provides an explanation for a longstanding discrepancy between the seasonal phases of single- and multiple-muon events.
Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We further demonstrate the applicability and advantages of our method to real world systems for the case of resting-state human brain networks. Finally, we show how our method can be used to estimate the structural network connectivity between interacting units from observed activity and establish the advantages over other approaches for the case of phase oscillator networks as a generic example.
This survey article gives an elementary introduction to the algebraic approach to Markov process duality, as opposed to the pathwise approach. In the algebraic approach, a Markov generator is written as the sum of products of simpler operators, which each have a dual with respect to some duality function. We discuss at length the recent suggestion by Giardinà, Redig, and others, that it may be a good idea to choose these simpler operators in such a way that they form an irreducible representation of some known Lie algebra. In particular, we collect the necessary background on representations of Lie algebras that is crucial for this approach. We also discuss older work by Lloyd and Sudbury on duality functions of product form and the relation between intertwining and duality.
StarBench is a project focused on benchmarking and validating different star-formation and stellar feedback codes. In this first StarBench paper we perform a comparison study of the D-type expansion of an HII region. The aim of this work is to understand the differences observed between the twelve participating numerical codes and the various analytical expressions describing the D-type phase of HII region expansion. To do this, we propose two well-defined tests which are tackled by 1D and 3D grid- and SPH-based codes. The first test examines the `early phase' D-type scenario during which the mechanical pressure driving the expansion is significantly larger than the thermal pressure of the neutral medium. The second test examines the `late phase' D-type scenario during which the system relaxes to pressure equilibrium with the external medium. Although they are mutually in excellent agreement, all twelve participating codes follow a modified expansion law that deviates significantly from the classical Spitzer solution in both scenarios. We present a semi-empirical formula combining the two different solutions appropriate to the early and late phases that agrees with high-resolution simulations to $\lesssim 2\%$. This formula provides a much better benchmark solution for code validation than the Spitzer solution. The present comparison has validated the participating codes, and through this project we provide a dataset for calibrating the treatment of ionizing radiation in hydrodynamics codes.
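For context, the classical Spitzer solution referenced above gives the ionization-front radius during D-type expansion as (in standard notation, with $R_{\mathrm{St}}$ the initial Strömgren radius and $c_i$ the sound speed in the ionized gas):

```latex
R(t) \;=\; R_{\mathrm{St}} \left( 1 + \frac{7}{4}\,\frac{c_i\, t}{R_{\mathrm{St}}} \right)^{4/7}
```

It is this $t^{4/7}$ power law from which the participating codes were found to deviate in both the early- and late-phase tests.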
We describe a compilation language of backdoor decomposable monotone circuits (BDMCs), which generalizes several concepts appearing in the literature, e.g. DNNFs and backdoor trees. A $\mathcal{C}$-BDMC sentence is a monotone circuit satisfying the decomposability property (as in DNNF) in which the inputs (or leaves) are associated with CNF encodings from a given base class $\mathcal{C}$. We consider the class of propagation complete (PC) encodings as a base class and show that PC-BDMCs are polynomially equivalent to PC encodings. Additionally, we use this result to determine the properties of PC-BDMCs and PC encodings with respect to the knowledge compilation map, including the list of efficient operations on these languages.
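The decomposability property mentioned above can be checked syntactically; here is a minimal sketch on a toy circuit representation (tuples of the form ('var', x), ('and', ...), ('or', ...) are an assumption of this sketch, and the CNF-encoding leaves of actual BDMCs are omitted).

```python
def variables(node):
    """Set of variables mentioned in a circuit node."""
    if node[0] == 'var':
        return {node[1]}
    return set().union(*(variables(c) for c in node[1:]))

def is_decomposable(node):
    """DNNF-style decomposability: the children of every AND node must
    mention pairwise disjoint sets of variables."""
    if node[0] == 'var':
        return True
    if not all(is_decomposable(c) for c in node[1:]):
        return False
    if node[0] == 'and':
        seen = set()
        for c in node[1:]:
            vs = variables(c)
            if seen & vs:
                return False  # two AND-children share a variable
            seen |= vs
    return True
```

Decomposability is what makes operations such as model counting tractable on DNNF-like languages, which is why it is preserved in the BDMC generalization.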
We study the role of co-jumps in the interest rate futures markets. To disentangle the continuous part of quadratic covariation from co-jumps, we localize the co-jumps precisely through wavelet coefficients and identify statistically significant ones. Using high-frequency data on U.S. and European yield curves, we quantify the effect of co-jumps on their correlation structure. Empirical findings reveal much stronger co-jumping behavior of the U.S. yield curves in comparison to the European ones. Further, we connect co-jumping behavior to monetary policy announcements, and study the effect of 103 FOMC and 119 ECB announcements on the identified co-jumps during the period from January 2007 to December 2017.
We consider one-dimensional biased voter models, where 1's replace 0's at a faster rate than the other way round, started in a Heaviside initial state describing the interface between two infinite populations of 0's and 1's. In the limit of weak bias, for a diffusively rescaled process, we consider a measure-valued process describing the local fraction of type 1 sites as a function of time. Under a finite second moment condition on the rates, we show that in the diffusive scaling limit there is a drifted Brownian path with the property that all but a vanishingly small fraction of the sites on the left (resp. right) of this path are of type 0 (resp. 1). This extends known results for unbiased voter models. Our proofs depend crucially on recent results about interface tightness for biased voter models.
Time and again, non-conventional forms of Lagrangians with non-quadratic velocity dependence have found attention in the literature. For one thing, such Lagrangians have deep connections with several aspects of nonlinear dynamics, including specifically systems of the Liénard class; for another, the problem of their quantization very often opens up multiple branches of the corresponding Hamiltonians, ending up with the presence of singularities in the associated eigenfunctions. In this article, we furnish a brief review of the classical theory of such Lagrangians and the associated branched Hamiltonians, starting with the example of Liénard-type systems. We then take up other cases where the Lagrangians depend upon the velocity with powers greater than two while still having a tractable mathematical structure, and describe the associated branched Hamiltonians for such systems. For various examples, we emphasize the emergence of the notion of momentum-dependent mass in the theory of branched Hamiltonians.
Many animals emit vocal sounds which, independently of the sounds' function, embed an individually distinctive signature. The automatic recognition of individuals by sound is thus a potentially powerful tool for zoology and ecology research and for practical monitoring. Here we present a general automatic identification method that can work across multiple animal species with various levels of complexity in their communication systems. We further introduce new analysis techniques based on dataset manipulations that can evaluate the robustness and generality of a classifier. Using these techniques, we confirmed the presence of experimental confounds in situations resembling those of past studies. We introduce data manipulations that can reduce the impact of these confounds and are compatible with any classifier. We suggest that the assessment of confounds should become a standard part of future studies to ensure they do not report over-optimistic results. We provide the annotated recordings used in our analyses along with this study, and we call for dataset sharing to become common practice to enhance the development of methods and the comparison of results.
In this article, we study generalized versions of the maximum independent set and minimum dominating set problems, namely, the maximum $d$-distance independent set problem and the minimum $d$-distance dominating set problem on unit disk graphs for a positive integer $d > 0$. We first show that both the maximum $d$-distance independent set problem and the minimum $d$-distance dominating set problem are NP-hard. Next, we propose simple polynomial-time constant-factor approximation algorithms and a PTAS for both problems.
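The defining property is easy to check directly; a small sketch verifying $d$-distance independence on an adjacency-dict graph (an illustration of the definition only, not of the approximation algorithms).

```python
def is_d_independent(adj, S, d):
    """Check that S is a d-distance independent set: every pair of distinct
    vertices in S is at graph distance strictly greater than d.
    adj: {vertex: set(neighbours)}.  Uses BFS truncated at depth d."""
    S = set(S)
    for s in S:
        dist = {s: 0}
        frontier = [s]
        for _ in range(d):
            nxt = []
            for u in frontier:
                for w in adj[u]:
                    if w not in dist:
                        dist[w] = dist[u] + 1
                        nxt.append(w)
            frontier = nxt
        # Any other member of S reachable within d steps violates independence.
        if any(v in dist for v in S - {s}):
            return False
    return True
```

For $d = 1$ this reduces to the usual independent-set condition; larger $d$ forces the chosen vertices to be spread further apart, which is what makes the unit disk geometry exploitable.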
We show how bad and good volatility propagate through forex markets, i.e., we provide evidence for asymmetric volatility connectedness on forex markets. Using high-frequency, intra-day data of the most actively traded currencies over 2007–2015, we document that the dominating asymmetries in spillovers are due to bad rather than good volatility. We also show that negative spillovers are chiefly tied to the dragging sovereign debt crisis in Europe, while positive spillovers are correlated with the subprime crisis, different monetary policies among key world central banks, and developments on commodity markets. It seems that a combination of monetary and real-economy events is behind the net positive asymmetries in volatility spillovers, while fiscal factors are linked with net negative spillovers.
We propose a novel framework for modeling time-varying persistence in economic time series, allowing for smoothly evolving heterogeneity in shock dynamics. We leverage localized regression techniques to flexibly identify changes in persistence over time, offering a data-driven alternative to traditional parametric models. Applying this methodology to U.S. inflation and stock market volatility data, we find substantial variations in persistence that align with key macroeconomic events and market conditions. The results reveal previously undetected pockets of predictability and provide significant increases in out-of-sample forecast accuracy. These findings have important implications for economic modeling, forecasting, and policy analysis.
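The localized-regression idea can be sketched with a kernel-weighted local AR(1) fit. This is an illustrative stand-in under stated assumptions: the Gaussian kernel, the bandwidth, and the AR(1) model below are choices made for the sketch, not the paper's exact estimator.

```python
import math

def local_persistence(y, h):
    """Kernel-weighted local AR(1) estimates: for each time t, fit
    y_s ~ rho_t * y_{s-1} by weighted least squares with Gaussian
    weights centred at t (bandwidth h), so rho_t traces how persistence
    evolves smoothly over the sample."""
    n = len(y)
    out = []
    for t in range(1, n):
        num = den = 0.0
        for s in range(1, n):
            w = math.exp(-0.5 * ((s - t) / h) ** 2)  # Gaussian kernel weight
            num += w * y[s] * y[s - 1]
            den += w * y[s - 1] ** 2
        out.append(num / den)
    return out
```

A smaller bandwidth h tracks persistence changes more aggressively at the cost of noisier estimates, which is the usual localization trade-off.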
This paper characterises dynamic linkages arising from shocks with heterogeneous degrees of persistence. Using frequency domain techniques, we introduce measures that identify smoothly varying links of a transitory and persistent nature. Our approach allows us to test for statistical differences in such dynamic links. We document substantial differences in transitory and persistent linkages among US financial industry volatilities, argue that they track heterogeneously persistent sources of systemic risk, and thus may serve as a useful tool for market participants.