Linnaeus University
Quantum-like modeling (QLM) - quantum theory applications outside of physics - is intensively developed, with applications in biology, cognition, psychology, and decision-making. For cognition, QLM should be distinguished from quantum reductionist models in the spirit of Hameroff and Penrose as well as Umezawa and Vitiello: QLM concerns not quantum physical processes in the brain per se, but rather QL information processing by macroscopic neuronal structures. Although QLM of cognition and decision-making has seen some success, it suffers from the knowledge gap that exists between oscillatory neuronal network functioning in the brain and QL behavioral patterns. Recently, steps toward closing this gap have been taken using generalized probability theory and prequantum classical statistical field theory (PCSFT) - a random field model beyond the complex Hilbert space formalism. PCSFT is used to move from the classical ``oscillatory cognition'' of the neuronal networks to QLM for this http URL. In this study, we addressed the most difficult problem within this construction: QLM for entanglement generation by classical networks, i.e., mental entanglement. We started with the observational approach to entanglement, based on operator algebras describing local observables and bringing into being the tensor product structure in the space of QL states. Moreover, we applied the standard state entanglement approach: entanglement generation by spatially separated networks in the brain. Finally, we discussed possible future experiments on mental entanglement detection using the EEG/MEG technique.
By uncovering the contrast between Artificial Intelligence and Natural-born Intelligence as a computational process, we define closed computing and open computing, and implement open computing within chemical reactions. This involves forming a mixture of, and invalidating the separation between, the computational process and the execution environment, which are logically distinct, and coalescing both to create a system that adjusts fluctuations. We model chemical reactions by considering the computation as the chemical reaction and the execution environment as the degree of aggregation of molecules that interact with the reactive environment. This results in a chemical reaction that progresses while repeatedly clustering and de-clustering, where concentration no longer holds significant meaning. Open computing is segmented into Token computing, which focuses on the individual behavior of chemical molecules, and Type computing, which focuses on normative behavior. Ultimately, both are constructed as an interplay between the two. In this system, Token computing demonstrates self-organizing critical phenomena, while Type computing exhibits quantum logic. Through their interplay, the recruitment of fluctuations is realized, giving rise to interactions between quantum logical subspaces corresponding to quantum coherence across different Hilbert spaces. As a result, spike waves are formed, enabling signal transmission. This occurrence may be termed quantum-like coherence, suggesting a source of the enzymes responsible for controlling spike waves and biochemical rhythms.
Researchers from São Paulo State University, Eindhoven University of Technology, and Linnaeus University developed HUMAP, a hierarchical dimensionality reduction technique built on UMAP principles. This method systematically constructs a hierarchy and preserves the mental map across different levels of detail, demonstrating competitive performance against existing hierarchical methods in both structure preservation and runtime on diverse datasets.
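The hierarchical machinery is HUMAP's contribution; as a minimal sketch of only the underlying UMAP building block (using the umap-learn package on placeholder data, not the HUMAP implementation itself):

```python
# Minimal baseline sketch: a single-level UMAP embedding with umap-learn.
# This is NOT HUMAP; it only illustrates the UMAP building block that the
# hierarchical method is built on. Dataset and parameters are placeholders.
import numpy as np
import umap  # pip install umap-learn

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 50))          # placeholder high-dimensional data

reducer = umap.UMAP(n_neighbors=15, min_dist=0.1, random_state=0)
embedding = reducer.fit_transform(X)     # (2000, 2) low-dimensional layout
print(embedding.shape)
```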
This paper examines a Contract for Difference (CfD) with early exit options, a key risk management tool in electricity markets. The contract, involving a producer and a regulatory entity, is modeled as a two-player Dynkin game with mean-reverting electricity prices and penalties for early termination. We formulate the strategic interaction using Doubly Reflected Backward Stochastic Differential Equations (DRBSDEs), which characterize the fair contract value and optimal stopping strategies. We show that the first component of the DRBSDE solution represents the value of the Dynkin game, and that the first hitting times correspond to a Nash equilibrium. Additionally, we link the problem to a Skorokhod problem with time-dependent boundaries, deriving an explicit formula for the Skorokhod adjustment processes. To solve the DRBSDE, we develop a deep learning-based numerical algorithm, leveraging neural networks for efficient computation. We analyze the convergence of the deep learning algorithm, as well as the value function and optimal stopping rules. Numerical experiments, including a CfD model calibrated on French electricity prices, highlight the impact of exit penalties, price volatility, and contract design. These findings offer insights for market regulators and energy producers in designing effective risk management strategies.
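As a purely illustrative sketch of the model ingredients, not the paper's DRBSDE or deep-learning solver: a Monte Carlo valuation of a stylised CfD under a mean-reverting (Ornstein-Uhlenbeck) price, with a naive threshold exit rule and a penalty on early termination (all parameters below are hypothetical):

```python
# Illustrative sketch (not the paper's DRBSDE deep-learning solver):
# simulate a mean-reverting (Ornstein-Uhlenbeck) electricity price and
# value a stylised CfD with a fixed strike, where the producer exits early
# (paying a penalty) once the price exceeds a chosen threshold.
import numpy as np

kappa, mu, sigma = 2.0, 50.0, 10.0   # mean-reversion speed, level, volatility
K, penalty, r = 55.0, 5.0, 0.02      # strike, early-exit penalty, discount rate
T, n_steps, n_paths = 1.0, 250, 50_000
dt = T / n_steps

rng = np.random.default_rng(1)
S = np.full(n_paths, mu)
value = np.zeros(n_paths)
alive = np.ones(n_paths, dtype=bool)

for i in range(n_steps):
    t = i * dt
    dW = rng.normal(scale=np.sqrt(dt), size=n_paths)
    S = S + kappa * (mu - S) * dt + sigma * dW          # OU dynamics
    disc = np.exp(-r * t)
    value[alive] += disc * (K - S[alive]) * dt          # CfD difference payments
    exiting = alive & (S > 65.0)                        # naive exit rule
    value[exiting] -= disc * penalty                    # penalty on early exit
    alive &= ~exiting

print("Monte Carlo contract value:", value.mean())
```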
We develop an asymptotic limit theory for nonparametric estimation of the noise covariance kernel in linear parabolic stochastic partial differential equations (SPDEs) with additive colored noise, using space-time infill asymptotics. The method employs discretized infinite-dimensional realized covariations and requires only mild regularity assumptions on the kernel to ensure consistent estimation and asymptotic normality of the estimator. On this basis, we construct omnibus goodness-of-fit tests for the noise covariance that are independent of the SPDE's differential operator. Our framework accommodates a variety of spatial sampling schemes and allows for reliable inference even when spatial resolution is coarser than temporal resolution.
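A schematic sketch of the realized-covariation statistic the estimator is built on; the paper's exact normalisation, bias corrections, and asymptotics are not reproduced here:

```python
# Schematic sketch of a discretized realized covariation statistic:
# given observations X[i, j] of the SPDE solution on a time grid t_i and a
# spatial grid x_j, sum products of temporal increments at pairs of spatial
# points. Only the form of the statistic is illustrated.
import numpy as np

def realized_covariation(X: np.ndarray) -> np.ndarray:
    """X has shape (n_times, n_space); returns an (n_space, n_space) matrix."""
    dX = np.diff(X, axis=0)              # temporal increments at each x_j
    return dX.T @ dX                     # sum_i dX(t_i, x_j) dX(t_i, x_k)

# placeholder data: replace with simulated or observed SPDE values
rng = np.random.default_rng(0)
X = np.cumsum(rng.normal(size=(1000, 20)), axis=0) * 0.03
print(realized_covariation(X).shape)
```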
The central region of the Milky Way is one of the foremost locations to look for dark matter (DM) signatures. We report the first results on a search for DM particle annihilation signals using new observations from an unprecedented gamma-ray survey of the Galactic Center (GC) region, i.e., the Inner Galaxy Survey, at very high energies ($\gtrsim 100$ GeV) performed with the H.E.S.S. array of five ground-based Cherenkov telescopes. No significant gamma-ray excess is found in the search region of the 2014-2020 dataset, and a profile likelihood ratio analysis is carried out to set exclusion limits on the annihilation cross section $\langle \sigma v\rangle$. Assuming Einasto and Navarro-Frenk-White (NFW) DM density profiles at the GC, these constraints are the strongest obtained so far in the TeV DM mass range. For the Einasto profile, the constraints reach $\langle \sigma v\rangle$ values of $3.7\times10^{-26}\ \mathrm{cm^3\,s^{-1}}$ for a 1.5 TeV DM mass in the $W^+W^-$ annihilation channel, and $1.2\times10^{-26}\ \mathrm{cm^3\,s^{-1}}$ for a 0.7 TeV DM mass in the $\tau^+\tau^-$ annihilation channel. With the H.E.S.S. Inner Galaxy Survey, ground-based $\gamma$-ray observations thus probe $\langle \sigma v\rangle$ values expected from thermal-relic annihilating TeV DM particles.
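As a schematic sketch of a profile likelihood ratio upper limit, using a toy ON/OFF Poisson likelihood rather than the H.E.S.S. analysis chain; counts and the exposure ratio are placeholders, and the translation from signal counts to $\langle \sigma v\rangle$ via the DM spectrum and J-factor is omitted:

```python
# Schematic profile-likelihood upper limit on a signal normalisation,
# illustrating the type of statistic named in the abstract. This is NOT the
# H.E.S.S. analysis chain; counts and exposure ratio are placeholders.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

n_on, n_off, alpha = 120, 1100, 0.1      # hypothetical ON/OFF counts, exposure ratio

def neg2logL(s, b):
    """Poisson ON/OFF likelihood with signal s and background b in the ON region."""
    return -2.0 * (poisson.logpmf(n_on, s + b) + poisson.logpmf(n_off, b / alpha))

def profiled(s):
    """Profile out the background nuisance parameter for fixed signal s."""
    res = minimize_scalar(lambda b: neg2logL(s, b), bounds=(1e-3, 1e5), method="bounded")
    return res.fun

best = min(profiled(s) for s in np.linspace(0.0, 60.0, 121))
# one-sided 95% CL: smallest signal whose profile likelihood ratio exceeds 2.71
upper_limit = next(s for s in np.linspace(0.0, 60.0, 601) if profiled(s) - best > 2.71)
print("95% CL upper limit on signal counts:", round(upper_limit, 1))
```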
This paper is devoted to clarification of the notion of entanglement through decoupling it from the tensor product structure and treating it as a constraint posed by probabilistic dependence of quantum observables A and B. In our framework, it is meaningless to speak about entanglement without pointing to the fixed observables A and B, so this is AB-entanglement. Dependence of quantum observables is formalized as non-coincidence of conditional probabilities. Starting with this probabilistic definition, we achieve the Hilbert space characterization of the AB-entangled states as amplitude non-factorisable states. In the tensor product case, AB-entanglement implies standard entanglement, but not vice versa. AB-entanglement for dichotomous observables is equivalent to their correlation. We describe the class of quantum states that are A_u B_u-entangled for a family of unitary operators (u). Finally, observables entanglement is compared with dependence of random variables in classical probability theory.
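Schematically, in assumed notation (the paper's exact definitions may differ), the two ingredients read:

```latex
% Schematic, assumed notation: probabilistic dependence of A and B, and
% amplitude factorisability of a pure state expanded in a joint eigenbasis.
\[
  A,\,B \ \text{dependent} \quad\Longleftrightarrow\quad
  \exists\,\alpha,\beta:\ P(B=\beta \mid A=\alpha) \neq P(B=\beta),
\]
\[
  |\psi\rangle = \sum_{\alpha,\beta} c_{\alpha\beta}\,|\alpha,\beta\rangle,
  \qquad
  c_{\alpha\beta} = k_\alpha\, l_\beta \ \text{(amplitude factorisable)}
  \ \Longrightarrow\ P(B=\beta \mid A=\alpha) = P(B=\beta)\ \ \forall\,\alpha,\beta .
\]
```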
Diagnosing and treating abnormalities in the wrist, specifically distal radius and ulna fractures, is a crucial concern among children, adolescents, and young adults, with a higher incidence rate during puberty. However, the scarcity of radiologists and the lack of specialized training among medical professionals pose a significant risk to patient care. This problem is further exacerbated by the rising number of imaging studies and limited access to specialist reporting in certain regions. This highlights the need for innovative solutions to improve the diagnosis and treatment of wrist abnormalities. Automated wrist fracture detection using object detection has shown potential, but current studies mainly use two-stage detection methods with limited evidence for single-stage effectiveness. This study employs state-of-the-art single-stage deep neural network-based detection models, YOLOv5, YOLOv6, YOLOv7, and YOLOv8, to detect wrist abnormalities. Through extensive experimentation, we found that these YOLO models outperform the commonly used two-stage detection algorithm, Faster R-CNN, in fracture detection. Additionally, compound-scaled variants of each YOLO model were compared, with YOLOv8m demonstrating the highest fracture detection sensitivity of 0.92 and a mean average precision (mAP) of 0.95. On the other hand, YOLOv6m achieved the highest sensitivity across all classes at 0.83. Meanwhile, YOLOv8x recorded the highest mAP of 0.77 for all classes on the GRAZPEDWRI-DX pediatric wrist dataset, highlighting the potential of single-stage models for enhancing pediatric wrist imaging.
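A minimal sketch of fine-tuning one of the compound-scaled single-stage variants with the ultralytics package; the dataset YAML and hyperparameters below are placeholders, not the study's configuration:

```python
# Minimal sketch of fine-tuning a single-stage detector with the ultralytics
# package, in the spirit of the experiments described above. The dataset YAML
# path and hyperparameters are placeholders.
from ultralytics import YOLO  # pip install ultralytics

model = YOLO("yolov8m.pt")                      # compound-scaled "m" variant
model.train(data="grazpedwri_dx.yaml",          # hypothetical dataset config
            epochs=100, imgsz=640, batch=16)
metrics = model.val()                           # reports mAP and per-class results
print(metrics.box.map50)                        # mAP@0.5 on the validation split
```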
A systematic literature review provides a comprehensive analysis of A/B testing practices by detailing the subjects of tests, methodologies for design and execution, and the roles of stakeholders across various domains. The review identifies variations in A/B testing application, highlights common practices, and outlines open research problems that include refining statistical methods and enhancing test process automation.
Recently, we have witnessed a rapid increase in the use of machine learning in self-adaptive systems. Machine learning has been used for a variety of reasons, ranging from learning a model of the environment of a system during operation to filtering large sets of possible configurations before analysing them. While a body of work on the use of machine learning in self-adaptive systems exists, there is currently no systematic overview of this area. Such an overview is important for researchers to understand the state of the art and direct future research efforts. This paper reports the results of a systematic literature review that aims at providing such an overview. We focus on self-adaptive systems that are based on a traditional Monitor-Analyze-Plan-Execute (MAPE) feedback loop. The research questions are centred on the problems that motivate the use of machine learning in self-adaptive systems, the key engineering aspects of learning in self-adaptation, and open challenges. The search resulted in 6709 papers, of which 109 were retained for data collection. Analysis of the collected data shows that machine learning is mostly used for updating adaptation rules and policies to improve system qualities, and for managing resources to better balance qualities and resources. These problems are primarily solved using supervised and interactive learning, with classification, regression, and reinforcement learning as the dominant methods. Surprisingly, unsupervised learning, which naturally fits automation, is only applied in a small number of studies. Key open challenges in this area include the performance of learning, managing the effects of learning, and dealing with more complex types of goals. From the insights derived from this systematic literature review, we outline an initial design process for applying machine learning in self-adaptive systems that are based on MAPE feedback loops.
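As an illustrative sketch (not drawn from any of the reviewed studies) of one recurring pattern, learned filtering of candidate configurations before the expensive Analyze step of a MAPE loop:

```python
# Illustrative sketch: a MAPE loop in which a supervised classifier, trained
# online on past analysis results, pre-filters candidate configurations before
# the expensive Analyze step. Names and the verification routine are hypothetical.
import random
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier()
seen_labels = set()

def features(cfg):                      # Monitor: encode a configuration
    return [cfg["servers"], cfg["dimmer"]]

def analyze(cfg):                       # expensive verification (placeholder)
    return cfg["servers"] * 10 - cfg["dimmer"] * 5 > 12

for step in range(100):
    candidates = [{"servers": random.randint(1, 3), "dimmer": random.random()}
                  for _ in range(50)]
    if len(seen_labels) == 2:           # Plan: apply the learned filter once both classes seen
        candidates = [c for c in candidates
                      if clf.predict([features(c)])[0] == 1] or candidates
    for cfg in candidates[:5]:          # Analyze only the retained subset
        ok = analyze(cfg)
        seen_labels.add(int(ok))
        clf.partial_fit([features(cfg)], [int(ok)], classes=[0, 1])
    # Execute: apply the best verified configuration (omitted)
```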
Using individual-level data from the Gallup World Poll and the Levada Center, we provide an in-depth analysis of how Russia's invasion of Ukraine has affected sentiments in the Russian population. Our results show that after the invasion, a larger share of Russians expressed support for President Putin, optimism about the future, anti-West attitudes, and lower migration aspirations. The 2022 mobilization had only short-lived effects on sentiments, and the 2023 Wagner rebellion had none. Regions with low pre-war support for Putin displayed stronger rally effects, higher casualty rates, and increased incomes, suggesting a recruitment strategy that maximizes political support. Taken together, our results suggest strong public support for the war, except among Russians abroad, who became more critical of Putin in line with global views.
The past few years have seen a surge in the application of quantum theory methodologies and quantum-like modeling in fields such as cognition, psychology, and decision-making. Despite the success of this approach in explaining various psychological phenomena, such as order, conjunction, disjunction, and response replicability effects, there remains a potential dissatisfaction due to its lack of a clear connection to neurophysiological processes in the brain. Currently, it remains a phenomenological approach. In this paper, we develop a quantum-like representation of networks of communicating neurons. This representation is not based on standard quantum theory but on generalized probability theory (GPT), with a focus on the operational measurement framework. Specifically, we use a version of GPT that relies on ordered linear state spaces rather than the traditional complex Hilbert spaces. A network of communicating neurons is modeled as a weighted directed graph, which is encoded by its weight matrix. The state space of these weight matrices is embedded within the GPT framework, incorporating effect observables and state updates within the theory of measurement instruments, a critical aspect of this model. This GPT-based approach successfully reproduces key quantum-like effects, such as order, non-repeatability, and disjunction effects (commonly associated with decision interference). Moreover, this framework supports quantum-like modeling in medical diagnostics for neurological conditions such as depression and epilepsy. While this paper focuses primarily on cognition and neuronal networks, the proposed formalism and methodology can be directly applied to a wide range of biological and social networks.
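As a rough, purely illustrative sketch (the normalisation and the effect/state pairing below are assumptions, not the paper's GPT embedding), a weight matrix can be treated as a state that is paired linearly with effect matrices:

```python
# Rough illustrative sketch; the normalisation and the pairing below are
# assumptions, not the paper's construction.
import numpy as np

rng = np.random.default_rng(0)
W = rng.random((4, 4))                  # weighted directed graph of 4 neurons
np.fill_diagonal(W, 0.0)                # no self-loops (assumption)

rho = W / W.sum()                       # "state": normalised weight matrix
unit = np.ones_like(W)                  # unit effect (evaluates to 1 on states)
E0 = np.zeros_like(W); E0[0, :] = 1.0   # hypothetical effect: outgoing links of neuron 0

pair = lambda effect, state: float(np.sum(effect * state))  # <E, rho> = Tr(E^T rho)
print(pair(unit, rho), pair(E0, rho))
```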
With growing expectations to use AI-based educational technology (AI-EdTech) to improve students' learning outcomes and enrich teaching practice, teachers play a central role in the adoption of AI-EdTech in classrooms. Teachers' willingness to accept vulnerability by integrating technology into their everyday teaching practice, that is, their trust in AI-EdTech, will depend on how much they expect it to benefit them versus how many concerns it raises for them. In this study, we surveyed 508 K-12 teachers across six countries on four continents to understand which teacher characteristics shape teachers' trust in AI-EdTech, and its proposed antecedents, perceived benefits and concerns about AI-EdTech. We examined a comprehensive set of characteristics including demographic and professional characteristics (age, gender, subject, years of experience, etc.), cultural values (Hofstede's cultural dimensions), geographic locations (Brazil, Israel, Japan, Norway, Sweden, USA), and psychological factors (self-efficacy and understanding). Using multiple regression analysis, we found that teachers with higher AI-EdTech self-efficacy and AI understanding perceive more benefits, fewer concerns, and report more trust in AI-EdTech. We also found geographic and cultural differences in teachers' trust in AI-EdTech, but no demographic differences emerged based on their age, gender, or level of education. The findings provide a comprehensive, international account of factors associated with teachers' trust in AI-EdTech. Efforts to raise teachers' understanding of, and trust in AI-EdTech, while considering their cultural values are encouraged to support its adoption in K-12 education.
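A minimal sketch of the kind of multiple regression reported, run on synthetic placeholder data; the items, scales, and coding below are assumptions, not the study's instrument:

```python
# Minimal regression sketch on synthetic placeholder data (not the survey data):
# trust in AI-EdTech regressed on self-efficacy, understanding, experience, and country.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 508
df = pd.DataFrame({
    "self_efficacy": rng.normal(3.5, 0.8, n),
    "understanding": rng.normal(3.0, 0.9, n),
    "years_experience": rng.integers(1, 30, n),
    "country": rng.choice(["Brazil", "Israel", "Japan", "Norway", "Sweden", "USA"], n),
})
df["trust"] = (0.4 * df.self_efficacy + 0.3 * df.understanding
               + rng.normal(0, 0.7, n))

model = smf.ols("trust ~ self_efficacy + understanding + years_experience + C(country)",
                data=df).fit()
print(model.summary().tables[1])
```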
Three-dimensional magnetic textures, such as Hopfions, torons, and skyrmion tubes, possess rich geometric and topological structure, but their detailed energetics, deformation modes, and collective behavior are yet to be fully understood. In this work, we develop an effective geometric theory for general three-dimensional textures by representing them as embedded two-dimensional orientable domain-wall membranes. Using a local ansatz for the magnetization in terms of membrane coordinates, we integrate out the internal domain-wall profile to obtain a reduced two-dimensional energy functional. This functional captures the coupling between curvature, topology, and the interplay of micromagnetic energies, and is expressed in terms of a small set of soft-mode fields: the local wall thickness and in-plane magnetization angle. Additionally, we construct a local formula for the Hopf index which sheds light on the coupling between geometry and topology for nontrivial textures. We analyze the general properties of the theory and demonstrate its utility through the example of a flat membrane hosting a vortex as well as a toroidal Hopfion, obtaining analytic solutions for the wall thickness profile, associated energetics, and a confirmation of the Hopf index formula. The framework naturally extends to more complex geometries and can accommodate additional interactions such as Dzyaloshinskii-Moriya, Zeeman, and other anisotropies, making it a versatile tool for exploring the interplay between geometry, topology, and micromagnetics in three-dimensional spin systems.
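The paper's local formula for the Hopf index is expressed in membrane coordinates and is not reproduced here; for orientation, the standard bulk expression in terms of the emergent field reads (up to an overall sign fixed by orientation conventions):

```latex
% Standard bulk Hopf-index expression (not the paper's membrane-coordinate
% formula): emergent field B of the unit magnetization m, vector potential A.
\[
  B_i \;=\; \tfrac{1}{2}\,\epsilon_{ijk}\,
  \mathbf{m}\cdot\big(\partial_j \mathbf{m} \times \partial_k \mathbf{m}\big),
  \qquad
  \nabla \times \mathbf{A} \;=\; \mathbf{B},
  \qquad
  H \;=\; \frac{1}{(4\pi)^2} \int \mathbf{A}\cdot\mathbf{B}\;\mathrm{d}^3x .
\]
```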
Supervised learning of Convolutional Neural Networks (CNNs), also known as supervised Deep Learning, is a computationally demanding process. To find the most suitable parameters of a network for a given application, numerous training sessions are required. Therefore, reducing the training time per session is essential to fully utilize CNNs in practice. While numerous research groups have addressed the training of CNNs using GPUs, so far not much attention has been paid to the Intel Xeon Phi coprocessor. In this paper we investigate empirically and theoretically the potential of the Intel Xeon Phi for supervised learning of CNNs. We design and implement a parallelization scheme named CHAOS that exploits both the thread- and SIMD-parallelism of the coprocessor. Our approach is evaluated on the Intel Xeon Phi 7120P using the MNIST dataset of handwritten digits for various thread counts and CNN architectures. Results show a 103.5x speed up when training our large network for 15 epochs using 244 threads, compared to one thread on the coprocessor. Moreover, we develop a performance model and use it to assess our implementation and answer what-if questions.
We develop a method to search for pair halos around active galactic nuclei (AGN) through a temporal analysis of gamma-ray data. The basis of our method is an analysis of the spatial distributions of photons coming from AGN flares and from AGN quiescent states, followed by a comparison of these two spatial distributions. This method can also be used to reconstruct a point spread function (PSF). We found no evidence for a pair halo component by applying this method to the Fermi-LAT data in the energy bands of 4.5-6, 6-10, and >10 GeV, and we set upper limits on the fraction of photons attributable to a pair halo component. An illustration of how to reconstruct the PSF of Fermi-LAT is given. We demonstrate that the PSF reconstructed using this method is in good agreement with the one obtained from the gamma-ray data taken by the LAT in the direction of the Crab pulsar and nebula.
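A schematic sketch of the core comparison: test whether the angular distribution of photons around the source differs between flaring and quiescent windows (event lists and numbers are placeholders, not Fermi-LAT data products):

```python
# Schematic sketch of the comparison underlying the method: test whether the
# angular offsets of photons around the AGN differ between flaring and
# quiescent time windows. Event lists are placeholders.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
theta_flare = np.abs(rng.normal(0.0, 0.08, 4000))      # deg, placeholder offsets
theta_quiet = np.abs(rng.normal(0.0, 0.08, 4000))      # deg, placeholder offsets

stat, pvalue = ks_2samp(theta_flare, theta_quiet)
print(f"KS statistic = {stat:.3f}, p-value = {pvalue:.3f}")
# A halo component would show up as an excess of large offsets in the
# quiescent sample relative to the flare-dominated (point-like) sample.
```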
We prove the local solvability of the p-adic analog of the Navier-Stokes equation. This equation describes, within the p-adic model of porous medium, the flow of a fluid in capillaries.
Large language models (LLMs), such as ChatGPT and Copilot, are transforming software development by automating code generation and, arguably, enabling rapid prototyping, supporting education, and boosting productivity. Therefore, the correctness and quality of the generated code should be on par with those of manually written code. To assess the current state of LLMs in generating correct code of high quality, we conducted controlled experiments with ChatGPT and Copilot: we let the LLMs generate simple algorithms in Java and Python along with the corresponding unit tests and assessed the correctness and the quality (coverage) of the generated (test) code. We observed significant differences between the LLMs, between the languages, between algorithm and test code, and over time. The present paper reports these results together with the experimental methods, allowing repeated and comparable assessments for more algorithms, languages, and LLMs over time.
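A minimal sketch of one way such an assessment can be automated: run the LLM-generated unit tests against the LLM-generated implementation and record statement coverage with pytest-cov (file and module names are placeholders):

```python
# Illustrative sketch of the kind of assessment described above: execute the
# generated unit tests against the generated implementation and collect
# statement coverage with pytest-cov. File names are placeholders.
import subprocess

result = subprocess.run(
    ["pytest", "--cov=generated_sort", "--cov-report=term-missing",
     "test_generated_sort.py"],
    capture_output=True, text=True,
)
print(result.stdout)                          # includes the coverage table
print("tests passed:", result.returncode == 0)
```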
The celebrated theorem of A. Fine implies that the CHSH inequality is violated if and only if the joint probability distribution for the quadruples of observables involved in the EPR-Bohm-Bell experiment does not exist, i.e., it is impossible to use the classical probabilistic model (Kolmogorov, 1933). In this note we demonstrate that, in spite of Fine's theorem, the results of observations in the EPR-Bohm-Bell experiment can be described in the classical probabilistic framework. However, the "quantum probabilities" have to be interpreted as conditional probabilities, where conditioning is with respect to fixed experimental settings. Our approach is based on a complete account of the randomness involved in the experiment. The crucial point is that the randomness of the selection of experimental settings has to be taken into account. This approach can be applied to any complex experiment in which statistical data are collected for various (in general incompatible) experimental settings. Finally, we emphasize that our construction of the classical probability space for the EPR-Bohm-Bell experiment cannot be used to support the hidden variable approach to quantum phenomena. The classical random parameter $\omega$ involved in our considerations cannot be identified with the hidden variable $\lambda$ which is used in Bell-type considerations.
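Schematically, and in assumed notation, the conditional reading of the correlations entering the CHSH expression is:

```latex
% Schematic, assumed notation: correlations conditioned on the setting choices
% (a_i, b_j), with the CHSH combination built from these conditional expectations.
\[
  E(a_i, b_j) \;=\; \sum_{x,y=\pm 1} x\,y\;
  P\big(X = x,\, Y = y \,\big|\, \text{settings } a_i, b_j\big),
\]
\[
  S \;=\; E(a_1,b_1) + E(a_1,b_2) + E(a_2,b_1) - E(a_2,b_2).
\]
```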
The recent surge in the field of generative artificial intelligence (GenAI) has the potential to bring about transformative changes across a range of sectors, including software engineering and education. As GenAI tools, such as OpenAI's ChatGPT, are increasingly utilised in software engineering, it becomes imperative to understand the impact of these technologies on the software product. This study employs a methodological approach comprising web scraping and data mining from LeetCode, with the objective of comparing the software quality of Python programs produced by LeetCode users with that generated by GPT-4o. In order to gain insight into these matters, this study addresses the question of whether GPT-4o produces software of superior quality to that produced by humans. The findings indicate that GPT-4o does not present a considerable impediment to code quality, understandability, or runtime when generating code on a limited scale. Indeed, the generated code even exhibits significantly lower values across all three metrics in comparison to the user-written code. However, no significantly superior values were observed for the generated code in terms of memory usage in comparison to the user code, contrary to expectations. Furthermore, we demonstrate that GPT-4o encountered challenges in generalising to problems that were not included in the training data set. This contribution presents a first large-scale study comparing generated code with human-written code from the LeetCode platform across multiple measures, including code quality, code understandability, time behaviour, and resource utilisation. All data is publicly available for further research.
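A minimal sketch of the kind of distribution comparison such a study relies on, here a non-parametric test on a runtime metric; the CSV layout and column names are hypothetical:

```python
# Minimal sketch: non-parametric comparison of GPT-4o and user submissions on
# one metric (runtime). The CSV export and its columns are hypothetical.
import pandas as pd
from scipy.stats import mannwhitneyu

df = pd.read_csv("leetcode_metrics.csv")          # hypothetical export
gpt = df.loc[df["author"] == "gpt-4o", "runtime_ms"]
usr = df.loc[df["author"] == "human", "runtime_ms"]

stat, p = mannwhitneyu(gpt, usr, alternative="two-sided")
print(f"U = {stat:.0f}, p = {p:.4f}")
print("median runtime (ms): gpt-4o =", gpt.median(), ", human =", usr.median())
```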