Technische Universität Dresden
The Image Biomarker Standardisation Initiative (IBSI) aims to improve reproducibility of radiomics studies by standardising the computational process of extracting image biomarkers (features) from images. We have previously established reference values for 169 commonly used features, created a standard radiomics image processing scheme, and developed reporting guidelines for radiomic studies. However, several aspects are not standardised. Here we present a complete version of a reference manual on the use of convolutional filters in radiomics and quantitative image analysis. Filters, such as wavelets or Laplacian of Gaussian filters, play an important part in emphasising specific image characteristics such as edges and blobs. Features derived from filter response maps were found to be poorly reproducible. This reference manual provides definitions for convolutional filters, parameters that should be reported, reference feature values, and tests to verify software compliance with the reference standard.
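As a concrete illustration of the kind of convolutional filter the manual standardises, below is a minimal sketch of computing a Laplacian of Gaussian response map with SciPy; the scale, voxel spacing, and padding mode are illustrative assumptions, not IBSI reference settings.

```python
# Minimal sketch: a Laplacian-of-Gaussian (LoG) response map of the kind the
# IBSI manual standardises. Sigma, spacing, and the mirror padding mode are
# illustrative assumptions, not IBSI reference parameters.
import numpy as np
from scipy import ndimage

image = np.random.default_rng(0).normal(size=(64, 64, 64))  # stand-in volume

sigma_mm, spacing_mm = 3.0, 1.0       # filter scale and isotropic voxel spacing
sigma_vox = sigma_mm / spacing_mm     # convert the scale to voxel units
response = ndimage.gaussian_laplace(image, sigma=sigma_vox, mode="mirror")

# Features (e.g., mean, variance) are then computed on this response map
# inside the region of interest, exactly as for the unfiltered image.
print(response.mean(), response.std())
```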
When conflicting images are presented to either eye, binocular fusion is disrupted. Rather than experiencing a blend of both percepts, often only one eye's image is experienced, whilst the other is suppressed from awareness. Importantly, suppression is transient - the two rival images compete for dominance, with stochastic switches between mutually exclusive percepts occurring every few seconds with law-like regularity. From the perspective of dynamical systems theory, visual rivalry offers an experimentally tractable window into the dynamical mechanisms governing perceptual awareness. In a recently developed visual rivalry paradigm - tracking continuous flash suppression (tCFS) - it was shown that the transition between awareness and suppression is hysteretic, with a higher contrast threshold required for a stimulus to break through suppression into awareness than to be suppressed from awareness. Here, we present an analytically tractable model of visual rivalry that quantitatively explains the hysteretic transition between periods of awareness and suppression in tCFS. Grounded in the theory of neural dynamics, we derive closed-form expressions for the duration of perceptual dominance and suppression, and for the degree of hysteresis (i.e. the depth of perceptual suppression), as a function of model parameters. Finally, our model yields a series of novel behavioural predictions, the first of which - that distributions of dominance and suppression durations during tCFS should be approximately equal - we empirically validate in human psychophysical data.
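For orientation, the following equations give a standard mutual-inhibition-with-adaptation rivalry model from the neural dynamics literature; this is a generic form consistent with the abstract's description, not necessarily the specific model derived in the paper.

```latex
% A standard mutual-inhibition rivalry model with slow adaptation, of the
% general family the abstract refers to; the paper's model may differ.
\begin{align}
  \tau   \dot{E}_i &= -E_i + f\!\left(I_i - \beta E_j - g\, a_i\right),
  \quad i \neq j,\\
  \tau_a \dot{a}_i &= -a_i + E_i,
\end{align}
% E_i: activity of the population encoding eye i's percept; I_i: stimulus
% contrast; \beta: mutual inhibition; a_i: adaptation with gain g and slow
% timescale \tau_a \gg \tau. Hysteresis in tCFS corresponds to the different
% contrasts I_i at which the dominant and suppressed fixed points lose
% stability.
```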
Forecasting complex, chaotic signals is a central challenge across science and technology, with implications ranging from secure communications to climate modeling. Here we demonstrate that magnons - the collective spin excitations in magnetically ordered materials - can serve as an efficient physical reservoir for predicting such dynamics. Using a magnetic microdisk in the vortex state as a magnon-scattering reservoir, we show that intrinsic nonlinear interactions transform a simple microwave input into a high-dimensional spectral output suitable for reservoir computing, in particular for time-series prediction. Trained on the Mackey-Glass benchmark, which generates a cyclic yet aperiodic time series widely used to test machine-learning models, the system achieves accurate and reliable predictions that rival state-of-the-art physical reservoirs. We further identify key design principles: spectral resolution governs the trade-off between dimensionality and accuracy, while combining multiple device geometries systematically improves performance. These results establish magnonics as a promising platform for unconventional computing, offering a path toward scalable and CMOS-compatible hardware for real-time prediction tasks.
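For reference, the Mackey-Glass benchmark is generated by a scalar delay differential equation; the sketch below integrates it with a simple Euler scheme using the conventional chaotic parameter values (step size and initial history are illustrative choices).

```python
# Minimal sketch: generating the Mackey-Glass benchmark series by Euler
# integration of the delay differential equation
#   dx/dt = beta * x(t - tau) / (1 + x(t - tau)**n) - gamma * x(t)
# with the conventional chaotic parameters beta=0.2, gamma=0.1, n=10, tau=17.
import numpy as np

def mackey_glass(n_steps: int, dt: float = 0.1, tau: float = 17.0,
                 beta: float = 0.2, gamma: float = 0.1, n: float = 10.0):
    delay = int(round(tau / dt))
    x = np.full(n_steps + delay, 1.2)      # constant initial history (assumed)
    for t in range(delay, n_steps + delay - 1):
        x_tau = x[t - delay]               # delayed state x(t - tau)
        x[t + 1] = x[t] + dt * (beta * x_tau / (1.0 + x_tau**n) - gamma * x[t])
    return x[delay:]

series = mackey_glass(10_000)              # cyclic yet aperiodic target signal
```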
The fermion sign problem constitutes a fundamental computational bottleneck across a plethora of research fields in physics, quantum chemistry and related disciplines. Recently, it has been suggested to alleviate the sign problem in \emph{ab initio} path integral Molecular Dynamics and path integral Monte Carlo (PIMC) calculations based on the simulation of fictitious identical particles that are represented by a continuous quantum statistics variable $\xi$ [\textit{J.~Chem.~Phys.}~\textbf{157}, 094112 (2022)]. This idea facilitated a host of applications including the interpretation of an x-ray scattering experiment with strongly compressed beryllium at the National Ignition Facility [\textit{Nature Commun.}~\textbf{16}, 5103 (2025)]. In the present work, we express the original isothermal $\xi$-extrapolation method as a special case of a truncated Taylor series expansion around the $\xi=0$ limit of distinguishable particles. We derive new PIMC estimators that allow us to evaluate the Taylor coefficients up to arbitrary order and we carry out extensive new PIMC simulations of the warm dense electron gas to systematically analyze the sign problem from this new perspective. This gives us important insights into the applicability of the $\xi$-extrapolation method for different levels of quantum degeneracy in terms of the Taylor series radius of convergence. Moreover, the direct PIMC evaluation of the $\xi$-derivatives, in principle, removes the necessity for simulations at different values of $\xi$ and can facilitate more efficient simulations that are designed to maximize compute time in those regions of the full permutation space that contribute most to the final Taylor estimate of the fermionic expectation value of interest.
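In equation form, the truncated Taylor view described in the abstract reads as follows, assuming the usual convention of the method that $\xi = 1$, $0$, and $-1$ correspond to bosons, distinguishable particles, and fermions, respectively:

```latex
% Truncated Taylor expansion of an observable A around the distinguishable
% (xi = 0) limit; the fermionic value is the series evaluated at xi = -1.
\begin{equation}
  A(\xi) \approx \sum_{n=0}^{N} \frac{1}{n!}
  \left. \frac{\partial^n A}{\partial \xi^n} \right|_{\xi=0} \xi^n,
  \qquad
  A_{\mathrm{fermion}} = A(\xi = -1),
\end{equation}
% where the derivatives at xi = 0 are evaluated with the new PIMC estimators,
% so that, in principle, a single simulation of distinguishable particles
% replaces a set of simulations at different xi values.
```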
Hyperbolic space has become a popular choice of manifold for representation learning of various data types, from tree-like structures and text to graphs. Building on the success of deep learning with prototypes in Euclidean and hyperspherical spaces, a few recent works have proposed hyperbolic prototypes for classification. Such approaches enable effective learning in low-dimensional output spaces and can exploit hierarchical relations amongst classes, but require privileged information about class labels to position the hyperbolic prototypes. In this work, we propose Hyperbolic Busemann Learning. The main idea behind our approach is to position prototypes on the ideal boundary of the Poincaré ball, which does not require prior label knowledge. To be able to compute proximities to ideal prototypes, we introduce the penalised Busemann loss. We provide theory supporting the use of ideal prototypes and the proposed loss by proving its equivalence to logistic regression in the one-dimensional case. Empirically, we show that our approach provides a natural interpretation of classification confidence, while outperforming recent hyperspherical and hyperbolic prototype approaches.
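For intuition, the Busemann function to an ideal point on the boundary of the Poincaré ball has a standard closed form, sketched below in NumPy; the penalty term of the paper's penalised Busemann loss is deliberately omitted here, as its exact form is given in the paper.

```python
# Minimal sketch: the Busemann function to an ideal point p on the boundary of
# the Poincare ball (||p|| = 1), measuring proximity between an embedded point
# x (||x|| < 1) and a prototype placed at infinity. The penalty term of the
# paper's penalised Busemann loss is omitted; see the paper for its form.
import numpy as np

def busemann(x: np.ndarray, p: np.ndarray) -> float:
    """Standard closed form: b_p(x) = log(||p - x||^2 / (1 - ||x||^2))."""
    return np.log(np.sum((p - x) ** 2) / (1.0 - np.sum(x ** 2)))

p = np.array([1.0, 0.0])      # ideal prototype on the boundary
x = np.array([0.3, 0.1])      # embedding inside the ball
print(busemann(x, p))         # smaller value = closer to the prototype
```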
Affiliations: University of Cambridge; University of Bern; University of Edinburgh; ETH Zürich; Technische Universität Dresden; University of Pisa; Stockholm University; Sorbonne Université; University of Turku; Leiden University; University of Geneva; University of Belgrade; University of Vienna; University of Leicester; University of Vigo; Observatoire de Paris; Université de Liège; INAF - Osservatorio Astrofisico di Torino; University of Groningen; University of Bath; Lund University; University of Lausanne; Instituto de Astrofísica de Canarias; University of Antioquia; European Space Agency; Universidad de Valparaíso; Université de Mons; ELTE Eötvös Loránd University; University of Bordeaux; Observatoire de la Côte d'Azur; Faculdade de Ciências da Universidade de Lisboa; University of Barcelona; Max Planck Institute for Astronomy; National Observatory of Athens; Université de Paris-Saclay; Instituto de Astrofísica de Andalucía; Université de Franche-Comté; INAF - Osservatorio Astronomico di Roma; Katholieke Universiteit Leuven; Royal Observatory of Belgium; Space Research Institute; Université de Rennes; University of Aarhus; Konkoly Observatory; Tartu Observatory; Hellenic Open University; ARI, Zentrum für Astronomie der Universität Heidelberg; Copernicus Astronomical Center; ESAC, Villanueva de la Cañada; Astronomical Observatory of Turin; Université de Besançon; CENTRA, Universidade de Lisboa; Université de Nice; Observatoire de la Côte d'Azur (CNRS); INAF - Osservatorio Astronomico di Catania; Université catholique de Louvain; Université de Toulouse; Université Libre de Bruxelles; INAF - Osservatorio Astronomico di Capodimonte; Université de Lorraine; Aix-Marseille Université; Université de Strasbourg; Université de Lille; INAF - Osservatorio Astrofisico di Arcetri; INAF - Osservatorio Astronomico di Padova; Université de Montpellier; INAF - Osservatorio di Astrofisica e Scienza dello Spazio di Bologna
The Gaia Galactic survey mission is designed and optimized to obtain astrometry, photometry, and spectroscopy of nearly two billion stars in our Galaxy. Yet as an all-sky multi-epoch survey, Gaia also observes several million extragalactic objects down to a magnitude of G ~ 21 mag. Due to the nature of the Gaia onboard selection algorithms, these are mostly point-source-like objects. Using data provided by the satellite, we have identified quasar and galaxy candidates via supervised machine learning methods, and estimate their redshifts using the low-resolution BP/RP spectra. We further characterise the surface brightness profiles of host galaxies of quasars and of galaxies from pre-defined input lists. Here we give an overview of the processing of extragalactic objects, describe the data products in Gaia DR3, and analyse their properties. Two integrated tables contain the main results for a high-completeness but low-purity (50-70%) set of 6.6 million candidate quasars and 4.8 million candidate galaxies. We provide queries that select purer sub-samples containing 1.9 million probable quasars and 2.9 million probable galaxies (both at 95% purity). We also use high-quality BP/RP spectra of 43 thousand high-probability quasars over the redshift range 0.05-4.36 to construct a composite quasar spectrum spanning rest-frame wavelengths from 72 to 100 nm.
We develop a minimal, timeless game-theoretic representation of the mass-geometry relation. An "Object" (mass) and "Space" (geometry) choose strategies in a static normal-form game; utilities encode stability as mutual consistency rather than dynamical payoffs. In a 2x2 toy model, the equilibria correspond to "light-flat" and "heavy-curved" configurations; a continuous variant clarifies when only trivial interior equilibria appear versus a continuum along a matching ray. Philosophically, the point is representational: a global description may be static while the experience of temporal flow for embedded observers arises from informational asymmetry, coarse-graining, and records. The framework separates time as parameter from relational constraint without committing to specific physical dynamics.
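A minimal sketch of such a 2x2 game is given below; the payoff values are hypothetical, chosen only so that mutual consistency is rewarded and the pure equilibria land on the (light, flat) and (heavy, curved) strategy pairs described above.

```python
# Illustrative sketch of the 2x2 "Object vs. Space" toy game. Payoffs are
# hypothetical: mutual consistency is rewarded, so the pure Nash equilibria
# sit at (light, flat) and (heavy, curved), as in the abstract.
import numpy as np

rows, cols = ["light", "heavy"], ["flat", "curved"]
# U[i, j] = (Object payoff, Space payoff) when Object plays rows[i],
# Space plays cols[j]; consistent pairs score 1, mismatched pairs 0.
U = np.array([[(1, 1), (0, 0)],
              [(0, 0), (1, 1)]])

def pure_nash(U):
    eq = []
    for i in range(2):
        for j in range(2):
            obj_best = U[i, j, 0] >= U[1 - i, j, 0]   # Object cannot gain by deviating
            spc_best = U[i, j, 1] >= U[i, 1 - j, 1]   # Space cannot gain by deviating
            if obj_best and spc_best:
                eq.append((rows[i], cols[j]))
    return eq

print(pure_nash(U))   # [('light', 'flat'), ('heavy', 'curved')]
```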
The goal of this article is to review developments regarding the use of ultra-cold atoms as quantum simulators. Special emphasis is placed on relativistic quantum phenomena, which are presumably most interesting for the audience of this journal. After a brief introduction to the main idea of quantum simulators and the basic physics of ultra-cold atoms, relativistic quantum phenomena of linear fields are discussed, including Hawking radiation, the Unruh effect, cosmological particle creation, the Gibbons-Hawking and Ginzburg effects, super-radiance, Sauter-Schwinger and Breit-Wheeler pair creation, as well as the dynamical Casimir effect. After that, the focus is shifted to phenomena of non-linear fields, such as the sine-Gordon model, the Kibble-Zurek mechanism, false-vacuum decay, and quantum back-reaction. In order to place everything into proper context, the basic underlying mechanisms of these phenomena are briefly recapitulated before their simulators are discussed. Even though an effort is made to provide as fair a review as possible, there can be no claim of completeness, and the selection as well as the relative weighting of the topics may well reflect the personal views and taste of the author.
Multiple access communication systems enable numerous users to share common communication resources, playing a crucial role in wireless networks. With the emergence of the sixth generation (6G) and beyond communication networks, supporting massive machine-type communications with sporadic activity patterns is expected to become a critical challenge. Unsourced random access (URA) has emerged as a promising paradigm to address this challenge by decoupling user identification from data transmission through the use of a common codebook. This survey offers a comprehensive overview of URA solutions, encompassing both theoretical foundations and practical applications. We present a systematic classification of URA solutions across three primary channel models: Gaussian multiple access channels (GMACs), single-antenna fading channels, and multiple-input multiple-output (MIMO) fading channels. For each category, we analyze and compare state-of-the-art solutions in terms of performance, complexity, and practical feasibility. Additionally, we discuss critical challenges such as interference management, computational complexity, and synchronization. The survey concludes with promising future research directions and potential methods to address existing limitations, providing a roadmap for researchers and practitioners in this rapidly evolving field.
We present an approach for efficiently simulating strongly damped quantum systems subjected to periodic driving, employing a periodic matrix product operator representation of the influence functional. This representation enables the construction of a numerically exact Floquet propagator that captures the non-Markovian open system dynamics, thus providing a dissipative analogue to the Floquet Hamiltonian of driven isolated quantum systems. We apply this method to study the asymptotic heating of a reservoir in spin-boson models, characterizing the deviation from equilibrium conditions. Moreover, we show how a local driving of two qubits can be utilized to stabilize a transient entanglement buildup of the qubits originating from the interaction with a common environment. Our results make it possible to directly study both stationary and transient dynamics of strongly damped and driven quantum systems within a transparent theoretical and numerical framework.
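Schematically, the dissipative Floquet propagator plays the role of a one-period stroboscopic map; the generic form is written below, while in the paper it is constructed from the periodic matrix product operator representation of the influence functional rather than from a time-local generator.

```latex
% The dissipative Floquet propagator referred to in the abstract, written
% schematically as a one-period stroboscopic map F for the reduced state.
\begin{equation}
  \rho\big((n+1)T\big) = \mathcal{F}\,\rho(nT), \qquad
  \rho_{\infty} = \lim_{n\to\infty} \mathcal{F}^{\,n} \rho(0),
\end{equation}
% T is the driving period; rho_infty is the fixed point of F (its
% eigenoperator with eigenvalue 1), capturing the asymptotic behaviour of
% the strongly damped, periodically driven system.
```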
Inspired by the program of discrete holography, we show that Jackiw-Teitelboim (JT) gravity on a hyperbolic tiling of Euclidean AdS$_2$ gives rise to an Ising model on the dual lattice, subject to a topological constraint. The Ising model involves an asymptotic boundary condition with spins pointing opposite to the magnetic field. The topological constraint enforces a single domain wall between the spins of opposite direction, with the topology of a circle. The resolvent of JT gravity is related to the free energy of this Ising model, and the classical limit of JT gravity corresponds to the Ising low-temperature limit. We study this Ising model through a Monte Carlo approach and a mean-field approximation. For finite truncations of the infinite hyperbolic lattice, the map between both theories is only valid in a regime in which the domain wall has a finite size. For the extremal cases of large positive or negative coupling, the domain wall either shrinks to zero or touches the boundary of the lattice. This behavior is confirmed by the mean-field analysis. We expect that our results may be used as a starting point for establishing a holographic matrix model duality for discretized gravity.
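To make the Monte Carlo side concrete, below is a schematic Metropolis sketch of an Ising model with boundary spins fixed opposite to the field; a square lattice stands in for the dual of the hyperbolic tiling, and the paper's topological single-domain-wall constraint is not enforced here.

```python
# Schematic Metropolis sketch of the constrained Ising setup: boundary spins
# fixed opposite to the field h, so a domain wall forms around the
# field-aligned interior. A square lattice is a stand-in for the dual of the
# hyperbolic tiling; the single circular domain wall constraint is omitted.
import numpy as np

rng = np.random.default_rng(1)
L, J, h, beta = 16, 1.0, 0.5, 1.5
s = np.ones((L, L))                              # interior aligned with h
s[0, :] = s[-1, :] = s[:, 0] = s[:, -1] = -1     # fixed boundary, opposite to h

for _ in range(200_000):
    i, j = rng.integers(1, L - 1, size=2)        # only interior spins may flip
    nb = s[i - 1, j] + s[i + 1, j] + s[i, j - 1] + s[i, j + 1]
    dE = 2.0 * s[i, j] * (J * nb + h)            # cost of flipping s[i, j]
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        s[i, j] *= -1

print("interior magnetisation:", s[1:-1, 1:-1].mean())
```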
Advances toward higher levels of artificial intelligence have always come at the cost of escalating computational resource consumption, which requires novel solutions to meet the exponential growth of AI computing demand. Neuromorphic hardware takes inspiration from how the brain processes information and promises energy-efficient computing of AI workloads. Despite its potential, neuromorphic hardware has not found its way into commercial AI data centers. In this article, we analyze the underlying reasons for this and derive requirements and guidelines to promote neuromorphic systems for efficient and sustainable cloud computing: We first review currently available neuromorphic hardware systems and collect examples where neuromorphic solutions outperform conventional AI processing on CPUs and GPUs. Next, we identify applications, models, and algorithms which are commonly deployed in AI data centers as further directions for neuromorphic algorithms research. Last, we derive requirements and best practices for the hardware and software integration of neuromorphic systems into data centers. With this article, we hope to increase awareness of the challenges of integrating neuromorphic hardware into data centers and to guide the community toward sustainable and energy-efficient AI at scale.
This paper describes a novel Python package, named causalgraph, for modeling and saving causal graphs embedded in knowledge graphs. The package has been designed to provide an interface between causal disciplines such as causal discovery and causal inference. With this package, users can create and save causal graphs and export the generated graphs for use in other graph-based packages. The main advantage of the proposed package is its ability to facilitate the linking of additional information and metadata to causal structures. In addition, the package offers a variety of functions for graph modeling and plotting, such as editing, adding, and deleting nodes and edges. It is also compatible with widely used graph data science libraries such as NetworkX and Tigramite and incorporates a specially developed causalgraph ontology in the background. This paper provides an overview of the package's main features, functionality, and usage examples, enabling the reader to use the package effectively in practice.
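Since the exact causalgraph API is not reproduced here, the sketch below uses NetworkX - with which the package is stated to be compatible - to illustrate the described workflow of building a causal graph, attaching metadata, and exporting it for use in other graph-based tooling.

```python
# Workflow sketch in NetworkX (NOT the causalgraph API, which may differ):
# build a causal DAG, attach metadata to nodes and edges, and serialise it
# for exchange with other graph packages. Node and edge names are made up.
import json
import networkx as nx

g = nx.DiGraph()
g.add_node("temperature", unit="K", description="ambient temperature")
g.add_node("failure_rate", unit="1/h")
g.add_edge("temperature", "failure_rate", confidence=0.9, source="expert")

assert nx.is_directed_acyclic_graph(g)               # causal graphs stay acyclic
print(json.dumps(nx.node_link_data(g), indent=2))    # export for other tooling
```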
A hallmark of biological tissues, viewed as complex cellular materials, is the active generation of mechanical stresses by cellular processes, such as cell divisions. Each cellular event generates a force dipole that deforms the surrounding tissue. Therefore, a quantitative description of these force dipoles, and their consequences for tissue mechanics, is one of the central problems in understanding overall tissue mechanics. In this work, we analyze previously published experimental data on the fruit fly \textit{D. melanogaster} wing epithelium to quantitatively describe the deformation fields induced by a cell-scale force dipole. We find that the measured deformation field can be explained by a simple model of the fly epithelium as a linearly elastic sheet. This allows us to use measurements of the strain field around cellular events, such as cell divisions, to infer the magnitude and dynamics of the mechanical forces they generate. In particular, we find that cell divisions exert a transient isotropic force dipole field, corresponding to the temporary localisation of the cell nucleus at the tissue surface during the division, and a traceless-symmetric force dipole field that remains detectable in the tissue strain field for up to about 3.5 hours after the division. This is the timescale on which elastic strains are erased by other mechanical processes, and it therefore corresponds to the tissue fluidization timescale. In summary, we have developed a method to infer the force dipoles induced by cell divisions by observing the strain field in the surrounding tissue. Using this method, we quantitatively characterize the mechanical forces generated during a cell division and their effects on tissue mechanics.
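The linear-elastic relation underlying this inference can be written compactly: the displacement field of a point force dipole follows from the derivative of the elastic sheet's Green's function (schematic form; the paper's precise conventions may differ).

```latex
% Displacement field of a point force dipole D_{jk} at position r_0 in a
% linearly elastic sheet, written with the sheet's Green's function G_{ij}
% (schematic form; sign and normalisation conventions may differ).
\begin{equation}
  u_i(\mathbf{r}) = -\,\partial_k G_{ij}(\mathbf{r} - \mathbf{r}_0)\, D_{jk},
  \qquad
  D_{jk} = D_{\mathrm{iso}}\,\delta_{jk} + \tilde{D}_{jk},
\end{equation}
% D_iso is the transient isotropic part associated with the nucleus moving to
% the tissue surface, and \tilde{D}_{jk} is the traceless-symmetric part whose
% strain signature persists for about 3.5 hours.
```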
Optical tactile sensors have recently become popular. They provide high spatial resolution but struggle to offer fine temporal resolution. To overcome this shortcoming, we study the idea of replacing the RGB camera with an event-based camera and introduce a new event-based optical tactile sensor called Evetac. Alongside the hardware design, we develop touch processing algorithms to process its measurements online at 1000 Hz. We devise an efficient algorithm to track the elastomer's deformation through the imprinted markers despite the sensor's sparse output. Benchmarking experiments demonstrate Evetac's capabilities of sensing vibrations up to 498 Hz, reconstructing shear forces, and significantly reducing data rates compared to RGB optical tactile sensors. Moreover, Evetac's output and the marker tracking provide meaningful features for learning data-driven slip detection and prediction models. The learned models form the basis for a robust and adaptive closed-loop grasp controller capable of handling a wide range of objects. We believe that fast and efficient event-based tactile sensors like Evetac will be essential for bringing human-like manipulation capabilities to robotics. The sensor design is open-sourced at this https URL.
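As a rough illustration of event-based marker tracking of this kind, the sketch below nudges each marker toward the centroid of nearby events from a fixed-rate event slice; the data layout, neighbourhood radius, and smoothing constant are assumptions, not Evetac's actual algorithm.

```python
# Generic sketch of event-based marker tracking (NOT Evetac's algorithm):
# events near a marker's last position vote on its displacement, with the
# update applied once per 1 ms event slice (i.e., at 1 kHz).
import numpy as np

def update_markers(markers: np.ndarray, events: np.ndarray,
                   radius: float = 3.0, alpha: float = 0.2) -> np.ndarray:
    """markers: (M, 2) pixel positions; events: (N, 2) event coordinates from
    one slice. Each marker drifts toward the centroid of nearby events."""
    updated = markers.copy()
    for m in range(len(markers)):
        d = np.linalg.norm(events - markers[m], axis=1)
        near = events[d < radius]
        if len(near):                      # exponential smoothing of position
            updated[m] += alpha * (near.mean(axis=0) - markers[m])
    return updated

markers = np.array([[10.0, 10.0], [30.0, 25.0]])
events = np.array([[11.0, 10.5], [10.5, 9.5], [31.0, 25.0]])
print(update_markers(markers, events))
```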
Multimodal large language models (LLMs) have demonstrated impressive capabilities in generating high-quality images from textual instructions. However, their performance in generating scientific images - a critical application for accelerating scientific progress - remains underexplored. In this work, we address this gap by introducing ScImage, a benchmark designed to evaluate the multimodal capabilities of LLMs in generating scientific images from textual descriptions. ScImage assesses three key dimensions of understanding: spatial, numeric, and attribute comprehension, as well as their combinations, focusing on the relationships between scientific objects (e.g., squares, circles). We evaluate five models - GPT-4o, Llama, AutomaTikZ, Dall-E, and StableDiffusion - using two modes of output generation: code-based outputs (Python, TikZ) and direct raster image generation. Additionally, we examine four input languages: English, German, Farsi, and Chinese. Our evaluation, conducted with 11 scientists across three criteria (correctness, relevance, and scientific accuracy), reveals that while GPT-4o produces outputs of decent quality for simpler prompts involving individual dimensions such as spatial, numeric, or attribute understanding in isolation, all models face challenges in this task, especially for more complex prompts.
We explore the electronic structure of paramagnetic CrSBr by comparative first-principles calculations and angle-resolved photoemission spectroscopy. We theoretically approximate the paramagnetic phase using a supercell hosting spin configurations with broken long-range order and applying quasiparticle self-consistent $GW$ theory, without and with the inclusion of excitonic vertex corrections to the screened Coulomb interaction (QS$GW$ and QS$G\hat{W}$, respectively). Comparing the quasiparticle band structure calculations to angle-resolved photoemission data collected at 200 K results in excellent agreement. This allows us to qualitatively explain the significant broadening of some bands as arising from the broken magnetic long-range order and/or electronic dispersion perpendicular to the quasi-two-dimensional layers of the crystal structure. The experimental band gap at 200 K is found to be at least 1.51 eV. At lower temperatures, no photoemission data can be collected as a result of charging effects, pointing towards a significantly larger gap, which is consistent with the calculated band gap of $\approx 2.1$ eV.
We introduce a comprehensive approach to enhance the security, privacy, and sensing capabilities of integrated sensing and communications (ISAC) systems by leveraging random frequency agility (RFA) and random pulse repetition interval (PRI) agility (RPA) techniques. The combination of these techniques, which we refer to collectively as random frequency and PRI agility (RFPA), with channel reciprocity-based key generation (CRKG) obfuscates both Doppler frequency and PRIs, significantly hindering the chances that passive adversaries can successfully estimate radar parameters. In addition, a hybrid information embedding method integrating amplitude shift keying (ASK), phase shift keying (PSK), index modulation (IM), and spatial modulation (SM) is incorporated to increase the achievable bit rate of the system significantly. Next, a sparse-matched filter receiver design is proposed to efficiently decode the embedded information with a low bit error rate (BER). Finally, a novel RFPA-based secret generation scheme using CRKG ensures secure code creation without a coordinating authority. The improved range and velocity estimation and reduced clutter effects achieved with the method are demonstrated via the evaluation of the ambiguity function (AF) of the proposed waveforms.
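For reference, the (narrowband) ambiguity function used in this evaluation has the standard form

```latex
% Standard narrowband ambiguity function of a complex baseband waveform s(t),
% as used to evaluate the range/Doppler behaviour of the RFPA waveforms.
\begin{equation}
  \chi(\tau, f_d) = \int_{-\infty}^{\infty} s(t)\, s^{*}(t - \tau)\,
  e^{\,j 2 \pi f_d t}\, \mathrm{d}t,
\end{equation}
% whose magnitude |chi(tau, f_d)| characterises the joint range (tau) and
% Doppler (f_d) resolution as well as sidelobe and clutter behaviour.
```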