We consider axially symmetric, rotating boson stars. Their flat space limits represent spinning Q-balls. We discuss their properties and determine their domain of existence. Q-balls and boson stars are stationary solutions and exist only in a limited frequency range. The coupling to gravity gives rise to a spiral-like frequency dependence of the boson stars. We address the flat space limit and the limit of strong gravitational coupling. For comparison we also determine the properties of spherically symmetric Q-balls and boson stars.
Probabilistic circuits (PCs) are powerful probabilistic models that enable exact and tractable inference, making them highly suitable for probabilistic reasoning and inference tasks. While representation learning is dominant in neural networks, it remains underexplored for PCs, with prior approaches relying on external neural embeddings or activation-based encodings. To address this gap, we introduce autoencoding probabilistic circuits (APCs), a novel framework leveraging the tractability of PCs to model probabilistic embeddings explicitly. APCs extend PCs by jointly modeling data and embeddings, obtaining embedding representations through tractable probabilistic inference. The PC encoder natively handles arbitrary patterns of missing data and integrates seamlessly with a neural decoder in a hybrid, end-to-end trainable architecture enabled by differentiable sampling. Our empirical evaluation demonstrates that APCs outperform existing PC-based autoencoding methods in reconstruction quality, generate embeddings competitive with those of neural autoencoders, and exhibit superior robustness in handling missing data compared to neural autoencoders. These results highlight APCs as a powerful and flexible representation learning method that exploits the probabilistic inference capabilities of PCs, opening promising directions for robust inference, out-of-distribution detection, and knowledge distillation.
In most practical situations, the compression or transmission of images and videos creates distortions that will eventually be perceived by a human observer. Vice versa, image and video restoration techniques, such as inpainting or denoising, aim to enhance the quality of experience of human viewers. Correctly assessing the similarity between an image and an undistorted reference image as subjectively experienced by a human viewer can thus lead to significant improvements in any transmission, compression, or restoration system. This paper introduces the Haar wavelet-based perceptual similarity index (HaarPSI), a novel and computationally inexpensive similarity measure for full reference image quality assessment. The HaarPSI utilizes the coefficients obtained from a Haar wavelet decomposition to assess local similarities between two images, as well as the relative importance of image areas. The consistency of the HaarPSI with the human quality of experience was validated on four large benchmark databases containing thousands of differently distorted images. On these databases, the HaarPSI achieves higher correlations with human opinion scores than state-of-the-art full reference similarity measures like the structural similarity index (SSIM), the feature similarity index (FSIM), and the visual saliency-based index (VSI). Along with the simple computational structure and the short execution time, these experimental results suggest a high applicability of the HaarPSI in real world tasks.
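The core idea of comparing local Haar wavelet responses can be illustrated with a toy sketch: take the magnitudes of one-level horizontal and vertical Haar high-pass responses of two images and combine them with an SSIM-style ratio. This is an editorial illustration only, not the published HaarPSI (which additionally weights image areas by importance, uses several scales, and applies a logistic mapping); the function names and the stabilizing constant `c` are assumptions.

```python
import numpy as np

def haar_highpass(img):
    """Magnitudes of one-level horizontal and vertical Haar high-pass responses."""
    h = np.array([1.0, -1.0]) / np.sqrt(2.0)
    horiz = np.apply_along_axis(lambda r: np.convolve(r, h, mode="same"), 1, img)
    vert = np.apply_along_axis(lambda col: np.convolve(col, h, mode="same"), 0, img)
    return np.abs(horiz), np.abs(vert)

def haar_similarity(a, b, c=30.0):
    """SSIM-style similarity of Haar magnitudes (toy measure, NOT HaarPSI)."""
    sims = [(2 * ma * mb + c) / (ma**2 + mb**2 + c)
            for ma, mb in zip(haar_highpass(a), haar_highpass(b))]
    return float(np.mean(sims))
```

Each local ratio equals 1 exactly when the two response magnitudes agree, so identical images score 1 and any distortion pulls the mean below 1.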
We study the Besov regularity of wavelet series on $\mathbb{R}^d$ with randomly chosen coefficients. More precisely, each coefficient is a product of a random factor and a parameterized deterministic factor (decaying with the scale $j$ and the norm of the shift $m$). Compared to the literature, we impose relatively mild conditions on the moments of the random variables in order to characterize the almost sure convergence of the wavelet series in Besov spaces $B^s_{p,q}(\mathbb{R}^d)$ and the finiteness of the moments as well as of the moment generating function of the Besov norm. In most cases, we achieve a complete characterization, i.e., the derived conditions are both necessary and sufficient.
Research software has become a central asset in academic research. It optimizes existing and enables new research methods, implements and embeds research knowledge, and constitutes an essential research product in itself. Research software must be sustainable in order to understand, replicate, reproduce, and build upon existing research or conduct new research effectively. In other words, software must be available, discoverable, usable, and adaptable to new needs, both now and in the future. Research software therefore requires an environment that supports sustainability. Hence, a change is needed in the way research software development and maintenance are currently motivated, incentivized, funded, structurally and infrastructurally supported, and legally treated. Failing to do so will threaten the quality and validity of research. In this paper, we identify challenges for research software sustainability in Germany and beyond, in terms of motivation, selection, research software engineering personnel, funding, infrastructure, and legal aspects. Besides researchers, we specifically address political and academic decision-makers to increase awareness of the importance and needs of sustainable research software practices. In particular, we recommend strategies and measures to create an environment for sustainable research software, with the ultimate goal of ensuring that software-driven research is valid, reproducible, and sustainable, and that software is recognized as a first-class citizen in research. This paper is the outcome of two workshops run in Germany in 2019, at deRSE19 - the first International Conference of Research Software Engineers in Germany - and a dedicated DFG-supported follow-up workshop in Berlin.
Spin models of markets inspired by physics models of magnetism, such as the Ising model, allow for the study of the collective dynamics of interacting agents in a market. The number of possible states has mostly been limited to two (buy or sell) or three options. However, herding effects among competing stocks and the collective dynamics of a whole market may escape the simplest models. Here I study a q-spin Potts model version of a simple Ising market model to represent the dynamics of a stock market index in a spin model. As a result, a self-organized gain-loss asymmetry is observed in the time series of an index variable composed of the stocks in this market.
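The Potts-market setup can be sketched with a minimal Metropolis simulation: agents are q-state spins on a ring, aligned neighbours lower the energy (herding), and a toy index tracks the occupation of one state. The Hamiltonian, ring topology, and index definition here are illustrative assumptions, not the paper's exact model.

```python
import math
import random

def potts_market_step(spins, q, beta, J=1.0):
    """One Metropolis sweep of a toy q-state Potts 'market' on a ring."""
    n = len(spins)
    for _ in range(n):
        i = random.randrange(n)
        proposal = random.randrange(q)
        left, right = spins[(i - 1) % n], spins[(i + 1) % n]
        # each aligned neighbour lowers the energy by J (ferromagnetic herding)
        e_old = -J * ((spins[i] == left) + (spins[i] == right))
        e_new = -J * ((proposal == left) + (proposal == right))
        if e_new <= e_old or random.random() < math.exp(-beta * (e_new - e_old)):
            spins[i] = proposal

def index_value(spins, q):
    """Toy 'index': excess occupation of state 0 over the uniform level 1/q."""
    return spins.count(0) / len(spins) - 1.0 / q
```

At large `beta` (low temperature) spins cluster into herds and the index drifts away from zero; at small `beta` it fluctuates around the uniform level.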
Hyperedge replacement (HR) grammars can generate NP-complete graph languages, which makes parsing hard even for fixed HR languages. Therefore, we study predictive shift-reduce (PSR) parsing that yields efficient parsers for a subclass of HR grammars, by generalizing the concepts of SLR(1) string parsing to graphs. We formalize the construction of PSR parsers and show that it is correct. PSR parsers run in linear space and time, and are more efficient than the predictive top-down (PTD) parsers recently developed by the authors.
We describe a variational approach to solving Anderson impurity models by means of exact diagonalization. Optimized parameters of a discretized auxiliary model are obtained on the basis of the Peierls-Feynman-Bogoliubov principle. Thereby, the variational approach resolves ambiguities related to the bath discretization, which is generally necessary to make Anderson impurity models tractable by exact diagonalization. The choice of variational degrees of freedom made here allows systematic improvements of total energies over mean field decouplings like Hartree-Fock. Furthermore, our approach allows us to embed arbitrary bath discretization schemes in total energy calculations and to systematically optimize and improve on traditional routes to the discretization problem such as fitting of hybridization functions on Matsubara frequencies. Benchmarks for a single-orbital Anderson model demonstrate that the variational exact diagonalization method accurately reproduces free energies as well as several single- and two-particle observables obtained from an exact solution. Finally, we demonstrate the applicability of the variational exact diagonalization approach to realistic five-orbital problems with the example system of Co impurities in bulk Cu and compare to continuous-time Monte Carlo calculations. The accuracy of established bath discretization schemes is assessed in the framework of the variational approach introduced here.
Semiconductor membranes find their widespread use in various research fields targeting medical, biological, environmental, and optical applications. Often such membranes derive their functionality from an inherent nanopatterning, which renders the determination of their, e.g., optical, electronic, mechanical, and thermal properties a challenging task. In this work we demonstrate the non-invasive, all-optical thermal characterization of around 800-nm-thick and 150-$\mu$m-wide membranes that consist of wurtzite GaN and a stack of In$_{0.15}$Ga$_{0.85}$N quantum wells as a built-in light source. Due to their application in photonics such membranes are bright light emitters, which challenges their non-invasive thermal characterization by only optical means. As a solution, we combine two-laser Raman thermometry with (time-resolved) photoluminescence measurements to extract the in-plane (i.e., $c$-plane) thermal conductivity $\kappa_{\text{in-plane}}$ of our membranes. Based on this approach, we can disentangle the entire laser-induced power balance during our thermal analysis, meaning that all fractions of reflected, scattered, transmitted, and reemitted light are considered. As a result of our thermal imaging via Raman spectroscopy, we obtain $\kappa_{\text{in-plane}} = 165^{+16}_{-14}$ W m$^{-1}$ K$^{-1}$ for our best membrane, which compares well to our simulations yielding $\kappa_{\text{in-plane}} = 177$ W m$^{-1}$ K$^{-1}$ based on an ab initio solution of the linearized phonon Boltzmann transport equation. Our work presents a promising pathway towards thermal imaging at cryogenic temperatures, e.g., when aiming to elucidate experimentally different phonon transport regimes via the recording of non-Fourier temperature distributions.
Stand-up motions are an indispensable part of humanoid robot soccer. A robot incapable of standing up by itself is removed from the game for some time. In this paper, we present our stand-up motions for the NAO robot. Our approach dates back to 2019 and has been evaluated and slightly expanded over the past six years. We claim that the main reason for failed stand-up attempts is large errors in the executed joint positions. By addressing such problems, either by executing special motions to free stuck limbs such as the arms or by compensating large errors with other joints, we significantly increased the overall success rate of our stand-up routine. The motions presented in this paper are also used by several other teams in the Standard Platform League, which thereby achieve similar success rates, as shown in an analysis of videos from multiple tournaments.
Turbulence is the major cause of friction losses in transport processes and it is responsible for a drastic drag increase in flows over bounding surfaces. While much effort is invested into developing ways to control and reduce turbulence intensities, so far no methods exist to altogether eliminate turbulence if velocities are sufficiently large. We demonstrate for pipe flow that appropriate distortions to the velocity profile lead to a complete collapse of turbulence and subsequently friction losses are reduced by as much as 95%. Counterintuitively, the return to laminar motion is accomplished by initially increasing turbulence intensities or by transiently amplifying wall shear. The usual measures of turbulence levels, such as the Reynolds number (Re) or shear stresses, do not account for the subsequent relaminarization. Instead an amplification mechanism measuring the interaction between eddies and the mean shear is found to set a threshold below which turbulence is suppressed beyond recovery.
The Ehrhart polynomial of a lattice polytope $P$ encodes information about the number of integer lattice points in positive integral dilates of $P$. The $h^\ast$-polynomial of $P$ is the numerator polynomial of the generating function of its Ehrhart polynomial. A zonotope is any projection of a higher dimensional cube. We give a combinatorial description of the $h^\ast$-polynomial of a lattice zonotope in terms of refined descent statistics of permutations and prove that the $h^\ast$-polynomial of every lattice zonotope has only real roots and therefore unimodal coefficients. Furthermore, we present a closed formula for the $h^\ast$-polynomial of a zonotope in matroidal terms which is analogous to a result by Stanley (1991) on the Ehrhart polynomial. Our results hold not only for $h^\ast$-polynomials but carry over to general combinatorial positive valuations. Moreover, we give a complete description of the convex hull of all $h^\ast$-polynomials of zonotopes in a given dimension: it is a simplicial cone spanned by refined Eulerian polynomials.
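A concrete instance of the descent statistics mentioned above: the $h^\ast$-polynomial of the unit cube $[0,1]^n$ (the simplest lattice zonotope) is the classical Eulerian polynomial, whose coefficients count permutations of $\{1,\dots,n\}$ by their number of descents. A brute-force sketch (fine for small $n$; the refined statistics of the paper are not reproduced here):

```python
from itertools import permutations

def eulerian_polynomial(n):
    """Coefficients of the Eulerian polynomial A_n(t), indexed by descent count.

    A_n(t) is the h*-polynomial of the unit cube [0,1]^n."""
    coeffs = [0] * n
    for perm in permutations(range(1, n + 1)):
        descents = sum(1 for i in range(n - 1) if perm[i] > perm[i + 1])
        coeffs[descents] += 1
    return coeffs
```

For example, `eulerian_polynomial(4)` returns `[1, 11, 11, 1]`: symmetric and unimodal, consistent with the real-rootedness result stated in the abstract.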
This set of notes re-proves known results on weighted automata (over a field, also known as multiplicity automata). The text offers a unified view on theorems and proofs that have appeared in the literature over decades and were written in different styles and contexts. None of the results reported here are claimed to be new. The content centres around fundamentals of equivalence and minimization, with an emphasis on algorithmic aspects. The presentation is minimalistic. No attempt has been made to motivate the material. Weighted automata are viewed from a linear-algebra angle. As a consequence, the proofs, which are meant to be succinct, but complete and almost self-contained, rely mainly on elementary linear algebra.
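The linear-algebra view admits a very short (if inefficient) equivalence check, based on the classical fact that two weighted automata over a field with $n_1$ and $n_2$ states are equivalent iff they agree on all words of length below $n_1 + n_2$. A brute-force sketch (the notes' algorithmic treatment is more efficient; representation and names here are assumptions):

```python
import itertools
import numpy as np

def word_weight(alpha, mats, eta, word):
    """Weight of a word under a weighted automaton (alpha, {M_a}, eta)."""
    v = alpha
    for a in word:
        v = v @ mats[a]
    return float(v @ eta)

def equivalent(wa1, wa2, alphabet):
    """Decide equivalence of two weighted automata over the reals.

    Brute force over the classical length bound n1 + n2 - 1."""
    (a1, m1, e1), (a2, m2, e2) = wa1, wa2
    bound = len(a1) + len(a2)
    for k in range(bound):
        for w in itertools.product(alphabet, repeat=k):
            if abs(word_weight(a1, m1, e1, w) - word_weight(a2, m2, e2, w)) > 1e-9:
                return False
    return True
```

As a usage example, the automaton with $\alpha = (1,0)$, $\eta = (0,1)^T$, $M_a = \begin{pmatrix}1&1\\0&1\end{pmatrix}$, $M_b = I$ assigns each word the number of its `a`-occurrences.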
This paper is concerned with quasilinear parabolic reaction-diffusion-advection systems on extended domains. Frameworks for well-posedness in Hilbert spaces and spaces of continuous functions are presented, based on known results using maximal regularity. It is shown that spectra of travelling waves on the line are meaningfully given by the familiar tools for semilinear equations, such as dispersion relations, and basic connections of spectra to stability and instability are considered. In particular, a principle of linearized orbital instability for manifolds of equilibria is proven. Our goal is to provide easy access to these rigorous aspects for those applying the theory. As a guiding example, the Gray-Scott-Klausmeier model for vegetation-water interaction is considered in detail.
Real-world quantum applications, e.g., on-chip quantum networks and quantum cryptography, necessitate large-scale integrated single-photon sources with nanoscale footprint for modern information technology. While on-demand, high-fidelity implantation of atomic-scale single-photon sources in conventional 3D materials suffers from uncertainties due to the crystal's dimensionality, layered 2D materials can host point-like centers with inherent confinement to a sub-nm plane. However, previous attempts to truly deterministically control spatial position and spectral homogeneity while maintaining the 2D character have not succeeded. Here, we demonstrate the on-demand creation and precise positioning of single-photon sources in atomically thin MoS2 with very narrow ensemble broadening and near-unity fabrication yield. Focused ion beam irradiation creates hundreds to thousands of mono-typical atomistic defects with anti-bunched emission lines and sub-10 nm lateral and 0.7 nm axial positioning accuracy. Our results firmly establish 2D materials as a scalable platform for single-photon emitters with unprecedented control of position as well as photophysical properties owing to their all-interfacial nature.
We give an analytic (free of computer assistance) proof of the existence of a classical Lorenz attractor for an open set of parameter values of the Lorenz model in the form of Yudovich-Morioka-Shimizu. The proof is based on detection of a homoclinic butterfly with a zero saddle value and rigorous verification of one of the Shilnikov criteria for the birth of the Lorenz attractor; we also supply a proof for this criterion. The results are applied in order to give an analytic proof of the existence of a robust, pseudohyperbolic strange attractor (the so-called discrete Lorenz attractor) for an open set of parameter values in a 4-parameter family of three-dimensional Hénon-like diffeomorphisms.
Most ethical work is done at a low level of formality. This makes practical moral questions inaccessible to formal and natural sciences and can lead to misunderstandings in ethical discussion. In this paper, we use Bayesian inference to introduce a formalization of preference utilitarianism in physical world models, specifically cellular automata. Even though our formalization is not immediately applicable, it is a first step in providing ethics and ultimately the question of how to "make the world better" with a formal basis.
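To make the notion of a cellular-automaton world model concrete, here is a one-line synchronous update of the elementary cellular automaton rule 110 on a periodic row of cells. This is only an illustration of the model class the abstract refers to; the paper's Bayesian formalization of preferences is not reproduced here.

```python
def step_rule110(cells):
    """One synchronous update of elementary cellular automaton rule 110.

    Each cell's next state is the bit of 110 indexed by its 3-cell
    neighbourhood (left, centre, right), with periodic boundaries."""
    rule = 110
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]
```

Iterating `step_rule110` from a single live cell produces the well-known leftward-growing rule-110 pattern.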
This is an introduction to John Conway's beautiful Combinatorial Game Theory, providing precise statements and detailed proofs for the fundamental parts of his theory. Contents: (1) combinatorial game theory, (2) the GROUP of games, (3) the FIELD of numbers, (4) ordinal numbers, (5) games and numbers, (6) infinitesimal games, (7) impartial games.
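For the impartial games of item (7), the Sprague-Grundy theory assigns each position a nim-value via the minimum excludant, and a disjunctive sum is a loss for the player to move exactly when the XOR of the nim-values is zero. A sketch for the classical subtraction game with moves {1, 2, 3} (a standard example, not taken from these notes):

```python
from functools import lru_cache

def mex(values):
    """Minimum excludant: smallest non-negative integer not in `values`."""
    m, s = 0, set(values)
    while m in s:
        m += 1
    return m

@lru_cache(maxsize=None)
def grundy_subtraction(n, moves=(1, 2, 3)):
    """Grundy (nim-)value of a heap of size n in the subtraction game."""
    return mex(tuple(grundy_subtraction(n - m, moves) for m in moves if m <= n))
```

For moves {1, 2, 3} the values cycle as n mod 4, so a heap of size 6 next to a heap of size 2 (values 2 and 2, XOR 0) is a losing position for the player to move.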
We address the problem of executing tool-using manipulation skills in scenarios where the objects to be used may vary. We assume that point clouds of the tool and target object can be obtained, but no interpretation or further knowledge about these objects is provided. The system must interpret the point clouds and decide how to use the tool to complete a manipulation task with a target object; this means it must adjust motion trajectories appropriately to complete the task. We tackle three everyday manipulations: scraping material from a tool into a container, cutting, and scooping from a container. Our solution encodes these manipulation skills in a generic way, with parameters that can be filled in at run-time via queries to a robot perception module; the perception module abstracts the functional parts for the tool and extracts key parameters that are needed for the task. The approach is evaluated in simulation and with selected examples on a PR2 robot.
Carrier multiplication (CM), a photo-physical process that generates multiple electron-hole pairs by exploiting the excess energy of free carriers, is explored for efficient photovoltaic conversion of photons from the blue solar band, which is predominantly wasted as heat in standard solar cells. Current state-of-the-art approaches with nanomaterials have demonstrated improved CM but remain unsatisfactory due to high energy loss and inherent difficulties with carrier extraction. Here, we report ultra-efficient CM in van der Waals (vdW) layered materials that commences at the energy-conservation limit and proceeds with nearly 100% conversion efficiency. A small threshold energy, as low as twice the bandgap, was achieved, marking an onset of quantum yield with enhanced carrier generation. Strong Coulomb interactions between electrons confined within vdW layers allow rapid electron-electron scattering to prevail over electron-phonon scattering. Additionally, the presence of electron pockets spread over momentum space could also contribute to the high CM efficiency. Combined with high conductivity and an optimal bandgap, these superior CM characteristics identify vdW materials as candidates for third-generation solar cells.