Institute of Astronomy and NAO, Bulgarian Academy of Sciences
Exact single-time and two-time correlations and the two-time response function are found for the order parameter in the voter model with nearest-neighbour interactions. Their explicit dynamical scaling functions are shown to be continuous functions of the space dimension $d>0$. Their form reproduces the predictions of non-equilibrium representations of the Schrödinger algebra for models with dynamical exponent $\mathpzc{z}=2$ and with the dominant noise source coming from the heat bath. Hence the ageing in the voter model is a paradigm for relaxations in non-equilibrium critical dynamics, without detailed balance, and with the upper critical dimension $d^*=2$.
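For readers unfamiliar with the model, the nearest-neighbour voter dynamics studied above can be sketched in a few lines. This is a minimal illustrative Monte Carlo simulation on a 1D periodic lattice (the lattice size, sweep count, and function names are our own illustrative choices, not the paper's), with the magnetisation as the order parameter:

```python
import random

def voter_step(spins, rng):
    """One update: a randomly chosen site copies the opinion of a random nearest neighbour."""
    n = len(spins)
    i = rng.randrange(n)
    j = (i + rng.choice((-1, 1))) % n  # periodic boundary conditions
    spins[i] = spins[j]

def simulate(n=100, sweeps=50, seed=0):
    """Run `sweeps` Monte Carlo sweeps (n updates each) from a random initial state."""
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    for _ in range(sweeps * n):
        voter_step(spins, rng)
    return spins

spins = simulate()
magnetisation = sum(spins) / len(spins)  # the order parameter
```

The exact correlation and response functions of the paper refer to ensemble averages over many such trajectories, which a simulation like this can only estimate.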
This paper synthesizes current astrophysical understanding to provide a comprehensive framework for the origins and characteristics of exocomet reservoirs across diverse planetary systems. It integrates theories of star and planet formation with observational data, demonstrating that planetesimal reservoirs are likely widespread beyond the Solar System and outlining their evolutionary pathways.
Binary sequences are widely used in various practical fields, such as telecommunications, radar technology, navigation, cryptography, measurement sciences, biology, and industry. In this paper, a method to generate long binary sequences (LBS) with a low peak sidelobe level (PSL) value is proposed. For an LBS of length $n$, both the time and memory complexities of the proposed algorithm are $\mathcal{O}(n)$. In our experiments, we repeatedly reach better PSL values than the currently known state-of-the-art constructions, such as Legendre sequences (with or without rotations), Rudin-Shapiro sequences, and m-sequences (with or without rotations), consistently achieving record-breaking PSL values strictly less than $\sqrt{n}$. Furthermore, the efficiency and simplicity of the proposed method make the implementation particularly lightweight, which allowed us to reach record-breaking PSL values in less than a second.
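To make the optimization target concrete: the PSL of a $\pm1$ sequence is the largest absolute value of its aperiodic autocorrelation over all nonzero lags. The naive $\mathcal{O}(n^2)$ computation below illustrates the metric only; it is not the paper's $\mathcal{O}(n)$ construction:

```python
def psl(seq):
    """Peak sidelobe level: max |aperiodic autocorrelation| over nonzero lags k."""
    n = len(seq)
    return max(abs(sum(seq[i] * seq[i + k] for i in range(n - k)))
               for k in range(1, n))

# The length-13 Barker sequence, a classical low-PSL example with PSL = 1.
barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
print(psl(barker13))  # 1
```

The paper's claim is that its generated sequences keep this quantity strictly below $\sqrt{n}$ even for long sequences, where Barker-like sidelobe levels are unattainable.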
This survey aims to review two decades of progress on exponential functionals of (possibly killed) real-valued Lévy processes. Since the publication of the seminal survey by Bertoin and Yor, substantial advances have been made in understanding the structure and properties of these random variables. At the same time, numerous applications of these quantities have emerged in various contexts of modern applied probability. Motivated by all this, in this manuscript, we provide a detailed overview of these developments, beginning with a discussion of the class of special functions that have played a central role in recent progress, and then organising the main results on exponential functionals into thematic groups. Moreover, we complement several of these results and set them within a unified framework. Throughout, we strive to offer a coherent historical account of each contribution, highlighting both the probabilistic and analytical techniques that have driven the advances in the field.
The paper surveys recent progress in the search for an appropriate internal space algebra for the Standard Model (SM) of particle physics. As a starting point serve Clifford algebras involving operators of left multiplication by octonions. A central role is played by a distinguished complex structure which implements the splitting of the octonions $\mathbb{O} = \mathbb{C} \oplus \mathbb{C}^3$, reflecting the lepton-quark symmetry. Such a complex structure in $C\ell_{10}$ is generated by the $C\ell_6(\subset C\ell_8\subset C\ell_{10})$ volume form, $\omega_6 = \gamma_1 \cdots \gamma_6$, left invariant by the Pati-Salam subgroup of $Spin(10)$, $G_{\rm PS} = Spin(4) \times Spin(6) / \mathbb{Z}_2$. While the $Spin(10)$-invariant volume form $\omega_{10}=\gamma_1 \cdots \gamma_{10}$ is known to split the Dirac spinors of $C\ell_{10}$ into left and right chiral (semi)spinors, $\mathcal{P} = \frac12 (1 - i\omega_6)$ is interpreted as the projector on the 16-dimensional \textit{particle subspace} (annihilating the antiparticles). The Standard Model gauge group appears as the subgroup of $G_{\rm PS}$ that preserves the sterile neutrino (identified with the Fock vacuum). The $\mathbb{Z}_2$-graded internal space algebra $\mathcal{A}$ is then included in the projected tensor product: $\mathcal{A}\subset \mathcal{P}C\ell_{10}\mathcal{P}=C\ell_4\otimes \mathcal{P} C\ell_6^0\mathcal{P}$. The Higgs field appears as the scalar term of a superconnection, an element of the odd part, $C\ell_4^1$, of the first factor. The fact that the projection of $C\ell_{10}$ only involves the even part $C\ell_6^0$ of the second factor guarantees that the colour symmetry remains unbroken. As an application we express the ratio $\frac{m_H}{m_W}$ of the Higgs to the $W$-boson masses in terms of the cosine of the {\it theoretical} Weinberg angle.
September 7, 2025 marked the 80th anniversary of the birth of Oleg Marichev. Marichev is a well-known mathematician who has developed many of Mathematica's algorithms for the calculation of definite and indefinite integrals and of hypergeometric functions, including the Meijer G-function.
In this paper, we find new scalarized black holes by coupling a scalar field with the Gauss-Bonnet invariant in Teleparallel gravity. The Teleparallel formulation of this theory uses torsion instead of curvature to describe the gravitational interaction, and it turns out that, in this language, the usual Gauss-Bonnet term in four dimensions decomposes into two distinct boundary terms, the Teleparallel Gauss-Bonnet invariants. Both can be coupled to a scalar field, individually or in any combination, to obtain a Teleparallel Gauss-Bonnet extension of the Teleparallel equivalent of general relativity. The theory we study contains the familiar Riemannian Einstein-Gauss-Bonnet gravity theory as a particular limit and offers a natural extension, in which scalarization is triggered by torsion and which exhibits new interesting phenomenology. We demonstrate numerically the existence of asymptotically flat scalarized black hole solutions and show that, depending on the choice of coupling of the boundary terms, they can have a distinct behaviour compared to the ones known from the usual Einstein-Gauss-Bonnet case. More specifically, non-monotonicity of the metric functions and the scalar field can be present, a feature that was not observed until now for static scalarized black hole solutions.
In holography, flavour probe branes are used to introduce fundamental matter to the AdS/CFT correspondence. At a technical level, the probes are described by extremizing the DBI action and solving the Euler-Lagrange equations of motion. I report on applications of artificial neural networks that allow direct minimization of the regularized DBI action (interpreted as a free energy) without the need to derive and solve the equations of motion. I consider, as examples, magnetic catalysis of chiral symmetry breaking and the meson melting phase transition in the D3/D7 holographic set-up. Finally, I provide a framework which allows the simultaneous learning of the embeddings and the relevant aspects of the dual geometry based on field theory data.
We investigate operators between spaces of holomorphic functions in several complex variables. Let $G_1, G_2 \subset \mathbb{C}^n$ be cylindrical domains. We construct a canonical map from the space of bounded linear operators $\mathcal{L}(H(G_1), H(G_2))$ to $H(G_1^b \times G_2)$ and prove that it is a topological isomorphism (Theorem~\ref{pierwsze twierdzenie}). We then establish uniform estimates for operators on bounded, complete $n$-circled domains (Theorem~\ref{thm:4.8}) and show that sequences of operators on smaller domains satisfying suitable uniform bounds uniquely determine a global operator (Theorem~\ref{thm:4.9}). Together, these results provide a unified framework for representing and extending operators on spaces of holomorphic functions in several complex variables.
Quantum annealers manufactured by D-Wave Systems, Inc., are computational devices capable of finding high-quality solutions of NP-hard problems. In this contribution, we explore the potential and effectiveness of such quantum annealers for computing Boolean tensor networks. Tensors offer a natural way to model high-dimensional data commonplace in many scientific fields, and representing a binary tensor as a Boolean tensor network is the task of expressing a tensor containing categorical (i.e., {0, 1}) values as a product of low-dimensional binary tensors. A Boolean tensor network is computed by Boolean tensor decomposition, and it is usually not exact. The aim of such a decomposition is to minimize the given distance measure between the high-dimensional input tensor and the product of lower-dimensional (usually three-dimensional) tensors and matrices representing the tensor network. In this paper, we introduce and analyze three general algorithms for Boolean tensor networks: Tucker, Tensor Train, and Hierarchical Tucker networks. The computation of a Boolean tensor network is reduced to a sequence of Boolean matrix factorizations, which we show can be expressed as a quadratic unconstrained binary optimization problem suitable for solving on a quantum annealer. By using a novel method we introduce called \textit{parallel quantum annealing}, we demonstrate that tensors with up to millions of elements can be decomposed efficiently using a D-Wave 2000Q quantum annealer.
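The building block named above, Boolean matrix factorization, can be made concrete with a small classical sketch. The code below only illustrates the objective being minimized (the Hamming distance between a binary matrix and a Boolean product of low-rank factors) via exhaustive search on a tiny instance; the paper's contribution is recasting this search as a QUBO for a quantum annealer, which is not reproduced here, and all function names are our own:

```python
import itertools

def boolean_product(A, B):
    """Boolean matrix product: (A∘B)[i][j] = OR_k (A[i][k] AND B[k][j])."""
    r = len(B)
    return [[int(any(A[i][k] and B[k][j] for k in range(r)))
             for j in range(len(B[0]))] for i in range(len(A))]

def hamming(X, Y):
    """Number of mismatching entries between two binary matrices."""
    return sum(x != y for xr, yr in zip(X, Y) for x, y in zip(xr, yr))

def brute_force_bmf(X, rank):
    """Exhaustively search rank-`rank` Boolean factors A (m x rank), B (rank x n)
    minimizing the Hamming distance to X. Feasible only for tiny instances."""
    m, n = len(X), len(X[0])
    best = None
    for abits in itertools.product((0, 1), repeat=m * rank):
        A = [list(abits[i * rank:(i + 1) * rank]) for i in range(m)]
        for bbits in itertools.product((0, 1), repeat=rank * n):
            B = [list(bbits[k * n:(k + 1) * n]) for k in range(rank)]
            d = hamming(X, boolean_product(A, B))
            if best is None or d < best[0]:
                best = (d, A, B)
    return best

X = [[1, 1, 0], [1, 1, 1], [0, 1, 1]]
d, A, B = brute_force_bmf(X, rank=2)  # this particular X admits an exact rank-2 factorization
```

A tensor network then chains such factorizations, which is why the annealer-friendly QUBO form of this single step is the key ingredient.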
This research presents the TESS Ten Thousand Catalog, a compilation of 10,001 uniformly-vetted eclipsing binary stars identified from TESS Full-Frame Image data using a combination of machine learning and large-scale citizen science. The catalog includes 7,936 previously uncataloged systems, many of which are faint, with a median TESS magnitude of 13.8, and provides updated ephemerides for 2,065 known EBs, significantly expanding the reliable population of these systems for astrophysical study.
An analytical approach is developed for computing monotone Riemannian metrics (e.g. Bogoliubov-Kubo-Mori, Bures, Chernoff, etc.) on the set of quantum states. The obtained expressions originate from the Morozova, Chencov and Petz correspondence between monotone metrics and operator monotone functions. The mathematical technique used provides analytical expansions in terms of the thermodynamic mean values of iterated (nested) commutators of a model Hamiltonian $T$ with the operator $S$ involved through the control parameter $h$. Owing to the sum rules for the frequency moments of the dynamic structure factor, new representations of the monotone Riemannian metrics are obtained. In particular, relations between any monotone Riemannian metric and the usual thermodynamic susceptibility or the variance of the operator $S$ are discussed. If the symmetry properties of the Hamiltonian are given in terms of generators of some Lie algebra, the obtained expansions may be evaluated in closed form. These issues are tested on a class of model systems studied in condensed matter physics.
We present the results of a full new calculation of radiocarbon (14C) production in the Earth's atmosphere, using a numerical Monte Carlo model. We provide, for the first time, a tabulated 14C yield function for energies of primary cosmic ray particles ranging from 0.1 to 1000 GeV/nucleon. We have calculated the global production rate of 14C, which is 1.64 and 1.88 atoms/cm2/s for the modern time and for the pre-industrial epoch, respectively. This is close to the values obtained from the carbon cycle reservoir inventory. We argue that earlier models overestimated the global 14C production rate because of outdated spectra of heavier cosmic ray nuclei. The mean contribution of solar energetic particles to the global 14C production is calculated to be about 0.25% for the modern epoch. Our model provides a new tool to calculate 14C production in the Earth's atmosphere, which can be applied, e.g., to reconstructions of solar activity in the past.
Normed division rings are reviewed in the more general framework of composition algebras that include the split (indefinite metric) case. The Jordan-von Neumann-Wigner classification of finite-dimensional Jordan algebras is outlined with special attention to the 27-dimensional exceptional Jordan algebra J. The automorphism group F_4 of J and its maximal Borel-de Siebenthal subgroups are studied in detail and applied to the classification of fundamental fermions and gauge bosons. Their intersection in F_4 is demonstrated to coincide with the gauge group of the Standard Model of particle physics. The first generation's fundamental fermions form a basis of primitive idempotents in the Euclidean extension of the Jordan subalgebra JSpin_9 of J.
In this paper, we leverage image complexity as a prior for refining segmentation features to achieve accurate real-time semantic segmentation. The design philosophy is based on the observation that different pixel regions within an image exhibit varying levels of complexity, with higher complexities posing a greater challenge for accurate segmentation. We thus introduce image complexity as prior guidance and propose the Image Complexity prior-guided Feature Refinement Network (ICFRNet). This network aggregates both complexity and segmentation features to produce an attention map for refining segmentation features within an Image Complexity Guided Attention (ICGA) module. We optimize the network in terms of both segmentation and image complexity prediction tasks with a combined loss function. Experimental results on the Cityscapes and CamVid datasets have shown that our ICFRNet achieves higher accuracy with competitive efficiency for real-time segmentation.
A concise story of the rise of the four fermion theory of the universal weak interaction and its experimental confirmation, with a special emphasis on the problems related to parity violation.
Large Language Models (LLMs) have demonstrated considerable success in open-book question answering (QA), where the task requires generating answers grounded in a provided external context. A critical challenge in open-book QA is to ensure that model responses are based on the provided context rather than its parametric knowledge, which can be outdated, incomplete, or incorrect. Existing evaluation methods, primarily based on the LLM-as-a-judge approach, face significant limitations, including biases, scalability issues, and dependence on costly external systems. To address these challenges, we propose a novel metric that contrasts the perplexity of the model response under two conditions: when the context is provided and when it is not. The resulting score quantifies the extent to which the model's answer relies on the provided context. The validity of this metric is demonstrated through a series of experiments that show its effectiveness in identifying whether a given answer is grounded in the provided context. Unlike existing approaches, this metric is computationally efficient, interpretable, and adaptable to various use cases, offering a scalable and practical solution to assess context utilization in open-book QA systems.
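The core idea, contrasting the perplexity of the same answer with and without the context, can be sketched independently of any particular LLM. The snippet below assumes per-token log-probabilities have already been obtained from some scoring model under both conditions; the exact functional form of the paper's metric may differ, and the function names and toy numbers here are our own illustration:

```python
import math

def perplexity(logprobs):
    """Perplexity from a list of per-token log-probabilities."""
    return math.exp(-sum(logprobs) / len(logprobs))

def context_reliance(lp_with_context, lp_without_context):
    """Log-ratio of perplexities: positive when providing the context
    makes the answer more likely (lower perplexity)."""
    return math.log(perplexity(lp_without_context) / perplexity(lp_with_context))

# Toy per-token log-probs for one answer, scored with and without the context.
with_ctx = [-0.2, -0.1, -0.3]     # answer is likely given the context
without_ctx = [-2.0, -1.5, -2.5]  # answer is unlikely from parametric knowledge alone
score = context_reliance(with_ctx, without_ctx)
```

A score near zero would indicate the answer is equally probable either way, i.e. it likely comes from parametric knowledge rather than the provided context.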
This paper introduces the concept of Fractal Frenet equations, a set of differential equations used to describe the behavior of vectors along fractal curves. The study explores the analogue of arc length for fractal curves, providing a measure to quantify their length. It also discusses fundamental mathematical constructs, such as the analogue of the unit tangent vector, which indicates the curve's direction at different points, and the analogue of curvature vector or fractal curvature vector, which characterizes its curvature at various locations. The concept of torsion, describing the twisting and turning of fractal curves in three-dimensional space, is also explored. Specific examples, like the fractal helix and the fractal snowflake, illustrate the application and significance of the Fractal Frenet equations.
The dramatic outbreak of the coronavirus disease 2019 (COVID-19) pandemic and its ongoing progression boosted the scientific community's interest in epidemic modeling and forecasting. The SIR (Susceptible-Infected-Removed) model is a simple mathematical model of epidemic outbreaks, yet for decades it evaded the efforts of the community to derive an explicit solution. The present work demonstrates that this is a non-trivial task. Notably, it is proven that the explicit solution of the model requires the introduction of a new transcendental special function, related to the Wright Omega function. The present manuscript reports new analytical results and numerical routines suitable for parametric estimation of the SIR model. The manuscript introduces iterative algorithms approximating the incidence variable, which allows for estimation of the model parameters from the numbers of observed cases. The numerical approach is exemplified with data from the European Centre for Disease Prevention and Control (ECDC) for several European countries in the period Jan 2020 -- Jun 2020.
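For context, the SIR model whose explicit solution is discussed above is the ODE system $\dot{s} = -\beta s i$, $\dot{i} = \beta s i - \gamma i$, $\dot{r} = \gamma i$. A minimal numerical sketch (forward Euler, with parameter values chosen purely for illustration; the paper's analytical solution and estimation routines are not reproduced here) looks like this:

```python
def sir_step(s, i, r, beta, gamma, dt):
    """One forward-Euler step of the SIR equations:
       ds/dt = -beta*s*i,  di/dt = beta*s*i - gamma*i,  dr/dt = gamma*i."""
    ds = -beta * s * i
    di = beta * s * i - gamma * i
    dr = gamma * i
    return s + ds * dt, i + di * dt, r + dr * dt

def simulate(beta=0.3, gamma=0.1, i0=0.01, days=160, dt=0.1):
    """Integrate the fractions (s, i, r) forward from a small initial outbreak."""
    s, i, r = 1.0 - i0, i0, 0.0
    for _ in range(int(days / dt)):
        s, i, r = sir_step(s, i, r, beta, gamma, dt)
    return s, i, r

s, i, r = simulate()
# s + i + r stays 1: the right-hand sides sum to zero, so population is conserved.
```

The difficulty the paper addresses is that, although this system is trivial to integrate numerically, expressing its solution in closed form requires a new transcendental function.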
Stimulated Raman adiabatic passage (STIRAP), driven with pulses of optimum shape and delay, has the potential of reaching fidelities high enough to make it suitable for fault-tolerant quantum information processing. The optimum pulse shapes are obtained upon reduction of STIRAP to effective two-state systems. We use the Dykhne-Davis-Pechukas (DDP) method to minimize nonadiabatic transitions and to maximize the fidelity of STIRAP. This results in a particular relation between the pulse shapes of the two fields driving the Raman process. The DDP-optimized version of STIRAP maintains its robustness against variations in the pulse intensities and durations, the single-photon detuning and possible losses from the intermediate state.