Université du Littoral Côte d’Opale
The rational base number system, introduced by Akiyama, Frougny, and Sakarovitch in 2008, is a generalization of the classical integer base number system. Within this framework, two interesting families of infinite words emerge, called minimal and maximal words. We conjecture that every minimal and maximal word is normal over an appropriate subalphabet. To support this conjecture, we present extensive numerical experiments that examine the richness threshold and the discrepancy of these words. We also discuss the implications that the validity of our conjecture would have for several long-standing open problems, including the existence of $Z$-numbers (Mahler, 1968) and $Z_{p/q}$-numbers (Flatto, 1992), the existence of triple expansions in rational base $p/q$ (Akiyama, 2008), and the Collatz-inspired `4/3 problem' (Dubickas and Mossinghoff, 2009).
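As an illustration of the kind of numerical experiment involved, the sketch below (ours, with a placeholder periodic word standing in for a prefix of a minimal or maximal word) measures the digit-frequency discrepancy along a prefix; normality requires, in particular, that this quantity tend to zero.

    # A minimal sketch (not the authors' code) of one experiment the abstract
    # describes: estimating the discrepancy of digit frequencies along a
    # prefix of an infinite word over a finite alphabet.
    from collections import Counter

    def digit_discrepancy(prefix, alphabet):
        """Max deviation of empirical digit frequencies from uniform."""
        n = len(prefix)
        counts = Counter(prefix)
        return max(abs(counts[a] / n - 1 / len(alphabet)) for a in alphabet)

    # Placeholder word; a real experiment would use a long prefix of a
    # minimal or maximal word in rational base p/q.
    word = "0120120120121"
    print(digit_discrepancy(word, "012"))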
We study the square root bottleneck in the recovery of sparse vectors from quadratic equations. It is known that a sparse vector $\mathbf x_0\in\mathbb{R}^n$, $\|\mathbf x_0\|_0 = k$, can in theory be recovered from as few as $O(k)$ generic quadratic equations, but no polynomial time algorithm is known for this task unless $m = \Omega(k^2)$. This bottleneck was in fact shown in previous work to be essentially related to the initialization of descent algorithms: starting such algorithms sufficiently close to the planted signal is known to imply convergence to this signal. In this paper, we show that as soon as $m\gtrsim \mu_0^{-2}k \vee \mu_0^{-4}$ (up to log factors), where $\mu_0 = \|\mathbf x_0\|_\infty/\|\mathbf x_0\|_2$, it is possible to recover a $k$-sparse vector $\mathbf x_0\in\mathbb{R}^n$ from $m$ quadratic equations of the form $\langle \mathbf A_i, \mathbf x\mathbf x^\intercal\rangle = \langle \mathbf A_i, \mathbf x_0\mathbf x_0^\intercal\rangle + \varepsilon_i$ by minimizing the classical empirical loss. The proof idea carries over to the phase retrieval setting, for which it provides an original initialization that matches the current optimal sample complexity (see e.g. [Cai 2023]). In the maximally incoherent regime $\mu_0^{-2}=k$, and for $m=o(k^2)$, we provide evidence for topological hardness by showing that a property known as the Overlap Gap Property (OGP), which originated in spin glass theory and is conjectured to be indicative of algorithmic intractability when optimizing over random structures, holds for a particular level of overparametrization. The key ingredient of the proof is a lower bound on the tail of chi-squared random variables which follows from the theory of moderate deviations.
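To make the recovery procedure concrete, here is a small self-contained sketch (ours, not the paper's implementation) of minimizing the empirical loss by plain gradient descent in the noiseless case; the ad hoc random initialization below is precisely the bottleneck the abstract discusses, so convergence here is illustrative rather than guaranteed, and the step size and iteration count are untuned.

    # Gradient descent on f(x) = (1/2m) * sum_i (<A_i, x x^T> - y_i)^2.
    import numpy as np

    rng = np.random.default_rng(0)
    n, k, m = 50, 3, 800
    x0 = np.zeros(n); x0[:k] = rng.normal(size=k)          # k-sparse signal
    A = rng.normal(size=(m, n, n))
    y = np.einsum('mij,i,j->m', A, x0, x0)                 # <A_i, x0 x0^T>

    x = 0.1 * rng.normal(size=n)                           # naive random init
    for _ in range(3000):
        r = np.einsum('mij,i,j->m', A, x, x) - y           # residuals
        grad = np.einsum('m,mij,j->i', r, A + A.transpose(0, 2, 1), x) / m
        x -= 0.02 * grad

    # x0 is identifiable only up to sign:
    print(min(np.linalg.norm(x - x0), np.linalg.norm(x + x0)))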
Multilinear Discriminant Analysis (MDA) is a powerful dimension reduction method specifically formulated to deal with tensor data. Precisely, the goal of MDA is to find mode-specific projections that optimally separate tensor data from different classes. However, to solve this task, standard MDA methods use alternating optimization heuristics involving the computation of a succession of tensor-matrix products. Such approaches are often difficult to solve and unnatural, highlighting the difficulty of formulating this problem in fully tensor form. In this paper, we propose to solve multilinear discriminant analysis (MDA) by using the concept of transform domain (TD) recently proposed in \cite{Kilmer2011}. We show here that moving MDA to this specific transform domain makes its resolution easier and more natural. More precisely, each frontal face of the transformed tensor is processed independently, yielding separate optimization sub-problems that are easier to solve. Next, the obtained solutions are converted into projective tensors by inverse transform. Through a large number of experiments, we show the effectiveness of our approach with respect to existing MDA methods.
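The pipeline described above can be summarized by the following skeleton (a sketch of our reading of the abstract, not the authors' code; the helper solve_lda_slice is hypothetical and stands in for the per-slice discriminant sub-problem solver).

    # Transform-domain skeleton: DCT along the third mode, one independent
    # sub-problem per frontal slice, inverse DCT back to a projective tensor.
    import numpy as np
    from scipy.fft import dct, idct

    def td_mda(X, labels, solve_lda_slice):
        """X: (n1, n2, n3) data tensor, transformed along the third mode.
        solve_lda_slice must return an (n1, p) projection per slice."""
        Xhat = dct(X, axis=2, norm='ortho')           # to the transform domain
        proj_slices = [solve_lda_slice(Xhat[:, :, f], labels)
                       for f in range(Xhat.shape[2])] # independent sub-problems
        Phat = np.stack(proj_slices, axis=2)
        return idct(Phat, axis=2, norm='ortho')       # back: projective tensor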
Face recognition and identification is a very important application in machine learning. Due to the increasing amount of available data, traditional approaches based on matricization and matrix PCA methods can be difficult to implement. Moreover, tensorial approaches are a natural choice, due to the very structure of the databases, for example in the case of color images. Nevertheless, even though various authors have proposed factorization strategies for tensors, the size of the considered tensors can pose serious issues. When only a few features are needed to construct the projection space, there is no need to compute an SVD on the whole data. Two versions of the tensor Golub-Kahan algorithm are considered in this manuscript as an alternative to the classical use of the tensor SVD, which is based on truncation strategies. In this paper, we consider the Tensor Tubal Golub-Kahan Principal Component Analysis method, whose purpose is to extract the main features of images using the tensor singular value decomposition (SVD) based on the tensor cosine product, which relies on the discrete cosine transform. This approach is applied to classification and face recognition, and numerical tests show its effectiveness.
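A minimal sketch of the cosine-transform tensor SVD that the method builds on (our own illustration based on the abstract, not the authors' code): DCT along the tubes, one ordinary matrix SVD per frontal slice of the transformed tensor, then an inverse DCT to recover the tensor factors.

    import numpy as np
    from scipy.fft import dct, idct

    def t_svd_dct(T):
        """t-SVD of T (n1 x n2 x n3) under the discrete cosine transform."""
        n1, n2, n3 = T.shape
        That = dct(T, axis=2, norm='ortho')
        Uh = np.zeros((n1, n1, n3))
        Sh = np.zeros((n1, n2, n3))
        Vh = np.zeros((n2, n2, n3))
        for f in range(n3):                       # one matrix SVD per slice
            U, s, Vt = np.linalg.svd(That[:, :, f])
            Uh[:, :, f], Vh[:, :, f] = U, Vt.T
            np.fill_diagonal(Sh[:, :, f], s)
        # Truncating s above yields the few features needed for recognition.
        return (idct(Uh, axis=2, norm='ortho'),
                idct(Sh, axis=2, norm='ortho'),
                idct(Vh, axis=2, norm='ortho'))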
The reduction of a matrix to upper $J$-Hessenberg form is a crucial step in the $SR$-algorithm (a $QR$-like, structure-preserving algorithm) for computing the eigenvalues and eigenvectors of a class of structured matrices. This reduction may be handled via the algorithm JHESS or via the recent algorithm JHMSH and its variants. The main drawback of JHESS (or JHMSH) is that it may suffer from a fatal breakdown, causing an abrupt halt of the computations, so that the $SR$-algorithm cannot proceed. JHESS may also encounter near-breakdowns, a source of serious numerical instability. In this paper, we focus on these aspects. We first shed light on a necessary and sufficient condition for the existence of the $SR$-decomposition, which is intimately linked to the $J$-Hessenberg reduction. We then derive a strategy for curing fatal breakdowns and for treating near-breakdowns, so that the $J$-Hessenberg form may still be obtained. Numerical experiments are given, demonstrating the efficiency of our strategies for curing breakdowns and treating near-breakdowns.
The time-ordered exponential is defined as the function that solves a system of coupled first-order linear differential equations with generally non-constant coefficients. In spite of being at the heart of many problems in system dynamics, control theory, and model reduction, the time-ordered exponential remains elusively difficult to evaluate. The *-Lanczos algorithm is a (symbolic) algorithm capable of evaluating it by producing a tridiagonalization of the original differential system. In this paper, we explain how the *-Lanczos algorithm is built from a generalization of Krylov subspaces, and we prove crucial properties, such as the matching moment property. A strategy for its numerical implementation is also outlined and will be the subject of future investigation.
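For orientation, a naive numerical baseline (explicitly not the *-Lanczos algorithm, which is symbolic) evaluates the time-ordered exponential by an ordered product of matrix exponentials over small time steps, as sketched below.

    # The time-ordered exponential U(t,0) solves U' = A(t) U, U(0) = I; over
    # small steps it is approximated by an ordered product of exponentials.
    import numpy as np
    from scipy.linalg import expm

    def ordered_exp(A, t, steps=1000):
        """Approximate the time-ordered exponential of A(s) on [0, t]."""
        h = t / steps
        U = np.eye(A(0.0).shape[0])
        for i in range(steps):
            U = expm(h * A((i + 0.5) * h)) @ U   # midpoint rule, later times on the left
        return U

    # Example: a 2x2 system with non-commuting, time-dependent coefficients.
    A = lambda s: np.array([[s**2, s], [-s, -s**2]])
    print(ordered_exp(A, 1.0))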
In this paper, we introduce a higher order approach for dimension reduction based on the Trace Ratio problem. We show the existence and uniqueness of the solution, and we provide a relationship between the Trace Ratio problem and the Ratio Trace problem. We also propose a new algorithm to solve the Trace Ratio problem. The method is based on the Einstein product and generalizes the state-of-the-art trace-based dimension reduction methods, in particular the Ratio Trace based ones, to higher order tensors; the experimentally demonstrated superiority of tensor-based methods is what motivates this extension. We apply the approach to generalize Linear Discriminant Analysis (LDA) to higher order tensors, and we provide numerical experiments to illustrate the efficiency of the proposed method.
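As background for the algorithmic contribution, here is a hedged sketch of the classical matrix-case trace-ratio iteration (the tensor/Einstein-product version in the paper follows the same alternating template; this is not the authors' code): maximize Tr(V^T A V) / Tr(V^T B V) over orthonormal V by alternating between the ratio value and an eigenvalue problem.

    import numpy as np

    def trace_ratio(A, B, p, iters=50, tol=1e-10):
        """A symmetric, B symmetric positive definite; V has p columns."""
        n = A.shape[0]
        V = np.linalg.qr(np.random.default_rng(0).normal(size=(n, p)))[0]
        lam = np.trace(V.T @ A @ V) / np.trace(V.T @ B @ V)
        for _ in range(iters):
            w, Q = np.linalg.eigh(A - lam * B)
            V = Q[:, -p:]                        # top-p eigenvectors
            new = np.trace(V.T @ A @ V) / np.trace(V.T @ B @ V)
            if abs(new - lam) < tol:
                break
            lam = new
        return V, lam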
A Com-PreLie bialgebra is a commutative bialgebra with an extra pre-Lie product satisfying certain compatibilities with the product and coproduct. We give a classification of connected, cocommutative Com-PreLie bialgebras over a field of characteristic zero: we obtain a main family of symmetric algebras on a space $V$ of any dimension, and another family available only if $V$ is one-dimensional. We also explore the case of Com-PreLie bialgebras over a group algebra and over a tensor product of a group algebra and a symmetric algebra.
We demonstrate the advantages of THz frequency combs for high-resolution spectroscopy. The technique benefits from wide spectral coverage and exact knowledge of the frequency position of each comb component. Heterodyne detection combined with a fast Fourier spectrometer enables rapid and simultaneous measurement of more than 80 frequency comb modes covering a 7.5 GHz bandwidth. A spectrum is obtained in under 20 minutes, yielding a uniform resolution of 70 kHz. This new setup has been validated by recording more than 150 lines of methanol around 723 GHz, and represents a new solution for exploiting THz frequency combs in high-resolution spectroscopy.
The complexity of an infinite word can be measured in several ways, the two most common measures being the subword complexity and the abelian complexity. In 2015, Rigo and Salimov introduced a family of intermediate complexities indexed by $k\in\mathbb{N}_{>0}$: the $k$-binomial complexities. These complexities scale up from the abelian complexity, with which the $1$-binomial complexity coincides, to the subword complexity, to which they converge pointwise as $k$ tends to $\infty$. In this article, we provide four classes of $d$-ary infinite words -- namely, $d$-ary $1$-balanced words, words with subword complexity $n\in\mathbb{N}_{>0}\mapsto n+(d-1)$ (which form a subclass of quasi-Sturmian words), hypercubic billiard words, and words obtained by coloring a Sturmian word with another Sturmian word -- for which this scale ``collapses'', that is, for which all $k$-binomial complexities, for $k\geq 2$, coincide with the subword complexity. This work generalizes a result of Rigo and Salimov, established in their seminal 2015 paper, which asserts that the $k$-binomial complexity of any Sturmian word coincides with its subword complexity whenever $k\geq 2$.
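These quantities can be computed directly on finite prefixes. The sketch below (ours; exponential in $k$, so only suitable for small alphabets and small $k$) counts factors of length $n$ up to $k$-binomial equivalence, where the binomial coefficient of words counts occurrences of $x$ as a scattered subword of $u$.

    from itertools import product

    def word_binomial(u, x):
        """Number of occurrences of x as a scattered subword of u."""
        dp = [1] + [0] * len(x)
        for c in u:
            for j in range(len(x) - 1, -1, -1):
                if x[j] == c:
                    dp[j + 1] += dp[j]
        return dp[-1]

    def k_binomial_complexity(w, n, k, alphabet):
        """Number of length-n factors of w, up to k-binomial equivalence."""
        subwords = [''.join(p) for l in range(1, k + 1)
                    for p in product(alphabet, repeat=l)]
        classes = {tuple(word_binomial(w[i:i + n], x) for x in subwords)
                   for i in range(len(w) - n + 1)}
        return len(classes)

    # Example: the Fibonacci word is Sturmian, so by Rigo-Salimov its
    # 2-binomial complexity equals its subword complexity n + 1.
    w = "0"
    for _ in range(15):                  # prefix of length 1597
        w = "".join("01" if c == "0" else "0" for c in w)
    print(k_binomial_complexity(w, 5, 2, "01"))   # expect 6 = 5 + 1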
Typed decorated trees are used by Bruned, Hairer and Zambotti to give a description of a renormalisation process on stochastic PDEs. We here study the algebraic structures on these objects: multiple pre-Lie algebras and related operads (generalizing a result by Chapoton and Livernet), noncommutative and cocommutative Hopf algebras (generalizing Grossman and Larson's construction), commutative and noncocommutative Hopf algebras (generalizing Connes and Kreimer's construction), and bialgebras in cointeraction (generalizing Calaque, Ebrahimi-Fard and Manchon's result). We also define families of morphisms and in particular we prove that any Connes-Kreimer Hopf algebra of typed and decorated trees is isomorphic to a Connes-Kreimer Hopf algebra of non-typed and decorated trees (the set of decorations of vertices being bigger), through a contraction process, and finally obtain the Bruned-Hairer-Zambotti construction as a subquotient.
This work, which may be seen as a companion paper to \cite{RS2}, deals with the way the intersection points made by the diagonals of a regular polygon are distributed. The authors recently established that these points lie exclusively on circles centered at the origin, and described how their respective radii depend on the four vertex indices of the initial regular $n$-gon that characterize the two straight lines underlying each intersection point. Because these four vertices are located at preset positions on the regular $n$-gon inscribed in the unit circle, whose path-length perimeter is constant, the orbits can be characterized by 3 parameters instead of 4, roughly describing the lengths of the paths between the first three vertices, according to whether the quadrilateral described by these four vertices is simple or complex. This approach enables us to deal with the orbits generated by the clique-arrangement, and to handle their cardinalities as well as the multiplicities of the associated intersection points. A reliable counting algorithm based on this triplet strategy is provided in order to enumerate the intersection points without generating the associated graph. Since the orbits are simulated, we call this method \textit{Simuorb}. The procedure is robust, fast, and allows a comprehensive understanding of what is happening in a clique-arrangement, whether it contains a large number of points or not.
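For contrast with the orbit-based strategy, a brute-force enumeration (our sketch, not Simuorb) intersects every pair of chords explicitly and deduplicates points by rounding: precisely the float-heavy, graph-like computation that the triplet approach avoids.

    import numpy as np
    from itertools import combinations

    def naive_interior_points(n, digits=9):
        V = np.exp(2j * np.pi * np.arange(n) / n)       # vertices on unit circle
        cross = lambda w1, w2: (np.conj(w1) * w2).imag  # 2D cross product
        pts = set()
        for (a, b), (c, d) in combinations(combinations(range(n), 2), 2):
            if len({a, b, c, d}) < 4:
                continue                                # chords share a vertex
            p, r = V[a], V[b] - V[a]
            q, s = V[c], V[d] - V[c]
            den = cross(r, s)
            if abs(den) < 1e-12:
                continue                                # parallel chords
            t, u = cross(q - p, s) / den, cross(q - p, r) / den
            if 0 < t < 1 and 0 < u < 1:                 # strictly interior
                z = p + t * r
                pts.add((round(z.real, digits), round(z.imag, digits)))
        return len(pts)

    # Sanity check: for odd n no three diagonals are concurrent, so the
    # count is C(n, 4); for n = 7 this gives 35.
    print(naive_interior_points(7))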
In this paper, a new progressive mesh algorithm is introduced in order to perform fast physical simulations using a lattice Boltzmann method (LBM) on a single-node multi-GPU architecture. This algorithm automatically meshes the simulation domain according to the propagation of fluids. The method is also useful for performing various types of simulations on complex geometries. Combined with the massive parallelism of GPUs, this algorithm achieves very good performance in comparison with the static mesh methods used in the literature. Several simulations are presented in order to evaluate the algorithm.
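To fix ideas about the kernel being meshed, here is a minimal single-grid D2Q9 collide-and-stream step (a generic textbook LBM update with periodic boundaries, not the paper's multi-GPU progressive-mesh code).

    import numpy as np

    # D2Q9 lattice velocities and weights
    c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                  [1, 1], [-1, 1], [-1, -1], [1, -1]])
    w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

    def lbm_step(f, tau=0.6):
        """One BGK collision + streaming update of populations f (9, nx, ny)."""
        rho = f.sum(axis=0)
        u = np.einsum('qi,qxy->ixy', c, f) / rho
        cu = np.einsum('qi,ixy->qxy', c, u)
        usq = (u ** 2).sum(axis=0)
        feq = w[:, None, None] * rho * (1 + 3 * cu + 4.5 * cu ** 2 - 1.5 * usq)
        f = f - (f - feq) / tau                        # BGK collision
        for q in range(9):                             # periodic streaming
            f[q] = np.roll(f[q], shift=tuple(c[q]), axis=(0, 1))
        return f

    f = np.tile(w[:, None, None], (1, 64, 64))         # uniform fluid at rest
    f = lbm_step(f)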
High resolution rotational Terahertz (THz) spectroscopy has been widely applied to the studies of numerous polar gas phase molecules, in particular volatile organic compounds (VOCs). During the storage of foodstuffs packed under a protective atmosphere, microbial activity will lead to the generation of a complex mixture of trace gases that could be used as food spoilage indicators. Here we have demonstrated that the THz instrumentation presently available provides sufficient sensitivity and selectivity to monitor the generation of hydrogen sulfide (H2S) in the headspace of packed Atlantic salmon (Salmo salar) fillet portions. A comprehensive comparison was made by selective-ion flow-tube mass spectrometry (SIFT-MS) in order to validate the THz measurements and protocol. The detectivity of a range of alternative compounds for this application is also provided, based on the experimental detection limit observed and molecular spectroscopic properties. Molecules like ethanol, methyl mercaptan and ammonia are suitable indicators with the presently available sensitivity levels, while dimethyl sulfide, acetone and butanone may be considered with a sensitivity improvement of 2 orders of magnitude.
This study explores the economic value of Aleppo pine forests, a unique and threatened ecosystem in the border region of central Tunisia. These forests play a vital role in supporting small rural communities, but face increasing pressures and restrictions on their use. This research aims to assign a monetary value to forest conservation, considering the region's specific socio-economic context. Strategies for empowering local residents as key actors in developing sustainable cross-border initiatives are further investigated. Employing the contingent valuation method, a survey of 350 local residents and international users was conducted to assess their willingness to pay for forest conservation efforts. Logistic regression analysis revealed that sociodemographic factors, such as monthly income and preferred payment method, significantly influence both the stated willingness to pay and the likelihood of participation. These findings highlight the feasibility and importance of reconciling economic development with ecological sustainability in this critical region.
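For readers unfamiliar with the method, the sketch below (simulated data, hypothetical variable names and coefficients; not the study's dataset) shows the kind of logistic regression used in contingent valuation: acceptance of a proposed payment regressed on the bid amount and sociodemographic covariates such as income.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 350                                            # survey size, as above
    log_income = rng.normal(6.0, 0.5, size=n)          # log monthly income
    bid = rng.choice([5.0, 10.0, 20.0, 40.0], size=n)  # proposed payment
    eta = 0.8 * log_income - 0.08 * bid - 3.5          # invented coefficients
    accept = (rng.random(n) < 1 / (1 + np.exp(-eta))).astype(float)

    X = sm.add_constant(np.column_stack([log_income, bid]))
    fit = sm.Logit(accept, X).fit(disp=0)
    print(fit.params)    # recovers roughly (-3.5, 0.8, -0.08)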
We study quadri-algebras and dual quadri-algebras. We describe the free quadri-algebra on one generator as a subobject of the Hopf algebra of permutations FQSym, proving a conjecture due to Aguiar and Loday, using that the operad of quadri-algebras can be obtained from the operad of dendriform algebras by both black and white Manin products. We also give a combinatorial description of free dual quadri-algebras. A notion of quadri-bialgebra is also introduced, with applications to the Hopf algebras FQSym and WQSym.
Whispering gallery mode (WGM) microcavities feature ultrahigh Q-factors and small mode volumes, offering strong light-matter interactions for sensing applications. However, unmodified surfaces are weakly responsive to gas-phase refractive index changes, limiting trace gas detection. In this work, we propose a novel dissipative sensing scheme based on a non-functionalized WGM microcavity and experimentally demonstrate its feasibility and performance through trace-level CO2 detection. Unlike conventional dispersive sensing that tracks resonance frequency shifts, our approach leverages enhanced local fields and thermal effects at the coupling region to convert weak absorption into measurable variations in resonance depth. A modulation-demodulation method suppresses low-frequency noise, with parameters optimized experimentally. The sensor achieves quantitative detection over 1.5-400 ppm (R² > 0.99), ~1.13% accuracy, and a minimum detection limit of 0.12 ppb at 424 s integration time, five orders of magnitude better than dispersive WGM sensors. Long-term CO2 monitoring further demonstrates stability and resistance to environmental perturbations. Compared with state-of-the-art cavity-enhanced and photoacoustic sensors, our system delivers a baseline detection limit at least an order of magnitude lower while maintaining compact size, ambient pressure operation, and low cost, highlighting its potential for scalable, real-world deployment.
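The modulation-demodulation idea can be illustrated generically (all parameters invented; this is a lock-in-style sketch, not the authors' processing chain): multiplying the measured signal by the modulation reference and low-pass filtering moves the quantity of interest away from low-frequency drift before readout.

    import numpy as np
    from scipy.signal import butter, filtfilt

    fs, f_mod, T = 50_000, 1_000, 1.0               # sample rate, mod freq, duration
    t = np.arange(0, T, 1 / fs)
    signal = 0.01 * np.sin(2 * np.pi * f_mod * t)   # modulated absorption signal
    drift = np.cumsum(np.random.default_rng(0).normal(0, 1e-4, t.size))
    x = signal + drift                              # measurement with 1/f-like noise

    ref = np.sin(2 * np.pi * f_mod * t)
    b, a = butter(4, 50 / (fs / 2))                 # 50 Hz low-pass
    demod = 2 * filtfilt(b, a, x * ref)             # recovers the 0.01 amplitude
    print(demod[t.size // 2])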
We establish and explore a relationship between two approaches to moment-cumulant relations in free probability theory: on one side the main approach, due to Speicher, given in terms of Möbius inversion on the lattice of noncrossing partitions, and on the other side the more recent non-commutative shuffle-algebra approach, where the moment-cumulant relations take the form of certain exponential-logarithm relations. We achieve this by exhibiting two operad structures on (noncrossing) partitions, different in nature: one is an ordinary, non-symmetric operad whose composition law is given by insertion into gaps between elements, the other is a coloured, symmetric operad with composition law expressing refinement of blocks. We show that these operad structures interact so as to make the corresponding incidence bialgebra of the former a comodule bialgebra for the latter. Furthermore, this interaction is compatible with the shuffle structure and thus unveils how the two approaches are intertwined. Moreover, the constructions and results are general enough to extend to ordinary set partitions.
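For orientation, the Speicher-side relations referred to above are the standard noncrossing-partition formulas (quoted here for reference, not a new result):

    % Moments as sums over noncrossing partitions of products of free
    % cumulants, and the Moebius inversion on NC(n) that expresses the
    % cumulants back in terms of the moments:
    m_n = \sum_{\pi \in NC(n)} \prod_{B \in \pi} \kappa_{|B|},
    \qquad
    \kappa_n = \sum_{\pi \in NC(n)} \mu(\pi, 1_n) \prod_{B \in \pi} m_{|B|},

where $NC(n)$ is the lattice of noncrossing partitions of $\{1,\dots,n\}$ and $\mu$ its Möbius function; the shuffle-algebra approach repackages the same data as exponential-logarithm relations.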
This work proposes a novel convex-non-convex formulation of the image segmentation and image completion problems. The proposed approach is based on the minimization of a functional involving two distinct regularization terms: one promotes low-rank structure in the solution, while the other enforces smoothness. To solve the resulting optimization problem, we employ the alternating direction method of multipliers (ADMM). A detailed convergence analysis of the algorithm is provided, and the performance of the method is demonstrated through a series of numerical experiments.
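A hedged sketch of an ADMM splitting for a simplified convex surrogate of this model (our assumptions: denoising fidelity, nuclear-norm low-rank term, quadratic smoothness, periodic boundaries; the paper's actual functional is convex-non-convex and covers segmentation and completion):

    # ADMM for  min_X  0.5||X - Y||_F^2 + alpha||X||_* + 0.5*lam||grad X||^2
    # via the splitting X = Z.
    import numpy as np

    def admm_lowrank_smooth(Y, alpha=1.0, lam=1.0, rho=1.0, iters=200):
        n1, n2 = Y.shape
        i, j = np.meshgrid(np.arange(n1), np.arange(n2), indexing='ij')
        lap = 4 - 2 * np.cos(2 * np.pi * i / n1) - 2 * np.cos(2 * np.pi * j / n2)
        X, Z, U = Y.copy(), Y.copy(), np.zeros_like(Y)
        Yhat = np.fft.fft2(Y)
        for _ in range(iters):
            # X-update: prox of alpha||.||_* = singular-value soft-thresholding
            W, s, Vt = np.linalg.svd(Z - U, full_matrices=False)
            X = (W * np.maximum(s - alpha / rho, 0)) @ Vt
            # Z-update: fidelity + smoothness, solved exactly in Fourier space
            Z = np.fft.ifft2((Yhat + rho * np.fft.fft2(X + U))
                             / (1 + rho + lam * lap)).real
            U += X - Z
        return X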
Likelihood inference for max-stable random fields is in general impossible because their finite-dimensional probability density functions are unknown or cannot be computed efficiently. The weighted composite likelihood approach, which utilizes lower-dimensional marginal likelihoods (typically of pairs or triples of sites that are not too distant), is favored instead. In this paper, we consider the family of spatial max-stable Brown-Resnick random fields associated with isotropic fractional Brownian fields. We assume that the sites are given by a single realization of a homogeneous Poisson point process restricted to $\mathbf{C}=(-1/2,1/2]^{2}$ and that the random field is observed at these sites. As the intensity increases, we study the asymptotic properties of the composite likelihood estimators of the scale and Hurst parameters of the fractional Brownian fields under different weighting strategies: we exclude either pairs that are not edges of the Delaunay triangulation or triples that are not vertices of its triangles.
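The Delaunay-based pair weighting can be made concrete as follows (a sketch; pair_loglik is a hypothetical stand-in for the bivariate Brown-Resnick log-likelihood): only pairs of sites forming edges of the Delaunay triangulation contribute to the composite log-likelihood.

    import numpy as np
    from scipy.spatial import Delaunay

    def delaunay_pairs(sites):
        """Edges of the Delaunay triangulation of the (n, 2) site array."""
        tri = Delaunay(sites)
        edges = set()
        for simplex in tri.simplices:            # each simplex is a triangle
            for a in range(3):
                for b in range(a + 1, 3):
                    edges.add(tuple(sorted((simplex[a], simplex[b]))))
        return sorted(edges)

    def composite_loglik(theta, sites, obs, pair_loglik):
        return sum(pair_loglik(theta, sites[i], sites[j], obs[i], obs[j])
                   for i, j in delaunay_pairs(sites))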