Rio de Janeiro State University
Polymer-based plastics exhibit time-dependent deformation under constant stress, known as creep, which can lead to rupture or static fatigue. A common misconception is that materials under tolerable static loads remain unaffected over time. Accurate long-term deformation predictions require experimental creep data, but conventional models based on simple rheological elements like springs and dampers often fall short, lacking the flexibility to capture the power-law behaviour intrinsic to creep processes. The springpot, a fractional calculus-based element, has been used to provide a power-law relationship; however, its fixed-order nature limits its accuracy, particularly when the deformation rate evolves over time. This article introduces a variable-order (VO) springpot model that dynamically adapts to the evolving viscoelastic properties of polymeric materials during creep, capturing changes between glassy, transition and rubbery phases. Model parameters are calibrated using a robust procedure for model identification based on the cross-entropy (CE) method, resulting in physically consistent and accurate predictions. This advanced modelling framework not only overcomes the limitations of the fixed-order models but also establishes a foundation for applying VO mechanics to other viscoelastic materials, providing a valuable tool for predicting long-term material performance in structural applications.
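For context, the fixed-order springpot that the proposed variable-order model generalizes can be summarized by its standard fractional-calculus constitutive law and creep response (the notation below is the conventional one and is assumed here, not taken from the article):

$$
\sigma(t) = E\,\tau^{\alpha}\,\frac{d^{\alpha}\varepsilon(t)}{dt^{\alpha}},
\qquad
\left.\varepsilon(t)\right|_{\sigma=\sigma_0} = \frac{\sigma_0}{E\,\Gamma(1+\alpha)}\left(\frac{t}{\tau}\right)^{\alpha},
\qquad 0 \le \alpha \le 1,
$$

so that $\alpha = 0$ recovers a spring, $\alpha = 1$ a dashpot, and a variable-order model replaces the constant $\alpha$ with a time-dependent $\alpha(t)$ that can track the glassy, transition and rubbery phases.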
Exploring the equation of state of dense matter is an essential part of interpreting the observable properties of neutron stars. We present here the first results for dense matter in the zero-temperature limit generated by the MUSES Calculation Engine, a composable workflow management system that orchestrates calculation and data processing stages comprising a collection of software modules designed within the MUSES framework. The modules presented in this work calculate equations of state using algorithms spanning three different theories/models: (1) Crust Density Functional Theory, valid starting at low densities, (2) Chiral Effective Field Theory, valid around saturation density, and (3) the Chiral Mean Field model, valid beyond saturation density. Lepton contributions are added through the Lepton module to each equation of state, ensuring charge neutrality and the possibility of $\beta$-equilibrium. Using the Synthesis module, we match the three equations of state using different thermodynamic variables and different methods. We then couple the complete equation of state to a novel full-general-relativity solver (QLIMR) module that calculates neutron star properties. We find that the matching performed using different thermodynamic variables affects the ranges obtained for neutron star masses and radii differently (although never beyond a few percent difference). We also investigate the universality of equation-of-state-independent relations for our matched stars. Finally, for the first time, we use the Flavor Equilibration module to estimate bulk viscosity and flavor relaxation charge fraction and rates (at low temperature) for Chiral Effective Field Theory and the Chiral Mean Field model.
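For readers less familiar with the final step of such pipelines, the static, spherically symmetric limit of the neutron-star-structure calculation is governed by the standard Tolman-Oppenheimer-Volkoff equations (shown only as background, in units $G = c = 1$; the QLIMR module goes beyond this limit):

$$
\frac{dP}{dr} = -\frac{\left[\varepsilon(r)+P(r)\right]\left[m(r)+4\pi r^{3}P(r)\right]}{r\left[r-2m(r)\right]},
\qquad
\frac{dm}{dr} = 4\pi r^{2}\varepsilon(r),
$$

integrated outward from a chosen central pressure until $P(R) = 0$, which yields the radius $R$ and mass $M = m(R)$ for each equation of state.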
Transfer learning has become an essential tool in modern computer vision, allowing practitioners to leverage backbones pretrained on large datasets to train successful models from limited annotated data. Choosing the right backbone is crucial, especially for small datasets, since final performance depends heavily on the quality of the initial feature representations. While prior work has conducted benchmarks across various datasets to identify universal top-performing backbones, we demonstrate that backbone effectiveness is highly dataset-dependent, especially in low-data scenarios where no single backbone consistently excels. To overcome this limitation, we introduce dataset-specific backbone selection as a new research direction and investigate its practical viability in low-data regimes. Since exhaustive evaluation is computationally impractical for large backbone pools, we formalize Vision Backbone Efficient Selection (VIBES) as the problem of searching for high-performing backbones under computational constraints. We define the solution space, propose several heuristics, and demonstrate the feasibility of VIBES for low-data image classification through experiments on four diverse datasets. Our results show that even simple search strategies can find well-suited backbones within a pool of over 1,300 pretrained models, outperforming generic benchmark recommendations within just ten minutes of search time on a single GPU (NVIDIA RTX A5000).
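A minimal sketch of what a budgeted backbone search can look like in practice is given below; it is illustrative only, and the pool, the `extract_features` helper and the linear-probe scoring are assumptions rather than the heuristics proposed in the paper.

```python
# Illustrative sketch of a time-budgeted backbone search (not the VIBES code:
# the pool, feature extractor, and scoring shown here are assumptions).
import time
import random
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def probe_score(features, labels):
    """Cheap proxy for backbone quality: a linear probe on frozen features."""
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, features, labels, cv=3).mean()

def budgeted_backbone_search(pool, extract_features, X, y, budget_s=600):
    """Randomly sample backbones from `pool` and keep the best linear-probe
    score found before the time budget (in seconds) runs out."""
    best_name, best_score = None, -np.inf
    start = time.time()
    for name in random.sample(pool, len(pool)):
        if time.time() - start > budget_s:
            break
        feats = extract_features(name, X)   # frozen-backbone embeddings (hypothetical helper)
        score = probe_score(feats, y)
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score
```

The point is simply that cheap proxy evaluations on frozen features allow many candidates to be screened within a fixed wall-clock budget.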
This paper introduces CEopt (this https URL), a MATLAB tool leveraging the Cross-Entropy method for non-convex optimization. Due to the relative simplicity of the algorithm, it provides a kind of transparent "gray-box" optimization solver, with intuitive control parameters. Unique in its approach, CEopt effectively handles both equality and inequality constraints using an augmented Lagrangian method, offering robustness and scalability for moderately sized complex problems. Through select case studies, the package's applicability and effectiveness in various optimization scenarios are showcased, marking CEopt as a practical addition to optimization research and application toolsets.
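For readers unfamiliar with the underlying algorithm, a minimal sketch of the Cross-Entropy method for unconstrained minimization is shown below in Python; it illustrates only the sampling/elite-refit loop and does not reproduce CEopt's MATLAB interface or its augmented Lagrangian constraint handling.

```python
# Minimal sketch of the Cross-Entropy method for unconstrained minimization.
import numpy as np

def cross_entropy_min(f, mu0, sigma0, n_samples=100, n_elite=10,
                      max_iter=200, tol=1e-8, seed=0):
    rng = np.random.default_rng(seed)
    mu, sigma = np.asarray(mu0, float), np.asarray(sigma0, float)
    for _ in range(max_iter):
        X = rng.normal(mu, sigma, size=(n_samples, mu.size))  # sample candidates
        idx = np.argsort([f(x) for x in X])[:n_elite]          # keep the elite set
        elite = X[idx]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0)      # refit the sampling distribution
        if sigma.max() < tol:                                   # distribution has collapsed
            break
    return mu, f(mu)

# Example: minimize a shifted sphere function
x_star, f_star = cross_entropy_min(lambda x: np.sum((x - 3.0) ** 2),
                                   mu0=np.zeros(2), sigma0=5 * np.ones(2))
```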
Accurate prediction of remaining useful life (RUL) under creep conditions is crucial for the design and maintenance of industrial equipment operating at high temperatures. Traditional deterministic methods often overlook significant variability in experimental data, leading to unreliable predictions. This study introduces a probabilistic framework to address uncertainties in predicting creep rupture time. We utilize robust regression methods to minimize the influence of outliers and enhance model estimates. Global sensitivity analysis based on Sobol indices identifies the most influential parameters, followed by Monte Carlo simulations to determine the probability distribution of the material's RUL. Model selection techniques, including the Akaike and Bayesian information criteria, ensure that the optimal predictive model is chosen. This probabilistic approach allows for the delineation of safe operational limits with quantifiable confidence levels, thereby improving the reliability and safety of high-temperature applications. The framework's versatility also allows integration with various mathematical models, offering a comprehensive understanding of creep behavior.
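As an illustration of the Monte Carlo step, the sketch below propagates parameter uncertainty through a Larson-Miller-type creep-life relation; the model form and every numerical value are hypothetical placeholders, not the fitted models or data from the study.

```python
# Sketch of Monte Carlo propagation of parameter uncertainty into a creep
# rupture-time distribution. All numbers below are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
T = 873.15           # operating temperature [K], assumed
stress = 120.0       # applied stress [MPa], assumed

# Uncertain model parameters (means/standard deviations are illustrative only)
a = rng.normal(27_000.0, 300.0, n)   # Larson-Miller intercept
b = rng.normal(3_000.0, 100.0, n)    # Larson-Miller slope vs log10(stress)
C = rng.normal(20.0, 0.5, n)         # Larson-Miller constant

lmp = a - b * np.log10(stress)       # Larson-Miller parameter at this stress
t_rupture = 10.0 ** (lmp / T - C)    # rupture time [h]

print("median RUL [h]:", np.median(t_rupture))
print("5th percentile (safe-life) [h]:", np.percentile(t_rupture, 5))
```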
Hairy black holes obtained by gravitational decoupling (GD) are probed to derive the gravitational waveforms produced by perturbation theory applied to these compact objects. Using the Regge-Wheeler and Zerilli equations governing the metric perturbations and applying a higher-order WKB method, the quasinormal modes (QNMs) are computed and discussed. Comparison with the QNMs produced in the ringdown phase of Reissner-Nordström black hole solutions reveals a clear physical signature of the primary hair imprinted on the gravitational waveforms of hairy GD black holes.
A common defect found when reproducing old vinyl and gramophone recordings with mechanical devices is the long pulse with significant low-frequency content caused by the interaction of the arm-needle system with deep scratches or even breakages on the media surface. Previous approaches to suppressing such pulses in digital counterparts of the recordings depend on a prior estimation of the pulse location, usually performed via heuristic methods. This paper proposes a novel Bayesian approach capable of jointly estimating the pulse location; interpolating the almost annihilated signal underlying the strong discontinuity that initiates the pulse; and estimating the long pulse tail with a simple Gaussian Process, allowing its suppression from the corrupted signal. The posterior distribution for the model parameters as well as for the pulse is explored via Markov-chain Monte Carlo (MCMC) algorithms. Controlled experiments indicate that the proposed method, while requiring significantly less user intervention, achieves perceptual results similar to those of previous approaches and performs well when dealing with naturally degraded signals.
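The sketch below illustrates, on synthetic data, the idea of modelling a smooth low-frequency pulse tail with a Gaussian Process and subtracting it; it is not the paper's joint Bayesian model, which also estimates the pulse location and the underlying signal via MCMC.

```python
# Toy illustration of the Gaussian-Process tail-modelling idea on synthetic data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 400)[:, None]
tail = 0.8 * np.exp(-4 * t.ravel()) * np.cos(2 * np.pi * 3 * t.ravel())  # synthetic long-pulse tail
audio = rng.normal(0, 0.05, t.size)                                      # underlying signal, treated as noise here
y = tail + audio                                                         # corrupted observation

kernel = 1.0 * RBF(length_scale=0.05) + WhiteKernel(noise_level=0.05 ** 2)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, y)
tail_hat = gp.predict(t)          # smooth estimate of the pulse tail
restored = y - tail_hat           # subtract the estimated pulse from the corrupted signal
```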
This work proposes a parametric probabilistic approach to model damage accumulation using the double linear damage rule (DLDR) considering the existence of limited experimental fatigue data. A probabilistic version of DLDR is developed in which the joint distribution of the knee-point coordinates is obtained as a function of the joint distribution of the DLDR model input parameters. Considering information extracted from experiments containing a limited number of data points, an uncertainty quantification framework based on the Maximum Entropy Principle and Monte Carlo simulations is proposed to determine the distribution of fatigue life. The proposed approach is validated using fatigue life experiments available in the literature.
Cross-assignment of directional wave spectra is a critical task in wave data assimilation. Traditionally, most methods rely on two-parameter spectral distances or energy-ranking approaches, which often fail to account for the complexities of the wave field, leading to inaccuracies. To address these limitations, we propose the Controlled Four-Parameter Method (C4PM), which independently considers four integrated wave parameters. This method enhances the accuracy and robustness of cross-assignment by offering flexibility in assigning weights and controls to each wave parameter. We compare C4PM with a two-parameter spectral distance method using data from two buoys moored 13 km apart in deep water. Although both methods produce negligible bias and high correlation, C4PM demonstrates superior performance by preventing the occurrence of outliers and achieving a lower root mean square error across all parameters. Its negligible computational cost and flexibility of customization make C4PM a valuable tool for wave data assimilation, improving the reliability of forecasts and model validations.
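A toy sketch of a controlled, weighted four-parameter distance in the spirit of C4PM is given below; the parameter set, normalizations, weights and control thresholds are assumptions chosen for illustration, not the values used in the paper.

```python
# Sketch of a weighted, controlled four-parameter cross-assignment distance
# (all weights, controls, and example values are illustrative assumptions).
import numpy as np

def partition_distance(p, q, weights=(1.0, 1.0, 1.0, 1.0),
                       controls=(1.0, 2.0, 60.0, 45.0)):
    """p, q: dicts with significant wave height hs [m], peak period tp [s],
    peak direction dp [deg], and directional spread sp [deg].
    `controls` caps the allowed per-parameter mismatch before rejection."""
    keys = ("hs", "tp", "dp", "sp")
    diffs = []
    for k, ctrl in zip(keys, controls):
        d = abs(p[k] - q[k])
        if k == "dp":                       # wrap directional difference to [0, 180]
            d = min(d, 360.0 - d)
        if d > ctrl:                        # control: reject implausible pairings
            return np.inf
        diffs.append(d / ctrl)              # normalize by the control value
    return float(np.dot(weights, diffs))

# Assign an observed partition to the closest modelled one
obs = {"hs": 2.1, "tp": 11.5, "dp": 210.0, "sp": 25.0}
model_partitions = [{"hs": 2.0, "tp": 11.0, "dp": 205.0, "sp": 28.0},
                    {"hs": 0.9, "tp": 6.0,  "dp": 120.0, "sp": 35.0}]
best = min(model_partitions, key=lambda q: partition_distance(obs, q))
```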
The aim of this paper is to discuss the use of Haar scattering networks, a very simple architecture that naturally supports a large number of stacked layers with very few parameters, in a relatively broad set of pattern recognition problems, including regression and classification tasks. This architecture basically consists of stacked convolutional filters, which can be thought of as a generalization of Haar wavelets, followed by non-linear operators that aim to extract symmetries and invariances; the resulting features are later fed into a classification/regression algorithm. We show that good results can be obtained with the proposed method for both kinds of tasks. We outperformed the best available algorithms in 4 out of 18 important data classification problems, and obtained a more robust performance than ARIMA and ETS time series methods in regression problems for data with strong periodicities.
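A minimal sketch of the core operation is shown below: each layer pairs coefficients and propagates their sum and the absolute value of their difference, the non-linearity that extracts invariants. Fixed adjacent pairing is assumed here for simplicity, whereas the pairings in the actual method can be chosen or optimized.

```python
# Minimal sketch of a Haar scattering transform with fixed adjacent pairing.
import numpy as np

def haar_scattering_layer(x):
    """x: array of shape (..., 2n). Returns an array of shape (..., 2n) holding
    the n pairwise sums followed by the n pairwise absolute differences."""
    a, b = x[..., 0::2], x[..., 1::2]
    return np.concatenate([a + b, np.abs(a - b)], axis=-1)

def haar_scattering(x, depth=3):
    """Stack `depth` layers; the final coefficients feed a standard
    classifier or regressor (e.g. logistic regression or gradient boosting)."""
    for _ in range(depth):
        x = haar_scattering_layer(x)
    return x

features = haar_scattering(np.random.rand(32, 64), depth=3)  # 32 samples, 64-dim inputs
```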
We review our results on light scalar quarkonia ($\bar{q}q$ and four-quark states) from (inverse) QCD Laplace sum rules (LSR) and their ratios $\mathcal{R}$ within stability criteria and including higher-order perturbative (PT) corrections up to the (estimated) $\mathcal{O}(\alpha_{s}^{5})$. As the Operator Product Expansion (OPE) usually converges for $D \leqslant 6$-$8$, we evaluated the QCD spectral functions at Lowest Order (LO) of PT QCD and up to the $D=6$ dimension vacuum condensates. We require that the optimal results obey the constraint that the pole (resonance) contribution to the spectral integral is larger than the QCD continuum one, which excludes an on-shell mass around $(500-600)$ MeV obtained for values of the QCD continuum threshold $t_c \leqslant (1\sim 1.5)$ GeV$^2$. Our results for the different assignments of the scalar mesons are compiled in Tables 1 to 3.
We present a method for solving two minimal problems for relative camera pose estimation from three views, which are based on three-view correspondences of i) three points and one line and the novel case of ii) three points and two lines through two of the points. These problems are too difficult to be efficiently solved by state-of-the-art Groebner basis methods. Our method is based on a new efficient homotopy continuation (HC) solver framework, MINUS, which dramatically speeds up previous HC solving by specializing HC methods to generic cases of our problems. We characterize their number of solutions and show with simulated experiments that our solvers are numerically robust and stable under image noise, a key contribution given the borderline intractable degree of nonlinearity of trinocular constraints. We show in real experiments that i) SIFT feature location and orientation provide good enough point-and-line correspondences for three-view reconstruction and ii) we can solve difficult cases with too few or too noisy tentative matches, where state-of-the-art structure-from-motion initialization fails.
I characterize stochastic non-tâtonnement processes (SNTP) and argue that they are a natural outcome of General Equilibrium Theory. To do so, I revisit classical demand theory to define a normalized Walrasian demand and a diffeomorphism that flattens indifference curves. These diffeomorphisms are applied to the three canonical manifolds in the consumption domain (i.e., the indifference and offer hypersurfaces and the trade hyperplane) to analyze their images in the normalized and flat domains. In addition, relations to the set of Pareto optimal allocations in Arrow-Debreu and overlapping generations economies are discussed. Then, I derive, for arbitrary non-tâtonnement processes, an Attraction Principle based on the dynamics of marginal substitution rates seen in the "floor" of the flat domain. This motivates the definition of SNTP and, specifically, of Bayesian ones (BSNTP). When all utility functions are attractive and sharp, these BSNTP are particularly well behaved and lead directly to the calculation of stochastic trade outcomes over the contract curve, which are used to model price stickiness and markets' responses to sustained economic disequilibrium, and to prove a stochastic version of the First Welfare Theorem.
A well-known feature of overlapping generations economies is that the First Welfare Theorem fails and equilibrium may be inefficient. The Cass (1972) criterion furnishes a necessary and sufficient condition for efficiency, but does not address the existence of efficient equilibria, and Cass, Okuno, and Zilcha (1979) provide nonexistence examples. I develop an algorithm based on successive approximations of a nonstationary, consumption-loan, prone-to-savings, overlapping generations economy with finite-lived heterogeneous agents to find elements of its set of equilibria as the limit of nested compact sets. These compact sets result from a backward calculation through the equilibrium equations that starts from the set of Pareto optimal equilibria of well-behaved tail economies. The equilibria calculated through this algorithm satisfy the Cass (1972) criterion and are used to derive existence results for efficient equilibria.
We review our estimations of the light scalar $\bar{q}q$, $(\bar{q}q')(\bar{q'}q)$ and $\overline{qq'}qq'$ ($q,q'\equiv u,d,s$) states from relativistic Laplace sum rules (LSR) within stability criteria and including higher-order perturbative (PT) corrections up to the (estimated) N5LO. We evaluate the QCD spectral functions at Lowest Order (LO) of PT QCD and up to the $D=6$ dimension of quark and gluon condensates. Using stability criteria and the constraint that the pole contribution is larger than the QCD continuum one ($R_{P/C}\geqslant 1$), our results exclude an on-shell mass around $(500-600)$ MeV obtained for values of the QCD continuum threshold $t_c \leqslant (1\sim 1.5)$ GeV$^2$. The complete results for the different scalar states are given in Tables 1 to 3. We conclude from the complete analysis that the assignment of the nature of the scalar mesons is not crystal clear and needs further studies.
In this article we present a numerical code, based on the collocation or pseudospectral method, which integrates the equations of the BSSN formalism in cylindrical coordinates. In order to validate the code, we carried out a series of tests using three groups of initial data: i) pure gauge evolution; ii) the Teukolsky quadrupole solution for low amplitudes; and iii) Brill and Teukolsky solutions with higher amplitudes, which account for a deviation from the linear regime when compared to the case of low amplitudes. In practically all cases, violations of the Hamiltonian and momentum constraints were analyzed. We also analyze the behavior of the lapse function, which can characterize the collapse of gravitational waves into black holes. Furthermore, all three groups of tests used different computational mesh resolutions and different gauge choices, thus providing a general scan of most of the numerical solutions adopted.
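As a generic illustration of the spectral accuracy that motivates collocation/pseudospectral schemes, the snippet below differentiates a smooth periodic function with a Fourier pseudospectral derivative; it is a toy example and not the article's cylindrical BSSN implementation.

```python
# Fourier pseudospectral differentiation on a periodic grid (toy example).
import numpy as np

N = 64
x = 2 * np.pi * np.arange(N) / N            # periodic collocation points on [0, 2*pi)
k = np.fft.fftfreq(N, d=1.0 / N) * 1j        # spectral derivative operator i*k

u = np.exp(np.sin(x))                        # smooth periodic test function
du_spectral = np.real(np.fft.ifft(k * np.fft.fft(u)))
du_exact = np.cos(x) * np.exp(np.sin(x))

print("max error:", np.max(np.abs(du_spectral - du_exact)))  # near machine precision
```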
Recent developments in the construction of generalized Dirac duals have revealed, within the structure of the Clifford algebra $\mathbb{C}\otimes\mathcal{C}\ell_{1,3}$, the existence of distinct algebraic formulations of spinor duals with potential applications in quantum field theoretic models. In this work, after reviewing the matrix formulation, we employ the recent covariant formulation of the generalized spinor dual and establish its interplay with the algebra $\mathcal{C}\ell_{1,3}$. We construct dual mappings governed by groups denoted by $G_\Omega$ and introduce the notion of $\Omega$-equivalence classes as a tool to classify dual spinors from a group-theoretic perspective.
The identification of vascular networks is an important topic in the medical image analysis community. While most methods focus on single-vessel tracking, the few solutions that exist for tracking complete vascular networks are usually computationally intensive and require a lot of user interaction. In this paper we present a method to track full vascular networks iteratively using a single starting point. Our approach is based on a cloud of sampling points distributed over concentric spherical layers. We also propose a vessel model and a metric of how well a sample point fits this model. We then formulate the network tracking as a min-cost flow problem and propose a novel optimization scheme to iteratively track the vessel structure while inherently handling bifurcations and paths. The method was tested using both synthetic and real images. On the 9 different datasets of synthetic blood vessels, we achieved maximum accuracies of more than 98%. We further use the synthetic datasets to analyse the sensitivity of our method to parameter settings, showing the robustness of the proposed algorithm. For real images, we used coronary, carotid and pulmonary data to segment vascular structures and present the visual results. Still for real images, we present numerical and visual results for networks of nerve fibers in the olfactory system. Further visual results also show the potential of our approach for identifying vascular network topologies. The presented method delivers good results for the several different datasets tested and has potential for segmenting vessel-like structures. Also, the topology information, inherently extracted, can be used in further analysis for computer-aided diagnosis and surgical planning. Finally, the method's modular aspect holds potential for problem-oriented adjustments and improvements.
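The snippet below shows, on an invented toy graph, how a branching-path selection can be cast as a min-cost flow problem with networkx; the paper's formulation over concentric spherical sampling layers is considerably more elaborate.

```python
# Toy min-cost flow formulation of a branching path (all nodes, costs, and
# capacities here are invented for illustration).
import networkx as nx

G = nx.DiGraph()
G.add_node("seed", demand=-2)        # tracking starts here; pushes 2 units of flow
G.add_node("branch_A", demand=1)     # each unit of flow ends at a vessel branch tip
G.add_node("branch_B", demand=1)

# Edge cost = how poorly a sample point fits the vessel model (lower is better)
G.add_edge("seed", "bifurcation", weight=1, capacity=2)
G.add_edge("bifurcation", "branch_A", weight=2, capacity=1)
G.add_edge("bifurcation", "branch_B", weight=3, capacity=1)
G.add_edge("seed", "branch_B", weight=10, capacity=1)  # a costlier detour

flow = nx.min_cost_flow(G)           # {'seed': {'bifurcation': 2, ...}, ...}
print(flow)
```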
This paper addresses multivariable gradient-based extremum seeking control (ESC) subject to saturation. Two distinct saturation scenarios are investigated here: saturation acting on the input of the function to be optimized, which is addressed using an anti-windup compensation strategy, and saturation affecting the gradient estimate. In both cases, the unknown Hessian matrix is represented using a polytopic uncertainty description, and sufficient conditions in the form of linear matrix inequalities (LMIs) are derived to design a stabilizing control gain. The proposed conditions guarantee exponential stability of the origin for the average closed-loop system under saturation constraints. With the proposed design conditions, non-diagonal control gain matrices can be obtained, generalizing conventional ESC designs that typically rely on diagonal structures. Stability and convergence are rigorously proven using the Averaging Theory for dynamical systems with Lipschitz continuous right-hand sides. Numerical simulations illustrate the effectiveness of the proposed ESC algorithms, confirming convergence even in the presence of saturation.
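For orientation, a plain single-input gradient-based extremum seeking loop with a saturated input is sketched below; it is textbook ESC with hand-picked gains and does not reproduce the paper's LMI-based gain design or anti-windup compensation.

```python
# Textbook single-input gradient-based extremum seeking with a saturated input.
import numpy as np

J = lambda u: (u - 2.0) ** 2 + 1.0          # unknown cost map (minimum at u* = 2)
sat = lambda u, lim=3.0: np.clip(u, -lim, lim)

dt, T = 1e-3, 60.0
a, omega, k = 0.2, 40.0, 0.5                # dither amplitude/frequency, adaptation gain
tau = 0.2                                    # demodulation low-pass time constant
theta, g_hat = 0.0, 0.0                     # parameter estimate, filtered gradient estimate

for n in range(int(T / dt)):
    t = n * dt
    u = sat(theta + a * np.sin(omega * t))  # saturated plant input with dither
    y = J(u)
    demod = (2.0 / a) * y * np.sin(omega * t)
    g_hat += dt / tau * (demod - g_hat)     # low-pass filtered gradient estimate
    theta += dt * (-k * g_hat)              # gradient descent on the estimate

print("theta ->", theta)                     # settles near the minimizer u* = 2
```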