University of Mazandaran
As large language models (LLMs) become increasingly embedded in our daily lives, evaluating their quality and reliability across diverse contexts has become essential. While comprehensive benchmarks exist for assessing LLM performance in English, there remains a significant gap in evaluation resources for other languages. Moreover, because most LLMs are trained primarily on data rooted in European and American cultures, they often lack familiarity with non-Western cultural contexts. To address this limitation, our study focuses on the Persian language and Iranian culture. We introduce 19 new evaluation datasets specifically designed to assess LLMs on topics such as Iranian law, Persian grammar, Persian idioms, and university entrance exams. Using these datasets, we benchmarked 41 prominent LLMs, aiming to bridge the existing cultural and linguistic evaluation gap in the field.
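As a concrete illustration of how multiple-choice evaluation datasets of this kind are typically scored, here is a minimal sketch of an accuracy loop; the dataset fields and the `model_generate` callable are assumptions for illustration, not the paper's actual harness.

```python
def accuracy(model_generate, dataset):
    """Score a model on a multiple-choice dataset.

    Each item is assumed to look like
    {"question": str, "choices": [str, ...], "answer": int}.
    """
    correct = 0
    for item in dataset:
        # Render the question with numbered choices.
        prompt = item["question"] + "\n" + "\n".join(
            f"{i}) {c}" for i, c in enumerate(item["choices"]))
        reply = model_generate(prompt)  # assumed to return the chosen index as text
        correct += int(reply.strip() == str(item["answer"]))
    return correct / len(dataset)
```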
Recent observations from NICER in X-rays and LIGO/Virgo in gravitational waves have provided critical constraints on the mass, radius, and tidal deformability of neutron stars, imposing stringent limits on the equation of state (EOS) and the behavior of ultra-dense matter. However, several key parameters influencing the EOS, such as the maximum mass of neutron stars, spin-down rates, and the potential role of exotic matter in their cores, remain the subject of ongoing debate. Here we present a new approach to constraining the EOS by analyzing the X-ray afterglows of some short gamma-ray bursts, focusing on "the internal plateau" phase and its abrupt decay, which reflect the spin-down and possible collapse of a supra-massive neutron star into a black hole. By linking critical neutron star masses to black hole formation criteria, and observational data from Swift's BAT and XRT instruments to compact object models, we explore three representative EOSs that range from "soft" to "stiff". Our results support a maximum mass for neutron stars of approximately 2.39 solar masses at the threshold of black hole formation. This conclusion holds under assumptions of magnetar-powered X-ray plateaus, constant radiative efficiency, isotropic emission, and full Kerr black hole energy extraction; deviations could influence the inferred results. Our results demonstrate the critical role of neutron star/black hole physics in probing dense nuclear matter and provide a novel framework for exploring extreme astrophysical environments.
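For orientation, the magnetar interpretation of an internal plateau usually rests on the standard magnetic-dipole spin-down relations; a sketch in common notation (not necessarily the paper's exact parametrization):

$$L_{\rm sd}(t)=L_{0}\left(1+\frac{t}{\tau}\right)^{-2},\qquad L_{0}=\frac{I\Omega_{0}^{2}}{2\tau},\qquad \tau=\frac{3c^{3}I}{B_{p}^{2}R^{6}\Omega_{0}^{2}},$$

where $I$, $R$, $B_{p}$, and $\Omega_{0}$ are the moment of inertia, radius, polar dipole field, and initial angular frequency of the nascent neutron star. A supramassive star collapses once spin-down reduces the rotationally enhanced maximum mass $M_{\max}(\Omega)$ below the stellar mass, which ties the observed end of the plateau to the EOS-dependent mass threshold.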
Bearing fault diagnosis under varying working conditions faces challenges, including a lack of labeled data, distribution discrepancies, and resource constraints. To address these issues, we propose a progressive knowledge distillation framework that transfers knowledge from a complex teacher model, utilizing a Graph Convolutional Network (GCN) with autoregressive moving average (ARMA) filters, to a compact and efficient student model. To mitigate distribution discrepancies and labeling uncertainty, we introduce Enhanced Local Maximum Mean Squared Discrepancy (ELMMSD), which leverages mean and variance statistics in the Reproducing Kernel Hilbert Space (RKHS) and incorporates a priori probability distributions between labels. This approach increases the distance between clustering centers, bridges subdomain gaps, and enhances subdomain alignment reliability. Experimental results on benchmark datasets (CWRU and JNU) demonstrate that the proposed method achieves superior diagnostic accuracy while significantly reducing computational costs. Comprehensive ablation studies validate the effectiveness of each component, highlighting the robustness and adaptability of the approach across diverse working conditions.
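The ELMMSD loss itself is specific to the paper; the sketch below only illustrates its underlying ingredient of matching first- and second-order statistics of kernel embeddings between domains. Function names and the bandwidth choice are illustrative assumptions.

```python
import torch

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between two feature batches.
    d2 = torch.cdist(x, y) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmsd_loss(source, target, sigma=1.0):
    """Mean-and-variance discrepancy in RKHS (simplified stand-in for ELMMSD)."""
    k_ss = gaussian_kernel(source, source, sigma)
    k_tt = gaussian_kernel(target, target, sigma)
    k_st = gaussian_kernel(source, target, sigma)
    mean_term = k_ss.mean() + k_tt.mean() - 2 * k_st.mean()     # classical MMD^2
    var_term = (k_ss.var() + k_tt.var() - 2 * k_st.var()).abs()  # 2nd-order statistics
    return mean_term + var_term
```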
One of the most significant obstacles in bearing fault diagnosis is the lack of labeled data for various fault types. Moreover, sensor-acquired data frequently lack labels and contain large amounts of missing values. This paper tackles these issues by presenting the PTPAI method, which uses a physics-informed deep learning-based technique to generate synthetic labeled data. Labeled synthetic data make up the source domain, whereas the target domain contains unlabeled data with missing values. Consequently, imbalanced-class problems and partial-set fault diagnosis hurdles emerge. To address these challenges, the RF-Mixup approach is used to handle imbalanced classes. As domain adaptation strategies, MK-MMSD and CDAN are employed to mitigate the disparity in distribution between synthetic and actual data. Furthermore, the partial-set challenge is tackled by applying weighting methods at the class and instance levels. Experimental outcomes on the CWRU and JNU datasets indicate that the proposed approach effectively addresses these problems.
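RF-Mixup is not spelled out in the abstract; as a baseline, standard mixup interpolates inputs and one-hot labels, which is the mechanism such variants build on. A minimal numpy sketch:

```python
import numpy as np

def mixup_batch(x, y_onehot, alpha=0.4, rng=None):
    """Standard mixup: convex combinations of inputs and one-hot labels."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)      # mixing coefficient from a Beta distribution
    perm = rng.permutation(len(x))    # random pairing within the batch
    x_mixed = lam * x + (1 - lam) * x[perm]
    y_mixed = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mixed, y_mixed
```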
We study the primordial perturbations and the reheating process in models where the Gauss-Bonnet term is non-minimally coupled to canonical and non-canonical (DBI and tachyon) scalar fields. We consider several potentials and Gauss-Bonnet coupling terms: power-law, dilaton-like, $\cosh$-type, E-model, and T-model. To assess the observational viability of these models, we study the scalar perturbations numerically and compare the results with the Planck2018 TT, TE, EE+lowE+lensing+BK14+BAO joint data at $68\%$ CL and $95\%$ CL. We also study the tensor perturbations in confrontation with the Planck2018 TT, TE, EE+lowE+lensing+BK14+BAO+LIGO & Virgo 2016 joint data at $68\%$ CL and $95\%$ CL. In this regard, we obtain some constraints on the Gauss-Bonnet coupling parameter $\beta$. Another important process in the early universe is the reheating phase after inflation, which is necessary to reheat the universe for its subsequent evolution. We therefore study the reheating process in these models and find expressions for the e-folds number and temperature during that era. Considering that the Planck TT, TE, EE+lowEB+lensing data and BICEP2/Keck Array 2014 data, based on the $\Lambda$CDM$+r+\frac{dn_{s}}{d\ln k}$ model, give $n_{s}=0.9658\pm 0.0038$ and $r<0.072$, we obtain constraints on the e-folds number and temperature. From the values of the e-folds number and the effective equation of state, together with the observationally viable value of the scalar spectral index, we explore the capability of the models in explaining the reheating phase.
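For context, analyses of this kind typically relate the reheating e-folds number and temperature through relations of the form (the paper's own expressions may differ in detail):

$$\rho_{\rm re}=\rho_{\rm end}\,e^{-3\left(1+\omega_{\rm eff}\right)N_{\rm re}},\qquad T_{\rm re}=\left(\frac{30\,\rho_{\rm re}}{\pi^{2}g_{*}}\right)^{1/4},$$

where $\rho_{\rm end}$ is the energy density at the end of inflation, $\omega_{\rm eff}$ the effective equation of state during reheating, $N_{\rm re}$ the reheating e-folds number, and $g_{*}$ the number of relativistic degrees of freedom at reheating.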
This study investigates nonlinear charged Anti-de Sitter (AdS) black hole solutions within the framework of massive gravity, motivated by recent advancements linking the Weak Gravity Conjecture (WGC) to phenomena such as the Weak Cosmic Censorship Conjecture (WCCC) and photon sphere dynamics. Building on these foundations, we focus on the Aschenbach effect, a relativistic phenomenon intricately tied to the geometry of photon spheres and known to occur in certain sub-extremal non-rotating black holes. Our primary objective is to determine whether this effect persists not only up to the extremal limit but also beyond it, into the superextremal regime, thus probing the stability and validity of black hole characteristics under these extreme conditions. By analyzing the nonlinear charged AdS black hole solutions in massive gravity, we demonstrate that the Aschenbach effect remains a robust feature across both extremal and superextremal configurations. This extension suggests that key relativistic signatures and the underlying spacetime structures associated with high-spin black holes continue to hold beyond classical boundaries. Our results provide new insights into the behavior of ultra-compact objects and highlight promising directions for exploring the limits of general relativity, as well as potential generalizations of the WGC and the WCCC in strong gravitational fields.
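A brief reminder of the diagnostic involved: for a static, spherically symmetric metric $ds^{2}=-f(r)dt^{2}+f^{-1}(r)dr^{2}+r^{2}d\Omega^{2}$, the orbital velocity of circular geodesics measured by a static observer is

$$v(r)=\sqrt{\frac{r\,f'(r)}{2\,f(r)}},$$

and the Aschenbach effect corresponds to a region outside the horizon where $dv/dr>0$, i.e. a non-monotonic velocity profile.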
In this paper, we delve into the intricate thermodynamic topology of the quantum-corrected Anti-de Sitter Reissner-Nordström (AdS-RN) black hole within the framework of Kiselev spacetime. By employing the generalized off-shell Helmholtz free energy approach, we compute the thermodynamic topology of these selected black holes and establish their topological classifications. Our findings reveal that quantum correction terms influence the topological charges of black holes in Kiselev spacetime, leading to novel insights into topological classifications. In contrast to the cases $\omega=0$ and $a=0.7$, with total topological charge $W=0$, and $\omega=-4/3$, with total topological charge $W=-1$, in the other cases the total topological charge of the black hole under consideration predominantly stabilizes at $+1$. This stabilization occurs under the significant influence of the parameters $a$, $c$, and $\omega$ on the number of topological charges. Specifically, when $\omega$ takes the values $\omega=-1/3$, $\omega=-2/3$, or $\omega=-1$, the total topological charge is consistently $W=+1$.
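The off-shell construction used here follows the by-now standard recipe; schematically (notation assumed here):

$$\mathcal{F}=E-\frac{S}{\tau},\qquad \phi=\left(\frac{\partial\mathcal{F}}{\partial r_{h}},\,-\cot\Theta\,\csc\Theta\right),$$

where $\tau$ is the inverse ensemble temperature and $\Theta\in(0,\pi)$ an auxiliary angle; on-shell black hole branches appear as zero points of the vector field $\phi$, each carrying a winding number $w_{i}=\pm1$, and the total topological charge is $W=\sum_{i}w_{i}$.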
In this work, we consider two recently introduced novel regular black hole solutions and investigate the circular null geodesics to find the connection between the photon sphere, the horizon, and the black hole shadow radii. We also study the energy emission rate for these solutions and discuss how the model parameters affect the emission of particles around the black holes. Furthermore, we compare the resulting shadow of these regular black holes with observational data from the Event Horizon Telescope and find the allowed regions of the model parameters for which the obtained shadow is consistent with the data. Finally, we employ the correspondence between the quasinormal modes in the eikonal limit and the shadow radius to study scalar field perturbations in these backgrounds.
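For a static, spherically symmetric metric with lapse $f(r)$, the quantities connected in this analysis take the familiar form (a sketch of the standard relations, not the paper's specific solutions):

$$2f(r_{ph})=r_{ph}f'(r_{ph}),\qquad R_{\rm sh}=\frac{r_{ph}}{\sqrt{f(r_{ph})}},\qquad \omega_{\ell\gg1}\simeq\frac{\ell}{R_{\rm sh}}-i\left(n+\frac{1}{2}\right)\lvert\lambda\rvert,$$

where $r_{ph}$ is the photon sphere radius, $R_{\rm sh}$ the shadow radius seen by a distant observer, and $\lambda$ the Lyapunov exponent of the unstable circular null orbit; the eikonal correspondence identifies the real part of the quasinormal frequency with the inverse shadow radius.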
This research from the University of Mazandaran refines alpha decay half-life predictions by explicitly incorporating the Pauli Exclusion Principle's kinetic energy contribution into the Double-Folding model. The improved approach, particularly with a newly derived 'pocket formula' for the repulsive term, reduced the standard deviation from experimental data to 0.298, demonstrating enhanced accuracy in describing nuclear interactions and decay.
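For reference, half-life predictions in double-folding approaches are generally built on the WKB penetration formula (symbols here are generic, not the paper's fitted parameters):

$$T_{1/2}=\frac{\ln 2}{\nu\,P},\qquad P=\exp\!\left[-\frac{2}{\hbar}\int_{r_{1}}^{r_{2}}\sqrt{2\mu\left(V_{\rm tot}(r)-Q\right)}\,dr\right],$$

with $V_{\rm tot}$ the double-folded nuclear plus Coulomb potential (here augmented by the repulsive Pauli kinetic-energy term), $\mu$ the reduced mass of the alpha-daughter system, $Q$ the decay energy, $\nu$ the assault frequency, and $r_{1,2}$ the classical turning points.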
Recent data from elliptical galaxies indicate that the growth in the masses of black holes exceeds what is expected from the accretion of surrounding matter, and this growth appears to depend on the expansion of the universe. This phenomenon can be explained by considering the accretion of dark energy, which is responsible for the cosmological expansion, into these black holes. In this paper, we investigate the perturbative interaction of a black hole with a real scalar field $\phi$ (which can represent dark energy) in a cosmological background using an appropriate metric. We derive solutions for the field $\phi(t,r)$, the black hole mass $M(t)$, and the expansion rate $H(t)$, and discuss the behaviour of the scalar field $\phi$ in the vicinity of the black hole with respect to exterior and interior observers. We obtain the energy density of $\phi$ inside the black hole and find that it converges to a non-vanishing fixed (stable) value at late times of the cosmological expansion. We also find that on horizon surfaces the field $\phi$ can be made a continuous and bounded (possibly differentiable) function.
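As a heuristic for the mass-growth mechanism (an order-of-magnitude sketch in $G=c=1$ units, not the paper's derivation), the flux of scalar-field energy through a horizon of area $A_{h}=16\pi M^{2}$ gives

$$\dot{M}\simeq A_{h}\,T_{\hat{t}\hat{r}}\big|_{r_{h}}\sim 16\pi M^{2}\,\dot{\phi}_{h}^{2},$$

so a slowly rolling field of cosmological origin sources a mass growth tied to the expansion rate $H(t)$ through $\dot{\phi}$.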
By considering the Friedmann equations emerging from the entropy-area law of black hole thermodynamics in the context of the generalized uncertainty principle, we study tachyon inflation in the early universe. The presence of a minimal length modifies the Friedmann equations and hence the slow-roll and perturbation parameters in the tachyon model. These modifications, though small, affect the viability of the tachyon inflation in confrontation with observational data. We compare the numerical results of the model with Planck2018 TT, TE, EE+lowE+lensing+BAO+BK14(18) data and Planck2018 TT, TE, EE+lowE+lensing+BK14(18)+BAO+LIGO & Virgo 2016 data at $68\%$ and $95\%$ CL. We show that while the tachyon inflation with power-law, inverse power-law and inverse exponential potentials is not observationally viable in comparison with the $1\sigma$ and $2\sigma$ confidence levels of the new joint data, in the presence of the minimal length the model becomes observationally viable.
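To fix notation, in the unmodified (standard Friedmann) case the first tachyon slow-roll parameter takes the standard form

$$\epsilon=-\frac{\dot{H}}{H^{2}}\simeq\frac{M_{p}^{2}}{2}\,\frac{V'^{2}}{V^{3}},$$

and the minimal-length corrections of the generalized uncertainty principle deform this (and the higher parameters) through the modified Friedmann equations, shifting $n_{s}$ and $r$; this is the shift that moves the potentials quoted above into the observationally allowed region. (The expression is the standard tachyon slow-roll form; the paper's GUP-corrected version differs by the modification terms.)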
In the last few decades, studies in various fields of plasma technology have expanded, and its application in different processes has increased. Therefore, achieving a desirable and practical plasma with specific characteristics is of particular importance. The frequency of the applied voltage is one of the important factors influencing the physical and chemical characteristics of the discharge. In this research, changes in the density of active species produced in an electrical discharge with a dielectric barrier and air as the working gas have been investigated over a frequency range of 500 Hz to 500 kHz at a constant applied voltage of 2 kV. For this purpose, 87 different reactions with specific collision cross-sections were defined in COMSOL Multiphysics. Other parameters, including the current-voltage waveform, electric field, and species density, were also evaluated. The results show that under otherwise identical conditions, the electron temperature distribution changes with increasing applied frequency, and the density of reactive oxygen and nitrogen species (RONS) decreases, while O shows an increasing trend. Notably, the simulation results are in good agreement with previous experimental and simulation reports. These results offer valuable insights into optimizing plasma parameters for different applications, potentially resulting in better treatment outcomes across a range of therapeutic domains.
This study presents an integrated modeling and optimization framework for a steam methane reforming (SMR) reactor, combining a mathematical model, artificial neural network (ANN)-based hybrid modeling, advanced multi-objective optimization (MOO) and multi-criteria decision-making (MCDM) techniques. A one-dimensional fixed-bed reactor model accounting for internal mass transfer resistance was employed to simulate reactor performance. To reduce the high computational cost of the mathematical model, a hybrid ANN surrogate was constructed, achieving a 93.8% reduction in average simulation time while maintaining high predictive accuracy. The hybrid model was then embedded into three MOO scenarios using the non-dominated sorting genetic algorithm II (NSGA-II) solver: 1) maximizing methane conversion and hydrogen output; 2) maximizing hydrogen output while minimizing carbon dioxide emissions; and 3) a combined three-objective case. The optimal trade-off solutions were further ranked and selected using two MCDM methods: technique for order of preference by similarity to ideal solution (TOPSIS) and simplified preference ranking on the basis of ideal-average distance (sPROBID). Optimal results include a methane conversion of 0.863 with 4.556 mol/s hydrogen output in the first case, and 0.988 methane conversion with 3.335 mol/s hydrogen and 0.781 mol/s carbon dioxide in the third. This comprehensive methodology offers a scalable and effective strategy for optimizing complex catalytic reactor systems with multiple, often conflicting, objectives.
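Of the two MCDM methods used, TOPSIS is the more widely known; the following is a minimal numpy sketch of TOPSIS ranking as it might be applied to the Pareto set (sPROBID is not sketched here). The objective matrix below is hypothetical, loosely echoing the reported optima, not the paper's actual front.

```python
import numpy as np

def topsis(F, weights, benefit):
    """Rank candidate solutions with TOPSIS.

    F       : (n_solutions, n_objectives) objective matrix
    weights : objective weights summing to 1
    benefit : boolean mask, True where larger is better
    Returns indices sorted from best to worst.
    """
    V = weights * F / np.linalg.norm(F, axis=0)          # normalize and weight
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_best = np.linalg.norm(V - ideal, axis=1)           # distance to ideal point
    d_worst = np.linalg.norm(V - worst, axis=1)          # distance to anti-ideal
    closeness = d_worst / (d_best + d_worst)
    return np.argsort(-closeness)

# Hypothetical Pareto points: [CH4 conversion, H2 output (mol/s), CO2 output (mol/s)]
F = np.array([[0.86, 4.56, 0.90],
              [0.99, 3.34, 0.78],
              [0.92, 3.90, 0.85]])
rank = topsis(F, weights=np.array([1/3, 1/3, 1/3]),
              benefit=np.array([True, True, False]))  # CO2 is a cost objective
```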
The latest video coding standard, Versatile Video Coding (VVC), achieves almost twice the coding efficiency of its predecessor, High Efficiency Video Coding (HEVC). However, achieving this efficiency (for intra coding) requires 31x the computational complexity of HEVC, making it challenging for low-power and real-time applications. This paper proposes a novel machine learning approach that jointly and separately employs two modalities of features to simplify the intra coding decision. First, a set of features that use the existing DCT core of VVC to assess texture characteristics is extracted, forming the first modality of data. This produces high-quality features with almost no overhead. The distribution of intra modes at the neighboring blocks is also used to form the second modality of data, which provides statistical information about the frame. Second, a two-step feature reduction method is designed that reduces the size of the feature set, such that a lightweight model with a limited number of parameters can learn the intra mode decision task. Third, three separate training strategies are proposed: (1) an offline training strategy using the first (single) modality of data, (2) an online training strategy that uses the second (single) modality, and (3) a mixed online-offline strategy that uses bimodal learning. Finally, a low-complexity encoding algorithm is proposed based on the proposed learning strategies. Extensive experimental results show that the proposed methods can reduce encoding time by up to 24% with a negligible loss of coding efficiency. Moreover, it is demonstrated how a bimodal learning strategy can boost the performance of learning. Lastly, the proposed method has a very low computational overhead (0.2%) and uses existing components of a VVC encoder, which makes it much more practical than competing solutions.
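To make the first modality concrete, here is a minimal sketch of deriving texture descriptors from a block's 2-D DCT, mimicking the idea of reusing the encoder's transform core; the exact feature set of the paper is not reproduced, and the sub-band split is an illustrative assumption.

```python
import numpy as np
from scipy.fftpack import dct

def dct_texture_features(block):
    """Directional-energy features from a block's 2-D DCT coefficients."""
    c = dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')
    e = c ** 2
    total = e.sum() + 1e-12
    dc = e[0, 0] / total          # DC (mean-intensity) energy share
    horiz = e[0, 1:].sum() / total  # purely horizontal-frequency energy
    vert = e[1:, 0].sum() / total   # purely vertical-frequency energy
    diag = e[1:, 1:].sum() / total  # mixed / diagonal high-frequency energy
    return np.array([dc, horiz, vert, diag])

features = dct_texture_features(np.random.default_rng(0).random((16, 16)))
```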
Background: A multilayer perceptron (MLP)-aided multi-objective particle swarm optimization (MOPSO) algorithm is employed in the present article to optimize the liquefied petroleum gas (LPG) thermal cracking process. This new approach significantly accelerated the multi-objective optimization (MOO), which can now be completed within one minute compared to the average of two days required by the conventional approach. Methods: MOO generates a set of equally good Pareto-optimal solutions, which are then ranked using a combination of a weighting method and five multi-criteria decision making (MCDM) methods. The final selection of a single solution for implementation is based on majority voting and the similarity of the recommended solutions from the MCDM methods. Significant Findings: The deep learning (DL) aided MOO and MCDM approach provides valuable insights into the trade-offs between conflicting objectives and a more comprehensive understanding of the relationships between them. Furthermore, this approach also allows for a deeper understanding of the impact of decision variables on the objectives, enabling practitioners to make more informed, data-driven decisions in the thermal cracking process.
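The majority-voting step lends itself to a very small sketch; the method names and indices below are hypothetical placeholders, since the abstract does not list the five MCDM methods used.

```python
from collections import Counter

def majority_vote(recommendations):
    """Pick the Pareto solution most often ranked first by the MCDM methods.

    recommendations: mapping method name -> index of its top-ranked solution.
    """
    votes = Counter(recommendations.values())
    best, _ = votes.most_common(1)[0]
    return best

# Hypothetical first choices from five MCDM methods.
choice = majority_vote(
    {"TOPSIS": 3, "VIKOR": 3, "GRA": 3, "MOORA": 1, "COPRAS": 3})  # -> 3
```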
In this review article we consider a special case of $D=5$, $\mathcal{N}=2$ supergravity called the STU model. We apply the gauge/gravity correspondence to the STU model to gain insight into properties of the quark-gluon plasma. Since the quark-gluon plasma is in reality described by QCD, we call our study the STU/QCD correspondence. First, we investigate the thermodynamics and hydrodynamics of the STU background. Then we use the dual picture of the theory, which is type IIB string theory, to obtain the drag force and jet-quenching parameter of an external probe quark.
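The benchmark results that the STU computation generalizes are the $\mathcal{N}=4$ SYM expressions, quoted here for orientation (the three-charge STU background corrects them):

$$F_{\rm drag}=-\frac{\pi\sqrt{\lambda}}{2}\,T^{2}\,\frac{v}{\sqrt{1-v^{2}}},\qquad \hat{q}=\frac{\pi^{3/2}\,\Gamma(3/4)}{\Gamma(5/4)}\,\sqrt{\lambda}\,T^{3},$$

with $\lambda$ the 't Hooft coupling, $T$ the plasma temperature, and $v$ the quark velocity.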
An alternative theory of gravity that has attracted much attention recently is the novel four-dimensional Einstein-Gauss-Bonnet (4D EGB) gravity. In this theory, the Gauss-Bonnet (GB) coupling constant is rescaled as $\alpha \rightarrow \alpha/(D-4)$ in $D$ dimensions, and the theory is redefined as four-dimensional gravity in the limit $D \rightarrow 4$. In this manner, the GB term yields a non-trivial contribution to the gravitational dynamics. Regularized black hole solutions and applications of the novel 4D EGB gravity have been extensively explored. In this work, motivated by recent astrophysical observations, we present an in-depth study of the optical features of AdS black holes in the novel 4D EGB gravity coupled to exponential nonlinear electrodynamics (NED), such as the shadow geometrical shape, the energy emission rate, the deflection angle, and quasinormal modes. Taking into account these dynamic quantities, we investigate the effects on the black hole solution of varying the parameters of the models. More specifically, we show that the variation of the GB and NED parameters and of the cosmological constant imprints specific signatures on the optical features of these AdS black holes, thus leading to the possibility of directly testing such black hole models with astrophysical observations.
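Among the optical quantities listed, the deflection angle for a static metric with lapse $f(r)$ follows from the standard integral (sketched here in generic notation):

$$\hat{\alpha}(b)=2\int_{r_{0}}^{\infty}\frac{dr}{r^{2}\sqrt{\dfrac{1}{b^{2}}-\dfrac{f(r)}{r^{2}}}}-\pi,\qquad \frac{1}{b^{2}}=\frac{f(r_{0})}{r_{0}^{2}},$$

where $b$ is the impact parameter and $r_{0}$ the turning point of the light ray; the GB and NED parameters and the cosmological constant enter through $f(r)$.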
Consideration of extra spatial dimensions is motivated by the unification of gravity with the other interactions, the quest for an ultimate framework of quantum gravity, and fundamental problems in particle physics and cosmology. Much attention has been focused on the effect of these extra dimensions on modified theories of gravity. Analytically examining astrophysical phenomena such as black hole shadows is one approach to understanding how extra dimensions would affect modified gravitational theories. The purpose of this study is to derive a higher-dimensional metric for a dark compact object in STVG theory and then examine the behavior of the shadow shapes for this solution in higher dimensions. We apply the Carter method to formulate the geodesic equations and the Hamilton-Jacobi method to find photon orbits around this higher-dimensional MOG dark compact object. We investigate the effects of extra dimensions and the STVG parameter $\alpha$ on the black hole shadow size. Next, we compare the shadow radius of this higher-dimensional MOG dark compact object with the shadow size of the supermassive black hole M87*, observed by the Event Horizon Telescope (EHT) collaboration, in order to constrain these parameters. We find that extra dimensions in the STVG theory typically lead to a reduction in the shadow size of the higher-dimensional MOG dark compact object, whereas the effect of the parameter $\alpha$ on the shadow is subdominant. Remarkably, given the constraints from the EHT observations, we find that the shadow size of the four-dimensional MOG dark compact object lies within the confidence levels of the EHT data. Finally, we investigate the issue of acceleration bounds for the higher-dimensional MOG dark compact object in confrontation with the EHT data for M87*.
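The reported shrinking of the shadow with extra dimensions can be illustrated with the simplest higher-dimensional benchmark, the Tangherlini metric $f(r)=1-(r_{0}/r)^{D-3}$ (an illustrative case, not the MOG solution itself):

$$r_{ph}=r_{0}\left(\frac{D-1}{2}\right)^{\frac{1}{D-3}},\qquad R_{\rm sh}=\frac{r_{ph}}{\sqrt{f(r_{ph})}}=r_{ph}\sqrt{\frac{D-1}{D-3}},$$

which gives $R_{\rm sh}\approx 2.60\,r_{0}$ for $D=4$ but $2\,r_{0}$ for $D=5$: the shadow shrinks as $D$ grows, in line with the behavior found here for the MOG compact object.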
In this study, we investigate the dynamics of the first-order phase transition in black hole thermodynamics. For our analysis, we utilize the Kramers escape rate. Our focus is on charged anti-de Sitter (AdS) black holes influenced by Kaniadakis and Barrow statistics. These black holes are selected to examine the effects of entropy modification on the dynamics of the phase transition and to demonstrate that the Kramers escape rate can serve as an efficient tool for representing the dynamic transition from a small to a large black hole within the domain of first-order phase transitions. Notably, while the transition from small to large black holes should ostensibly dominate the entire process, our results indicate that the escape rate changes as it passes through the midpoint of the phase transition, leading to a reverse escape phenomenon. The findings suggest that the dynamic phase transition in charged AdS black holes affected by entropy modification bears a significant resemblance to the outcomes of models governed by the Bekenstein-Hawking entropy [23]. This similarity could serve as additional motivation to further explore the potential of Kaniadakis and Barrow statistics in related cosmological fields, where they could enhance our understanding of other cosmological properties.
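For reference, in the free-energy-landscape formulation the overdamped Kramers rate for escape from the small-black-hole well takes the schematic form (notation assumed here; normalizations vary across the literature):

$$r_{K}=\frac{\sqrt{G''(r_{s})\,\lvert G''(r_{m})\rvert}}{2\pi\,\zeta}\;e^{-\Delta G/T},$$

where $G(r_{h})$ is the off-shell Gibbs free energy along the order parameter $r_{h}$, $r_{s}$ and $r_{m}$ are the small-black-hole minimum and the intermediate maximum, $\Delta G=G(r_{m})-G(r_{s})$, and $\zeta$ is a friction coefficient; Kaniadakis and Barrow statistics enter by deforming the entropy $S(r_{h})$ and hence the landscape $G$.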
In this paper, we base our analysis on the assumption that the existence of a photon sphere is an intrinsic feature of any ultra-compact gravitational structure with spherical symmetry. Utilizing the concept of a topological photon sphere, we categorize the behaviors of various gravitational models based on the structure of their photon spheres. This approach enables us to define boundaries for black hole parameters and subsequently classify a model as either a black hole or a naked singularity. Indeed, we demonstrate that the interplay between the gravitational structure and the existence of a photon sphere is an advantage that can be exploited from both perspectives. Our observations indicate that a gravitational model typically exhibits the behavior of a horizonless structure (or a naked singularity) when a minimum of the effective potential (a stable photon sphere) appears within the studied spacetime region. Additionally, we investigate the effect of this structure on the behavior of the photon sphere by choosing models that are affected by Perfect Fluid Dark Matter (PFDM). Finally, by analyzing a model with multiple event horizons, we show that the proposed method remains applicable even in such scenarios.
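The construction alluded to can be summarized as follows (standard in the topological-photon-sphere literature; notation assumed here): for a static, spherically symmetric metric one introduces the potential and vector field

$$H(r,\theta)=\sqrt{\frac{-g_{tt}}{g_{\varphi\varphi}}},\qquad \phi=\left(\frac{\partial_{r}H}{\sqrt{g_{rr}}},\;\frac{\partial_{\theta}H}{\sqrt{g_{\theta\theta}}}\right),$$

whose zeros on the equatorial plane are photon spheres: an unstable photon sphere carries winding number $-1$, a stable one $+1$, so a total charge of $-1$ signals a black hole while a vanishing total charge (stable and unstable spheres in pairs) signals a horizonless structure or naked singularity.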