Kharazmi University
This study presents the first extended comparison of cosmic filaments identified in SDSS DR10 observations ($z < 0.05$) and the IllustrisTNG300-1 $\Lambda$CDM simulation ($z = 0$), utilizing the novel GrAviPaSt filament-finder method. The analyses are performed on both macro- and micro-filaments, each characterized by their length, thickness, and mass-density contrast. In addition to total sample comparisons, two subcategories of micro-filaments, GG (linking galaxy groups) and CC (linking galaxy clusters), are introduced to further analyze discrepancies between the $\Lambda$CDM model and observation. While $\Lambda$CDM produces extended macro-filaments, such structures are largely absent in SDSS, and where present, they exhibit higher densities than their simulated counterparts. Micro-filaments also show notable density discrepancies: at fixed length and thickness, observational filaments are significantly denser than those in the simulation. Employing radial density profiles reveals that micro-filaments in the $\Lambda$CDM simulation exhibit higher mass-density contrasts relative to the background compared to their observational counterparts. Notably, CC-type micro-filaments displayed enhanced density contrasts over GG types in the simulation, while observational data showed the opposite trend. Furthermore, SDSS galaxies in both GG and CC micro-filaments exhibit lower specific star formation rates (sSFR) and older stellar populations, while TNG300-1 micro-filaments host more actively star-forming galaxies within the intermediate stellar mass range. These results reveal persistent discrepancies between observational data and the $\Lambda$CDM reconstruction of cosmic filaments, pointing to possible tensions in our current understanding of large-scale structures and their environmental effects on galaxy evolution.
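A rough illustration of the radial-profile diagnostic mentioned above (not the GrAviPaSt finder itself): given galaxy positions and a straight filament spine between two endpoints, one can bin perpendicular distances into cylindrical shells and compare shell densities to the mean. The function names and toy data below are illustrative assumptions, not the paper's pipeline.

```python
# Sketch: radial number-density profile around a straight filament spine.
import numpy as np

def radial_density_profile(points, p0, p1, r_max=5.0, n_bins=10):
    """Number density in cylindrical shells around the segment p0 -> p1."""
    points, p0, p1 = np.asarray(points), np.asarray(p0), np.asarray(p1)
    axis = p1 - p0
    length = np.linalg.norm(axis)
    u = axis / length
    rel = points - p0
    t = rel @ u                               # position along the spine
    inside = (t >= 0) & (t <= length)         # keep galaxies between the endpoints
    perp = rel[inside] - np.outer(t[inside], u)
    r = np.linalg.norm(perp, axis=1)          # perpendicular distance to the spine
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts, _ = np.histogram(r, bins=edges)
    shell_vol = np.pi * (edges[1:]**2 - edges[:-1]**2) * length
    return 0.5 * (edges[1:] + edges[:-1]), counts / shell_vol

# Toy usage: uniform background plus an over-dense cylinder along the z-axis.
rng = np.random.default_rng(1)
bg = rng.uniform(-10, 10, size=(20000, 3))
fil = np.column_stack([rng.normal(0, 0.5, 2000),
                       rng.normal(0, 0.5, 2000),
                       rng.uniform(-10, 10, 2000)])
r_mid, dens = radial_density_profile(np.vstack([bg, fil]), [0, 0, -10], [0, 0, 10])
contrast = dens / (22000 / 20.0**3) - 1.0     # density contrast relative to the mean
print(np.round(contrast, 2))
```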
This study statistically analyzes the evolutionary pathways and merger histories of low-mass ($M_\star \gtrsim 10^{8.5}\,M_\odot$) star-forming and quenched galaxies across void and cluster environments within the IllustrisTNG300-1 simulation. It clarifies how environmental factors affect galaxy quenching and reveals that mergers, particularly mini mergers, primarily enhance star formation in low-mass star-forming galaxies.
We investigate observational constraints on cubic curvature corrections to general relativity by analyzing quasi-periodic oscillations (QPOs) in accreting black hole systems. In particular, we study the Kerr black hole solution corrected by cubic curvature terms parameterized by $\beta_5$ and $\beta_6$. While $\beta_6$ corresponds to a field-redefinition-invariant structure, the $\beta_5$ term can in principle be removed via a field redefinition. Nonetheless, since we work in the frame where the accreting matter minimally couples to the metric, $\beta_5$ is in general present. Utilizing the corrected metric, we compute the QPO frequencies within the relativistic precession framework. Using observational data from GRO J1655-40 and a Bayesian analysis, we constrain the coupling parameters to $-12.31 < \beta_5/(5 M_\odot)^4 < 24.15$ and $-1.99 < \beta_6/(5 M_\odot)^4 < 0.30$ at $2\sigma$. These bounds improve upon existing constraints from big-bang nucleosynthesis and the speed of gravitational waves.
Dielectric barrier discharge (DBD) plasma is used in a wide range of applications and is one of the most efficient and low-cost methods for active fluid flow control. In this study, a detailed physical model of a DBD at atmospheric pressure under a 1 kV DC voltage is developed with the COMSOL Multiphysics software. Argon is used as the background gas, and the electrodes are assumed to be copper. Plasma parameters such as electron and ion density, electric field, potential, and temperature are investigated for different electrode separations (1.0 mm, 0.9 mm, 0.8 mm). Moreover, the effect of the dielectric type (quartz, silica glass, mica) on these key parameters is investigated. The simulation results show that the longitudinal distance between the buried and exposed electrodes directly influences parameters such as electron temperature and electron and ion density, which are the main factors in fluid flow control. These parameters reach their maximum values when mica is used as the dielectric and their lowest values when silica glass is utilized.
Cosmic voids, the largest under-dense structures in the Universe, are crucial for exploring galaxy evolution. These vast, sparsely populated regions are home to void galaxies -- predominantly gas-rich, star-forming, and blue -- that evolve more slowly than those in denser environments. Additionally, the correlation between galaxy mergers and specific properties of galaxies, such as the star formation rate (SFR), is not fully understood, particularly in these under-dense environments. Quenched void galaxies exhibit high SFRs at high redshifts, which decrease significantly at lower redshifts (z < 0.5). These galaxies have more massive dark matter halos than star-forming galaxies across all redshifts, leading to rapid gas consumption. They formed earlier and experienced more major mergers in earlier epochs but fewer recent mergers, resulting in a lack of fresh gas for sustained star formation. In addition, star-forming and high-mass quenched void galaxies show higher SFRs in mergers compared to non-merger galaxies. This study highlights that formation time, merger rates, and dark matter halos play a crucial role in the star formation history of void galaxies. Rapid, early gas consumption due to earlier formation times and the absence of recent mergers could lead to quenched void galaxies at lower redshifts, providing valuable insights into galaxy evolution in low-density environments.
Sentiment analysis aims to extract people's emotions and opinions from their comments on the web. It is widely used in business to detect sentiment in social data, gauge brand reputation, and understand customers. Most articles in this area have concentrated on the English language, whereas resources for Persian are limited. In this review paper, articles on sentiment analysis in Persian published between 2018 and 2022 are collected, and their methods, approaches, and datasets are explained and analyzed. Almost all of the methods used for sentiment analysis are based on machine learning and deep learning. The purpose of this paper is to examine 40 different approaches to sentiment analysis in Persian, to analyze the datasets along with the accuracy of the algorithms applied to them, and to review the strengths and weaknesses of each. Among all the methods, transformers such as BERT and recurrent neural networks such as LSTM and Bi-LSTM have achieved the highest accuracy in sentiment analysis. In addition to the methods and approaches, the datasets published between 2018 and 2022 are listed, and information about each dataset and its details is provided.
The global fashion industry plays a pivotal role in the world economy, and addressing fundamental issues within the industry is crucial for developing innovative solutions. One of the most pressing challenges is the mismatch between individuals' body shapes and the garments they purchase. This issue is particularly prevalent among individuals with non-ideal body shapes, exacerbating the challenges they face. Considering inter-individual variability in body shape is essential for designing and producing garments that are widely accepted by consumers. Traditional methods for determining human body shape are limited by their low accuracy, high cost, and time-consuming nature. New approaches that utilize digital imaging and deep neural networks (DNNs) have been introduced to identify human body shape. In this study, the Style4BodyShape dataset is used for classifying body shapes into five categories: Rectangle, Triangle, Inverted Triangle, Hourglass, and Apple. First, the body shape segmentation of a person is extracted from the image, disregarding the surroundings and background. Then, various pre-trained models, such as ResNet18, ResNet34, ResNet50, VGG16, VGG19, and Inception v3, are used to classify the segmentation results. Among these pre-trained models, Inception v3 demonstrates superior performance in terms of F1-score and accuracy compared to the other models.
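For orientation, here is a minimal fine-tuning sketch of the kind of transfer-learning setup the abstract describes, using torchvision's Inception v3 with a five-way head; the dataset loading, segmentation step, and all hyperparameters are placeholders rather than the authors' settings.

```python
# Sketch: adapting a pre-trained Inception v3 to 5 body-shape classes.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # Rectangle, Triangle, Inverted Triangle, Hourglass, Apple

model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
images = torch.randn(4, 3, 299, 299)    # stand-in for segmented body images
labels = torch.randint(0, NUM_CLASSES, (4,))
outputs, aux = model(images)            # Inception v3 returns main + auxiliary logits in train mode
loss = criterion(outputs, labels) + 0.4 * criterion(aux, labels)
loss.backward()
optimizer.step()
print(float(loss))
```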
Well-spread samples are desirable in many disciplines because they improve estimation when target variables exhibit spatial structure. This paper introduces an integrated methodological framework for spreading samples over the population's spatial coordinates. First, we propose a new, translation-invariant spreadness index that quantifies spatial balance with a clear interpretation. Second, we develop a clustering method that balances clusters with respect to an auxiliary variable; when the auxiliary variable is the inclusion probability, the procedure yields clusters whose totals are one, so that a single draw per cluster is, in principle, representative and produces units optimally spread along the population coordinates, an attractive feature for finite population sampling. Third, building on the graphical sampling framework, we design an efficient sampling scheme that further enhances spatial balance. At its core lies an intelligent, computationally efficient search layer that adapts to the population's spatial structure and inclusion probabilities, tailoring a design to each specific population to maximize spread. Across diverse spatial patterns and both equal- and unequal-probability regimes, this intelligent coupling consistently outperformed all rival spread-oriented designs on dispersion metrics, while the spreadness index remained informative and the clustering step improved representativeness.
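As a toy illustration of the "one draw per cluster" idea only (the paper's spreadness index, balanced clustering, and graphical-sampling search layer are not reproduced), the sketch below orders units along a crude spatial path, cuts groups whose inclusion probabilities sum to roughly one, and draws a single unit per group.

```python
# Sketch: one draw per cluster whose inclusion probabilities sum to ~1.
import numpy as np

def spread_sample(coords, pi, rng):
    order = np.argsort(coords[:, 0] + coords[:, 1])    # crude 1-D spatial ordering
    sample, group, total = [], [], 0.0
    for idx in order:
        group.append(idx)
        total += pi[idx]
        if total >= 1.0 - 1e-9:                        # close the cluster (tolerance for float sums)
            g = np.array(group)
            p = pi[g] / pi[g].sum()
            sample.append(rng.choice(g, p=p))          # one draw per cluster, proportional to pi
            group, total = [], 0.0
    return np.array(sample)                            # any leftover partial group is ignored here

rng = np.random.default_rng(0)
N, n = 200, 20
coords = rng.uniform(0, 1, size=(N, 2))
pi = np.full(N, n / N)                                 # equal-probability case
print(spread_sample(coords, pi, rng))
```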
In spectral Petrov-Galerkin methods, the trial and test functions are required to satisfy particular boundary conditions. By a suitable linear combination of orthogonal polynomials, a basis called the modal basis is obtained. In this paper, we extend this idea to the non-orthogonal dual Bernstein polynomials. A compact general formula is derived for the modal basis functions based on dual Bernstein polynomials. Then, we present a Bernstein-spectral Petrov-Galerkin method for a class of time-fractional partial differential equations with the Caputo derivative. It is shown that the method leads to banded sparse linear systems for problems with constant coefficients. Some numerical examples are provided to show the efficiency and the spectral accuracy of the method.
In the context of the standard model of particles, the weak interaction of the cosmic microwave background (CMB) and the cosmic neutrino background (CνB) can generate non-vanishing TB and EB power spectra at the order of one-loop forward scattering in the presence of scalar perturbations, in contrast with the standard cosmological scenario. Comparing our results with current experimental data may provide significant information about the nature of the CνB, including CMB-CνB forward scattering for the TB, TE, and EB power spectra. To this end, different cases were studied, including Majorana CνB and Dirac CνB. It was also shown that the mean opacity due to the cosmic neutrino background can behave as an anisotropic birefringent medium and change the linear polarization rotation angle. Considering the contributions from neutrino and anti-neutrino forward scattering with CMB photons (in the case of Dirac neutrinos), we introduce the relative neutrino-antineutrino density asymmetry $\delta_\nu = \Delta n_\nu / n_\nu = (n_\nu - n_{\bar{\nu}})/n_\nu$. Then, using the cosmic birefringence angle reported by the Planck data release, $\beta = 0.30^\circ \pm 0.11^\circ$ (68% C.L.), some constraints can be put on $\delta_\nu$. Also, the cosmic birefringence due to a Majorana CνB medium is estimated at about $\beta|_\nu \simeq 0.2$ rad; since the Majorana neutrino and anti-neutrino are identical, both CB contributions add together. However, this value is at least two orders of magnitude larger than the cosmic birefringence angle reported by the Planck data release, $\beta = 0.30^\circ \pm 0.11^\circ$ (68% C.L.).
The increasing integration of the Internet of Medical Things (IoMT) into healthcare systems has significantly enhanced patient care but has also introduced critical cybersecurity challenges. This paper presents a novel approach based on Convolutional Neural Networks (CNNs) for detecting cyberattacks within IoMT environments. Unlike previous studies that predominantly utilized traditional machine learning (ML) models or simpler Deep Neural Networks (DNNs), the proposed model leverages the capabilities of CNNs to effectively analyze the temporal characteristics of network traffic data. Trained and evaluated on the CICIoMT2024 dataset, which comprises 18 distinct types of cyberattacks across a range of IoMT devices, the proposed CNN model demonstrates superior performance compared to previous state-of-the-art methods, achieving an accuracy of 99% in binary, categorical, and multiclass classification tasks. This performance surpasses that of conventional ML models such as Logistic Regression, AdaBoost, DNNs, and Random Forests. These findings highlight the potential of CNNs to substantially improve IoMT cybersecurity, thereby ensuring the protection and integrity of connected healthcare systems.
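A minimal 1D-CNN sketch in the spirit of the proposed detector is shown below; the feature count, layer sizes, and the 19-class layout (benign plus 18 attack types) are assumptions for illustration, and the CICIoMT2024 preprocessing is not reproduced.

```python
# Sketch: a small 1D-CNN classifier over scaled traffic-flow features.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

N_FEATURES, N_CLASSES = 45, 19          # assumed layout: benign + 18 attack types

model = models.Sequential([
    layers.Input(shape=(N_FEATURES, 1)),
    layers.Conv1D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling1D(2),
    layers.Conv1D(128, 3, activation="relu", padding="same"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Stand-in data with the same shape as preprocessed flow features.
X = np.random.rand(1024, N_FEATURES, 1).astype("float32")
y = np.random.randint(0, N_CLASSES, size=1024)
model.fit(X, y, epochs=2, batch_size=64, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))
```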
In this article, we describe an application of Machine Learning (ML) and linguistic modeling to generate Persian poems. In essence, the machine is trained on Persian poems so that it learns to generate fake poems in the same style as the originals. We used the poems of two well-known poets, Hafez (1310-1390) and Saadi (1210-1292). First, we feed the machine Hafez's poems to generate fake poems in the same style; then we feed it both Hafez's and Saadi's poems to generate poems in a new style that combines the emotional (Hafez) and rational (Saadi) elements of the two poets. This idea of combining different styles with ML opens new gates for extending the treasure of past literature from different cultures. The results show that, with enough memory, processing power, and time, it is possible to generate reasonably good poems.
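One common way to realize this style-learning idea is a character-level LSTM language model; the abstract does not specify the exact architecture, so the sketch below is a generic Keras example, and the corpus path is a placeholder for the Hafez/Saadi text actually used.

```python
# Sketch: character-level next-character prediction and sampling.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

text = open("hafez.txt", encoding="utf-8").read()   # placeholder corpus path
chars = sorted(set(text))
c2i = {c: i for i, c in enumerate(chars)}
seq_len = 40
X = np.array([[c2i[c] for c in text[i:i + seq_len]]
              for i in range(0, len(text) - seq_len - 1, 3)])
y = np.array([c2i[text[i + seq_len]]
              for i in range(0, len(text) - seq_len - 1, 3)])

model = models.Sequential([
    layers.Input(shape=(seq_len,)),
    layers.Embedding(len(chars), 64),
    layers.LSTM(256),
    layers.Dense(len(chars), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, batch_size=128, epochs=20, verbose=0)

# Generate text by repeatedly sampling the next character.
out = text[:seq_len]
for _ in range(200):
    x = np.array([[c2i[c] for c in out[-seq_len:]]])
    probs = model.predict(x, verbose=0)[0]
    out += chars[np.random.choice(len(chars), p=probs / probs.sum())]
print(out)
```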
This paper presents a novel hybrid algorithm named the Sine Cosine Crow Search Algorithm (SCCSA). The SCCSA builds on two existing algorithms, the Crow Search Algorithm (CSA) and the Sine Cosine Algorithm (SCA). The advantages of the two algorithms are combined to design an efficient hybrid algorithm that performs significantly better on various benchmark functions. The combination of the concepts and operators of the two algorithms enables the SCCSA to make an appropriate trade-off between the exploration and exploitation abilities of the algorithm. To evaluate the performance of the proposed SCCSA, seven well-known benchmark functions are utilized. The results indicate that the proposed hybrid algorithm is able to provide very competitive solutions compared to other state-of-the-art metaheuristics.
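A hedged sketch of how such a hybrid might look: crow-search memories and the awareness-probability rule from CSA combined with sine/cosine moves toward a followed crow's memory, tested on the sphere function. The exact operator mix in the paper may differ from this illustration.

```python
# Sketch: a CSA/SCA-style hybrid on the sphere benchmark.
import numpy as np

def sphere(x):
    return np.sum(x**2)

def sccsa_sketch(fobj, dim=10, n=30, iters=500, lb=-10, ub=10, ap=0.1, fl=2.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n, dim))          # crow positions
    M = X.copy()                               # crow memories (best positions so far)
    fit_M = np.array([fobj(m) for m in M])
    for t in range(iters):
        a = 2.0 * (1 - t / iters)              # SCA amplitude, shrinks over time
        for i in range(n):
            j = rng.integers(n)                # crow i follows a random crow j
            if rng.random() >= ap:             # j unaware: move toward its memory
                r1 = a * rng.random()
                r2 = 2 * np.pi * rng.random()
                r3 = 2 * rng.random()
                step = np.abs(r3 * M[j] - X[i])
                trig = np.sin(r2) if rng.random() < 0.5 else np.cos(r2)
                X[i] = X[i] + rng.random() * fl * (M[j] - X[i]) + r1 * trig * step
            else:                              # j aware: random relocation (CSA rule)
                X[i] = rng.uniform(lb, ub, dim)
            X[i] = np.clip(X[i], lb, ub)
            f = fobj(X[i])
            if f < fit_M[i]:                   # update memory
                M[i], fit_M[i] = X[i].copy(), f
    return M[np.argmin(fit_M)], fit_M.min()

best_x, best_f = sccsa_sketch(sphere)
print(best_f)
```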
In this paper, by making use of the "Complexity=Action" proposal, we study the complexity growth after shock waves in holographic field theories. We consider both double black hole-Vaidya geometries and AdS-Vaidya geometries with multiple shocks. We find that Lloyd's bound is respected during the thermalization process in each of these geometries and that, at late times, the complexity growth rate saturates to a value proportional to the energy of the final state. We conclude that the saturation value of the complexity growth rate is independent of the initial temperature; in the case of a thermal initial state, the rate of complexity growth is always less than the value for the vacuum initial state, and it becomes even smaller when multiple shocks are considered. Our results indicate that by increasing the temperature of the initial state, the corresponding complexity growth rate starts farther from its final saturation value.
We explore a theoretical framework in which Lorentz symmetry is explicitly broken by incorporating derivative terms of the extrinsic curvature into the gravitational action. These modifications introduce a scale-dependent damping effect in the propagation of gravitational waves (GWs), governed by a characteristic energy scale denoted as $M_{\rm LV}$. We derive the modified spectral energy density of GWs within this model and confront it with recent observational data from the NANOGrav 15-year dataset and the second data release of the International Pulsar Timing Array (IPTA). Our analysis yields a lower bound on the Lorentz-violating energy scale, finding $M_{\rm LV} > 10^{-19}$ GeV at the 68% confidence level. This result significantly improves upon previous constraints derived from LIGO/Virgo binary merger observations. Our findings demonstrate the potential of pulsar timing arrays to probe fundamental symmetries of spacetime and offer new insights into possible extensions of general relativity.
In this work, we design an advanced quantum readout architecture that integrates a four-qubit superconducting chip with a novel parametric amplifier terminated by an analog front-end circuit. Unlike conventional approaches, this design eliminates the need for components such as Purcell filters. Instead, a Josephson parametric amplifier is engineered to simultaneously perform quantum-limited signal amplification and suppress qubit energy leakage. The design features a tailored gain profile across the C-band, with sharp peaks (24 dB) and troughs (0 dB), enabling qubit frequencies to align with gain minima and resonator frequencies with gain maxima.
In this paper, the single-machine scheduling problem with deteriorating jobs and learning effects is considered; previous research has shown that the SDR method no longer provides an optimal solution for this problem. To solve it, a new heuristic algorithm is proposed. Various test problems are solved to evaluate the performance of the proposed algorithm using different measures. The results indicate that the algorithm can solve small, medium, and large test problems in a few seconds with an error of around 1%, whereas solving test problems with more than 15 jobs by examining all possible permutations is practically impossible in terms of both complexity and time.
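To make the scale of the enumeration concrete, the sketch below brute-forces the makespan under one common deterioration-plus-learning model, $p_{j,r}(t) = (p_j + b_j t)\,r^a$ (start time $t$, position $r$); this model and the toy data are assumptions for illustration, not necessarily the paper's formulation, and the proposed algorithm itself is not reproduced.

```python
# Sketch: exhaustive makespan minimization for a tiny instance.
from itertools import permutations

def makespan(seq, p, b, a):
    t = 0.0
    for r, j in enumerate(seq, start=1):
        t += (p[j] + b[j] * t) * r**a     # assumed actual processing time at position r
    return t

p = [4.0, 2.0, 6.0, 3.0, 5.0]             # base processing times (toy data)
b = [0.10, 0.05, 0.20, 0.15, 0.08]        # deterioration rates (toy data)
a = -0.2                                   # learning index (negative => learning)

best = min(permutations(range(len(p))), key=lambda s: makespan(s, p, b, a))
print(best, round(makespan(best, p, b, a), 3))
# n! sequences grow to roughly 1.3e12 already at n = 15, which is why exhaustive search fails.
```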
In this paper, we develop a Bernstein dual-Petrov-Galerkin method for the numerical simulation of a two-dimensional fractional diffusion equation. A spectral discretization is applied by introducing suitable combinations of dual Bernstein polynomials as the test functions and Bernstein polynomials as the trial functions. We derive the exact sparse operational matrix of differentiation for the dual Bernstein basis, which provides a matrix-based approach for the spatial discretization. It is shown that the method leads to banded linear systems that can be solved efficiently. The stability and convergence of the proposed method are discussed. Finally, some numerical examples are provided to support the theoretical claims and to show the accuracy and efficiency of the method.
Cosmic voids are large, nearly empty regions that lie between the web of galaxies, filaments, and walls, and are recognized for their extensive applications in cosmology and astrophysics. Despite their significance, a universal definition of voids remains unsettled, as various void-finding methods identify different types of voids, each differing in shape and density depending on the method used. In this paper, we present VEGA, a novel algorithm for void identification. VEGA utilizes Voronoi tessellation to divide the dataset space into spatial cells and applies the Convex Hull algorithm to estimate the volume of each cell. It then integrates Genetic Algorithm analysis with luminosity density contrast to filter out over-dense cells and retain the remaining ones, referred to as void block cells. These filtered cells form the basis for constructing the final void structures. VEGA operates on a grid of points, which increases the algorithm's spatial accessibility to the dataset and facilitates the identification of seed points around which the algorithm constructs the voids. To evaluate VEGA's performance, we applied both VEGA and the Aikio-Mähönen method to the same test dataset. We compared the resulting void populations in terms of their luminosity and number density contrast, as well as their morphological features such as sphericity. This comparison demonstrated that the VEGA void-finding method yields reliable results and can be effectively applied to various particle distributions.
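The geometric core described above (Voronoi cells plus convex-hull volumes and a density-contrast cut) can be sketched in a few lines with SciPy; the genetic-algorithm filtering, luminosity weighting, and void construction steps are not reproduced, and the threshold and toy data are illustrative assumptions.

```python
# Sketch: Voronoi cell volumes via convex hulls, then an under-density cut.
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

rng = np.random.default_rng(0)
points = rng.uniform(0, 100, size=(2000, 3))      # toy tracer positions

vor = Voronoi(points)
volumes = np.full(len(points), np.nan)
for i, region_idx in enumerate(vor.point_region):
    region = vor.regions[region_idx]
    if -1 in region or len(region) == 0:          # skip unbounded cells at the box edge
        continue
    volumes[i] = ConvexHull(vor.vertices[region]).volume

mean_density = 1.0 / np.nanmean(volumes)          # one tracer per cell
cell_density = 1.0 / volumes
contrast = cell_density / mean_density - 1.0
void_block = contrast < -0.8                      # illustrative threshold; NaN cells are excluded
print(np.sum(void_block), "candidate void-block cells")
```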
We consider a non-canonical field in the context of Starobinsky inflation and work in the Einstein frame, where the gravitational part of the action is equivalent to the Einstein-Hilbert action plus a scalar field called the scalaron. We investigate a model with a heavy scalaron trapped at the minimum of its effective potential, where its fluctuations are negligible. To be more explicit, we take the non-canonical field to be a Dirac-Born-Infeld (DBI) field, which is usually considered within the brane inflation context. Although the DBI field governs inflation, through its implicit dependence on the scalaron the boost factor and other quantities differ from those of the standard DBI model. For appropriate parameters, this model is consistent with the Planck results.
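For reference, the standard DBI form usually taken as the starting point in brane inflation is shown below (the scalaron-dependent modifications described in the abstract are not included); here $X$ is the kinetic term and $\gamma$ the boost factor.

\[
S_{\rm DBI} = -\int d^4x\,\sqrt{-g}\left[\frac{1}{f(\phi)}\left(\sqrt{1-2f(\phi)X}-1\right)+V(\phi)\right],
\qquad X \equiv -\tfrac{1}{2}g^{\mu\nu}\partial_\mu\phi\,\partial_\nu\phi,
\qquad \gamma \equiv \frac{1}{\sqrt{1-2f(\phi)X}} .
\]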