Hakim Sabzevari University
We present a comprehensive, physics-aware deep learning framework for constructing fast and accurate surrogate models of rarefied, shock-containing micro-nozzle flows. The framework integrates three key components: a Fusion DeepONet operator-learning architecture for capturing parameter dependencies, a physics-guided feature space that embeds a shock-aligned coordinate system, and a two-phase curriculum strategy emphasizing high-gradient regions. To demonstrate the generality and inductive bias of the proposed framework, we first validate it on the canonical viscous Burgers equation, which exhibits advective steepening and shock-like gradients.
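The steepening behaviour the validation case exercises can be reproduced in a few lines. Below is a minimal sketch, not the paper's surrogate model: the 1-D viscous Burgers equation advanced with an explicit upwind/central finite-difference scheme; the grid size, viscosity, and CFL factor are illustrative choices.

```python
import math

def burgers_step(u, dx, dt, nu):
    """One explicit finite-difference step of viscous Burgers:
    u_t + u u_x = nu u_xx, with periodic boundary conditions."""
    n = len(u)
    new = [0.0] * n
    for i in range(n):
        um, up = u[i - 1], u[(i + 1) % n]
        # upwind advection (direction follows the sign of u), central diffusion
        adv = u[i] * (u[i] - um) / dx if u[i] >= 0 else u[i] * (up - u[i]) / dx
        diff = nu * (up - 2 * u[i] + um) / dx ** 2
        new[i] = u[i] + dt * (diff - adv)
    return new

# a smooth sine initial condition steepens into a shock-like gradient
n, nu = 128, 0.01
dx = 2 * math.pi / n
dt = 0.2 * dx                      # CFL-limited step (|u| <= 1 here)
u = [math.sin(i * dx) for i in range(n)]
for _ in range(400):
    u = burgers_step(u, dx, dt, nu)

max_grad = max(abs(u[(i + 1) % n] - u[i]) / dx for i in range(n))
print(round(max_grad, 2))  # far exceeds the initial maximum slope of ~1
```

The monotone upwind scheme keeps the solution bounded while the nonlinear advection term concentrates the gradient near the shock, which is exactly the high-gradient region the curriculum strategy targets.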
The idea of the universal function is a fundamental advantage of the proximity potential model. We present a systematic study of the role of the universal function of the proximity potential model in the α-decay process for 250 ground-state-to-ground-state transitions using the WKB approximation. To this end, five universal functions proposed in the proximity models gp 77, Guo 2013, Ngo 80, Zhang 2013, and Prox. 2010 have been incorporated into the formalism of Prox. 77. The obtained results show that the radial behavior of the universal function affects the penetration probability of the α particle. The theoretical α-decay half-lives are calculated and compared with the corresponding experimental data. It is shown that Prox. 77 with the Guo 2013 universal function provides the best fit to the available data. The role of the different universal functions in the α-decay half-lives of superheavy nuclei (SHN) with Z from 104 to 118 is also studied. It is shown that the experimental half-lives in the superheavy mass region are described well using Prox. 77 with the Zhang 2013 universal function. The validity of the original and modified forms of the proximity 77 potential is also examined for complete fusion reactions between α particles and 10 different target nuclei. Our results show that the measured α-induced fusion cross sections can be well reproduced using Prox. 77 with the Zhang 2013 universal function for reactions involving light and medium nuclei, whereas Prox. 77 with the Guo 2013 universal function demonstrates reliable agreement with the experimental data at sub-barrier energies for heavier systems.
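In the WKB approximation, the penetration probability is the exponential of the Gamow action between the classical turning points. The sketch below evaluates it numerically for a pure Coulomb barrier, a deliberately simplified stand-in for the proximity potential; the Q-value, reduced mass, and inner turning point are illustrative numbers, not the paper's inputs.

```python
import math

HBARC = 197.327  # MeV*fm
AMU = 931.494    # MeV

def wkb_penetrability(V, Q, mu_mev, r_in, r_out, n=2000):
    """Gamow factor P = exp(-(2/hbar) * integral sqrt(2 mu (V(r)-Q)) dr)
    between the classical turning points r_in and r_out."""
    h = (r_out - r_in) / n
    s = 0.0
    for i in range(1, n):
        r = r_in + i * h
        s += math.sqrt(max(2 * mu_mev * (V(r) - Q), 0.0))
    s *= h / HBARC
    return math.exp(-2 * s)

# toy example: Coulomb-only barrier for 212Po -> 208Pb + alpha (Q ~ 8.95 MeV)
Z1, Z2, Q = 2, 82, 8.95
e2 = 1.44                           # MeV*fm
V = lambda r: Z1 * Z2 * e2 / r      # no nuclear (proximity) part included
mu = AMU * 4 * 208 / 212            # reduced mass, MeV
r_in = 9.0                          # inner turning point (~touching radius), fm
r_out = Z1 * Z2 * e2 / Q            # outer turning point, where V = Q
P = wkb_penetrability(V, Q, mu, r_in, r_out)
print(f"{P:.3e}")
```

In the full calculation the nuclear proximity term, whose shape is set by the universal function, lowers and narrows the barrier, which is how the choice of universal function feeds into the penetrability and hence the half-life.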
As edge and fog computing become central to modern distributed systems, there is growing interest in combining serverless architectures with privacy-preserving machine learning techniques such as federated learning (FL). However, current simulation tools fail to capture this integration effectively. In this paper, we introduce FedFog, a simulation framework that extends the FogFaaS environment to support FL-aware serverless execution across edge-fog infrastructures. FedFog incorporates an adaptive FL scheduler, privacy-respecting data flow, and resource-aware orchestration to emulate realistic, dynamic conditions in IoT-driven scenarios. Through extensive simulations on benchmark datasets, we demonstrate that FedFog accelerates model convergence, reduces latency, and improves energy efficiency compared to conventional FL or FaaS setups, making it a valuable tool for researchers exploring scalable, intelligent edge systems.
For a positive integer k ≥ 1, a graph G is k-stepwise irregular (a k-SI graph) if the degrees of every pair of adjacent vertices differ by exactly k. Such graphs are necessarily bipartite. Using graph products, it is demonstrated that for any k ≥ 1 and any d ≥ 2 there exists a k-SI graph of diameter d. A sharp upper bound for the maximum degree of a k-SI graph of a given order is proved. The size of k-SI graphs is bounded in general and in the special case when gcd(Δ(G), k) = 1. Along the way, the degree complexity of a graph is introduced and used.
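The defining property is straightforward to verify computationally. A minimal sketch, with the star K_{1,3} as an example of a 2-SI graph:

```python
def is_k_stepwise_irregular(adj, k):
    """Check the k-SI property: the degrees of every pair of adjacent
    vertices differ by exactly k (adj: dict mapping vertex -> set of neighbours)."""
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    return all(abs(deg[u] - deg[v]) == k
               for u in adj for v in adj[u])

# the star K_{1,3}: the center has degree 3, each leaf degree 1 -> 2-SI
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(is_k_stepwise_irregular(star, 2))   # True
print(is_k_stepwise_irregular(star, 1))   # False
```

Note the check confirms bipartiteness indirectly for odd k: adjacent degrees then have opposite parity, so no odd cycle can close.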
Training multi-layer neural networks (MLNNs), a challenging task, involves finding appropriate weights and biases. MLNN training is important since the performance of MLNNs is mainly dependent on these network parameters. However, conventional algorithms such as gradient-based methods, while extensively used for MLNN training, suffer from drawbacks such as a tendency to get stuck in local optima. Population-based metaheuristic algorithms can be used to overcome these problems. In this paper, we propose a novel MLNN training algorithm, CenDE-DOBL, that is based on differential evolution (DE), a centroid-based strategy (Cen-S), and dynamic opposition-based learning (DOBL). The Cen-S approach employs the centroid of the best individuals as a member of the population, while other members are updated using standard crossover and mutation operators. This improves exploitation since the new member is obtained from the best individuals, while the employed DOBL strategy, which uses the opposite of an individual, leads to enhanced exploration. Our extensive experiments compare CenDE-DOBL to 26 conventional and population-based algorithms and confirm it to provide excellent MLNN training performance.
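The two ingredients can be sketched in a few lines. This is an illustrative reconstruction under stated assumptions, not the authors' exact operators: the opposite of an individual is taken inside the population's current per-dimension bounds (a common dynamic OBL choice), and the centroid member averages the m best individuals.

```python
import random

def dynamic_opposite(pop):
    """Dynamic opposition-based learning: reflect each individual
    inside the population's current per-dimension bounds."""
    dims = len(pop[0])
    lo = [min(ind[d] for ind in pop) for d in range(dims)]
    hi = [max(ind[d] for ind in pop) for d in range(dims)]
    return [[lo[d] + hi[d] - ind[d] for d in range(dims)] for ind in pop]

def centroid_of_best(pop, fitness, m=3):
    """Cen-S idea: the centroid of the m best individuals joins the population."""
    best = sorted(pop, key=fitness)[:m]
    dims = len(pop[0])
    return [sum(ind[d] for ind in best) / m for d in range(dims)]

sphere = lambda x: sum(v * v for v in x)   # toy objective (minimise)
random.seed(0)
pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(10)]
# exploration: keep each individual or its opposite, whichever is fitter
pop = [min(ind, opp, key=sphere) for ind, opp in zip(pop, dynamic_opposite(pop))]
# exploitation: append the centroid of the best individuals
pop.append(centroid_of_best(pop, sphere))
print(min(sphere(ind) for ind in pop))
```

In the full algorithm these steps would sit inside a DE loop with mutation and crossover; here they are isolated to show why OBL widens the search while the centroid member concentrates it.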
Innovation is becoming ever more pivotal to national development strategies, but measuring and comparing innovation performance across nations remains a methodological challenge. This research devises a new time-series similarity method that integrates Seasonal-Trend decomposition (STL) with Fast Dynamic Time Warping (FastDTW) to examine Iran's innovation trends in comparison with its regional peers. Owing to availability constraints on Global Innovation Index data, research and development spending as a proportion of GDP is used as a proxy, with its limitations clearly noted. Based on World Bank indicators and an autoencoder-based imputation technique for missing values, the research compares cross-country similarities and identifies the thematic domains best aligned with Iran's innovation path. Findings indicate that poverty and health metrics exhibit the strongest statistical similarity with R&D spending in Iran, while Saudi Arabia, Oman, and Kuwait show the closest cross-country proximity. The implication is that Iranian innovation is more intrinsically connected with social development dynamics than with conventional economic or infrastructure drivers, with region-specific implications for STI policy.
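The similarity core of such a pipeline is dynamic time warping. A minimal plain-DTW sketch follows (the study pairs STL decomposition with the faster FastDTW approximation, both omitted here): DTW aligns two series by stretching the time axis, so identical shapes shifted in time score as close even when pointwise comparison would not.

```python
def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance
    between two numeric sequences, with absolute-difference cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping moves
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

x = [0, 0, 1, 2, 1, 0, 0]
y = [0, 1, 2, 1, 0, 0, 0]      # the same pulse, shifted one step earlier
z = [0, 0, 0, 0, 0, 0, 0]      # a flat series
print(dtw_distance(x, y))      # 0.0 -- warping absorbs the time shift
print(dtw_distance(x, z))      # 4.0 -- sum of the pulse magnitudes
```

FastDTW replaces the full quadratic table with a multi-resolution approximation, which is what makes country-by-indicator comparisons tractable at scale.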
In this letter, we investigate the deformation of the ModMax theory, as a unique Lagrangian of non-linear electrodynamics preserving both conformal and electromagnetic-duality invariance, under TT̄-like flows. We show that the deformed theory is the generalized non-linear Born-Infeld electrodynamics. Inspired by the invariance under the flow equation for Born-Infeld theories, we propose another TT̄-like operator generating the ModMax and generalized Born-Infeld non-linear electrodynamic theories from the usual Maxwell and Born-Infeld theories, respectively.
Graph matching is one of the most important problems in graph theory and combinatorial optimization, with many applications across various domains. Although metaheuristic algorithms have performed well on many NP-hard and NP-complete problems, no superior solutions obtained by these algorithms have been reported for graph matching, and the reason for this inefficiency has not been investigated. In this paper, we show that simulated annealing, as a stochastic optimization method, is unlikely to come even close to the optimal solution for this problem. In addition to the theoretical discussion, experimental results verify our claim; for example, in two sample graphs, the probability of reaching a solution with more than three correct matches under simulated annealing is about 0.02.
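For concreteness, here is a minimal simulated-annealing matcher over vertex permutations, with a transposition as the neighbourhood move; the schedule and step count are illustrative. On the toy instance of two 6-cycles the search space is tiny, so SA fares well, unlike on the larger instances the paper analyses.

```python
import math
import random

def matched_edges(perm, e1, e2):
    """Number of edges of graph 1 that perm maps onto edges of graph 2."""
    return sum((perm[u], perm[v]) in e2 or (perm[v], perm[u]) in e2
               for u, v in e1)

def sa_graph_match(e1, e2, n, steps=5000, t0=1.0, alpha=0.999):
    """Simulated annealing over vertex permutations; swapping the images
    of two vertices is the neighbourhood move."""
    perm = list(range(n))
    random.shuffle(perm)
    score, t = matched_edges(perm, e1, e2), t0
    for _ in range(steps):
        i, j = random.sample(range(n), 2)
        perm[i], perm[j] = perm[j], perm[i]
        new = matched_edges(perm, e1, e2)
        if new >= score or random.random() < math.exp((new - score) / t):
            score = new                           # accept the move
        else:
            perm[i], perm[j] = perm[j], perm[i]   # reject: undo the swap
        t *= alpha                                # geometric cooling
    return perm, score

# two copies of a 6-cycle: an optimal matching preserves all 6 edges
n = 6
cycle = {(i, (i + 1) % n) for i in range(n)}
random.seed(1)
perm, score = sa_graph_match(list(cycle), cycle, n)
print(score)
```

The abstract's point is that as graphs grow, the fraction of permutations with many correct matches collapses, so moves that improve the objective become vanishingly rare and annealing stalls far from the optimum.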
This work analyzes the corrective effects of temperature, surface tension, and nuclear matter density on the fusion barriers, and also on the fusion cross sections, obtained from the original version of the proximity formalism (the Prox.77 model) for 62 fusion reactions. A systematic comparison between the theoretical and empirical data for the height and position of the barrier, as well as the fusion cross sections, reveals that the agreement improves when each of these physical effects is imposed on the Prox.77 model for our selected mass range. Moreover, it is shown that the density-dependence effects play the most important role in improving the theoretical results of the proximity potential.
We employ the "complexity equals action" conjecture to investigate the action growth rate for the charged and neutral AdS black branes of a holographic toy model consisting of Einstein-Maxwell theory in a (d+1)-dimensional bulk spacetime with d−1 massless scalar fields, known as Einstein-Maxwell-Axion (EMA) theory. From the holographic point of view, the scalar fields source a spatially dependent field theory with momentum relaxation on the boundary, which is dual to the homogeneous and isotropic black branes. We find that the growth rate of the holographic complexity within the Wheeler-DeWitt (WDW) patch saturates the corresponding Lloyd bound in the late-time limit. In particular, for the neutral AdS black branes, it is shown that the late-time complexity growth rate vanishes for a particular value of the relaxation parameter β_max at which the temperature of the black hole is minimal. We then investigate the transport properties of the holographic dual theory at the minimum temperature. A non-linear contribution of the axion-field kinetic term in the context of the k-essence model in four-dimensional spacetime is considered as well. We also study the time evolution of the holographic complexity for the dyonic AdS black branes in this model.
The human mental search (HMS) algorithm is a relatively recent population-based metaheuristic algorithm which has shown competitive performance in solving complex optimisation problems. It is based on three main operators: mental search, grouping, and movement. In the original HMS algorithm, a clustering algorithm is used to group the current population in order to identify a promising region in the search space, and candidate solutions then move towards the best candidate solution in that region. In this paper, we propose a novel HMS algorithm, HMS-OS, which is based on clustering in both objective and search space, where clustering in objective space finds a set of best candidate solutions whose centroid is then also used in updating the population. For further improvement, HMS-OS benefits from an adaptive selection of the number of mental processes in the mental search operator. Experimental results on the CEC-2017 benchmark functions with dimensionalities of 50 and 100 indicate that, in comparison to other optimisation algorithms, HMS-OS yields excellent performance, superior to that of other methods.
Computing the excess as a way of measuring the redundancy of frames was recently introduced to address certain issues in frame theory. In this paper, the concept of excess for fusion frames is studied. Several explicit methods are provided to compute the excess of fusion frames and their Q-duals. In particular, some upper bounds for the excess of Q-dual fusion frames are established. It turns out that, unlike ordinary frames, for every n ∈ ℕ one can provide a fusion frame and a Q-dual whose excesses differ by n. Furthermore, the connection between the excess of fusion frames and their orthogonal complements is completely characterized. Finally, several examples are exhibited to confirm the obtained results.
The numerical analysis of higher-order mixed finite-element discretizations for saddle-point problems, such as the Stokes equations, has been well studied in recent years. While the theory and practice of such discretizations are now well understood, the same cannot be said for efficient preconditioners for solving the resulting linear (or linearized) systems of equations. In this work, we propose and study variants of the well-known Vanka relaxation scheme that lead to effective geometric multigrid preconditioners for both conforming Taylor-Hood discretizations and non-conforming H(div)-L2 discretizations of the Stokes equations. Numerical results demonstrate robust performance with respect to FGMRES iteration counts for increasing polynomial order for some of the considered discretizations, and expose open questions about stopping tolerances for effectively preconditioned iterations at high polynomial order.
In this letter, we study a holographic diffeomorphism-invariant higher-derivative extension of Bergshoeff-Hohm-Townsend (BHT) cosmological gravity in the context of Wald's formalism. We calculate the entropy, mass, and angular momentum of warped anti-de Sitter (WAdS3) black holes in ghost-free BHT massive gravity and its extension using the covariant phase space method. We also compute the central charges of the dual boundary conformal field theories (CFTs) from the thermodynamics method.
Numerical simulation of incompressible fluid flows has been an active topic of research in scientific computing for many years, with many contributions to both discretizations and linear and nonlinear solvers. In this work, we propose an improved relaxation scheme for higher-order Taylor-Hood discretizations of the incompressible Stokes and Navier-Stokes equations, demonstrating its efficiency within monolithic multigrid preconditioners for the linear(ized) equations. The key to this improvement is an improved patch construction for Vanka-style relaxation, introducing, for the first time, overlap in the pressure degrees of freedom within the patches. Numerical results demonstrate significant improvement in both multigrid iterations and time-to-solution for the linear Stokes case, on both triangular and quadrilateral meshes. For the nonlinear Navier-Stokes case, we show similar improvements, including in the number of nonlinear iterations needed in an inexact Newton method.
The construction of Parseval fusion frames is highly desirable in a wide range of signal processing applications. In this paper, we study the problem of modifying the weights of a fusion frame in order to generate a Parseval fusion frame. To this end, we extend the notion of scalability to the fusion frame setting. We then characterize scalable fusion Riesz bases and 1-excess fusion frames. Furthermore, we provide necessary and sufficient conditions for the scalability of certain k-excess fusion frames, k ≥ 2. Finally, we present several pertinent examples to confirm the obtained results.
Fragmentation is a routine part of communication in 6LoWPAN-based IoT networks, designed to accommodate small frame sizes on constrained wireless links. However, this process introduces a critical vulnerability: fragments are typically stored and processed before their legitimacy is confirmed, allowing attackers to exploit this gap with minimal effort. In this work, we explore a defense strategy that takes a more adaptive, behavior-aware approach to this problem. Our system, called Predictive-CSM, introduces a combination of two lightweight mechanisms. The first tracks how each node behaves over time, rewarding consistent and successful interactions while quickly penalizing suspicious or failing patterns. The second checks the integrity of packet fragments using a chained hash, allowing incomplete or manipulated sequences to be caught early, before they can occupy memory or waste processing time. We put this system to the test using a set of targeted attack simulations, including early fragment injection, replayed headers, and flooding with fake data. Across all scenarios, Predictive-CSM preserved network delivery and maintained energy efficiency, even under pressure. Rather than relying on heavyweight cryptography or rigid filters, this approach allows constrained devices to adapt their defenses in real time based on what they observe, not just what they are told. In that way, it offers a step forward for securing fragmented communication in real-world IoT systems.
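The chained-hash idea can be sketched as follows. This is an illustrative reconstruction, not Predictive-CSM's actual scheme: each fragment's tag commits to the previous tag, so a tampered, reordered, or replayed fragment invalidates everything after it. The key and fragment contents are placeholders.

```python
import hashlib

def chain_tags(fragments, key=b"session-key"):
    """Per-fragment tags where each tag also commits to the previous one,
    so any break in the sequence is detected at the first bad fragment."""
    tags, prev = [], key
    for frag in fragments:
        prev = hashlib.sha256(prev + frag).digest()
        tags.append(prev)
    return tags

def verify_chain(fragments, tags, key=b"session-key"):
    """Recompute the chain and compare against the received tags."""
    return tags == chain_tags(fragments, key)

frags = [b"hdr|seq0", b"payload-a", b"payload-b"]
tags = chain_tags(frags)
print(verify_chain(frags, tags))              # True: chain intact
tampered = [b"hdr|seq0", b"EVIL", b"payload-b"]
print(verify_chain(tampered, tags))           # False: chain broken mid-sequence
```

Because verification fails at the first inconsistent fragment, a receiver can discard the rest of a forged sequence before buffering it, which is the memory- and energy-saving behaviour the abstract describes.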
In statistical modeling, prediction and explanation are two fundamental objectives. When the primary goal is forecasting, it is important to account for the inherent uncertainty associated with estimating unknown outcomes. Traditionally, confidence intervals constructed using standard deviations have served as a formal means to quantify this uncertainty and evaluate the closeness of predicted values to their true counterparts. This approach reflects an implicit aim to capture the behavioral similarity between observed and estimated values. However, advances in similarity-based approaches present promising alternatives to conventional variance-based techniques, particularly in contexts characterized by large datasets or a high number of explanatory variables. This study investigates which methods, traditional or similarity-based, are capable of producing narrower confidence intervals under comparable conditions, thereby offering more precise and informative intervals. The dataset consists of 42 U.S. mega-cap companies. Because the high number of features induces interdependencies among predictors, Ridge Regression is applied to address this issue. The findings indicate that the variance-based method and LCSS exhibit the highest coverage among the analyzed methods, although they produce broader intervals. Conversely, DTW, Hausdorff, and TWED deliver narrower intervals, positioning them as the most precise methods, despite their moderate coverage rates. Ultimately, the trade-off between interval width and coverage underscores the necessity of context-aware decision making when selecting similarity-based methods for confidence-interval estimation in time-series analysis.
Modern smart grids demand fast, intelligent, and energy-aware computing at the edge to manage real-time fluctuations and ensure reliable operation. This paper introduces FOGNITE (Fog-based Grid Intelligence with Neural Integration and Twin-based Execution), a next-generation fog-cloud framework designed to enhance autonomy, resilience, and efficiency in distributed energy systems. FOGNITE combines three core components: federated learning, reinforcement learning, and digital-twin validation. Each fog node trains a local CNN-LSTM model on private energy-consumption data, enabling predictive intelligence while preserving data privacy through federated aggregation. A reinforcement learning agent dynamically schedules tasks based on current system load and energy conditions, optimizing for performance under uncertainty. To prevent unsafe or inefficient decisions, a hierarchical digital-twin layer simulates potential actions before deployment, significantly reducing execution errors and energy waste. We evaluate FOGNITE on a real-world testbed of Raspberry Pi devices, showing up to a 93.7% improvement in load-balancing accuracy and a 63.2% reduction in energy waste compared to conventional architectures. By shifting smart-grid control from reactive correction to proactive optimization, FOGNITE represents a step toward more intelligent, adaptive, and sustainable energy infrastructures.
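The federated-aggregation step mentioned above can be sketched as a minimal FedAvg-style average, weighting each node's parameters by its local dataset size; the node weights and parameter vectors here are toy values, not FOGNITE's CNN-LSTM models.

```python
def federated_average(models, sizes):
    """FedAvg-style aggregation: average each parameter across nodes,
    weighted by the node's local dataset size.
    models: list of per-node parameter lists; sizes: local sample counts."""
    total = sum(sizes)
    dims = len(models[0])
    return [sum(m[d] * s for m, s in zip(models, sizes)) / total
            for d in range(dims)]

# three fog nodes report local weights; the larger dataset dominates
local = [[1.0, 0.0], [3.0, 2.0], [2.0, 1.0]]
sizes = [100, 300, 100]
print(federated_average(local, sizes))   # [2.4, 1.4]
```

Only these aggregated parameters leave the node; the raw consumption data stays local, which is the privacy property the framework relies on.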
As advances in artificial intelligence and nonlinear optics continue, new methods can be used to better describe and determine nonlinear optical phenomena. In this research, we analyze the diffraction patterns of an organic material and determine its nonlinear refractive index using the ResNet-152 convolutional neural network architecture, in laser-power regimes where the diffraction rings are not clearly distinguishable. This approach can open new avenues for optical material characterization in situations where conventional methods do not apply.