Urmia University
The \textit{Collatz conjecture} is an unsolved problem in mathematics, named after Lothar Collatz, who introduced it in 1937. The conjecture is also known as the Syracuse conjecture or problem. Take any positive integer $n$. If $n$ is even, divide it by $2$; otherwise, do "triple plus one" to get $3n+1$. The conjecture is that, for every starting number, this process eventually reaches one. In modular arithmetic notation, define a function $f$ as follows: $$f(n)=\left\{\begin{array}{ll}\frac{n}{2} & \text{if } n\equiv 0 \pmod 2,\\ 3n+1 & \text{if } n\equiv 1 \pmod 2.\end{array}\right.$$ In this paper, we present the proof of the Collatz conjecture for many types of sets defined by the remainder theorem of arithmetic. These sets are defined in mods $6, 12, 24, 36, 48, 60, 72, 84, 96, 108$, and we took only odd positive remainders to work with. It is not difficult to prove that the same results are true for any mod $12m$, for positive integers $m$.
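As a quick illustration, the following Python sketch applies the map $f$ to the odd residue classes modulo 12 over a finite range; the function names are ours, and a finite check is of course only a sanity test, not a proof.

```python
def collatz_step(n: int) -> int:
    # One application of the Collatz map f.
    return n // 2 if n % 2 == 0 else 3 * n + 1

def reaches_one(n: int, max_iter: int = 10_000) -> bool:
    # Iterate f; return True if the trajectory hits 1 within max_iter steps.
    for _ in range(max_iter):
        if n == 1:
            return True
        n = collatz_step(n)
    return False

# Check the odd residue classes mod 12, i.e. n = 12k + r for
# r in {1, 3, 5, 7, 9, 11}, over a finite sample.
for r in (1, 3, 5, 7, 9, 11):
    assert all(reaches_one(12 * k + r) for k in range(1000))
```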
Quantum thermometry leveraging quantum sensors is investigated with an emphasis on fundamental precision bounds derived from quantum estimation theory. The proposed sensing platform consists of two dissimilar qubits coupled via a capacitor, which induces quantum oscillations in the presence of a thermal environment. Thermal equilibrium states are modeled using the Gibbs distribution. The precision limits are assessed through the Quantum Fisher Information (QFI) and the Hilbert-Schmidt Speed (HSS), which serve as stringent criteria for sensor sensitivity. Systematic analysis of the dependence of the QFI and HSS on tunable parameters, such as qubit energies and coupling strengths, provides optimization pathways for maximizing temperature sensitivity. Furthermore, we explore two distinct quantum thermometry paradigms: (I) local temperature estimation performed directly by Alice, who possesses the quantum sensor interfacing with the thermal bath, and (II) remote temperature estimation conducted by Bob, facilitated via quantum teleportation. In the latter scenario, the temperature information encoded in the qubit state is transmitted through a single-qubit quantum thermal teleportation protocol. Our findings indicate that direct measurement yields superior sensitivity compared to remote estimation, primarily due to the inherent advantage of direct sensor-environment interaction. The analysis reveals that increasing the Josephson energies diminishes the sensor sensitivity, whereas augmenting the mutual coupling strength between the qubits enhances it.
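For a Gibbs state the eigenbasis does not depend on temperature, so the QFI for temperature estimation reduces to the classical Fisher information of the Boltzmann populations, $F_Q = \mathrm{Var}(H)/T^4$ (with $k_B = 1$). A minimal numerical sketch, using an illustrative coupled-qubit spectrum rather than the paper's Hamiltonian:

```python
import numpy as np

def thermal_qfi(energies: np.ndarray, T: float) -> float:
    # Gibbs populations p_i = exp(-E_i/T)/Z  (k_B = 1).
    w = np.exp(-energies / T)
    p = w / w.sum()
    mean_E = np.dot(p, energies)
    var_E = np.dot(p, (energies - mean_E) ** 2)
    # Thermal-state QFI for T: classical Fisher info Var(H)/T^4.
    return var_E / T**4

# Toy spectrum of two dissimilar qubits with a mutual coupling g
# (E1, E2, g are illustrative values, not the paper's parameters).
E1, E2, g = 1.0, 1.3, 0.2
energies = np.array([0.0, E1 - g, E2 + g, E1 + E2])
for T in (0.1, 0.5, 1.0):
    print(T, thermal_qfi(energies, T))
```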
This paper presents a new Lie theoretic approach to fractal calculus, which in turn yields such new results as a Fractal Noether's Theorem, a setting for fractal differential forms, for vector fields, and Lie derivatives, as well as k-fractal jet space, and algorithms for k-th fractal prolongation. The symmetries of the fractal nonlinear $n$-th $\alpha$-order differential equation are examined, followed by a discussion of the symmetries of the fractal linear $n$-th $\alpha$-order differential equation. Additionally, the symmetries of the fractal linear first $\alpha$-order differential equation are derived. Several examples are provided to illustrate and highlight the details of these concepts.
Wind power, as a renewable source of energy, has numerous economic, environmental, and social benefits. In order to enhance and control renewable wind power, it is vital to utilize models that predict wind speed with high accuracy. Many traditional models perform poorly in wind speed prediction because they neglect the requirement and significance of data preprocessing and disregard the inadequacy of using a single predictive model. In the current study, to predict wind speed at target stations in the north of Iran, a multi-layer perceptron model (MLP) was combined with the Whale Optimization Algorithm (WOA) to build a new hybrid method (MLP-WOA) using a limited set of data (2004-2014). The MLP-WOA model was then applied at each of the ten target stations (namely: Astara, Bandar-E-Anzali, Rasht, Manjil, Jirandeh, Talesh, Kiyashahr, Lahijan, Masuleh, and Deylaman), with nine stations used for training and the tenth for testing, to increase the accuracy of the resulting hybrid model. The capability of the hybrid model in wind speed forecasting at each target station was compared with that of the MLP model without the WOA optimizer. To obtain definite results, numerous statistical performance measures were utilized. For all ten target stations, the MLP-WOA model produced more accurate outcomes than the standalone MLP model. The hybrid model showed acceptable performance, with lower values of the RMSE, SI, and RE parameters and higher values of the NSE, WI, and KGE parameters. It was concluded that the WOA optimization algorithm can improve the prediction accuracy of the MLP model and may be recommended for accurate wind speed prediction.
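For readers unfamiliar with the optimizer, here is a minimal Python sketch of the core WOA update rules (encircling, bubble-net spiral, and random search), applied to a toy quadratic; in the paper's setting the objective would instead be the validation error of the MLP, and all names and values here are illustrative.

```python
import numpy as np

def woa_minimize(f, dim, n_whales=30, iters=200, lb=-5.0, ub=5.0, seed=0):
    # Minimal Whale Optimization Algorithm (Mirjalili & Lewis, 2016).
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_whales, dim))
    best = X[np.argmin([f(x) for x in X])].copy()
    for t in range(iters):
        a = 2.0 - 2.0 * t / iters           # a decreases linearly 2 -> 0
        for i in range(n_whales):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2 * a * r1 - a, 2 * r2
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1):   # exploit: encircle the best whale
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                       # explore: move toward a random whale
                    Xr = X[rng.integers(n_whales)]
                    X[i] = Xr - A * np.abs(C * Xr - X[i])
            else:                           # bubble-net spiral update (b = 1)
                l = rng.uniform(-1, 1)
                X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lb, ub)
        fit = np.array([f(x) for x in X])
        if fit.min() < f(best):
            best = X[fit.argmin()].copy()
    return best

print(woa_minimize(lambda x: np.sum(x**2), dim=5))
```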
The number of waveguides crossing an intersection increases with the development of complex photonic integrated circuits. Numerical simulations are presented to demonstrate that a Maxwell's fish-eye (MFE) lens can be used as a multiband crossing medium. In previous designs of waveguide intersections, bends are needed before and after the intersection to adjust the crossing angle, resulting in a larger footprint. The presented design incorporates the waveguide bends into the intersection, which saves footprint. In this paper, 4x4 and 6x6 intersections based on ideal and graded photonic crystal (GPC) MFE lenses are investigated, where 4 and 6 waveguides intersect, respectively. The intersection based on the ideal MFE lens partially covers the O, E, S, C, L, and U bands of optical communication, while the intersection based on the GPC-MFE lens is optimized to cover the entire C-band. For the 4x4 and 6x6 intersections based on the GPC-MFE lens, crosstalk levels are below -24 dB and -18 dB, and the average insertion losses are 0.60 dB and 0.85 dB in the C-band, with lens radii of 7a and 10a, respectively, where a is the lattice constant of the photonic crystal.
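A small sketch of the underlying idea, assuming the standard fish-eye profile $n(r) = n_0/(1+(r/R)^2)$; the mapping from local index to photonic-crystal hole radius is a hypothetical placeholder, since a real GPC design would be calibrated against band-structure data.

```python
import numpy as np

def mfe_index(r, R, n0=2.0):
    # Maxwell's fish-eye graded-index profile: n(r) = n0 / (1 + (r/R)^2).
    return n0 / (1.0 + (r / R) ** 2)

# Sample the profile at photonic-crystal lattice sites and map each local
# index to an effective hole radius (hypothetical linear map).
a = 1.0                        # lattice constant
R = 7 * a                      # lens radius, as in the 4x4 design
holes = []
for x in np.arange(-R, R + a, a):
    for y in np.arange(-R, R + a, a):
        r = np.hypot(x, y)
        if r <= R:
            n_rel = mfe_index(r, R) / mfe_index(0.0, R)    # 1 at center, 1/2 at rim
            holes.append((x, y, 0.45 * a * (1.0 - n_rel))) # larger hole = lower index
print(len(holes), "lattice sites inside the lens")
```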
This study first reviews fuzzy random portfolio selection theory and describes the concept of the portfolio optimization model as a useful instrument for finance practitioners and researchers. Second, this paper specifically aims at applying possibility-based models for transforming the fuzzy random variables into a linear program. A harmony search algorithm is applied to solve the portfolio selection problem with the objective of return maximization. We provide a numerical example to illustrate the proposed model. The results show that the evolutionary method of this paper, based on the harmony search algorithm, can consistently handle the practical portfolio selection problem.
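A minimal sketch of the harmony search loop on a toy return-maximization problem over the weight simplex; the defuzzified expected returns are hypothetical, and the possibility-based constraints of the full model are omitted.

```python
import numpy as np

def harmony_search(mu, iters=5000, hms=20, hmcr=0.9, par=0.3, bw=0.05, seed=0):
    # Maximize sum(w * mu) with w on the simplex, via harmony search.
    rng = np.random.default_rng(seed)
    n = len(mu)
    def normalize(w):
        w = np.clip(w, 0, None)
        return w / w.sum() if w.sum() > 0 else np.full(n, 1 / n)
    memory = [normalize(rng.random(n)) for _ in range(hms)]
    scores = [w @ mu for w in memory]
    for _ in range(iters):
        new = np.empty(n)
        for j in range(n):
            if rng.random() < hmcr:                 # memory consideration
                new[j] = memory[rng.integers(hms)][j]
                if rng.random() < par:              # pitch adjustment
                    new[j] += rng.uniform(-bw, bw)
            else:                                   # random selection
                new[j] = rng.random()
        new = normalize(new)
        s = new @ mu
        worst = int(np.argmin(scores))
        if s > scores[worst]:                       # replace the worst harmony
            memory[worst], scores[worst] = new, s
    return memory[int(np.argmax(scores))]

# Hypothetical defuzzified expected returns for five assets.
print(harmony_search(np.array([0.05, 0.12, 0.08, 0.03, 0.10])))
```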
We construct the quantum super fuzzy Dirac and chirality operators on the q-deformed super fuzzy sphere. Using the quantum super fuzzy Ginsparg-Wilson algebra, we study the q-deformed super gauged fuzzy Dirac and chirality operators in the instanton sector. We show that they have the correct commutative limit when the noncommutativity parameter $l$ tends to infinity and $q$ tends to unity.
This study employs the Causal Machine Learning (CausalML) statistical method to analyze the influence of electricity pricing policies on carbon dioxide (CO2) levels in the household sector. Investigating the causality between potential outcomes and treatment effects, where changes in pricing policies are the treatment, our analysis challenges the conventional wisdom surrounding incentive-based electricity pricing. The study's findings suggest that adopting such policies may inadvertently increase CO2 intensity. Additionally, we integrate a machine learning-based meta-algorithm, reflecting a contemporary statistical approach, to enhance the depth of our causal analysis. The study conducts a comparative analysis of the X-, T-, S-, and R-learners to ascertain the optimal methods based on the specified goals and contextual nuances of the question at hand. This research contributes valuable insights to the ongoing dialogue on sustainable development practices, emphasizing the importance of considering unintended consequences in policy formulation.
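As an illustration of the meta-algorithm family, the sketch below implements the S- and T-learners on synthetic data with sklearn; the covariates, treatment, and outcome are simulated stand-ins, not the study's data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))          # household covariates (synthetic)
t = rng.integers(0, 2, n)            # treatment: 1 = incentive-based pricing
# Synthetic CO2 intensity with a heterogeneous treatment effect.
y = X[:, 0] + t * (0.5 + 0.3 * X[:, 1]) + rng.normal(scale=0.1, size=n)

# T-learner: separate outcome models for treated and control units.
m1 = GradientBoostingRegressor().fit(X[t == 1], y[t == 1])
m0 = GradientBoostingRegressor().fit(X[t == 0], y[t == 0])
cate_t = m1.predict(X) - m0.predict(X)

# S-learner: one model with the treatment as an extra feature.
ms = GradientBoostingRegressor().fit(np.column_stack([X, t]), y)
cate_s = (ms.predict(np.column_stack([X, np.ones(n)]))
          - ms.predict(np.column_stack([X, np.zeros(n)])))

print(cate_t.mean(), cate_s.mean())   # average treatment effect estimates
```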
With the advancement of technologies like Industry 4.0, communication networks must meet the stringent requirements of applications demanding deterministic and bounded latencies. The problem is further compounded by the need to periodically synchronize network devices to a common time reference to address clock drifts. Existing solutions often simplify the problem by assuming either perfect synchronization or a worst-case error. Additionally, these approaches delay the scheduling process in network devices until the scheduled frame is guaranteed to have arrived in the device queue, adding further delay to the stream. A novel approach that completely avoids queuing delays is proposed, enabling it to meet even the strictest deadline requirements. Furthermore, both the conventional and the proposed approaches can be enhanced by incorporating network-derived time-synchronization information. This is not only convenient for meeting deadline requirements but also improves bandwidth efficiency.
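A back-of-the-envelope sketch of why the synchronization error bound matters: the guard band a scheduler must add around each frame grows with the assumed clock offset, so a tighter, network-derived bound directly saves bandwidth (all numbers illustrative).

```python
# Guard-band sizing under clock synchronization error (illustrative numbers).
sync_error_worst = 1_000e-9      # assumed worst-case offset bound: 1 us
sync_error_derived = 50e-9       # bound derived from network sync messages
frame_time = 12.336e-6           # max-size frame at 1 Gbit/s, incl. preamble + IFG

def window(delta):
    # A scheduled window is widened by 2*delta so that a frame sent at the
    # earliest/latest possible local time still fits inside it.
    return frame_time + 2 * delta

for d in (sync_error_worst, sync_error_derived):
    w = window(d)
    print(f"delta={d*1e9:.0f} ns  window={w*1e6:.3f} us  "
          f"overhead={(w / frame_time - 1) * 100:.1f}%")
```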
Forming quantitative portfolios using statistical risk models presents a significant challenge for hedge funds and portfolio managers. This research investigates three distinct statistical risk models to construct quantitative portfolios of 1,000 floating stocks in the US market. Utilizing five different investment strategies, these models are tested across four periods, encompassing the last three major financial crises: The Dot Com Bubble, Global Financial Crisis, and Covid-19 market downturn. Backtests leverage the CRSP dataset from January 1990 through December 2023. The results demonstrate that the proposed models consistently outperformed market excess returns across all periods. These findings suggest that the developed risk models can serve as valuable tools for asset managers, aiding in strategic decision-making and risk management in various economic conditions.
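As one concrete instance of a statistical risk model (the paper does not spell out its exact specification here), the sketch below builds a PCA factor covariance from simulated returns and forms a minimum-variance portfolio from it.

```python
import numpy as np

def pca_risk_model(returns: np.ndarray, k: int = 5):
    # Statistical (PCA) factor risk model: keep the top-k principal
    # components of the return covariance, treat the rest as idiosyncratic.
    R = returns - returns.mean(axis=0)
    cov = np.cov(R, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    idx = np.argsort(vals)[::-1][:k]
    B, F = vecs[:, idx], np.diag(vals[idx])      # loadings, factor variances
    resid = np.clip(np.diag(cov - B @ F @ B.T), 1e-10, None)
    return B @ F @ B.T + np.diag(resid)          # structured covariance

def min_variance_weights(cov):
    # Long/short minimum-variance portfolio: w proportional to cov^{-1} 1.
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

rng = np.random.default_rng(0)
rets = rng.normal(0, 0.01, size=(2520, 50))      # ~10y of daily returns, 50 stocks
w = min_variance_weights(pca_risk_model(rets, k=5))
print(w[:5])
```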
In this work, a simple vision algorithm is designed and implemented to extract and identify the surface defects on Golden Delicious apples caused by the enzymic browning process. Thirty-four Golden Delicious apples were selected for the experiments, of which 17 had enzymic browning defects and the other 17 were sound. The image processing part of the proposed vision algorithm extracted the defective surface area of the apples with a high accuracy of 97.15%. The area and mean of the segmented images were selected as the 2x1 feature vector fed into a designed artificial neural network. The analysis based on the above features indicated that images with a mean of less than 0.0065 did not belong to the defective apples; rather, they were extracted as part of the calyx and stem of the healthy apples. The classification accuracy of the neural network applied in this study was 99.19%.
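A minimal sketch of the feature-extraction and classification pipeline, assuming grayscale images scaled to [0, 1]; the synthetic images and the segmentation threshold are placeholders rather than the paper's calibrated values.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def defect_features(gray: np.ndarray, thresh: float = 0.3):
    # Segment candidate defect pixels on a [0, 1] grayscale apple image and
    # return the 2x1 feature vector used in the paper: area and mean.
    mask = gray < thresh                 # browning shows up as darker pixels
    area = mask.mean()                   # normalized defective area
    mean = gray[mask].mean() if mask.any() else 0.0
    return np.array([area, mean])

# Stand-in data: bright "sound" images vs. images with a dark defect patch.
rng = np.random.default_rng(0)
images, labels = [], []
for i in range(34):
    img = np.clip(rng.normal(0.7, 0.05, (64, 64)), 0, 1)
    if i % 2:                            # half the apples get a defect
        img[20:35, 20:35] = rng.uniform(0.05, 0.2, (15, 15))
    images.append(img); labels.append(i % 2)

X = np.array([defect_features(img) for img in images])
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000).fit(X, labels)
print(clf.score(X, labels))
```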
Network function virtualization enables network operators to implement new services through a process called service function chain mapping. The concept of a Service Function Chain (SFC), an ordered set of Network Functions (NFs), is introduced to provide complex services. The network functions of an SFC can be decomposed in several ways into Virtual Network Functions (VNFs). Additionally, the decomposed NFs can be placed (mapped) as VNFs on different machines in the underlying physical infrastructure. Selecting good decompositions and good placements among the possible options greatly affects both costs and service quality metrics. Previous research has addressed NF decomposition and VNF placement as separate problems. In this paper, however, we address NF decomposition and VNF placement simultaneously as a single problem. Since finding an optimal solution is NP-hard, we employ heuristic algorithms to solve the problem. Specifically, we introduce a multi-objective decomposition and mapping of VNFs (MODMVNF) method based on the non-dominated sorting genetic algorithm II (NSGA-II). The goal is to find near-optimal decompositions and mappings onto the physical network at the same time, minimizing the mapping cost and the communication latency of the SFC. Comparison of the results of the proposed method with those obtained by solving an ILP formulation of the problem, as well as with the results of a multi-objective particle swarm algorithm, shows the efficiency and effectiveness of the proposed method in terms of cost and communication latency.
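A toy sketch of the joint encoding and Pareto selection; the cost and latency models are random stand-ins, and the selection step keeps only the first non-dominated front rather than NSGA-II's full rank-and-crowding procedure.

```python
import random

# Toy problem (hypothetical): each NF of the chain has several candidate
# decompositions; each resulting VNF is placed on one of several servers.
random.seed(0)
N_NF, N_DEC, N_SRV = 4, 3, 5
dec_cost = [[random.uniform(1, 3) for _ in range(N_DEC)] for _ in range(N_NF)]
link_lat = [[random.uniform(0.1, 1.0) for _ in range(N_SRV)] for _ in range(N_SRV)]

def evaluate(ind):
    decs, places = ind
    cost = sum(dec_cost[i][d] for i, d in enumerate(decs))
    lat = sum(link_lat[places[i]][places[i + 1]] for i in range(N_NF - 1))
    return cost, lat                     # two objectives, both minimized

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and a != b

def random_ind():
    return ([random.randrange(N_DEC) for _ in range(N_NF)],
            [random.randrange(N_SRV) for _ in range(N_NF)])

def mutate(ind):
    decs, places = list(ind[0]), list(ind[1])
    i = random.randrange(N_NF)
    if random.random() < 0.5:
        decs[i] = random.randrange(N_DEC)    # change a decomposition choice
    else:
        places[i] = random.randrange(N_SRV)  # change a placement choice
    return (decs, places)

pop = [random_ind() for _ in range(40)]
for _ in range(200):
    pop += [mutate(random.choice(pop)) for _ in range(40)]
    scored = [(evaluate(p), p) for p in pop]
    front = [p for s, p in scored if not any(dominates(t, s) for t, _ in scored)]
    rest = [p for _, p in scored if p not in front]
    pop = (front + rest)[:40]            # front first, fill with the rest
print([evaluate(p) for p in pop[:5]])
```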
We establish a necessary and sufficient condition for a normal subgroup of a finite group to be a subgroup perfect code.
A comprehensive understanding of heat transfer mechanisms in biological tissues is essential for the advancement of thermal therapeutic techniques and the development of accurate bioheat transfer models. Conventional models often fail to capture the inherently complex thermal behavior of biological media, necessitating more sophisticated approaches for experimental validation and parameter extraction. In this study, the Two-Dimensional Three-Phase Lag (TPL) heat transfer model, implemented via the finite difference method (FDM), was employed to extract key phase lag parameters characterizing heat conduction in bovine skin tissue. Experimental measurements were obtained using a 450 nm laser source and two non-contact infrared sensors. The influence of four critical parameters was systematically investigated: the heat flux phase lag ($\tau_q$), the temperature gradient phase lag ($\tau_\theta$), the thermal displacement coefficient ($k^*$), and the thermal displacement phase lag ($\tau_v$). A carefully designed experimental protocol was used to assess each parameter independently. The results revealed that the extracted phase lag values were substantially lower than those previously reported in the literature. This highlights the importance of high-precision measurements and the need to isolate each parameter during analysis. These findings contribute to the refinement of bioheat transfer models and hold potential for improving the efficacy and safety of clinical thermal therapies.
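A 1D explicit-FDM sketch of the TPL system obtained from the first-order Taylor expansion of the constitutive relation $q(t+\tau_q) = -[k\,\nabla T(t+\tau_\theta) + k^*\nabla v(t+\tau_v)]$ (with thermal displacement $\dot v = T$) combined with energy conservation $\rho c\,\dot T = -\nabla\cdot q$; material values and step sizes are illustrative, not the extracted bovine-skin parameters, and a production code would use the paper's 2D scheme.

```python
import numpy as np

# State: T, u = dT/dt, v (thermal displacement). Expanding the TPL relation
# to first order and eliminating q gives
#   rho*c*(u + tau_q*du/dt) = (k + k*tau_v)*lap(T) + k*tau_th*lap(u) + k*star*lap(v).
rho_c, k, k_star = 3.6e6, 0.5, 1.0            # J/m^3K, W/mK, W/mK/s (illustrative)
tau_q, tau_th, tau_v = 1.0, 0.5, 0.2          # phase lags [s] (illustrative)
nx, dx, dt = 200, 1e-4, 1e-5                  # small dt for explicit stability
T = np.zeros(nx); u = np.zeros(nx); v = np.zeros(nx)
T[0] = 10.0                                   # laser-heated boundary (relative temp.)

def lap(f):
    out = np.zeros_like(f)
    out[1:-1] = (f[2:] - 2 * f[1:-1] + f[:-2]) / dx**2
    return out

for _ in range(20000):
    du = ((k + k_star * tau_v) * lap(T) + k * tau_th * lap(u)
          + k_star * lap(v) - rho_c * u) / (rho_c * tau_q)
    v += dt * T
    T += dt * u
    u += dt * du
    T[0] = 10.0                               # hold the heated boundary
print(T[:10])
```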
We present a novel two-qubit quantum magnetometer Hamiltonian optimized for enhanced sensitivity and noise resilience. Compared to existing models, our formulation offers advantages in accuracy, robustness against noise, and entanglement dynamics. Using analytical methods, we derive the Quantum Fisher Information (QFI) and the Signal-to-Noise Ratio (SNR), highlighting its practical viability for magnetic field sensing. Our approach bridges theoretical insights with real-world applicability. We further analyze the performance of the magnetometer with a different initial entangled state, revealing the benefits of entanglement for sensitivity. A comparative analysis with leading research in the field underscores the advancements offered by our proposed design. Finally, we discuss the limitations of our current study and suggest potential avenues for future research.
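The pure-state QFI can be checked numerically from the state's derivative with respect to $B$, via $F_Q = 4(\langle\partial_B\psi|\partial_B\psi\rangle - |\langle\psi|\partial_B\psi\rangle|^2)$; the Hamiltonian below is a generic two-qubit illustration (Zeeman terms plus a $\sigma_z\sigma_z$ coupling), not the paper's proposed model.

```python
import numpy as np
from scipy.linalg import expm

sz = np.diag([1.0, -1.0]); I2 = np.eye(2)

def H(B, J=0.5):
    # Two qubits in a field B with a sigma_z-sigma_z coupling J (illustrative).
    return B * (np.kron(sz, I2) + np.kron(I2, sz)) + J * np.kron(sz, sz)

def evolved(B, t=1.0):
    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # entangled initial probe state
    return expm(-1j * H(B) * t) @ bell

def qfi(B, dB=1e-6):
    psi = evolved(B)
    dpsi = (evolved(B + dB) - evolved(B - dB)) / (2 * dB)
    # Pure-state QFI: 4 (<dpsi|dpsi> - |<psi|dpsi>|^2).
    return 4 * (np.vdot(dpsi, dpsi).real - abs(np.vdot(psi, dpsi)) ** 2)

print(qfi(0.3))   # for this Bell probe the QFI equals 16 t^2
```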
We propose a new algorithm for the recovery of sparse signals from their compressively sensed samples. The proposed algorithm benefits from a strategy of gradual movement to estimate the positions of the non-zero samples of the sparse signal. We decompose each sample of the signal into two variables, namely "value" and "detector", via a weighted exponential function, and update these new variables using the gradient descent method. As in traditional compressed sensing algorithms, the first variable is used to solve the Least Absolute Shrinkage and Selection Operator (Lasso) problem. As a new strategy, the second variable participates in the regularization term of the Lasso (the $\ell_1$ norm) and gradually detects the non-zero elements. The presence of the second variable enables us to extend the vector of the first variable to matrix form, which makes it possible to use the correlation matrix for a heuristic search when there are correlations among the samples of the signal. We compare the performance of the new algorithm with various algorithms for uncorrelated and correlated sparsity; the results indicate the efficiency of the proposed method.
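For context, a baseline Lasso solver (ISTA) of the kind the value/detector decomposition builds on; the sketch shows only the standard $\ell_1$ problem, without the paper's detector variable.

```python
import numpy as np

def ista(A, y, lam=0.1, iters=500):
    # Baseline Lasso solver (ISTA): min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)              # gradient of the quadratic term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 100, 10
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.normal(size=m)
print(np.linalg.norm(ista(A, y) - x_true))
```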
This paper introduces Direct Simplified Symbolic Analysis (DSSA), a new method for simplifying analog circuits. Unlike traditional matrix- or graph-based techniques that are often slow and memory-intensive, DSSA treats the task as a modeling problem and directly extracts the most significant transfer function terms. By combining Monte Carlo simulation with a genetic algorithm, it minimizes error between simplified symbolic and exact numeric expressions. Tests on five circuits in MATLAB show strong performance, with only 0.64 dB average and 1.36 dB maximum variation in dc-gain, along with a 6.8% average pole/zero error. These results highlight DSSA as an efficient and accurate tool for symbolic circuit analysis.
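A toy version of the term-selection idea: a minimal mutation-only GA evolves a bitmask that keeps the subset of (randomly generated, stand-in) transfer-function terms best matching the exact sum over Monte Carlo component samples, trading dB error against term count.

```python
import numpy as np

rng = np.random.default_rng(0)
n_terms, n_mc = 8, 200
samples = rng.lognormal(0, 0.3, size=(n_mc, n_terms))   # component variation
scale = 10.0 ** rng.integers(-3, 3, n_terms)            # some terms dominate
terms = samples * scale                                 # stand-in symbolic terms

def error_db(mask):
    # Average dB error between simplified (masked) and exact sums.
    exact, simp = terms.sum(1), (terms * mask).sum(1)
    return np.mean(np.abs(20 * np.log10(np.maximum(simp, 1e-12) / exact)))

def fitness(mask):
    return error_db(mask) + 0.5 * mask.sum()   # accuracy vs. complexity

pop = rng.integers(0, 2, (30, n_terms))
for _ in range(100):
    fit = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(fit)[:10]]                   # elitist selection
    children = parents[rng.integers(10, size=(20,)), :].copy()
    flips = rng.random(children.shape) < 0.1
    children[flips] ^= 1                                  # bit-flip mutation
    pop = np.vstack([parents, children])
best = pop[np.argmin([fitness(m) for m in pop])]
print(best, error_db(best))
```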
We propose a new technique for adaptive identification of sparse systems based on compressed sensing (CS) theory. We manipulate the transmitted pilot (input signal) and the received signal such that the weights of the adaptive filter approach the compressed version of the sparse system instead of the original system. To this end, we use a random filter structure at the transmitter to form the measurement matrix according to the CS framework. The original sparse system can then be reconstructed by conventional recovery algorithms. As a result, the denoising property of CS can be exploited by the proposed method at the recovery stage. The experiments indicate a significant performance improvement of the proposed method over the conventional LMS method, which identifies the sparse system directly. Furthermore, at low levels of sparsity, our method outperforms a specialized identification algorithm that promotes sparsity.
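A sketch of the conventional baseline: direct LMS identification of a sparse FIR system. The compression and CS-recovery stage of the proposed method is indicated only in comments, since its exact construction is specific to the paper.

```python
import numpy as np

def lms_identify(x, d, n_taps, mu=0.01):
    # Standard LMS adaptive filter: w converges toward the unknown system.
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]  # x[n], x[n-1], ..., x[n-n_taps+1]
        e = d[n] - w @ u                   # a priori error
        w += mu * e * u
    return w

rng = np.random.default_rng(0)
h = np.zeros(64); h[[3, 17, 40]] = [1.0, -0.5, 0.8]   # sparse unknown system
pilot = rng.normal(size=20000)
d = np.convolve(pilot, h)[:len(pilot)] + 0.01 * rng.normal(size=len(pilot))

# Direct identification (the conventional baseline the paper compares to).
w = lms_identify(pilot, d, n_taps=64)
# In the proposed scheme, the pilot is pre-filtered with a random filter so
# that w converges to a compressed version y = Phi @ h; h is then recovered
# with a CS algorithm such as OMP, which also denoises the estimate.
print(np.linalg.norm(w - h))
```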