general-mathematics
Researchers introduced the Optional Intervals Event (OIE) and sequential operations within an algebraic framework to formalize event execution and address limitations of observational physics. The framework provides an axiomatic definition for "simultaneous start" and explains why both concurrent and sequential events can determine a first-place finisher.
Let $\Theta$ denote the supremum of the real parts of the zeros of the Riemann zeta function. We demonstrate that $\Theta=1$, which entails the existence of infinitely many Riemann zeros off the critical line (thus disproving the Riemann Hypothesis (RH), which asserts that $\Theta = \frac{1}{2}$). The paper concludes with a brief discussion of why our argument does not carry over to the Weil and Beurling zeta functions, whose analogues of the RH are known to be true.
The purpose of this article is to introduce graded classical S-primary submodules, which extend graded classical primary submodules. We say that P is a graded classical S-primary submodule of an R-module M if there exists $s\in S$ such that, for all $x,y \in h(R)$ and $m \in h(M)$, whenever $xym \in P$, then $sxm \in P$ or $sy^nm \in P$ for some positive integer n. Several properties and characterizations of graded classical S-primary submodules are studied.
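For readability, the defining condition just stated can be displayed in one line (this is only a restatement of the sentence above in the abstract's notation):
\[
\exists\, s\in S \ \text{such that, for all } x,y\in h(R),\ m\in h(M):\qquad xym\in P \;\Longrightarrow\; sxm\in P \ \text{ or } \ sy^{n}m\in P \ \text{ for some } n\in\mathbb{Z}^{+}.
\]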
We show how certain philosophical concepts can be applied to the analysis of concrete mathematical structures. Such an application gives a clear justification of the topological and geometric properties of the mathematical objects under consideration.
This article provides several theorems on the existence of limits of multivariable functions, among which Theorem 1 and Theorem 3 relax the sequence requirement in Heine's definition of the limit. These results address the question of which paths must be considered to determine whether the limit of a multivariable function exists.
Using the Klein correspondence, regular parallelisms of PG(3,R) have been described by Betten and Riesinger in terms of a dual object, called a hyperflock determining (hfd) line set. In the special case where this set has a span of dimension 3, a second dualization leads to a more convenient object, called a generalized star of lines. Both constructions have later been simplified by the author. Here we refine our simplified approach in order to obtain similar results for regular parallelisms of oriented lines. As a consequence, we can demonstrate that for oriented parallelisms, as we call them, there are distinctly more possibilities than in the non-oriented case. The proofs require a thorough analysis of orientation in projective spaces (as manifolds and as lattices) and in projective planes and, in particular, in translation planes. This is used in order to handle continuous families of oriented regular spreads in terms of the Klein model of PG(3,R). This turns out to be quite subtle. Even the definition of suitable classes of dual objects modeling oriented parallelisms is not so obvious.
This work contains different expressions for the kth derivative of the nth power of the trigonometric and hyperbolic sine and cosine. The first set of expressions follows from the complex definitions of the trigonometric and hyperbolic sine and cosine, together with the binomial theorem. The other expressions are polynomial-based. They are perhaps less obvious, and use only polynomials in sin(x) and cos(x), or in sinh(x) and cosh(x). No sines or cosines of arguments other than x appear in these polynomial-based expressions. The final expressions depend only on sin(x), cos(x), sinh(x), or cosh(x), respectively, when k is even; and they have only a single additional factor cos(x), sin(x), cosh(x), or sinh(x), respectively, when k is odd.
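As a quick illustration (a sketch, not code from the paper), the complex-exponential and binomial-theorem route described above can be transcribed and spot-checked numerically; the closed form used below is reconstructed from that description and is an assumption about the paper's exact formula.

```python
# Spot-check (illustrative sketch) of the binomial-theorem expression for the
# k-th derivative of sin^n(x), obtained from sin(x) = (e^{ix} - e^{-ix})/(2i),
# the binomial theorem, and term-by-term differentiation.
import cmath
from math import comb, sin

def sin_power_deriv(n: int, k: int, x: float) -> float:
    """k-th derivative of sin(x)**n via the complex-exponential expansion."""
    total = 0j
    for j in range(n + 1):
        freq = 2 * j - n  # frequency of the e^{i(2j-n)x} term
        total += comb(n, j) * (-1) ** (n - j) * (1j * freq) ** k * cmath.exp(1j * freq * x)
    return (total / (2j) ** n).real

def numeric_deriv(f, x, k, h=1e-3):
    """Recursive central-difference approximation of the k-th derivative."""
    if k == 0:
        return f(x)
    return (numeric_deriv(f, x + h, k - 1) - numeric_deriv(f, x - h, k - 1)) / (2 * h)

x0 = 0.7
for n, k in [(3, 1), (4, 2), (5, 3)]:
    print(n, k, sin_power_deriv(n, k, x0), numeric_deriv(lambda t: sin(t) ** n, x0, k))
```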
There was considerable controversy about Corollary 3.12 of the paper Inter-universal Teichmuller Theory III. In this article, another proof of Corollary 3.12 is derived, with the Erdős–Kac theorem as its basis. Inter-universal Teichmuller Theory IV also stated that the theory has a strong connection with the theory of Weil cohomology; based on this connection, several physical applications are derived in this article: a generalization of the non-Abelian Hodge correspondence, non-Abelian gauge theory, a proof of the mass gap based on Corollary 3.12, T-duality as a $\Theta^{\pm\mathrm{ell}}$NF Hodge theater, the limit at which a black hole appears, and the inflationary growth of the universe for the observer.
The strong contraction mapping, a self-mapping whose range is always a subset of its domain, admits a unique fixed point, which can be pinned down by iterating the mapping. We introduce a topological non-convex optimization method as an application of the strong contraction mapping to achieve convergence to a global minimum. The strength of the approach is its robustness to local minima and to the position of the initial point.
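For orientation, plain fixed-point iteration (the mechanism the abstract relies on to pin down the fixed point) looks as follows; this generic routine and the cosine example are illustrative assumptions, not the paper's topological optimization method.

```python
# Generic fixed-point iteration (minimal sketch): repeatedly apply a self-map g
# until successive iterates stop moving.
import math

def fixed_point(g, x0, tol=1e-10, max_iter=10_000):
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

# Example: g(x) = cos(x) maps [0, 1] into itself and contracts it; the unique
# fixed point is the Dottie number, approximately 0.7390851.
print(fixed_point(math.cos, 0.5))
```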
Historically, probability theory has been studied for a long time; Kolmogorov, Lévy, Kiyoshi Itô, and others developed modern probability mathematically in conjunction with measure theory. On the other hand, commutative algebra and algebraic geometry have historically been the subject of interdisciplinary research led by Grothendieck, and many Japanese mathematicians, notably Matsumura, Hironaka, and Kodaira, have contributed to this field. This paper is an attempt to focus on the research theme of Professor Sumio Watanabe of Tokyo Institute of Technology, "Algebraic Geometry and Probability Theory," from my own perspective. The mathematical development starts from Kolmogorov's axioms, and a "Probabilistic Algebraic Variety" is introduced together with the relevant proofs. Problems in computation and applications, analysis by computational homology, and unsolved problems in regression are introduced as applications to statistics.
Based on previous work, we construct an equation (a Lagrange equation) and relate it to a system of generalized integrals and differential equations in such a way as to provide useful evaluations of, and connections between, them.
A congruum was first defined by Leonardo Pisano in 1225 as the common difference in an arithmetic progression of three perfect squares. Later that year, in his book Liber Quadratorum, Pisano proved that congruums can never be perfect squares themselves, a finding that was later revisited by Pierre de Fermat in 1670; Fermat's proof is now known as Fermat's Right Triangle Theorem. In this paper, four alternative proofs to Pisano's original proof are demonstrated, each requiring a different scope of mathematical knowledge: direct Diophantine analysis, parameterization of differences, Heronian triangle construction, and infinite descent. In presenting these proofs, it is demonstrated that there are alternatives to the method of decomposing perfect squares into sums of odd numbers, as Pisano did in his 1225 proof.
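A small brute-force search makes the definition concrete (an illustrative sketch, not part of the paper); it lists the first congruums found and confirms that, in the searched range, none of them is a perfect square.

```python
# Brute-force search for congruums: common differences d such that
# a^2, a^2 + d, a^2 + 2d are all perfect squares (i.e. 2*b^2 = a^2 + c^2).
from math import isqrt

def is_square(m: int) -> bool:
    return m >= 0 and isqrt(m) ** 2 == m

congruums = set()
for a in range(1, 60):
    for c in range(a + 2, 120):
        if (a * a + c * c) % 2 == 0:       # middle square b^2 = (a^2 + c^2) / 2
            b2 = (a * a + c * c) // 2
            if is_square(b2):
                congruums.add(b2 - a * a)  # the congruum d

print(sorted(congruums)[:5])                    # smallest is 24, from 1, 25, 49
print(any(is_square(d) for d in congruums))     # False, as Pisano and Fermat proved
```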
This paper investigates some particular limits involving nested floor functions. We prove some special cases and then a more general result. We then count the discontinuity points of these functions and prove a method that finds them all. Surprisingly, the set of jump discontinuities of $f_n$ is a subset of the set of jump discontinuities of $f_{n+1}$ for all $n\in\mathbb{Z}^{+}$, where: \[ f_n(x)=\underbrace{\Biggl\lfloor x\Bigl\lfloor x \lfloor\dots\rfloor\Bigr\rfloor\Biggr\rfloor}_{n\text{ times}} \] Furthermore, we give some generalizations of the result and several further observations; for example, we prove that the cardinality of the set of discontinuities of $f_n$ in a given bounded interval tends to infinity as $n\to\infty$.
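A short numerical sketch (not from the paper, and assuming the innermost term of the nest is $\lfloor x\rfloor$) evaluates $f_n$ and counts the grid points at which it jumps on a bounded interval, illustrating the growth in the number of discontinuities claimed above.

```python
# Evaluate the nested-floor function f_n and approximately locate its jump
# discontinuities on [a, b] by scanning a fine grid (illustrative sketch).
from math import floor

def f(n: int, x: float) -> int:
    """f_1(x) = floor(x), f_n(x) = floor(x * f_{n-1}(x))."""
    value = floor(x)
    for _ in range(n - 1):
        value = floor(x * value)
    return value

def approximate_jumps(n: int, a: float, b: float, steps: int = 100_000):
    """Grid points where f_n changes value; each signals a nearby jump."""
    jumps = []
    prev = f(n, a)
    for i in range(1, steps + 1):
        x = a + (b - a) * i / steps
        cur = f(n, x)
        if cur != prev:
            jumps.append(x)
            prev = cur
    return jumps

# The count of detected jumps on a fixed interval grows with n.
for n in (1, 2, 3, 4):
    print(n, len(approximate_jumps(n, 1.0, 3.0)))
```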
We generalise the Fundamental Theorem of Calculus to higher dimensions. Our generalisation is based on the observation that the antiderivative of a function of $n$ variables is a solution of a partial differential equation of order $n$ generalising the classical case. The generalised Fundamental Theorem of Calculus then states that the $n$-dimensional integral over an $n$-dimensional axis-parallel rectangular hypercuboid is given by a combinatorial formula evaluating the antiderivative at the vertices of the hypercuboid.
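For concreteness, the standard two-dimensional instance of such a vertex formula (easily verified by iterated one-dimensional integration) reads
\[
\iint_{[a,b]\times[c,d]} f(x,y)\,dx\,dy \;=\; F(b,d)-F(a,d)-F(b,c)+F(a,c),
\qquad\text{where }\ \frac{\partial^{2}F}{\partial x\,\partial y}=f .
\]
In higher dimensions the sign attached to each vertex alternates with the number of lower endpoints chosen at that vertex, which is the combinatorial evaluation referred to above.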
We introduce a full binary directed tree structure to represent the set of natural numbers, further categorizing them into three distinct subsets: pure odd numbers, pure even numbers, and mixed numbers. We adopt a binary string representation for natural numbers and elaborate on the composite methodology encompassing odd- and even-number functions. Our analysis focuses on examining the iteration sequence (or composition) of the Collatz function and its reduced variant, which serves as an analog to the inverse function, to scrutinize the validity of the Collatz conjecture. To substantiate this conjecture, we incorporate binary strings into an algebraic formula that captures the essence of the Collatz sequence. By this means, we transform discrete powers of 2 into continuous counterparts, ultimately culminating in the smallest natural number, 1. Consequently, the sequence generated through infinite iterations of the Collatz function emerges as an eventually periodic sequence, thereby validating an enduring 87-year-old conjecture.
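For reference, the Collatz map itself and its iteration down to 1 are shown below in a minimal sketch; this is the standard function, not the paper's binary-string formalism.

```python
# Standard Collatz map and its trajectory down to 1 (illustrative sketch).
def collatz_step(n: int) -> int:
    return n // 2 if n % 2 == 0 else 3 * n + 1

def collatz_trajectory(n: int) -> list:
    """Iterates of the Collatz function starting from n, stopping at 1."""
    seq = [n]
    while n != 1:
        n = collatz_step(n)
        seq.append(n)
    return seq

print(len(collatz_trajectory(27)) - 1)   # 27 reaches 1 after 111 steps
print(bin(27)[2:])                       # the binary-string view used in the abstract
```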
Through an equivalent condition on the Farey series set forth by Franel and Landau, we prove the Riemann Hypothesis for the Riemann zeta function and the Dirichlet L-function.
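The Franel–Landau equivalence referred to here is usually stated as follows (quoted for the zeta-function case; the Dirichlet L-function analogue is the paper's extension):
\[
\text{RH}\iff \sum_{j=1}^{\Phi(n)}\left|F_{j,n}-\frac{j}{\Phi(n)}\right| = O\!\left(n^{1/2+\varepsilon}\right)\ \text{ for every } \varepsilon>0,
\]
where $F_{1,n}<F_{2,n}<\dots$ are the Farey fractions of order $n$ in $(0,1]$ and $\Phi(n)=\sum_{k\le n}\varphi(k)$ counts them.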
In keeping with the definition that biotechnology is really no more than a name given to a set of techniques and processes, the authors apply a set of fuzzy techniques to chemical-industry problems such as finding the proper proportion of raw mix to control pollution, studying flow rates, and determining better product quality. We use fuzzy control theory, fuzzy neural networks, fuzzy relational equations, and genetic algorithms to solve these problems. When the solution to a problem can have certain concepts or attributes that are indeterminate, the only model that can tackle such a situation is the neutrosophic model; the authors have also used these models in this book to study the use of biotechnology in chemical industries. This book has six chapters. The first chapter gives a brief description of biotechnology. The second chapter deals with the proper proportion of the mix of raw materials in cement industries to minimize pollution using fuzzy control theory. Chapter three gives the method of determination of the temperature set point for crude oil in oil refineries. Chapter four studies flow rates in chemical industries using fuzzy neural networks. Chapter five gives a method for minimizing waste-gas flow in chemical industries using fuzzy linear programming. The final chapter suggests that, when indeterminacy is an attribute or concept involved in these studies, neutrosophic methods can be adopted.
A general technique for proving the irrationality of the zeta constants $\zeta(s)$ for odd $s = 2n + 1 \geq 3$ from the known irrationality of the beta constants $L(2n+1)$ is developed in this note. The results on the irrationality of the zeta constants $\zeta(2n)$, where $n\geq 1$, and $\zeta(3)$ are well known, but the results on the irrationality of the zeta constants $\zeta(2n+1)$, where $n \geq 2$, are new, and these results seem to confirm that these constants are irrational numbers.
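Here the "beta constants" $L(2n+1)$ are presumably the odd values of the Dirichlet beta function, whose irrationality is classical because each such value is a rational multiple of an odd power of $\pi$:
\[
L(s)=\beta(s)=\sum_{k=0}^{\infty}\frac{(-1)^{k}}{(2k+1)^{s}},\qquad
\beta(1)=\frac{\pi}{4},\quad \beta(3)=\frac{\pi^{3}}{32},\quad \beta(5)=\frac{5\pi^{5}}{1536}.
\]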
Prime factorization has long been a central topic in number theory. In recent years, alternative avenues for tackling this problem have been explored because of its direct application in cryptography; one such application is the cryptanalysis of RSA numbers, which requires the prime factorization of large semiprimes. Based on numerical experiments, this paper proposes a conjecture on the distribution of digits in primes of arbitrary length. The paper uses this theoretical understanding of primes to optimize the search space of prime factors, shrinking it by up to 98.15%, which, in terms of application, yields a 26.50% increase in the success rate and a 41.91% decrease in the maximum number of generations required by the genetic algorithm traditionally used in the literature. The paper also introduces a variation of the genetic algorithm, named the Sieve Method, that is fine-tuned for the factorization of large semiprimes and was able to factor numbers of up to 23 decimal digits with an 84% success rate. Our findings show that the Sieve Method on average achieved a 321.89% increase in success rate and a 64.06% decrease in the maximum number of generations required for the algorithm to converge, compared to the existing literature.
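A bare-bones genetic algorithm for factoring a small semiprime is sketched below purely to illustrate the general approach; the bit-string encoding, remainder-based fitness, and parameters are generic choices assumed for illustration, not the paper's tuned Sieve Method.

```python
# Toy genetic algorithm for factoring a semiprime N (illustrative sketch only).
import random

N = 10_403                       # = 101 * 103, a small example semiprime
BITS = N.bit_length() // 2 + 1   # candidate factors are kept below 2**BITS

def fitness(p: int) -> int:
    # Smaller remainder of N modulo p means p is closer to a true factor.
    return N % p if 1 < p < N else N

def mutate(p: int) -> int:
    return p ^ (1 << random.randrange(BITS))   # flip one random bit

def crossover(p: int, q: int) -> int:
    cut = random.randrange(1, BITS)            # single-point crossover
    mask = (1 << cut) - 1
    return (p & mask) | (q & ~mask)

def ga_factor(pop_size=200, generations=5_000):
    population = [random.randrange(3, 1 << BITS) | 1 for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)
        if fitness(population[0]) == 0:
            return population[0]               # found a nontrivial factor
        elite = population[: pop_size // 4]
        children = [
            mutate(crossover(random.choice(elite), random.choice(elite))) | 1
            for _ in range(pop_size - len(elite))
        ]
        population = elite + children
    return None

p = ga_factor()
print(p, N // p if p else "no factor found")
```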
In this paper we establish that the well-known Arithmetic System is consistent in the traditional sense. The proof is done within this Arithmetic System.