Mansoura University
Traditional fish farming practices often lead to inefficient feeding, resulting in environmental issues and reduced productivity. We developed an innovative system combining computer vision and IoT technologies for precise Tilapia feeding. Our solution uses real-time IoT sensors to monitor water quality parameters and computer vision algorithms to analyze fish size and count, determining optimal feed amounts. A mobile app enables remote monitoring and control. We utilized YOLOv8 for keypoint detection to estimate Tilapia weight from length, achieving 94% precision on 3,500 annotated images. Pixel-based measurements were converted to centimeters using depth estimation for accurate feeding calculations. Our method, with data collection mirroring inference conditions, significantly improved results. Preliminary estimates suggest this approach could increase production up to 58 times compared to traditional farms. Our models, code, and dataset are available upon reasonable request.
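The length-to-weight step described above can be sketched with the standard allometric length-weight relation W = a·L^b. The coefficients and the pixel-to-centimeter scale factor below are illustrative assumptions for demonstration, not values from the paper:

```python
def pixels_to_cm(length_px: float, cm_per_px: float) -> float:
    """Convert a pixel-space length to centimeters using a depth-derived scale."""
    return length_px * cm_per_px

def tilapia_weight_g(length_cm: float, a: float = 0.015, b: float = 3.0) -> float:
    """Allometric length-weight relation W = a * L^b (coefficients are illustrative)."""
    return a * length_cm ** b

# Example: a fish measured at 420 px with an assumed scale of 0.05 cm/px
length_cm = pixels_to_cm(420, 0.05)   # 21.0 cm
weight_g = tilapia_weight_g(length_cm)
```

Summing such per-fish weight estimates over the detected count gives the biomass figure from which a feed ration would be computed.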
Finite element modeling is a well-established tool for structural analysis, yet modeling complex structures often requires extensive pre-processing, significant analysis effort, and considerable time. This study addresses this challenge by introducing an innovative method for real-time prediction of structural static responses using DeepONet, which relies on a novel approach to physics-informed networks driven by structural balance laws. This approach offers the flexibility to accurately predict responses under various load classes and magnitudes. The trained DeepONet can generate solutions for the entire domain within a fraction of a second, effectively eliminating the extensive remodeling and analysis typically required for each new case in FE modeling. We apply the proposed method to two structures: a simple 2D beam structure and a comprehensive 3D model of a real bridge. To predict multiple variables with DeepONet, we utilize two strategies: a split branch/trunk architecture and a combination of multiple DeepONets into a single network. In addition to data-driven training, we introduce a novel physics-informed training approach. This method leverages structural stiffness matrices to enforce fundamental equilibrium and energy conservation principles, resulting in two novel physics-informed loss functions: energy conservation and static equilibrium using the Schur complement. We use various combinations of loss functions to achieve an error rate of less than 5% with significantly reduced training time. This study shows that DeepONet, enhanced with hybrid loss functions, can accurately and efficiently predict displacements and rotations at each mesh point.
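The split branch/trunk strategy can be sketched as a single forward pass: the branch net encodes the load, the trunk net encodes query coordinates, and the output channels are split between variables (e.g., displacement and rotation). This is a minimal NumPy sketch with random weights and an assumed 4-dimensional load descriptor, not the trained architecture from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 16          # basis functions per output variable
n_out = 2       # e.g. displacement and rotation

def mlp(x, W1, W2):
    """Tiny two-layer MLP with tanh activation."""
    return np.tanh(x @ W1) @ W2

# Random weights for illustration (a real model would train these).
Wb1, Wb2 = rng.normal(size=(4, 32)), rng.normal(size=(32, p * n_out))
Wt1, Wt2 = rng.normal(size=(1, 32)), rng.normal(size=(32, p * n_out))

def deeponet(load, coords):
    """Split branch/trunk DeepONet: one network, output channels split per variable."""
    b = mlp(load, Wb1, Wb2).reshape(p, n_out)        # branch coefficients
    t = mlp(coords, Wt1, Wt2).reshape(-1, p, n_out)  # trunk basis at each point
    return np.einsum('po,npo->no', b, t)             # (n_points, n_out)

u = deeponet(np.array([1.0, 0.5, -0.2, 0.3]),        # load descriptor
             np.linspace(0, 1, 50)[:, None])          # mesh coordinates
```

Because the trunk takes arbitrary coordinates, the trained operator evaluates the whole domain in one call, which is what enables sub-second full-field prediction.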
The available tools for damage identification in civil engineering structures are known to be computationally expensive and data-demanding. This paper proposes a comprehensive machine-learning-based damage identification (CMLDI) method that integrates modal analysis and dynamic analysis strategies. The proposed approach is applied to a real structure, the KW51 railway bridge in Leuven. CMLDI diligently combines signal processing, machine learning (ML), and structural analysis techniques to achieve a fast damage identification solver that relies on minimal monitoring data. CMLDI considers modal analysis inputs and features extracted from acceleration responses to inform the damage identification based on long-term and short-term monitoring data. Results of operational modal analysis, obtained from the long-term monitoring data, are analyzed using pre-trained k-nearest neighbor (kNN) classifiers to identify damage existence, location, and magnitude. A well-crafted assembly of signal processing and ML methods is used to analyze acceleration time histories: stacked gated recurrent unit (Stacked GRU) networks identify damage existence, kNN classifiers identify damage magnitude, and convolutional neural networks (CNNs) identify damage location. The damage identification results for the KW51 bridge demonstrate this approach's high accuracy, efficiency, and robustness. In this work, the training data is retrieved from the sensors of the KW51 bridge as well as a numerical finite element model (FEM). The proposed approach presents a systematic path to generating training data using a validated FEM; the data generation relies on modeling combinations of damage locations and magnitudes along the bridge.
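The kNN stage of such a pipeline can be sketched as a majority vote over the nearest training examples. The modal features and magnitude labels below are synthetic stand-ins, not the bridge monitoring data:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Majority vote among the k nearest training points (Euclidean distance)."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

# Synthetic feature: natural-frequency shift vs. damage magnitude class (0/1/2)
X = np.array([[0.01], [0.02], [0.03], [0.11], [0.12], [0.25], [0.27]])
y = np.array([0, 0, 0, 1, 1, 2, 2])
label = knn_predict(X, y, np.array([0.115]))
```

In the paper's setting the training pairs would come from FEM simulations of damage location/magnitude combinations, which is what makes the classifier usable with minimal measured data.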
The liver is one of the most critical metabolic organs in vertebrates due to its vital functions in the human body, such as detoxification of the blood from waste products and medications. Liver diseases due to liver tumors are one of the most common causes of mortality around the globe. Hence, detecting liver tumors in the early stages of tumor development is highly required as a critical part of medical treatment. Many imaging modalities can be used as aiding tools to detect liver tumors. Computed tomography (CT) is the most used imaging modality for soft tissue organs such as the liver, because it is a non-invasive modality that can be captured relatively quickly. This paper proposes an efficient automatic liver segmentation framework to detect and segment the liver out of abdominal CT scans using the 3D CNN DeepMedic network model. Accurately segmenting the liver region and then using it as input to a tumor segmentation method is adopted by many studies, as it reduces the false positives resulting from segmenting other abdominal organs as tumors. The proposed 3D CNN DeepMedic model has two input pathways rather than one, as in the original 3D CNN model. In this paper, the network was supplied with multiple abdominal CT versions, which helped improve the segmentation quality. The proposed model achieved 94.36%, 94.57%, 91.86%, and 93.14% for accuracy, sensitivity, specificity, and Dice similarity score, respectively. The experimental results indicate the applicability of the proposed method.
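The Dice similarity score reported above is the standard overlap metric between predicted and ground-truth masks, 2|A∩B|/(|A|+|B|); a minimal sketch on toy binary masks:

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])   # toy predicted mask
b = np.array([[1, 0, 0], [0, 1, 1]])   # toy ground-truth mask
score = dice(a, b)                     # 2*2 / (3+3) = 0.666...
```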
Due to the increasing requirements for transmission of images in computer and mobile environments, research in the field of image compression has increased significantly. Image compression plays a crucial role in digital image processing; it is also very important for efficient transmission and storage of images. When we compute the number of bits per image resulting from typical sampling rates and quantization methods, we find that image compression is needed. Therefore, the development of efficient techniques for image compression has become necessary. This paper is a survey of lossy image compression using the Discrete Cosine Transform; it covers the JPEG compression algorithm, which is used for full-colour still image applications, and describes all of its components.
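The core transform of the JPEG pipeline, the 8×8 2D DCT-II, can be sketched directly in NumPy. This is a textbook orthonormal implementation for illustration, not an optimized JPEG codec:

```python
import numpy as np

N = 8

def dct_matrix(n=N):
    """Orthonormal DCT-II matrix: C[k, x] = alpha(k) * cos((2x+1) k pi / (2n))."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    C = np.cos((2 * x + 1) * k * np.pi / (2 * n)) * np.sqrt(2.0 / n)
    C[0, :] = np.sqrt(1.0 / n)   # DC row gets the smaller normalization
    return C

C = dct_matrix()
block = np.arange(64, dtype=float).reshape(8, 8)   # toy 8x8 pixel block
coeffs = C @ block @ C.T                           # forward 2D DCT
recon = C.T @ coeffs @ C                           # inverse (C is orthogonal)
```

JPEG's lossy step then quantizes `coeffs` (discarding high-frequency detail) before entropy coding; the transform itself is invertible, as the reconstruction shows.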
Motivated by the widespread increase in the phenomenon of code-switching between Egyptian Arabic and English in recent times, this paper explores the intricacies of machine translation (MT) and automatic speech recognition (ASR) systems, focusing on translating code-switched Egyptian Arabic-English to either English or Egyptian Arabic. Our goal is to present the methodologies employed in developing these systems, utilizing large language models such as Llama and Gemma. In the field of ASR, we explore the utilization of the Whisper model for code-switched Egyptian Arabic recognition, detailing our experimental procedures, including data preprocessing and training techniques. Through the implementation of a consecutive speech-to-text translation system that integrates ASR with MT, we aim to overcome challenges posed by limited resources and the unique characteristics of the Egyptian Arabic dialect. Evaluation against established metrics showcases promising results, with our methodologies yielding a significant improvement of 56% in English translation over the state-of-the-art and 9.3% in Arabic translation. Since code-switching is deeply inherent in spoken languages, it is crucial that ASR systems can effectively handle this phenomenon, enabling seamless interaction in various domains, including business negotiations, cultural exchanges, and academic discourse. Our models and code are available as open-source resources. Code: http://github.com/ahmedheakl/arazn-llm, Models: http://huggingface.co/collections/ahmedheakl/arazn-llm-662ceaf12777656607b9524e.
For a graph $G$ with edge set $E$, let $d(u)$ denote the degree of a vertex $u$ in $G$. The diminished Sombor (DSO) index of $G$ is defined as $DSO(G)=\sum_{uv\in E}\frac{\sqrt{d(u)^2+d(v)^2}}{d(u)+d(v)}$. The cyclomatic number of a graph is the smallest number of edges whose removal makes the graph acyclic. A connected graph of maximum degree at most $4$ is known as a molecular graph. The primary motivation of the present study comes from a conjecture concerning the minimum DSO index of fixed-order connected graphs with cyclomatic number $3$, posed in the recent paper [F. Movahedi, I. Gutman, I. Redžepović, B. Furtula, Diminished Sombor index, MATCH Commun. Comput. Chem. 95 (2026) 141--162]. The present paper gives all graphs minimizing the DSO index among all molecular graphs of order $n$ with cyclomatic number $\ell$, provided that $n\ge 2(\ell-1)\ge 4$.
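The DSO index is straightforward to evaluate on a small example. For the cycle $C_4$, every vertex has degree 2, so each of the four edges contributes $\sqrt{8}/4 = 1/\sqrt{2}$ and $DSO(C_4) = 2\sqrt{2}$; a sanity-check sketch (not code from the paper):

```python
import math

def dso_index(edges, degree):
    """DSO(G) = sum over edges uv of sqrt(d(u)^2 + d(v)^2) / (d(u) + d(v))."""
    return sum(math.sqrt(degree[u] ** 2 + degree[v] ** 2) / (degree[u] + degree[v])
               for u, v in edges)

# Cycle C4: vertices 0..3, all of degree 2
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
degree = {v: 2 for v in range(4)}
value = dso_index(edges, degree)   # 4 / sqrt(2) = 2*sqrt(2)
```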
The correspondence between the monotonicity of a (possibly) set-valued operator and the firm nonexpansiveness of its resolvent is a key ingredient in the convergence analysis of many optimization algorithms. Firmly nonexpansive operators form a proper subclass of the more general (but still pleasant from an algorithmic perspective) class of averaged operators. In this paper, we introduce the new notion of conically nonexpansive operators, which generalize nonexpansive mappings. We characterize averaged operators as being resolvents of comonotone operators under appropriate scaling. As a consequence, we characterize the proximal point mappings associated with hypoconvex functions as cocoercive operators, or, equivalently, as displacement mappings of conically nonexpansive operators. Several examples illustrate our analysis and demonstrate the tightness of our results.
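The monotonicity-resolvent correspondence underlying this abstract can be stated compactly; these are the standard definitions (Minty's correspondence), included for orientation:

```latex
% Resolvent of a set-valued operator A on a Hilbert space H:
J_A := (\mathrm{Id} + A)^{-1}.
% Minty's correspondence: A is maximally monotone if and only if J_A is
% firmly nonexpansive with full domain, i.e.
\|J_A x - J_A y\|^2 \le \langle x - y,\, J_A x - J_A y\rangle
\quad \text{for all } x, y \in H.
```

The paper's contribution replaces monotone/firmly nonexpansive in this correspondence with comonotone/averaged (and, more generally, conically nonexpansive) under appropriate scaling.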
Extended MHD is a one-fluid model that incorporates two-fluid effects such as electron inertia and the Hall drift. This model is used to construct fully nonlinear Alfv\'enic wave solutions, and thereby derive the kinetic and magnetic spectra by resorting to a Kolmogorov-like hypothesis based on the constant cascading rates of the energy and generalized helicities of this model. The magnetic and kinetic spectra are derived in the ideal ($k < 1/\lambda_i$), Hall ($1/\lambda_i < k < 1/\lambda_e$), and electron inertia ($k > 1/\lambda_e$) regimes; $k$ is the wavenumber and $\lambda_s = c/\omega_{ps}$ is the skin depth of species $s$. In the Hall regime, it is shown that the emergent results are fully consistent with previous numerical and analytical studies, especially in the context of the solar wind. The focus is primarily on the electron inertia regime, where magnetic energy spectra with power-law indices of $-11/3$ and $-13/3$ are always recovered. The latter, in particular, is quite close to recent observational evidence from the solar wind with a potential slope of approximately $-4$ in this regime. It is thus plausible that these spectra may constitute a part of the (extended) inertial range, as opposed to the standard `dissipation' range paradigm.
The recent formulations of multi-region relaxed magnetohydrodynamics (MRxMHD) have generalized the famous Woltjer-Taylor states by incorporating a collection of "ideal barriers" that prevent global relaxation, as well as flow. In this paper, we generalize MRxMHD with flow to include Hall effects (MRxHMHD), and thereby obtain the partially relaxed counterparts of the famous double Beltrami states as a special subset. The physical and mathematical consequences arising from the introduction of the Hall term are also presented. We demonstrate that our results (in the ideal MHD limit) constitute an important subset of ideal MHD equilibria, and we compare our approach against other variational principles proposed for deriving the partially relaxed states.
This paper introduces a new three-parameter model called the Weibull-G exponential distribution (WGED), which exhibits a bathtub-shaped hazard rate. Some of its statistical properties are obtained, including quantiles, moments, generating functions, reliability, and order statistics. The method of maximum likelihood is used for estimating the model parameters, and the observed Fisher information matrix is derived. We illustrate the usefulness of the proposed model by applications to real data.
Finite element (FE) modeling is essential for structural analysis but remains computationally intensive, especially under dynamic loading. While operator learning models have shown promise in replicating static structural responses at FEM-level accuracy, modeling dynamic behavior remains more challenging. This work presents a Multiple Input Operator Network (MIONet) that incorporates a second trunk network to explicitly encode temporal dynamics, enabling accurate prediction of structural responses under moving loads. Traditional DeepONet architectures using recurrent neural networks (RNNs) are limited by fixed time discretization and struggle to capture continuous dynamics. In contrast, MIONet predicts responses continuously over both space and time, removing the need for stepwise modeling. It maps scalar inputs, including load type, velocity, spatial mesh, and time steps, to full-field structural responses. To improve efficiency and enforce physical consistency, we introduce a physics-informed loss based on dynamic equilibrium using precomputed mass, damping, and stiffness matrices, without solving the governing PDEs directly. Further, a Schur complement formulation reduces the training domain, significantly cutting computational costs while preserving global accuracy. The model is validated on both a simple beam and the KW-51 bridge, achieving FEM-level accuracy within seconds. Compared to a GRU-based DeepONet, our model offers comparable accuracy with improved temporal continuity and over 100 times faster inference, making it well suited for real-time structural monitoring and digital twin applications.
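A dynamic-equilibrium residual loss of the kind described can be written generically as follows; this is a standard form of such losses (the exact weighting and the Schur-complement reduction are as summarized in the abstract, with details assumed):

```latex
% Physics-informed residual for the semi-discrete equation of motion,
% using precomputed FE matrices (M: mass, C: damping, K: stiffness),
% where u_theta is the network-predicted response and F(t) the moving load:
\mathcal{L}_{\mathrm{phys}}
  = \big\| M\,\ddot{u}_\theta(t) + C\,\dot{u}_\theta(t) + K\,u_\theta(t) - F(t) \big\|_2^2 .
% The total loss combines this residual with the data loss via a weighting
% factor; time derivatives of u_theta are available analytically because the
% trunk network is continuous in t.
```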
A machine learning technique is used to fit multiplicity distributions in high-energy proton-proton collisions and applied to make predictions for collisions at higher energies. The method is tested with Monte Carlo event generator events. Charged-particle multiplicity and transverse-momentum distributions within different pseudorapidity intervals in proton-proton collisions were simulated using the PYTHIA event generator at center-of-mass energies $\sqrt{s}$ = 0.9, 2.36, 2.76, 5, 7, 8, and 13 TeV for model training and validation, and at 10, 20, 27, 50, 100, and 150 TeV for model predictions. Comparisons are made to ensure the model reproduces the relation between the input variables and the output distributions of charged-particle multiplicity and transverse momentum. The multiplicity and transverse-momentum distributions are described and predicted very well, not only at the trained but also at the untrained energy values. The study proposes a way to predict multiplicity distributions at a new energy by extrapolating the information inherent in the lower-energy data. Using real data instead of Monte Carlo, as measured at the LHC, the technique has the potential to project the multiplicity distributions for different intervals at very high collision energies, e.g. 27 TeV or 100 TeV for the upgraded HE-LHC and FCC-hh respectively, using only data collected at the LHC, i.e. at center-of-mass energies from 0.9 up to 13 TeV.
Prostate cancer is among the most dangerous cancers diagnosed in men worldwide. Prostate diagnosis is affected by many factors, such as lesion complexity, observer visibility, and variability. Many techniques based on Magnetic Resonance Imaging (MRI) have been used for prostate cancer identification and classification in the last few decades. Developing these techniques is crucial and has great medical impact because they improve treatment benefits and patients' chances of survival. A new technique that depends on MRI has been proposed to improve the diagnosis. This technique consists of two stages. First, the MRI images are preprocessed to make the medical image more suitable for the detection step. Second, prostate cancer identification is performed based on a pre-trained deep learning model, InceptionResNetV2, which has many advantages and achieves effective results. The InceptionResNetV2 model used for this purpose achieves an average accuracy of 89.20% and an area under the curve (AUC) of 93.6%. These experimental results are promising and effective compared to those of previous techniques.
Business processes evolve over time to adapt to changing business environments. This requires continuous monitoring of business processes to gain insights into whether they conform to the intended design or deviate from it. The situation in which a business process changes while being analysed is denoted as Concept Drift. Its analysis is concerned with studying how a business process changes, in terms of detecting and localising changes and studying their effects. Concept drift analysis is crucial to enable early detection and management of changes, that is, deciding whether to promote a change to become part of an improved process, or to reject the change and make decisions to mitigate its effects. Despite its importance, there exists no comprehensive framework for analysing concept drift types, affected process perspectives, and granularity levels of a business process. This article proposes the CONcept Drift Analysis in Process Mining (CONDA-PM) framework, describing the phases and requirements of a concept drift analysis approach. CONDA-PM was derived from a Systematic Literature Review (SLR) of current approaches to analysing concept drift. We apply the CONDA-PM framework to current approaches to concept drift analysis and evaluate their maturity. Applying the CONDA-PM framework highlights areas where further research is needed to complement existing efforts.
Wireless sensor network (WSN) based SHM systems have shown significant improvement over traditional wired SHM systems in terms of cost, accuracy, and reliability of the monitoring. However, due to the resource-constrained nature of the sensor nodes, it is a challenge to process a large amount of sensed vibration data in real-time. Existing data-processing mechanisms are centralized and use cloud or remote servers to analyze the data to characterize the state of the bridge, i.e., healthy or damaged. These methods are feasible for wired SHM systems; however, transmitting huge data sets in WSNs has been found to be arduous. In this paper, we propose a mechanism named "in-network damage detection on edge" (INDDE), which extracts statistical features from raw acceleration measurements corresponding to the healthy condition of the bridge and uses them to train a probabilistic model, i.e., estimating the probability density function (PDF) of a multivariate Gaussian distribution. The trained model helps to identify anomalous behaviour in new data points collected from the unknown condition of the bridge in real-time. Each edge device classifies the condition of the bridge around its deployment region as either "healthy" or "damaged" depending on its trained model. Experimental results showcase a promising 96-100% damage detection accuracy with the advantage of no data transmission from sensor nodes to the cloud for processing.
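The anomaly-detection core of such an edge classifier can be sketched as fitting a multivariate Gaussian to healthy-state features and thresholding the log-density. The synthetic features and the decision threshold below are assumptions for illustration, not the INDDE implementation:

```python
import numpy as np

def fit_gaussian(X):
    """Estimate mean and covariance of healthy-condition feature vectors."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    return mu, cov

def log_pdf(x, mu, cov):
    """Log-density of a multivariate Gaussian at x."""
    d = x - mu
    k = len(mu)
    return -0.5 * (k * np.log(2 * np.pi) + np.log(np.linalg.det(cov))
                   + d @ np.linalg.solve(cov, d))

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, size=(500, 3))   # synthetic healthy-state features
mu, cov = fit_gaussian(healthy)
threshold = -10.0                               # assumed decision threshold

# Low density under the healthy model => classify as "damaged"
is_damaged = log_pdf(np.array([5.0, 5.0, 5.0]), mu, cov) < threshold
is_healthy = log_pdf(np.array([0.1, -0.2, 0.0]), mu, cov) >= threshold
```

Because only the mean, covariance, and threshold live on the node, each edge device can classify new samples locally without transmitting raw acceleration data.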
Finding zeros of the sum of two maximally monotone operators involving a continuous linear operator is a central problem in optimization and monotone operator theory. We revisit the duality framework proposed by Eckstein, Ferris, Pennanen, and Robinson a quarter of a century ago. Paramonotonicity is identified as a broad condition ensuring that saddle points coincide with the closed convex rectangle formed by the primal and dual solutions. Additionally, we characterize total duality in the subdifferential setting and derive projection formulas for sets that arise in the analysis of the Chambolle-Pock algorithm within the recent framework developed by Bredies, Chenchene, Lorenz, and Naldi.
In this paper, a new bivariate discrete distribution is introduced, called the bivariate discrete exponentiated Weibull (BDEW) distribution. Several of its statistical properties are derived, such as the joint cumulative distribution function, the joint hazard rate function, the joint probability mass function, the joint moment generating function, the mathematical expectation, and the reliability function for the stress-strength model. Further, the parameters of the BDEW distribution are estimated by the maximum likelihood method. Two real data sets are analyzed, and it was found that the BDEW distribution provides a better fit than other discrete distributions.
This paper presents a new unsupervised learning approach with a stacked autoencoder (SAE) for Arabic handwritten digit categorization. Recently, Arabic handwritten digit recognition has become an important area due to its applications in several fields. This work focuses on the recognition part, which faces several challenges, including the unlimited variation in human handwriting and the large public databases. Arabic digits comprise ten numerals descended from the Indian numeral system. The SAE was trained and tested on the MADBase database of Arabic handwritten digit images, which contains 60,000 training images and 10,000 testing images. We show that the use of the SAE leads to significant improvements across different machine-learning classification algorithms, giving an average accuracy of 98.5%.
For a mobile robot to be truly autonomous, it must solve the simultaneous localization and mapping (SLAM) problem. We develop a new metaheuristic algorithm called Simulated Tom Thumb (STT), based on the detailed adventure of the clever Tom Thumb and on advances in research on path planning based on potential functions. Investigations show that it is very promising and could be seen as an optimization of the powerful solution of SLAM with data association and learning capabilities. STT outperforms JCBB, achieving a 100% match rate.