College of Engineering, Qassim University
In this paper, we are concerned with the following elliptic equation $$(SC_\varepsilon) \qquad \begin{cases} -\Delta u = |u|^{4/(n-2)}u\,[\ln (e+|u|)]^\varepsilon & \hbox{ in } \Omega,\\ u = 0 & \hbox{ on }\partial \Omega, \end{cases}$$ where $\Omega$ is a smooth bounded open domain in $\mathbb{R}^n$, $n\geq 3$, and $\varepsilon >0$. In Comm. Contemp. Math. (2003), Ben Ayed et al. showed that the slightly supercritical version of the usual elliptic problem has no single-peaked solution. Here we extend their result to problem $(SC_\varepsilon)$ for $\varepsilon$ small enough, under a new assumption.
A novel knowledge distillation framework enables transfer of Transformer-based language model capabilities to xLSTM architectures through ∆-distillation and Frobenius norm regularization, achieving comparable performance with significantly reduced computational complexity and parameter count while training on only 512M tokens.
The exponential growth of scientific literature has resulted in information overload, challenging researchers to effectively synthesize relevant publications. This paper explores the integration of traditional reference management software with advanced computational techniques, including Large Language Models and Retrieval-Augmented Generation. We introduce PyZoBot, an AI-driven platform developed in Python, combining Zotero's reference management with OpenAI's sophisticated LLMs. PyZoBot streamlines knowledge extraction and synthesis from extensive human-curated scientific literature databases. It demonstrates proficiency in handling complex natural language queries, integrating data from multiple sources, and meticulously presenting references to uphold research integrity and facilitate further exploration. By leveraging LLMs, RAG, and human expertise through a curated library, PyZoBot offers an effective solution to manage information overload and keep pace with rapid scientific advancements. The development of such AI-enhanced tools promises significant improvements in research efficiency and effectiveness across various disciplines.
Accurate prediction of student performance is essential for enabling timely academic interventions. However, most machine learning models used in educational settings are static and lack the ability to adapt when new data such as post-intervention outcomes become available. To address this limitation, we propose a Feedback-Driven Decision Support System (DSS) with a closed-loop architecture that enables continuous model refinement. The system employs a LightGBM-based regressor with incremental retraining, allowing educators to input updated student performance data, which automatically triggers model updates. This adaptive mechanism enhances prediction accuracy by learning from real-world academic progress over time. The platform features a Flask-based web interface to support real-time interaction and integrates SHAP (SHapley Additive exPlanations) for model interpretability, ensuring transparency and trustworthiness in predictions. Experimental results demonstrate a 10.7% reduction in RMSE after retraining, with consistent upward adjustments in predicted scores for students who received interventions. By transforming static predictive models into self-improving systems, our approach advances educational analytics toward human-centered, data-driven, and responsive artificial intelligence. The framework is designed for seamless integration into Learning Management Systems (LMS) and institutional dashboards, facilitating practical deployment in real educational environments.
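The closed-loop idea described above can be sketched in a few lines. This is an illustrative toy, not the authors' system: a one-variable least-squares model stands in for the paper's LightGBM regressor, and the `add_feedback` method plays the role of the retraining trigger fired when educators enter post-intervention scores.

```python
# Hedged sketch of a feedback-driven (closed-loop) regressor: every new
# labelled outcome immediately triggers a refit, mirroring the paper's
# incremental-retraining mechanism. A simple least-squares line replaces
# the LightGBM model used in the actual system.

class FeedbackDrivenRegressor:
    def __init__(self):
        self.xs, self.ys = [], []
        self.slope, self.intercept = 0.0, 0.0

    def _refit(self):
        n = len(self.xs)
        mx = sum(self.xs) / n
        my = sum(self.ys) / n
        sxx = sum((x - mx) ** 2 for x in self.xs)
        sxy = sum((x - mx) * (y - my) for x, y in zip(self.xs, self.ys))
        self.slope = sxy / sxx if sxx else 0.0
        self.intercept = my - self.slope * mx

    def add_feedback(self, x, y):
        # New post-intervention outcome arrives -> retrain at once.
        self.xs.append(x)
        self.ys.append(y)
        self._refit()

    def predict(self, x):
        return self.intercept + self.slope * x

model = FeedbackDrivenRegressor()
for week, score in [(1, 50.0), (2, 60.0), (3, 70.0)]:
    model.add_feedback(week, score)
print(model.predict(4))  # → 80.0, extrapolating the learned trend
```

In the real system the refit step would be a LightGBM retrain and the predictions would be accompanied by SHAP explanations; the control flow, however, is the same.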
MoxE (Mixture of xLSTM Experts) introduces a language model architecture that combines xLSTM-based sequence mixing with a sparse mixture of experts and an entropy-aware routing mechanism. This approach enables efficient computation by intelligently allocating resources based on token difficulty, achieving lower perplexity on generalization tasks like Lambada OpenAI while maintaining efficiency.
This paper proposes a novel knowledge-driven approach for resource allocation in device-to-device (D2D) networks using a graph neural network (GNN) architecture. To meet the millisecond-level timeliness and scalability required for the dynamic network environment, our proposed approach incorporates the deep unrolling of the weighted minimum mean square error (WMMSE) algorithm, referred to as domain knowledge, into GNN, thereby reducing computational delay and sample complexity while adapting to various data distributions. Specifically, the aggregation and update functions in the GNN architecture are designed by utilizing the summation and power calculation components of the WMMSE algorithm, which leads to improved model generalization and interpretability. Theoretical analysis of the proposed approach reveals its capability to simplify intricate end-to-end mappings and diminish the model exploration space, resulting in increased network expressiveness and enhanced optimization performance. Simulation results demonstrate the robustness, scalability, and strong performance of the proposed knowledge-driven resource allocation approach across diverse communication topologies without retraining. Our findings contribute to the development of efficient and scalable wireless resource management solutions for distributed and dynamic networks with strict latency requirements.
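For readers unfamiliar with the algorithm being unrolled, the following is a plain-Python sketch of the classical scalar WMMSE iteration (the `u`/`w`/`v` updates whose summation and power-calculation structure the paper maps onto GNN aggregation and update functions). The GNN architecture itself is not reproduced; the channel matrix, noise level, and power budget below are made-up illustrative values.

```python
import math

# Hedged sketch of the scalar WMMSE iteration for a K-user interference
# channel (receive filter u_k, MMSE weight w_k, transmit amplitude v_k).
# This is the textbook algorithm, not the paper's unrolled GNN.

def wmmse(H, P, sigma2, iters=100):
    K = len(H)
    v = [math.sqrt(P)] * K  # full-power initialisation
    for _ in range(iters):
        # Receive filters and MMSE weights (summation components).
        u = [H[k][k] * v[k] /
             (sigma2 + sum((H[k][j] * v[j]) ** 2 for j in range(K)))
             for k in range(K)]
        w = [1.0 / (1.0 - u[k] * H[k][k] * v[k]) for k in range(K)]
        # Transmit amplitudes, clipped to the power budget (power component).
        v = [min(math.sqrt(P), max(0.0,
                 w[k] * u[k] * H[k][k] /
                 sum(w[j] * u[j] ** 2 * H[j][k] ** 2 for j in range(K))))
             for k in range(K)]
    return v

def sum_rate(H, v, sigma2):
    K = len(H)
    return sum(math.log2(1 + (H[k][k] * v[k]) ** 2 /
                         (sigma2 + sum((H[k][j] * v[j]) ** 2
                                       for j in range(K) if j != k)))
               for k in range(K))

H = [[1.0, 0.3],
     [0.2, 0.8]]  # H[k][j]: illustrative gain from transmitter j to receiver k
v = wmmse(H, P=1.0, sigma2=0.1)
print(v, sum_rate(H, v, 0.1))
```

The unrolling idea in the paper replaces a fixed number of these iterations with learnable GNN layers whose message passing mimics the sums over interfering links.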
Given the importance of forests and their role in maintaining the ecological balance, which directly affects the planet, the climate, and life on Earth, this research addresses the problem of forest-fire monitoring using drones. The forest monitoring process is performed continuously to track any changes in the monitored region within the forest. During fires, the data captured by drones is used to speed up follow-up and enhance the control of these fires to prevent their spread. The time factor in such problems determines the success rate of the fire-extinguishing process, as appropriate data at the right time may be the decisive factor in controlling fires, preventing their spread, extinguishing them, and limiting their losses. Therefore, this research addresses the problem of scheduling monitoring tasks for drones in the forest monitoring system. The problem is solved by developing several algorithms with the aim of minimizing the total completion time required to carry out all the tasks assigned to the drones. System performance is measured using 990 instances from three different classes. The experimental results indicate the effectiveness of the proposed algorithms and their ability to achieve the desired goal efficiently. The RIDRID algorithm achieved the best performance, with a rate of up to 90.3% and a running time of 0.088 seconds.
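The RIDRID heuristic itself is not specified in the abstract, so it is not reproduced here. As a hedged illustration of the underlying scheduling problem, the classical longest-processing-time (LPT) list-scheduling rule assigns each monitoring task, longest first, to the currently least-loaded drone; the task durations and drone count below are made-up values.

```python
import heapq

# Hedged illustration (not the paper's RIDRID algorithm): LPT list scheduling
# of monitoring tasks onto identical drones, a classical baseline for
# reducing the completion time of the whole task set.

def lpt_schedule(task_times, n_drones):
    """Assign each task (longest first) to the least-loaded drone."""
    loads = [(0, d) for d in range(n_drones)]  # (current load, drone id)
    heapq.heapify(loads)
    assignment = {d: [] for d in range(n_drones)}
    for i, t in sorted(enumerate(task_times), key=lambda p: -p[1]):
        load, d = heapq.heappop(loads)
        assignment[d].append(i)
        heapq.heappush(loads, (load + t, d))
    makespan = max(load for load, _ in loads)
    return assignment, makespan

# Six monitoring tasks (flight minutes, illustrative) on two drones.
tasks = [7, 5, 3, 3, 2, 2]
_, makespan = lpt_schedule(tasks, 2)
print(makespan)  # → 12
```

LPT is a 4/3-approximation for this objective on identical machines, which is why it is a common baseline against which specialised heuristics are compared.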
Mathematical morphology (MM) helps to describe and analyze shapes using set theory. MM can be effectively applied to binary images, which are treated as sets. The basic morphological operators can be used as effective tools in image processing. Morphological operators have also been developed on graphs and hypergraphs, where they have shown better performance and found further applications in image processing. Bino et al. [8], [9] developed the theory of morphological operators on hypergraphs: a hypergraph structure is considered and the basic morphological operations of erosion and dilation are defined on it. Several further operators, opening, closing, and filtering, are also defined on hypergraphs. Hypergraph-based filtering has shown comparatively better performance than graph-based morphological filters. In this paper we evaluate the effectiveness of hypergraph-based alternating sequential filters (ASF) on binary images. Experimental results show that hypergraph-based ASF filters outperform graph-based ASF.
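The hypergraph operators of Bino et al. are not reproduced here; as a reminder of the basic set-theoretic definitions these generalise, the following sketch implements pixel-level binary erosion and dilation with a 3x3 structuring element (out-of-range neighbours count as background), from which opening, closing, and alternating sequential filters are built by composition.

```python
# Minimal pixel-level binary morphology (illustrative; the paper's operators
# act on hypergraphs, not pixel grids).

def dilate(img, se):
    """Dilation: a pixel is set if any structuring-element offset hits a set pixel."""
    h, w = len(img), len(img[0])
    return [[int(any(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                     for dy, dx in se))
             for x in range(w)] for y in range(h)]

def erode(img, se):
    """Erosion: a pixel survives only if every structuring-element offset is set."""
    h, w = len(img), len(img[0])
    return [[int(all(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                     for dy, dx in se))
             for x in range(w)] for y in range(h)]

SE = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]  # 3x3 square
img = [[0, 0, 0, 0, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 0, 0, 0, 0]]

# Opening = erosion then dilation; an ASF alternates openings and closings
# at increasing scales.
print(erode(img, SE)[2])  # → [0, 0, 1, 0, 0]: only the centre pixel survives
```

On graphs and hypergraphs the same pattern holds, with pixel neighbourhoods replaced by (hyper)edge incidences.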
For a graph $\mathbb{Q}=(\mathbb{V},\mathbb{E})$, the transformation graphs are defined as graphs with vertex set $\mathbb{V}(\mathbb{Q}) \cup \mathbb{E}(\mathbb{Q})$ and edge set described by certain conditions. In comparison to the structure descriptor of the original graph $\mathbb{Q}$, the topological descriptors of its transformation graphs display distinct structural characteristics. Thus, the transformation-graph descriptors of a compound can be used to model a variety of structural features of the underlying chemical. In this work, the concept of transformation graphs is extended, giving rise to a novel class of graphs, the $(r,s)$-generalised transformation graphs, whose vertex set is the union of $r$ copies of $\mathbb{V}(\mathbb{Q})$ and $s$ copies of $\mathbb{E}(\mathbb{Q})$, where $r, s \in \mathbb{N}$, and whose edge set is defined under certain conditions. Further, this class of graphs is analysed with the help of the first Zagreb index. There are eight transformation graphs based on the criteria for the edge set, but under the concept of $(r,s)$-generalised transformation graphs, an infinite number of graphs can be described and analysed.
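The index used in the analysis is simple to state: the first Zagreb index is $M_1(G)=\sum_{v\in V(G)} \deg(v)^2$. A minimal sketch (the transformation-graph constructions themselves are not reproduced):

```python
# First Zagreb index M1(G) = sum over vertices of deg(v)^2,
# computed from an edge list.

def first_zagreb_index(edges):
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return sum(d * d for d in deg.values())

# Path P4 (1-2-3-4): degrees 1, 2, 2, 1, so M1 = 1 + 4 + 4 + 1.
print(first_zagreb_index([(1, 2), (2, 3), (3, 4)]))  # → 10
```

To analyse an $(r,s)$-generalised transformation graph one would build its edge list from the copies of $\mathbb{V}(\mathbb{Q})$ and $\mathbb{E}(\mathbb{Q})$ according to the chosen conditions and feed it to the same function.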
Surface roughness is a primary measure of pavement performance that has been associated with ride quality and vehicle operating costs. Of all the surface roughness indicators, the International Roughness Index (IRI) is the most widely used. However, it is costly to measure IRI, and for this reason, certain road classes are excluded from IRI measurements at a network level. Higher levels of distresses are generally associated with higher roughness. However, for a given roughness level, pavement data typically exhibits a great deal of variability in the distress types, density, and severity. It is hypothesized that it is feasible to estimate the IRI of a pavement section given its distress types and their respective densities and severities. To investigate this hypothesis, this paper uses data from in-service pavements and machine learning methods to ascertain the extent to which IRI can be predicted given a set of pavement attributes. The results suggest that machine learning can be used reliably to estimate IRI based on the measured distress types and their respective densities and severities. The analysis also showed that IRI estimated this way depends on the pavement type and functional class. The paper also includes an exploratory section that addresses the reverse situation, that is, estimating the probability of pavement distress type distribution and occurrence severity/extent based on a given roughness level.
In this paper, we use Yukawa theory to calculate differential and total cross-sections for elastic and inelastic scattering in nucleon-nucleon (NN) interactions. We start from the fundamental Lagrangian and derive the $T$-matrix, and hence the invariant scattering matrix, leading to the differential and total cross-sections. We perform calculations utilizing two types of Yukawa interaction terms: (1) scalar current and (2) pseudoscalar current. We also carry out calculations in the laboratory frame. The results calculated using the scalar current exhibit the exact features of elastic and inelastic scattering and agree very well with experimental differential cross-sections. The pseudoscalar-current calculations do not offer reasonable results or capture the features of the NN interactions. The simplicity of the theory makes it all the more impressive; thus it can be used in place of more complicated field theories to describe NN interactions.
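The paper's amplitudes are derived from its Lagrangian; as standard textbook background (not taken from the paper), single scalar-meson exchange between nucleons reduces in the static limit to the familiar Yukawa potential:

```latex
% Textbook result for orientation (units \hbar = c = 1): attractive potential
% from single scalar-boson exchange with coupling g and exchanged mass m.
V(r) = -\frac{g^{2}}{4\pi}\,\frac{e^{-m r}}{r}
```

The finite range $1/m$ of this potential is what ties the short range of the NN force to the mass of the exchanged boson.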
Massive electrodynamics for London's superconductivity and the Josephson effect is derived. The propagation of a massive boson inside a medium yields electric phenomena that are reflected in the Josephson effect. The critical force, magnetic field, and temperature are found to be related to the critical current of the junction. The mass of the boson depends only on the critical current of the junction. The electromagnetic interaction between the Cooper pairs on the two sides of the superconductor in the Josephson junction is mediated by a massive boson. The propagation of the electromagnetic waves mediated by the massive bosons gives rise to the electric properties of the Josephson junction. Among these properties are a quantized resistance of Hall type corresponding to a non-quantized magnetic flux, and a quantized capacitance. A non-zero magnetic flux encompassing a magnetic charge is found to arise despite the fact that it is not a priori assumed.
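For context (standard background, not the paper's derivation): in London electrodynamics the magnetic field obeys a screened equation, which is formally equivalent to the photon acquiring an effective mass inside the superconductor:

```latex
% London equation and the equivalent effective photon mass
% (lambda_L is the London penetration depth).
\nabla^{2}\mathbf{B} = \frac{1}{\lambda_L^{2}}\,\mathbf{B},
\qquad
m_\gamma = \frac{\hbar}{\lambda_L c}
```

The paper's massive-boson treatment of the Josephson junction builds on this identification of field screening with boson mass.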
A wide range of biomedical applications requires enhancement, detection, quantification and modelling of curvilinear structures in 2D and 3D images. Curvilinear structure enhancement is a crucial step for further analysis, but many of the enhancement approaches still suffer from contrast variations and noise. This can be addressed using a multiscale approach that produces a better quality enhancement for low contrast and noisy images compared with a single-scale approach in a wide range of biomedical images. Here, we propose the Multiscale Top-Hat Tensor (MTHT) approach, which combines multiscale morphological filtering with a local tensor representation of curvilinear structures in 2D and 3D images. The proposed approach is validated on synthetic and real data and is also compared to the state-of-the-art approaches. Our results show that the proposed approach achieves high-quality curvilinear structure enhancement in synthetic examples and in a wide range of 2D and 3D images.
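The core building block, the white top-hat, can be sketched in one dimension. This is a hedged illustration of the multiscale top-hat idea only, not the MTHT method itself (the tensor part, which encodes local orientation, is omitted), and the signal values are made up.

```python
# 1-D flat-structuring-element morphology: the white top-hat (signal minus
# its opening) keeps bright structures narrower than the element, and taking
# the pointwise maximum over several element sizes gives a multiscale
# response. Illustrative only; the MTHT approach adds a local tensor
# representation on top of this in 2-D/3-D.

def erode1d(sig, k):
    r = k // 2
    return [min(sig[max(0, i - r):i + r + 1]) for i in range(len(sig))]

def dilate1d(sig, k):
    r = k // 2
    return [max(sig[max(0, i - r):i + r + 1]) for i in range(len(sig))]

def white_tophat(sig, k):
    opened = dilate1d(erode1d(sig, k), k)  # opening = erosion then dilation
    return [s - o for s, o in zip(sig, opened)]

def multiscale_tophat(sig, scales):
    hats = [white_tophat(sig, k) for k in scales]
    return [max(vals) for vals in zip(*hats)]

sig = [1, 1, 5, 1, 1, 1, 4, 4, 1, 1]  # two bright 'vessels' on a flat background
print(multiscale_tophat(sig, [3, 5]))  # → [0, 0, 4, 0, 0, 0, 3, 3, 0, 0]
```

Both bright profiles are extracted with the background suppressed, which is exactly the contrast-normalising behaviour that makes top-hat filtering a useful first stage for curvilinear structure enhancement.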
The existence of an unbounded sequence of solutions to a conformally invariant elliptic equation having nonlocal critical-power nonlinearity is established. The primary obstacle to establishing existence of solutions is the failure of compactness in the Sobolev embedding. To overcome this obstacle, the problem under consideration is lifted to an equivalent problem on the standard sphere so that the symmetries of the sphere can be leveraged. Two classes of symmetries are considered and for each class of symmetries, an unbounded sequence of solutions to the lifted problem with the prescribed symmetries is produced. One class of symmetries always exists and the corresponding solutions are guaranteed to be sign-changing whenever a suitable relationship between the dimension and the nonlocality parameter holds. The other class of symmetries need not always exist but when it exists, the corresponding solutions are guaranteed to be sign-changing.
Myocardial infarction (MI), commonly known as a heart attack, is a critical health condition caused by restricted blood flow to the heart. Early-stage detection through continuous ECG monitoring is essential to minimize irreversible damage. This review explores advancements in MI classification methodologies for wearable devices, emphasizing their potential in real-time monitoring and early diagnosis. It critically examines traditional approaches, such as morphological filtering and wavelet decomposition, alongside cutting-edge techniques, including Convolutional Neural Networks (CNNs) and VLSI-based methods. By synthesizing findings on machine learning, deep learning, and hardware innovations, this paper highlights their strengths, limitations, and future prospects. The integration of these techniques into wearable devices offers promising avenues for efficient, accurate, and energy-aware MI detection, paving the way for next-generation wearable healthcare solutions.
Robotic Process Automation (RPA) has emerged as a game-changing technology in data extraction, revolutionizing the way organizations process and analyze large volumes of documents such as invoices, purchase orders, and payment advices. This study investigates the use of RPA for structured data extraction and evaluates its advantages over manual processes. By comparing human-performed tasks with those executed by RPA software bots, we assess efficiency and accuracy in data extraction from invoices, focusing on the effectiveness of the RPA system. Through four distinct scenarios involving varying numbers of invoices, we measure efficiency in terms of time and effort required for task completion, as well as accuracy by comparing error rates between manual and RPA processes. Our findings highlight the significant efficiency gains achieved by RPA, with bots completing tasks in significantly less time compared to manual efforts across all cases. Moreover, the RPA system consistently achieves perfect accuracy, mitigating the risk of errors and enhancing process reliability. These results underscore the transformative potential of RPA in optimizing operational efficiency, reducing human labor costs, and improving overall business performance.
Symmetrizable matrices are those which become symmetric when multiplied by a diagonal matrix with positive entries. The Cauchy interlace theorem states that the eigenvalues of a real symmetric matrix interlace with those of any principal submatrix (obtained by deleting a row-column pair of the original matrix). In this paper we extend the Cauchy interlace theorem from symmetric matrices to this large class of symmetrizable matrices. This extension is interesting because, in the symmetric case, the Cauchy interlace theorem, the Courant-Fischer minimax theorem, and Sylvester's law of inertia can each be proven from the others, and thus they are essentially equivalent. The first two theorems have important applications in the singular value and eigenvalue decompositions; the third is useful in the development and analysis of algorithms for the symmetric eigenvalue problem. Consequently, several applications that are contingent on the symmetry condition may carry over to this large class of not necessarily symmetric matrices, opening the door to many applications in future studies. We note that our techniques are based on the celebrated Dodgson's identity.
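A small numerical check makes the extended statement concrete. This is a hedged illustration with a made-up $2\times 2$ example, not the paper's proof technique: the matrix below is not symmetric, but multiplying by $D=\mathrm{diag}(1,2)$ symmetrizes it, its spectrum is real, and the eigenvalue of the $1\times 1$ principal submatrix interlaces.

```python
import math

# Illustrative check of Cauchy interlacing for a symmetrizable matrix.
A = [[2.0, 6.0],
     [3.0, 4.0]]            # not symmetric, but symmetrizable
DA = [[1 * A[0][0], 1 * A[0][1]],
      [2 * A[1][0], 2 * A[1][1]]]   # D*A = [[2, 6], [6, 8]] is symmetric

# Eigenvalues of A from the characteristic polynomial x^2 - tr(A) x + det(A).
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = math.sqrt(tr * tr - 4 * det)   # real: symmetrizable => real spectrum
lam_lo, lam_hi = (tr - disc) / 2, (tr + disc) / 2

mu = A[0][0]  # eigenvalue of the principal submatrix (delete row/column 2)
print(lam_lo <= mu <= lam_hi)  # → True: interlacing holds
```

Here $\lambda_{\min}\approx -1.36 \le 2 \le \lambda_{\max}\approx 7.36$, exactly the interlacing pattern the symmetric theorem would predict.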
Context: Application Programming Interface (API) code examples are an essential knowledge resource for learning APIs. However, few user studies have explored how the structural characteristics of the source code in code examples impact their comprehensibility and reusability. Objectives: We investigated whether the (a) linearity and (b) length of the source code in API code examples affect users' performance in terms of correctness and time spent. We also collected subjective ratings. Methods: We conducted an online controlled code comprehension experiment with 61 Java developers. As a case study, we used the API code examples from the Joda-Time Java library. We had participants perform code comprehension and reuse tasks on variants of the examples with different lengths and degrees of linearity. Findings: Participants demonstrated faster reaction times when exposed to linear code examples. However, no substantial differences in correctness or subjective ratings were observed. Implications: Our findings suggest that the linear presentation of source code may enhance initial example understanding and reusability. This, in turn, may provide API developers with some insights into the effective structuring of their API code examples. However, we highlight the need for further investigation.
In pro-drop languages like Arabic, Chinese, Italian, Japanese, Spanish, and many others, unrealized (null) arguments in certain syntactic positions can refer to a previously introduced entity and are thus called anaphoric zero pronouns. The existing resources for studying anaphoric zero-pronoun interpretation are, however, still limited. In this paper, we use five data augmentation methods to generate and detect anaphoric zero pronouns automatically. We use the augmented data as additional training material for two anaphoric zero-pronoun systems for Arabic. Our experimental results show that data augmentation improves the performance of both systems, surpassing the state-of-the-art results.
This paper gives some theory and efficient design of binary block systematic codes capable of controlling the deletions of the symbol "0" (referred to as 0-deletions) and/or the insertions of the symbol "0" (referred to as 0-insertions). The problem of controlling 0-deletions and/or 0-insertions (referred to as 0-errors) is known to be equivalent to the efficient design of $L_1$ metric asymmetric error control codes over the natural alphabet, $\mathbb{N}$. So, $t$ 0-insertion correcting codes can actually correct $t$ 0-errors, detect $(t+1)$ 0-errors and, simultaneously, detect all occurrences of only 0-deletions or only 0-insertions in every received word (briefly, they are $t$-Symmetric 0-Error Correcting/$(t+1)$-Symmetric 0-Error Detecting/All Unidirectional 0-Error Detecting ($t$-Sy0EC/$(t+1)$-Sy0ED/AU0ED) codes). From the relations with the $L_1$ distance, optimal systematic code designs are given. In general, for all $t,k\in\mathbb{N}$, a recursive method is presented to encode $k$ information bits into efficient systematic $t$-Sy0EC/$(t+1)$-Sy0ED/AU0ED codes of length $n\leq k+t\log_2 k+o(t\log n)$ as $n\in\mathbb{N}$ increases. Decoding can be efficiently performed by algebraic means using the Extended Euclidean Algorithm (EEA).
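The equivalence with the $L_1$ metric rests on a simple map: a binary word with $k$ ones corresponds to the length-$(k+1)$ vector of 0-run lengths between consecutive 1s, and a single 0-insertion or 0-deletion changes exactly one coordinate by one, i.e. moves the word $L_1$ distance 1. A minimal sketch of this correspondence (illustrative; the paper's code constructions are not reproduced):

```python
# Map a binary word to its vector of 0-run lengths over N; 0-errors then
# become L1-metric asymmetric errors on that vector.

def zero_runs(word):
    runs, run = [], 0
    for b in word:
        if b == '0':
            run += 1
        else:          # a '1' closes the current 0-run
            runs.append(run)
            run = 0
    runs.append(run)   # trailing run after the last 1
    return runs

def l1(u, v):
    return sum(abs(a - b) for a, b in zip(u, v))

x = '1001010'
y = '10001010'  # x with one extra 0 inserted into its first 0-run
print(zero_runs(x), zero_runs(y), l1(zero_runs(x), zero_runs(y)))
# → [0, 2, 1, 1] [0, 3, 1, 1] 1
```

Because the number of 1s is preserved by 0-errors, $t$ such errors move the run vector by $L_1$ distance at most $t$, which is why $L_1$-metric error control codes over $\mathbb{N}$ solve the 0-error control problem.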