University of Nebraska Omaha
Researchers at the University of Nebraska Omaha introduced the first deep learning approach for text-in-image watermarking, employing Transformer and Vision Transformer architectures to embed and extract text from images. The method demonstrated superior imperceptibility (SSIM 97.33%) and robustness against various distortions, outperforming traditional and other deep learning techniques.
FC-KAN proposes a Kolmogorov-Arnold Network architecture that combines outputs from multiple mathematical functions to enhance feature capture and improve performance in image classification. The research demonstrates that specific element-wise quadratic combinations lead to higher accuracy on MNIST and Fashion-MNIST datasets compared to other KAN variants and Multi-Layer Perceptrons.
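The core idea above — applying several basis functions to the same input and fusing their outputs element-wise — can be sketched as follows. The specific basis functions and the quadratic fusion rule below are illustrative assumptions, not FC-KAN's exact formulation:

```python
import math

# Hypothetical sketch of FC-KAN's fusion idea: each edge applies multiple
# basis functions to its input and combines their outputs element-wise.
# The branch functions and the fusion rule are illustrative assumptions.

def poly_branch(x):
    # Smooth polynomial branch as a stand-in for a learned B-spline.
    return 0.5 * x * x - 0.1 * x

def rbf_branch(x, center=0.0, width=1.0):
    # Radial basis function branch.
    return math.exp(-((x - center) / width) ** 2)

def quadratic_combine(a, b):
    # Element-wise quadratic fusion: linear terms plus their product.
    return a + b + a * b

def fc_kan_edge(x):
    return quadratic_combine(poly_branch(x), rbf_branch(x))

outputs = [fc_kan_edge(x * 0.5) for x in range(-2, 3)]
```

The quadratic term lets the combined output capture interactions between the two branches that a plain sum would miss, which is the paper's stated motivation for this style of combination.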
Researchers from the University of Nebraska Omaha and collaborators introduced DepressionEmo, a new multilabel dataset of 6,037 Reddit posts annotated for eight distinct depression-related emotions, designed to move beyond binary classification. Benchmarking showed transformer-based models, especially BART, achieved an F1-Macro score of 0.76 for emotion classification on this dataset.
We propose MAE-SAM2, a novel foundation model for retinal vascular leakage segmentation on fluorescein angiography images. The small size and dense distribution of leakage areas, together with the limited availability of labeled clinical data, make this a challenging segmentation task. Our approach integrates a self-supervised learning (SSL) strategy, the Masked Autoencoder (MAE), with SAM2. In our implementation, we explore different loss functions and arrive at a task-specific combined loss. Extensive experiments and ablation studies demonstrate that MAE-SAM2 outperforms several state-of-the-art models, achieving the highest Dice score and Intersection-over-Union (IoU). Compared to the original SAM2, our model achieves a 5% performance improvement, highlighting the promise of foundation models with self-supervised pretraining in clinical imaging tasks.
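A task-specific combined segmentation loss of the kind described above is often built from a Dice term plus a pixel-wise term. The abstract does not specify the components or weights, so the Dice + binary cross-entropy mix with an equal split below is purely an illustrative assumption:

```python
import math

# Illustrative sketch of a combined segmentation loss in the spirit of the
# task-specific loss mentioned above. The Dice + BCE choice and the 0.5/0.5
# weighting are assumptions, not the paper's actual formulation.

def dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss over flattened probability/label lists.
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (total + eps)

def bce_loss(pred, target, eps=1e-7):
    # Mean binary cross-entropy.
    n = len(pred)
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for p, t in zip(pred, target)) / n

def combined_loss(pred, target, w_dice=0.5, w_bce=0.5):
    return w_dice * dice_loss(pred, target) + w_bce * bce_loss(pred, target)

# A confident, correct prediction should yield a near-zero loss.
loss = combined_loss([0.99, 0.01, 0.98], [1.0, 0.0, 1.0])
```

The Dice term directly rewards overlap with the small, dense leakage regions, while the cross-entropy term stabilizes per-pixel gradients — a common rationale for combining the two.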
Foreign Object Debris (FOD) detection has attracted increased attention in the area of machine learning and computer vision. However, a robust and publicly available image dataset for FOD has not been established. To this end, this paper introduces an image dataset of FOD, named FOD in Airports (FOD-A). FOD-A object categories have been selected based on guidance from prior documentation and related research by the Federal Aviation Administration (FAA). In addition to the primary annotations of bounding boxes for object detection, FOD-A provides labeled environmental conditions. As such, each annotation instance is further categorized into three light-level categories (bright, dim, and dark) and two weather categories (dry and wet). Currently, FOD-A has released 31 object categories and over 30,000 annotation instances. This paper presents the creation methodology, discusses the publicly available dataset extension process, and demonstrates the practicality of FOD-A with widely used machine learning models for object detection.
This paper presents a comprehensive survey on deep learning-based image watermarking, a technique that entails the invisible embedding and extraction of watermarks within a cover image, aiming to offer a seamless blend of robustness and adaptability. We navigate the complex landscape of this interdisciplinary domain, linking historical foundations, current innovations, and prospective developments. Unlike existing literature, our study concentrates exclusively on image watermarking with deep learning, delivering an in-depth, yet brief analysis enriched by three fundamental contributions. First, we introduce a refined categorization, segmenting the field into Embedder-Extractor, Deep Networks as a Feature Transformation, and Hybrid Methods. This taxonomy, inspired by the varied roles of deep learning across studies, is designed to infuse clarity, offering readers technical insights and directional guidance. Second, our exploration dives into representative methodologies, encapsulating the diverse research directions and inherent challenges within each category to provide a consolidated perspective. Lastly, we venture beyond established boundaries to outline emerging frontiers, offering a detailed insight into prospective research avenues.
Toxicity in bug report discussions poses significant challenges to the collaborative dynamics of open-source software development. Bug reports are crucial for identifying and resolving defects, yet their inherently problem-focused nature and emotionally charged context make them susceptible to toxic interactions. This study explores toxicity in GitHub bug reports through a qualitative analysis of 203 bug threads, including 81 toxic ones. Our findings reveal that toxicity frequently arises from misaligned perceptions of bug severity and priority, unresolved frustrations with tools, and lapses in professional communication. These toxic interactions not only derail productive discussions but also reduce the likelihood of actionable outcomes, such as linking issues with pull requests. Our preliminary findings offer actionable recommendations to improve bug resolution by mitigating toxicity.
Ensuring robustness in image watermarking is crucial for maintaining content integrity under diverse transformations. Recent self-supervised learning (SSL) approaches, such as DINO, have been leveraged for watermarking but primarily focus on general feature representation rather than explicitly learning invariant features. In this work, we propose a novel text-guided invariant feature learning framework for robust image watermarking. Our approach leverages CLIP's multimodal capabilities, using text embeddings as stable semantic anchors to enforce feature invariance under distortions. We evaluate the proposed method across multiple datasets, demonstrating superior robustness against various image transformations. Compared to state-of-the-art SSL methods, our model achieves higher cosine similarity in feature consistency tests and outperforms existing watermarking schemes in extraction accuracy under severe distortions. These results highlight the efficacy of our method in learning invariant representations tailored for robust deep learning-based watermarking.
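The text-anchored invariance idea can be sketched in miniature: features of distorted views of an image are pulled toward a fixed text embedding via cosine similarity. The toy vectors and loss form below are illustrative assumptions; the actual system uses CLIP embeddings, which are not reproduced here:

```python
import math

# Minimal sketch of text-anchored invariance: a fixed text embedding acts
# as a semantic anchor, and features of distorted image views are pushed
# toward it. Vectors and loss form are illustrative assumptions.

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def invariance_loss(image_feats, text_anchor):
    # Average (1 - cosine) distance of each distorted view from the anchor.
    return sum(1.0 - cosine_similarity(f, text_anchor)
               for f in image_feats) / len(image_feats)

anchor = [1.0, 0.0, 0.0]            # stand-in for a CLIP text embedding
views = [[0.9, 0.1, 0.0],           # features of two distorted views
         [1.0, -0.05, 0.02]]
loss = invariance_loss(views, anchor)
```

Because the anchor does not move with the image distortions, minimizing this loss encourages the image encoder to produce distortion-invariant features — the property the framework targets.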
Watermarking the outputs of large language models (LLMs) is critical for provenance tracing, content regulation, and model accountability. Existing approaches often rely on access to model internals or are constrained by static rules and token-level perturbations. Moreover, the idea of steering generative behavior via prompt-based instruction control remains largely underexplored. We introduce a prompt-guided watermarking framework that operates entirely at the input level and requires no access to model parameters or decoding logits. The framework comprises three cooperating components: a Prompting LM that synthesizes watermarking instructions from user prompts, a Marking LM that generates watermarked outputs conditioned on these instructions, and a Detecting LM trained to classify whether a response carries an embedded watermark. This modular design enables dynamic watermarking that adapts to individual prompts while remaining compatible with diverse LLM architectures, including both proprietary and open-weight models. We evaluate the framework over 25 combinations of Prompting and Marking LMs, such as GPT-4o, Mistral, LLaMA3, and DeepSeek. Experimental results show that watermark signals generalize across architectures and remain robust under fine-tuning, model distillation, and prompt-based adversarial attacks, demonstrating the effectiveness and robustness of the proposed approach.
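The three-component design described above can be illustrated schematically. Real deployments would call LLMs (e.g., GPT-4o or LLaMA3) at each stage; the trivial stand-in functions and the "green word" instruction below are toy assumptions invented purely to show how the components cooperate at the input level:

```python
# Schematic sketch of the Prompting/Marking/Detecting pipeline with trivial
# stand-ins for the three LMs. The green-word watermark rule is a toy
# assumption, not the framework's actual instruction scheme.

GREEN_WORDS = {"notably", "indeed", "overall"}

def prompting_lm(user_prompt: str) -> str:
    # Synthesizes a watermarking instruction tailored to the user prompt.
    return f"Answer the prompt and naturally include one of: {sorted(GREEN_WORDS)}"

def marking_lm(user_prompt: str, instruction: str) -> str:
    # Stub generator that follows the instruction while answering.
    return f"Indeed, here is a response to: {user_prompt}"

def detecting_lm(text: str) -> bool:
    # Classifier stand-in: flags text carrying the embedded signal.
    return any(w in text.lower() for w in GREEN_WORDS)

instr = prompting_lm("Explain watermarking.")
out = marking_lm("Explain watermarking.", instr)
is_marked = detecting_lm(out)
```

Because everything happens through prompts and generated text, no model weights or decoding logits are touched — which is what makes the design compatible with proprietary models.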
The increasing digitization of agricultural operations has introduced new cybersecurity challenges for the farming community. This paper introduces an educational intervention called the Cybersecurity Improvement Initiative for Agriculture (CIIA), which aims to strengthen cybersecurity awareness and resilience among farmers and food producers. Using a case study that focuses on farmers from the Ponca Tribe of Nebraska, the research evaluates pre- and post-intervention survey data to assess participants' cybersecurity knowledge and awareness before and after exposure to the CIIA. The findings reveal a substantial baseline deficiency in cybersecurity education among participants; however, post-intervention assessments demonstrate improvements in the comprehension of cybersecurity concepts, such as password hygiene, multi-factor authentication, and the necessity of routine data backups. These initial findings highlight the need for continued, sustained, community-specific cybersecurity education to help mitigate emerging cyber threats in the agricultural sector.
Accurate estimation of aircraft operations, such as takeoffs and landings, is critical for effective airport management, yet remains challenging, especially at non-towered facilities lacking dedicated surveillance infrastructure. This paper presents a novel dual pipeline machine learning framework that classifies pilot radio communications using both textual and spectral features. Audio data collected from a non-towered U.S. airport was annotated by certified pilots with operational intent labels and preprocessed through automatic speech recognition and Mel-spectrogram extraction. We evaluate a wide range of traditional classifiers and deep learning models, including ensemble methods, LSTM, and CNN across both pipelines. To our knowledge, this is the first system to classify operational aircraft intent using a dual-pipeline ML framework on real-world air traffic audio. Our results demonstrate that spectral features combined with deep architectures consistently yield superior classification performance, with F1-scores exceeding 91%. Data augmentation further improves robustness to real-world audio variability. The proposed approach is scalable, cost-effective, and deployable without additional infrastructure, offering a practical solution for air traffic monitoring at general aviation airports.
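The spectral pipeline's preprocessing can be illustrated with a toy stand-in: frame a 1-D audio signal and compute per-frame log-energy features. Real Mel-spectrogram extraction (STFT plus a Mel filterbank) would be handled by an audio library; this sketch, with invented frame sizes, only shows the framing step that precedes it:

```python
import math

# Toy stand-in for the spectral pipeline's preprocessing: slice an audio
# signal into overlapping frames and compute a per-frame log-energy
# feature. Real Mel-spectrogram extraction is assumed to be done by an
# audio library; frame_len and hop here are arbitrary illustrative values.

def frame_signal(signal, frame_len, hop):
    # Overlapping frames: start indices step by `hop` samples.
    return [signal[i:i + frame_len]
            for i in range(0, len(signal) - frame_len + 1, hop)]

def log_energy(frame, eps=1e-10):
    return math.log(sum(s * s for s in frame) + eps)

# Synthetic 440 Hz tone sampled at 8 kHz, 0.1 s long.
signal = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(800)]
frames = frame_signal(signal, frame_len=200, hop=100)
features = [log_energy(f) for f in frames]
```

In the described system, the resulting time-frequency features feed the CNN/LSTM models, while the transcript from speech recognition feeds the parallel text pipeline.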
Secure quantum networks are a bedrock requirement for developing a future quantum internet. However, quantum channels are susceptible to channel noise that introduces errors in the transmitted data. The traditional approach to providing error correction typically encapsulates the message in an error correction code after encryption. Such separate processes incur overhead that should be avoided when possible. We, consequently, provide a single integrated process that allows for encryption as well as error correction. This is a first attempt to do so for secure quantum communication and combines the Calderbank-Shor-Steane (CSS) code with the three-stage secure quantum communication protocol. Lastly, it allows for arbitrary qubits to be transmitted from sender to receiver, making the proposed protocol general purpose.
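The three-stage protocol that the work builds on has a simple classical analog: both parties apply their own secret, commuting "locks" so the message is never exposed in transit. The quantum version uses commuting rotations on qubits; the XOR masks below are a simplifying classical assumption used only to show the three exchanges:

```python
import secrets

# Classical analog of the three-stage protocol: XOR masks commute, so each
# party can remove its own lock regardless of order. The quantum protocol
# uses commuting qubit rotations instead; XOR is an illustrative stand-in.

def xor_lock(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

message = b"qubits"
alice_key = secrets.token_bytes(len(message))
bob_key = secrets.token_bytes(len(message))

stage1 = xor_lock(message, alice_key)   # Alice -> Bob: locked by Alice
stage2 = xor_lock(stage1, bob_key)      # Bob -> Alice: locked by both
stage3 = xor_lock(stage2, alice_key)    # Alice removes her lock -> Bob
recovered = xor_lock(stage3, bob_key)   # Bob removes his lock
```

The paper's contribution is to fold CSS error correction into this exchange so that channel noise can be corrected without a separate post-encryption coding step.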
Answer Set Programming (ASP), a modern development of Logic Programming, enables a natural integration of Computing with STEM subjects. This integration addresses a widely acknowledged challenge in K-12 education, and early empirical results on ASP-based integration are promising. Although ASP is considered a simple language when compared with imperative programming languages, programming errors can still be a significant barrier for students. This is particularly true for K-12 students who are novice users of ASP. Categorizing errors and measuring their difficulty has yielded insights into imperative languages like Java. However, little is known about the types and difficulty of errors encountered by K-12 students using ASP. To address this, we collected high school student programs submitted during a 4-session seminar teaching an ASP language known as SPARC. From error messages in this dataset, we identify a collection of error classes, and measure how frequently each class occurs and how difficult it is to resolve.
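The frequency analysis described above — bucketing raw error messages into classes and counting occurrences — can be sketched as below. The message patterns and class names are invented for illustration; the paper derives its error classes from the actual student dataset:

```python
from collections import Counter

# Toy sketch of error-class frequency analysis over SPARC error messages.
# The classification rules and example messages are invented assumptions.

def classify_error(message: str) -> str:
    if "syntax error" in message:
        return "syntax"
    if "sort" in message:
        return "sort/type"
    return "other"

messages = [
    "syntax error at line 3",
    "argument has wrong sort",
    "syntax error near ':-'",
    "unknown predicate",
]
frequencies = Counter(classify_error(m) for m in messages)
```

Pairing such counts with a per-class resolution-time measure would yield the frequency/difficulty picture the study reports.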
The rapid advancement of Large Language Models (LLMs) has significantly enhanced the capabilities of text generators. With the potential for misuse escalating, the importance of discerning whether texts are human-authored or generated by LLMs has become paramount. Several preceding studies have ventured to address this challenge by employing binary classifiers to differentiate between human-written and LLM-generated text. Nevertheless, the reliability of these classifiers has been subject to question. Given that consequential decisions may hinge on the outcome of such classification, it is imperative that text source detection is of high caliber. In light of this, the present paper introduces DeepTextMark, a deep learning-driven text watermarking methodology devised for text source identification. By leveraging Word2Vec and Sentence Encoding for watermark insertion, alongside a transformer-based classifier for watermark detection, DeepTextMark epitomizes a blend of blindness, robustness, imperceptibility, and reliability. As elaborated within the paper, these attributes are crucial for universal text source detection, with a particular emphasis in this paper on text produced by LLMs. DeepTextMark offers a viable "add-on" solution to prevailing text generation frameworks, requiring no direct access or alterations to the underlying text generation mechanism. Experimental evaluations underscore the high imperceptibility, elevated detection accuracy, augmented robustness, reliability, and swift execution of DeepTextMark.
Autonomous vehicles have developed rapidly in response to demand for safety and efficiency in transportation systems. Because autonomous vehicles are designed on open-source operating and computing systems, numerous resources exist for building an operating platform composed of Ubuntu, Autoware, and the Robot Operating System (ROS). However, no explicit guidelines exist to help scholars troubleshoot incompatibilities between the Autoware platform and the Ubuntu operating systems installed in autonomous driving systems-equipped vehicles (e.g., a Chrysler Pacifica). This paper presents an overview of integrating the Autoware platform into an autonomous vehicle's interface, based on lessons learned from troubleshooting processes for resolving incompatibility issues. The troubleshooting processes are presented in terms of resolving the incompatibility and integration issues of Ubuntu 20.04, Autoware.AI, and ROS Noetic software installed in an autonomous driving systems-equipped vehicle. Specifically, the paper focuses on common incompatibility issues and code-solving protocols involving Python compatibility, Compute Unified Device Architecture (CUDA) installation, Autoware installation, and simulation in Autoware.AI. The objective of the paper is to provide an explicit, detail-oriented presentation of how to address incompatibility issues in an autonomous vehicle's operating interface. The lessons and experience presented here will be useful for researchers who encounter similar issues while performing troubleshooting activities and implementing ADS-related projects on the Ubuntu, Autoware, and ROS operating systems.
Incorporating geometric transformations that reflect the relative position changes between an observer and an object into computer vision and deep learning models has attracted much attention in recent years. However, the existing proposals mainly focus on the affine transformation that is insufficient to reflect such geometric position changes. Furthermore, current solutions often apply a neural network module to learn a single transformation matrix, which not only ignores the importance of multi-view analysis but also includes extra training parameters from the module apart from the transformation matrix parameters that increase the model complexity. In this paper, a perspective transformation layer is proposed in the context of deep learning. The proposed layer can learn homography, therefore reflecting the geometric positions between observers and objects. In addition, by directly training its transformation matrices, a single proposed layer can learn an adjustable number of multiple viewpoints without considering module parameters. The experiments and evaluations confirm the superiority of the proposed layer.
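The projective warp such a layer learns is the standard 3x3 homography applied to points in homogeneous coordinates. The matrix values below are arbitrary illustrative parameters; in the proposed layer they would be trained directly, with one matrix per learned viewpoint:

```python
# Sketch of applying a 3x3 homography to a 2D point in homogeneous
# coordinates. Matrix entries are arbitrary illustrative parameters; in
# the proposed layer they are trained directly, one matrix per viewpoint.

def apply_homography(H, point):
    x, y = point
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xh / w, yh / w)  # Perspective divide.

# Identity homography leaves points unchanged.
H_identity = [[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]]

# A nonzero last row makes the mapping projective rather than affine:
# the perspective divide then depends on the point's position.
H_persp = [[1.0, 0.2, 5.0],
           [0.0, 1.0, 3.0],
           [0.001, 0.002, 1.0]]

p = apply_homography(H_persp, (100.0, 50.0))
```

It is precisely that position-dependent divide — impossible under an affine transform, whose last row is fixed at (0, 0, 1) — that lets the layer model changes in the relative position of observer and object.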
We introduce the task of mixed-view panorama synthesis, where the goal is to synthesize a novel panorama given a small set of input panoramas and a satellite image of the area. This contrasts with previous work which only uses input panoramas (same-view synthesis), or an input satellite image (cross-view synthesis). We argue that the mixed-view setting is the most natural to support panorama synthesis for arbitrary locations worldwide. A critical challenge is that the spatial coverage of panoramas is uneven, with few panoramas available in many regions of the world. We introduce an approach that utilizes diffusion-based modeling and an attention-based architecture for extracting information from all available input imagery. Experimental results demonstrate the effectiveness of our proposed method. In particular, our model can handle scenarios when the available panoramas are sparse or far from the location of the panorama we are attempting to synthesize. The project page is available at this https URL
The transportation industry, particularly the trucking sector, is prone to workplace accidents and fatalities; accidents involving large trucks account for a considerable percentage of overall traffic fatalities. Recognizing the crucial role of safety climate in accident prevention, researchers have sought to understand its factors and measure its impact within organizations. While existing data-driven safety climate studies have made remarkable progress, clustering employees based on their safety climate perception is innovative and has not been extensively utilized in research. Identifying clusters of drivers based on their safety climate perception allows an organization to profile its workforce and devise more impactful interventions. The limited use of clustering may be due to difficulties in interpreting or explaining the factors that influence employees' cluster membership. Moreover, existing safety-related studies have not compared multiple clustering algorithms, resulting in potential bias. To address these issues, this study introduces an interpretable clustering approach for safety climate analysis. It compares five algorithms for clustering truck drivers based on their safety climate perceptions and proposes a novel method for quantitatively evaluating partial dependence plots (QPDP). To better interpret the clustering results, the study applies several interpretable machine learning measures (SHAP, PFI, and QPDP). Drawing on data collected from more than 7,000 American truck drivers, this study significantly contributes to the scientific literature; it highlights the critical role of supervisory care promotion in distinguishing various driver groups. The Python code is available at this https URL.
In recent times, advances in artificial intelligence (AI) and IoT have enabled seamless and viable maintenance of appliances in home and building environments. Several studies have shown that AI has the potential to provide personalized customer support that can predict and avoid errors more reliably than ever before. In this paper, we analyze the various building blocks needed to enable a successful AI-driven predictive maintenance use-case. Unlike existing surveys, which mostly provide a deep dive into recent AI algorithms for Predictive Maintenance (PdM), our survey provides the complete view, starting from business impact through recent technology advancements in algorithms, systems research, and model deployment. Furthermore, we provide exemplar use-cases on predictive maintenance of appliances using publicly available data sets. Our survey can serve as a template for designing a successful predictive maintenance use-case. Finally, we touch upon existing public data sources and provide a step-wise breakdown of an AI-driven proactive customer care (PCC) use-case, starting from generic anomaly detection to fault prediction and finally root-cause analysis. We highlight how such a step-wise approach can be advantageous for accurate model building and helpful for gaining insights into the predictive maintenance of electromechanical appliances.
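The first step of the step-wise PCC breakdown — generic anomaly detection on an appliance sensor stream — can be sketched with a simple z-score rule. The sensor readings and the 2-sigma threshold below are invented for this tiny example (3 sigma is the more common rule of thumb on realistic data volumes); neither value comes from the survey:

```python
import statistics

# Sketch of the "generic anomaly detection" first step of a PCC pipeline:
# flag readings far from the mean in z-score terms. Readings and the
# 2-sigma threshold are illustrative; 3 sigma is more typical in practice,
# but a single outlier in a tiny sample inflates the standard deviation.

def zscore_anomalies(readings, threshold=2.0):
    mean = statistics.fmean(readings)
    stdev = statistics.stdev(readings)
    return [i for i, r in enumerate(readings)
            if abs(r - mean) / stdev > threshold]

# Simulated appliance temperature stream with one spike at index 5.
readings = [20.1, 20.3, 19.9, 20.0, 20.2, 35.0, 20.1, 19.8]
anomalies = zscore_anomalies(readings)
```

In the full pipeline described above, flagged windows like these would then feed a fault-prediction model and, downstream, root-cause analysis.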