Central Queensland University
PointDiffuse introduces a dual-conditional diffusion model for point cloud semantic segmentation that effectively leverages the denoising capabilities of diffusion models. This model, developed by a collaborative team including researchers from the University of Western Australia, achieves state-of-the-art mean IoU on several benchmarks (e.g., 81.2% on S3DIS 6-fold) while demonstrating enhanced robustness to noise and improved computational efficiency.
The air transportation local share, defined as the proportion of local passengers relative to total passengers, serves as a critical metric reflecting how economic growth, carrier strategies, and market forces jointly influence demand composition. This metric is particularly useful for examining changes in industry structure and large-scale disruptive events such as the COVID-19 pandemic. This research offers an in-depth analysis of local share patterns across more than 3,900 Origin and Destination (O&D) pairs in the U.S. air transportation system, revealing how economic expansion, the emergence of low-cost carriers (LCCs), and strategic shifts by legacy carriers have collectively elevated local share. To efficiently characterize the local share of thousands of O&Ds and to group O&Ds that exhibit similar behavior, a range of time-series clustering methods was applied. Evaluation using visualization, performance metrics, and case-based examination highlighted distinct patterns and trends, from magnitude-based stratification to trend-based groupings. The analysis also identified pattern commonalities across O&D pairs, suggesting that macro-level forces (e.g., economic cycles, changing demographics, or disruptions such as COVID-19) can synchronize changes between disparate markets. These insights set the stage for predictive modeling of local share, guiding airline network planning and infrastructure investments. This study combines quantitative analysis with flexible clustering to help stakeholders anticipate market shifts, optimize resource allocation strategies, and strengthen the air transportation system's resilience and competitiveness.
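As a hedged illustration of the clustering step described above, the sketch below computes a local-share time series per O&D pair and groups pairs with similar trajectories. The column names (origin, dest, local_pax, total_pax, quarter) and the use of k-means are illustrative assumptions; the study itself compares a range of time-series clustering methods.

```python
# Hypothetical sketch: local-share series per O&D pair, then clustering.
# Column names and the cluster count are assumptions, not the study's setup.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def local_share_series(df: pd.DataFrame) -> pd.DataFrame:
    """Pivot ticket data into one local-share time series per O&D pair."""
    df = df.assign(local_share=df["local_pax"] / df["total_pax"])
    return df.pivot_table(index=["origin", "dest"],
                          columns="quarter", values="local_share")

def cluster_od_pairs(series: pd.DataFrame, k: int = 5) -> np.ndarray:
    """Group O&D pairs whose local-share trajectories behave similarly."""
    X = StandardScaler().fit_transform(series.fillna(0.0))
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
```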
Outlier detection is a data mining technique that aims to detect unusual or unexpected records in a dataset. Existing outlier detection algorithms have different pros and cons and exhibit different sensitivities to noisy data such as extreme values. In this paper, we propose a novel cluster-based outlier detection algorithm named MSD-Kmeans that combines the statistical method of Mean and Standard Deviation (MSD) with the machine learning clustering algorithm K-means to detect outliers more accurately with better control of extreme values. MSD-Kmeans operates in two phases: (1) applying the MSD algorithm to eliminate as much noisy data as possible, minimizing its interference with the clusters, and (2) applying the K-means algorithm to obtain locally optimal clusters. We evaluate our algorithm and demonstrate its effectiveness in the context of detecting possible overcharging of taxi fares, as greedy dishonest drivers may attempt to charge high fares by detouring. We compare the performance indicators of MSD-Kmeans with those of other outlier detection algorithms, such as MSD, K-means, Z-score, MIQR, and LOF, and show that the proposed MSD-Kmeans algorithm achieves the highest precision, accuracy, and F-measure. We conclude that MSD-Kmeans can be used for effective and efficient outlier detection on data of varying quality from IoT devices.
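A minimal sketch of the two-phase idea follows. The sigma thresholds and cluster count are illustrative assumptions, not the paper's tuned values.

```python
# Sketch of the described two phases: an MSD pre-filter discards points beyond
# k_sigma standard deviations from the mean so they cannot distort centroids,
# then K-means clusters the retained points; points far from their centroid
# are flagged as outliers. Thresholds here are assumptions for illustration.
import numpy as np
from sklearn.cluster import KMeans

def msd_kmeans(X: np.ndarray, k_sigma: float = 3.0,
               n_clusters: int = 3, dist_sigma: float = 2.0) -> np.ndarray:
    """Return a boolean mask marking suspected outliers in X (n_samples, n_features)."""
    # Phase 1: MSD filter removes extreme values before clustering.
    mu, sd = X.mean(axis=0), X.std(axis=0)
    inlier = np.all(np.abs(X - mu) <= k_sigma * sd, axis=1)
    # Phase 2: cluster retained points, then measure distance to centroids.
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X[inlier])
    d = np.linalg.norm(X - km.cluster_centers_[km.predict(X)], axis=1)
    cut = d[inlier].mean() + dist_sigma * d[inlier].std()
    return ~inlier | (d > cut)  # extreme values or far-from-centroid points
```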
Anomaly detection for indoor air quality (IAQ) data has become an important area of research, as air quality is closely related to human health and well-being. However, traditional statistical and shallow machine learning approaches to anomaly detection in the IAQ area cannot detect anomalies that involve correlations across many data points (often referred to as long-term dependencies). We propose a hybrid deep learning model that combines an LSTM with an Autoencoder for anomaly detection tasks in IAQ to address this issue. In our approach, the LSTM network comprises multiple LSTM cells that work with each other to learn the long-term dependencies of the data in a time-series sequence. The Autoencoder identifies the optimal threshold based on the reconstruction loss rates evaluated on every data point across all time-series sequences. Our experimental results, based on the Dunedin CO2 time-series dataset obtained through a real-world deployment in schools in New Zealand, demonstrate a very high and robust accuracy rate (99.50%) that outperforms other similar models.
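As a hedged illustration, the Keras sketch below builds an LSTM encoder-decoder of the kind the abstract describes. Layer sizes and the window length are illustrative assumptions, not the paper's configuration.

```python
# Minimal LSTM-Autoencoder sketch: an LSTM encoder compresses each window,
# an LSTM decoder reconstructs it, and windows with unusually high
# reconstruction error can later be flagged as anomalies.
from tensorflow import keras
from tensorflow.keras import layers

def build_lstm_ae(window: int = 48, n_features: int = 1) -> keras.Model:
    inp = keras.Input(shape=(window, n_features))
    z = layers.LSTM(32)(inp)                       # encoder: summarize the window
    x = layers.RepeatVector(window)(z)             # repeat latent code per timestep
    x = layers.LSTM(32, return_sequences=True)(x)  # decoder: unfold in time
    out = layers.TimeDistributed(layers.Dense(n_features))(x)
    model = keras.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")    # trained to reconstruct input
    return model
```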
Machine learning algorithms, including the Multi-layer Perceptron (MLP), have been widely used in intrusion detection systems. In this study, we propose a two-stage model that combines the Birch clustering algorithm and an MLP classifier to improve the performance of network anomaly multi-classification. In our proposed method, we first apply Birch or K-means as an unsupervised clustering algorithm to the CICIDS-2017 dataset to pre-group the data. The generated pseudo-label is then added as an additional feature to the training of the MLP-based classifier. The experimental results show that using Birch or K-means clustering for data pre-grouping can improve intrusion detection system performance. Our method achieves 99.73% accuracy in multi-classification using Birch clustering, which is better than similar studies using a stand-alone MLP model.
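A minimal sketch of the two-stage idea follows: Birch assigns each sample a cluster id, the id is appended as an extra feature, and an MLP is trained on the augmented vectors. The hyperparameters here are illustrative, not the paper's.

```python
# Hypothetical sketch of clustering-derived pseudo-labels as an extra feature.
import numpy as np
from sklearn.cluster import Birch
from sklearn.neural_network import MLPClassifier

def birch_mlp(X_train, y_train, X_test, n_clusters: int = 15):
    birch = Birch(n_clusters=n_clusters).fit(X_train)
    # Pseudo-labels from unsupervised clustering become one more input column.
    aug = lambda X, c: np.hstack([X, c.reshape(-1, 1)])
    clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300,
                        random_state=0)
    clf.fit(aug(X_train, birch.predict(X_train)), y_train)
    return clf.predict(aug(X_test, birch.predict(X_test)))
```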
The rapid adoption of generative artificial intelligence (GenAI) in research presents both opportunities and ethical challenges that should be carefully navigated. Although GenAI tools can enhance research efficiency through the automation of tasks such as literature review and data analysis, their use raises concerns about data accuracy, privacy, bias, and research integrity. This paper develops the ETHICAL framework, a practical guide for responsible GenAI use in research. Employing a constructivist case study examining multiple GenAI tools in real research contexts, the framework consists of seven key principles: Examine policies and guidelines, Think about social impacts, Harness understanding of the technology, Indicate use, Critically engage with outputs, Access secure versions, and Look at user agreements. Applying these principles will enable researchers to uphold research integrity while leveraging GenAI benefits. The framework addresses a critical gap between awareness of ethical issues and practical action steps, providing researchers with concrete guidance for ethical GenAI integration. This work has implications for research practice, institutional policy development, and the broader academic community as it adapts to an AI-enhanced research landscape. The ETHICAL framework can serve as a foundation for developing AI literacy in academic settings and promoting responsible innovation in research methodologies.
Operations and Supply Chain Management (OSCM) has continually evolved, incorporating a broad array of strategies, frameworks, and technologies to address complex challenges across industries. This encyclopedic article provides a comprehensive overview of contemporary strategies, tools, methods, principles, and best practices that define the field's cutting-edge advancements. It also explores the diverse environments where OSCM principles have been effectively implemented. The article is meant to be read in a nonlinear fashion. It should be used as a point of reference or first-port-of-call for a diverse pool of readers: academics, researchers, students, and practitioners.
This paper provides a systematic review of emerging control techniques used for railway Virtual Coupling (VC) studies. Train motion models are first reviewed, including model formulations and the force elements involved. Control objectives and typical design constraints are then elaborated. Next, the existing VC control techniques are surveyed and classified into five groups: consensus-based control, model predictive control, sliding mode control, machine learning-based control, and constraints-following control. Their advantages and disadvantages for VC applications are also discussed in detail. Furthermore, several directions for future study toward better controller development and implementation are presented. The purposes of this survey are to help researchers gain a systematic understanding of VC control, to spark more research into VC, and to further speed up the realization of this emerging technology in railway and other relevant fields such as road vehicles.
In recent years, cybersecurity has become a major concern in the adoption of smart applications. Especially in smart homes, where a large number of IoT devices are used, secure and trusted mechanisms can provide peace of mind for users. Accurate detection of cyber attacks is crucial; however, precise identification of the attack type plays a huge role in devising countermeasures to protect the system. Artificial Neural Networks (ANNs) have provided promising results for detecting security attacks on smart applications. However, due to the complex nature of the models used in this technique, it is not easy for normal users to trust ANN-based security solutions. In addition, selecting the right hyperparameters for the ANN architecture plays a crucial role in the accurate detection of security attacks, especially when it comes to identifying the subcategories of attacks. In this paper, we propose a model that addresses both the explainability of the ANN model and the selection of hyperparameters, so that the approach can be easily trusted and adopted by users of smart home applications. Our approach also uses a subset of the dataset for the optimal selection of hyperparameters, reducing the overhead of the ANN architecture design process. Distinctively, this paper focuses on the configuration, performance, and evaluation of an ANN architecture for identifying five categories and nine subcategories of attacks. Using a very recent IoT dataset, our approach showed high performance for intrusion detection, with 99.9%, 99.7%, and 97.7% accuracy for binary, category, and subcategory level classification of attacks.
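A hedged sketch of the subset-based tuning idea follows: search hyperparameters on a small random sample to keep the cost down, then retrain the chosen configuration on the full data. The search grid and subset fraction are assumptions for illustration.

```python
# Hypothetical sketch: cheap hyperparameter search on a data subset,
# followed by full training with the selected configuration.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

def tune_on_subset(X, y, frac: float = 0.1, seed: int = 0):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=int(frac * len(X)), replace=False)
    grid = {"hidden_layer_sizes": [(64,), (128, 64)],
            "learning_rate_init": [1e-3, 1e-4]}
    search = GridSearchCV(MLPClassifier(max_iter=200, random_state=seed),
                          grid, cv=3, n_jobs=-1)
    search.fit(X[idx], y[idx])               # cheap search on the subset
    best = MLPClassifier(max_iter=500, random_state=seed,
                         **search.best_params_)
    return best.fit(X, y)                    # full training with chosen config
```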
The measurement of fetal thalamus diameter (FTD) and fetal head circumference (FHC) is crucial in identifying abnormal fetal thalamus development, as it may lead to certain neuropsychiatric disorders in later life. However, manual measurements from 2D-US images are laborious, prone to high inter-observer variability, and complicated by the low signal-to-noise ratio of the images. Deep learning-based landmark detection approaches have shown promise in measuring biometrics from US images, but the current state-of-the-art (SOTA) algorithm, BiometryNet, is inadequate for FTD and FHC measurement due to its inability to account for the fuzzy edges of these structures and the complex shape of the FTD structure. To address these inadequacies, we propose a novel Swoosh Activation Function (SAF) designed to enhance the regularization of heatmaps produced by landmark detection algorithms. Our SAF serves as a regularization term to enforce an optimum mean squared error (MSE) level between predicted heatmaps, reducing the dispersiveness of hotspots in predicted heatmaps. Our experimental results demonstrate that SAF significantly improves measurement performance for FTD and FHC, with higher intraclass correlation coefficient scores for FTD and lower mean difference scores for FHC than those of the current SOTA algorithm, BiometryNet. Moreover, our proposed SAF is highly generalizable and architecture-agnostic. The SAF's coefficients can be configured for different tasks, making it highly customizable. Our study demonstrates that SAF is a novel method that can improve measurement accuracy in fetal biometry landmark detection. This improvement has the potential to contribute to better fetal monitoring and improved neonatal outcomes.
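The abstract does not give SAF's exact functional form, so the sketch below is only a stand-in for the stated idea: a penalty that is minimized when the MSE between predicted heatmaps sits at a chosen optimum level. The quadratic form, the pairing of heatmaps, and all coefficients are assumptions, not the paper's SAF formula.

```python
# Hypothetical stand-in for the stated idea, not the paper's SAF:
# a penalty minimized when the MSE between two predicted heatmaps sits at a
# chosen optimum level m_star, discouraging overly dispersed hotspots.
import torch

def heatmap_mse_regularizer(h1: torch.Tensor, h2: torch.Tensor,
                            m_star: float = 0.05,
                            weight: float = 1.0) -> torch.Tensor:
    """h1, h2: (B, K, H, W) predicted heatmaps for K landmarks."""
    mse = torch.mean((h1 - h2) ** 2)
    return weight * (mse - m_star) ** 2  # zero exactly at the optimum MSE level
```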
We address the issue of defining a semantics for deontic argumentation that supports weak permission. Some recent results show that grounded semantics do not support weak permission when there is a conflict between two obligations. We provide a definition of Deontic Argumentation Theory that accounts for weak permission, and we recall the result about grounded semantics. Then, we propose a new semantics that supports weak permission.
Point cloud processing methods leverage local and global point features to cater to downstream tasks, yet they often overlook the task-level context inherent in point clouds during the encoding stage. We argue that integrating task-level information into the encoding stage significantly enhances performance. To that end, we propose SMTransformer, which incorporates task-level information into a vector-based transformer by using a soft mask, generated from task-level queries and keys, to learn the attention weights. Additionally, to facilitate effective communication between features from the encoding and decoding layers in high-level tasks such as segmentation, we introduce a skip-attention-based up-sampling block. This block dynamically fuses features from various resolution points across the encoding and decoding layers. To mitigate the increase in network parameters and training time resulting from the complexity of the aforementioned blocks, we propose a novel shared position encoding strategy. This strategy allows various transformer blocks to share the same position information over the same resolution points, thereby reducing network parameters and training time without compromising accuracy. Experimental comparisons with existing methods on multiple datasets demonstrate the efficacy of SMTransformer and skip-attention-based up-sampling for point cloud processing tasks, including semantic segmentation and classification. In particular, we achieve state-of-the-art semantic segmentation results of 73.4% mIoU on S3DIS Area 5 and 62.4% mIoU on the SWAN dataset.
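As a hedged illustration of the soft-mask idea only, the sketch below gates vector-attention logits with learnable task-level query/key embeddings. The shapes, the relation function, and the gating rule are assumptions based on the abstract, not SMTransformer's actual design.

```python
# Hypothetical sketch: task-level queries/keys produce a per-channel soft
# mask that reweights vector attention over grouped neighbor features.
import torch
import torch.nn as nn

class SoftMaskAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        # Task-level query/key embeddings; their product gates each channel.
        self.task_q = nn.Parameter(torch.randn(dim))
        self.task_k = nn.Parameter(torch.randn(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """x: (N, M, C) grouped neighbor features for N center points."""
        logits = self.q(x) - self.k(x)                   # vector-attention relation
        mask = torch.sigmoid(self.task_q * self.task_k)  # (C,) soft task mask
        w = torch.softmax(logits * mask, dim=1)          # weights over M neighbors
        return (w * self.v(x)).sum(dim=1)                # (N, C) aggregated output
```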
We present a novel approach to the automated semantic analysis of legal texts using large language models (LLMs), targeting their transformation into formal representations in Defeasible Deontic Logic (DDL). We propose a structured pipeline that segments complex normative language into atomic snippets, extracts deontic rules, and evaluates them for syntactic and semantic coherence. Our methodology is evaluated across various LLM configurations, including prompt engineering strategies, fine-tuned models, and multi-stage pipelines, focusing on legal norms from the Australian Telecommunications Consumer Protections Code. Empirical results demonstrate promising alignment between machine-generated and expert-crafted formalizations, showing that LLMs, particularly when prompted effectively, can contribute significantly to scalable legal informatics.
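A hedged sketch of the pipeline's shape follows: segment a clause into atomic snippets, prompt an LLM for a DDL rule per snippet, and run a simple syntactic check. `call_llm` is a placeholder for whatever chat-completion client is used; the prompt wording and the rule-pattern check are illustrative assumptions, not the paper's grammar.

```python
# Hypothetical pipeline sketch: segmentation, rule extraction, syntax check.
import re

def call_llm(prompt: str) -> str:  # placeholder, not a real API
    raise NotImplementedError

def segment(clause: str) -> list[str]:
    # Naive atomic segmentation on sentence/semicolon boundaries.
    return [s.strip() for s in re.split(r"(?<=[.;])\s+", clause) if s.strip()]

def extract_rules(clause: str) -> list[str]:
    rules = []
    for snippet in segment(clause):
        out = call_llm(
            "Translate the following legal snippet into a single "
            f"Defeasible Deontic Logic rule:\n{snippet}")
        # Keep only outputs that look like 'antecedent => [O]consequent'.
        if re.fullmatch(r".+=>\s*\[[OPF]\].+", out.strip()):
            rules.append(out.strip())
    return rules
```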
A Distributed Denial-of-Service (DDoS) attack is a malicious attempt to disrupt the regular traffic of a targeted server, service, or network by sending a flood of traffic to overwhelm the target or its surrounding infrastructure. As technology improves, hackers develop new attacks. Traditional statistical and shallow machine learning techniques can detect superficial anomalies based on shallow data and feature selection; however, these approaches cannot detect unseen DDoS attacks. In this context, we propose a reconstruction-based anomaly detection model named LSTM-Autoencoder (LSTM-AE), which combines two deep learning models for detecting DDoS attack anomalies. The proposed structure of long short-term memory (LSTM) networks provides units that work with each other to learn the long- and short-term correlations of data within a time-series sequence. Autoencoders are used to identify the optimal threshold based on the reconstruction error rates evaluated on each sample across all time-series sequences. As such, the combined LSTM-AE model can not only learn delicate sub-pattern differences between attacks and benign traffic flows, but also reconstruct benign traffic with a low-range reconstruction error, with attacks presenting a larger reconstruction error. In this research, we trained and evaluated our proposed LSTM-AE model on reflection-based DDoS attacks (DNS, LDAP, and SNMP). The results of our experiments demonstrate that our method performs better than other state-of-the-art methods, especially for LDAP attacks, with an accuracy of over 99%.
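Complementing the LSTM-AE model sketch given after the IAQ abstract above, the snippet below illustrates the thresholding step this abstract describes: score windows by reconstruction error on benign traffic and set the cut-off from that error distribution. The 99th-percentile rule is an assumption; the paper derives its own optimal threshold.

```python
# Hypothetical thresholding sketch for any trained autoencoder with a
# predict() method over (N, window, features) arrays.
import numpy as np

def fit_threshold(model, X_benign: np.ndarray, q: float = 99.0) -> float:
    recon = model.predict(X_benign)
    errors = np.mean((X_benign - recon) ** 2, axis=(1, 2))  # per-window MSE
    return float(np.percentile(errors, q))

def flag_attacks(model, X: np.ndarray, threshold: float) -> np.ndarray:
    recon = model.predict(X)
    errors = np.mean((X - recon) ** 2, axis=(1, 2))
    return errors > threshold  # True where reconstruction error is anomalous
```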
Vegetation segmentation from roadside data is a field that has received relatively little attention in existing studies, but it has great potential in a wide range of real-world applications, such as road safety assessment and vegetation condition monitoring. In this paper, we present a novel approach that generates class-semantic color-texture textons and aggregates superpixel-based texton occurrences for vegetation segmentation in natural roadside images. Pixel-level class-semantic textons are first learnt by generating two individual sets of bag-of-words visual dictionaries from color and filter-bank texture features separately for each object class, using manually cropped training data. A test image is first over-segmented into a set of homogeneous superpixels. The color and texture features of all pixels in each superpixel are extracted and further mapped to one of the learnt textons using the nearest-distance metric, resulting in a color and a texture texton occurrence matrix. The color and texture texton occurrences are aggregated using a linear mixing method over each superpixel, and the segmentation is finally achieved using a simple yet effective majority voting strategy. Evaluations on two public image datasets from videos collected by the Department of Transport and Main Roads (DTMR), Queensland, Australia, and a public roadside grass dataset show the high accuracy of the proposed approach. We also demonstrate the effectiveness of the approach for vegetation segmentation in real-world scenarios.
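A hedged numpy sketch of the labeling steps follows: map each pixel feature to its nearest learnt texton, mix per-superpixel color and texture class votes linearly, and take a majority vote. Dictionary learning and superpixel generation are assumed to be done elsewhere, and the mixing weight is an assumption.

```python
# Hypothetical sketch: nearest-texton mapping, linear mixing of color and
# texture votes per superpixel, and majority voting for the final label.
import numpy as np

def nearest_texton(feats: np.ndarray, textons: np.ndarray) -> np.ndarray:
    """Map each pixel feature (N, D) to the index of its nearest texton (T, D)."""
    d = np.linalg.norm(feats[:, None, :] - textons[None, :, :], axis=2)
    return d.argmin(axis=1)

def majority_vote_segment(pix_c: np.ndarray, pix_t: np.ndarray,
                          sp_labels: np.ndarray, n_classes: int,
                          alpha: float = 0.5) -> np.ndarray:
    """pix_c/pix_t: per-pixel class votes derived from color/texture textons."""
    out = np.empty(sp_labels.max() + 1, dtype=int)
    for sp in range(out.size):
        m = sp_labels == sp
        votes = (alpha * np.bincount(pix_c[m], minlength=n_classes)
                 + (1 - alpha) * np.bincount(pix_t[m], minlength=n_classes))
        out[sp] = votes.argmax()   # winning class for this superpixel
    return out
```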
Metamorphic testing has become a mainstream technique for addressing the notorious oracle problem in software testing, thanks to its great success in revealing real-life bugs in a wide variety of software systems. Metamorphic relations, the core component of metamorphic testing, have continuously attracted research interest from both academia and industry. In the last decade, a rapidly increasing number of studies have been conducted to systematically generate metamorphic relations from various sources and for different application domains. In this article, based on a systematic review of the state of the art in metamorphic relation generation, we summarize and highlight visions for further advancing the theory and techniques for identifying and constructing metamorphic relations, and discuss potential research trends in related areas.
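For readers unfamiliar with the core concept, here is a small illustrative metamorphic relation (not taken from the article): two executions check each other, so no expected output (oracle) is ever needed.

```python
# Metamorphic relation example: sin(x) and sin(pi - x) must agree, so the
# test compares two program executions instead of consulting an oracle.
import math

def test_sine_symmetry_mr(x: float = 0.7) -> None:
    source = math.sin(x)                 # source test case
    follow_up = math.sin(math.pi - x)    # follow-up test case
    assert math.isclose(source, follow_up, rel_tol=1e-12)

test_sine_symmetry_mr()
```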
Graph theory has become a very critical component in many applications in the computing field including networking and security. Unfortunately, it is also amongst the most complex topics to understand and apply. In this paper, we review some of the key applications of graph theory in network security. We first cover some algorithmic aspects, then present network coding and its relation to routing.
The global boom in FinTech can be seen everywhere. FinTech has created innovative disruptions to traditional, long-established financial institutions (e.g., banks and insurance companies) in financial services markets. Despite its popularity, there are many different definitions of FinTech. This problem occurs because many existing studies focus only on a particular aspect of FinTech without a comprehensive and in-depth analysis, and it can hinder the further development and industrial application of FinTech. In view of this problem, we perform a narrative review of over 100 relevant studies and reports, with a view to developing a FinTech clustering framework that provides a more comprehensive and holistic view of FinTech. Furthermore, we use an Indian FinTech firm to illustrate how to apply our clustering framework for analysis.
Bushfires are among the major natural disasters that cause huge losses to livelihoods and the environment. Understanding and analyzing the severity of bushfires is crucial for effective management and mitigation strategies, helping to prevent the extensive damage and loss caused by these natural disasters. This study presents an in-depth analysis of bushfire severity in Australia over the last twelve years, combining remote sensing data and machine learning techniques to predict future fire trends. Utilizing Landsat imagery and integrating spectral indices such as NDVI, NBR, and the Burn Index, along with topographical and climatic factors, we developed a robust predictive model using XGBoost. The model achieved an accuracy of 86.13%, demonstrating its effectiveness in predicting fire severity across diverse Australian ecosystems. By analyzing historical trends and integrating factors such as population density and vegetation cover, we identify areas at high risk of future severe bushfires. This research also identifies key regions at risk, providing data-driven recommendations for targeted firefighting efforts. The findings contribute valuable insights into fire management strategies, enhancing resilience to future fire events in Australia. We also propose future work on a UAV-based swarm coordination model to enhance real-time fire prediction and firefighting capabilities in the most vulnerable regions.
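As a hedged illustration of the modeling step, the sketch below derives NDVI and NBR from Landsat bands, stacks them with additional covariates, and fits an XGBoost classifier. The band arguments, covariate layout, and hyperparameters are illustrative assumptions, not the study's configuration.

```python
# Hypothetical sketch: spectral indices plus covariates feeding XGBoost.
import numpy as np
from xgboost import XGBClassifier

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    return (nir - red) / (nir + red + 1e-6)     # vegetation greenness

def nbr(nir: np.ndarray, swir2: np.ndarray) -> np.ndarray:
    return (nir - swir2) / (nir + swir2 + 1e-6)  # burn severity signal

def fit_severity_model(nir, red, swir2, extra_covariates, y):
    """extra_covariates: (N, K) slope, elevation, rainfall, etc.; y: severity class."""
    X = np.column_stack([ndvi(nir, red), nbr(nir, swir2), extra_covariates])
    model = XGBClassifier(n_estimators=400, max_depth=6,
                          learning_rate=0.1, eval_metric="mlogloss")
    return model.fit(X, y)
```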
The rapid advancement of Generative Artificial Intelligence (GenAI) has introduced new opportunities for transforming higher education, particularly in fields that require analytical reasoning and regulatory compliance, such as cybersecurity management. This study presents a structured framework for integrating GenAI tools into cybersecurity education, demonstrating their role in fostering critical thinking, real-world problem-solving, and regulatory awareness. The implementation strategy followed a two-stage approach, embedding GenAI within tutorial exercises and assessment tasks. Tutorials enabled students to generate, critique, and refine AI-assisted cybersecurity policies, while assessments required them to apply AI-generated outputs to real-world scenarios, ensuring alignment with industry standards and regulatory requirements. Findings indicate that AI-assisted learning significantly enhanced students' ability to evaluate security policies, refine risk assessments, and bridge theoretical knowledge with practical application. Student reflections and instructor observations revealed improvements in analytical engagement, yet challenges emerged regarding AI over-reliance, variability in AI literacy, and the contextual limitations of AI-generated content. Through structured intervention and research-driven refinement, students were able to recognize AI strengths as a generative tool while acknowledging its need for human oversight. This study further highlights the broader implications of AI adoption in cybersecurity education, emphasizing the necessity of balancing automation with expert judgment to cultivate industry-ready professionals. Future research should explore the long-term impact of AI-driven learning on cybersecurity competency, as well as the potential for adaptive AI-assisted assessments to further personalize and enhance educational outcomes.