East West University
This paper evaluates the visualization literacy of modern Large Language Models (LLMs) and introduces a novel prompting technique called Charts-of-Thought. We tested three state-of-the-art LLMs (Claude-3.7-sonnet, GPT-4.5 preview, and Gemini-2.0-pro) on the Visualization Literacy Assessment Test (VLAT) using standard prompts and our structured approach. The Charts-of-Thought method guides LLMs through a systematic data extraction, verification, and analysis process before answering visualization questions. Our results show Claude-3.7-sonnet achieved a score of 50.17 using this method, far exceeding the human baseline of 28.82. This approach improved performance across all models, with score increases of 21.8% for GPT-4.5, 9.4% for Gemini-2.0, and 13.5% for Claude-3.7 compared to standard prompting. The performance gains were consistent across original and modified VLAT charts, with Claude correctly answering 100% of questions for several chart types that previously challenged LLMs. Our study reveals that modern multimodal LLMs can surpass human performance on visualization literacy tasks when given the proper analytical framework. These findings establish a new benchmark for LLM visualization literacy and demonstrate the importance of structured prompting strategies for complex visual interpretation tasks. Beyond improving LLM visualization literacy, Charts-of-Thought could also enhance the accessibility of visualizations, potentially benefiting individuals with visual impairments or lower visualization literacy.
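To make the pipeline concrete, here is a minimal Python sketch of what a Charts-of-Thought-style structured prompt could look like. The stage wording and the `build_prompt` helper are illustrative assumptions based on the extraction-verification-analysis description above, not the paper's verbatim template.

```python
# Hypothetical Charts-of-Thought-style prompt template. The stage
# wording is an assumption inferred from the abstract's description
# (extract -> verify -> analyze -> answer), not the paper's exact text.

CHARTS_OF_THOUGHT_TEMPLATE = """You are given a chart image and a question.
Work through the following stages before answering:

1. EXTRACT: List every visual element (axes, labels, legend, marks)
   and read off the underlying data values into a table.
2. VERIFY: Re-check each extracted value against the chart and correct
   any misread labels or values.
3. ANALYZE: Using only the verified table, reason step by step about
   what the question asks.
4. ANSWER: State the final answer choice.

Question: {question}
Options: {options}
"""

def build_prompt(question: str, options: list[str]) -> str:
    """Fill the structured template for a single VLAT-style item."""
    return CHARTS_OF_THOUGHT_TEMPLATE.format(
        question=question, options=", ".join(options)
    )

if __name__ == "__main__":
    print(build_prompt(
        "What was the highest price of a barrel of oil in 2015?",
        ["$37.04", "$47.12", "$57.36", "$61.36"],
    ))
```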
The rapid growth of digital data has heightened the demand for efficient lossless compression methods. However, existing algorithms exhibit trade-offs: some achieve high compression ratios, others excel in encoding or decoding speed, and none consistently perform best across all dimensions. This mismatch complicates algorithm selection for applications where multiple performance metrics are simultaneously critical, such as medical imaging, which requires both compact storage and fast retrieval. To address this challenge, we present a mathematical framework that integrates compression ratio, encoding time, and decoding time into a unified performance score. The model normalizes and balances these metrics through a principled weighting scheme, enabling objective and fair comparisons among diverse algorithms. Extensive experiments on image and text datasets validate the approach, showing that it reliably identifies the most suitable compressor for different priority settings. Results also reveal that while modern learning-based codecs often provide superior compression ratios, classical algorithms remain advantageous when speed is paramount. The proposed framework offers a robust and adaptable decision-support tool for selecting optimal lossless data compression techniques, bridging theoretical measures with practical application needs.
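As an illustration of the idea, the sketch below combines the three metrics into a single score via min-max normalization and a weighted sum. The normalization choice and the default weights are assumptions; the paper's exact weighting scheme may differ.

```python
# Illustrative unified compression score: higher is better. Min-max
# normalization and the default weights are assumptions, not the
# paper's published scheme.

from dataclasses import dataclass

@dataclass
class Codec:
    name: str
    ratio: float      # compression ratio, higher is better
    enc_time: float   # encoding time in seconds, lower is better
    dec_time: float   # decoding time in seconds, lower is better

def unified_score(codecs, w_ratio=0.5, w_enc=0.25, w_dec=0.25):
    """Return {codec name: score in [0, 1]} under the given priorities."""
    def norm(vals, higher_better):
        lo, hi = min(vals), max(vals)
        span = (hi - lo) or 1.0  # avoid division by zero on ties
        return [(v - lo) / span if higher_better else (hi - v) / span
                for v in vals]

    r = norm([c.ratio for c in codecs], higher_better=True)
    e = norm([c.enc_time for c in codecs], higher_better=False)
    d = norm([c.dec_time for c in codecs], higher_better=False)
    return {c.name: w_ratio * r[i] + w_enc * e[i] + w_dec * d[i]
            for i, c in enumerate(codecs)}

scores = unified_score([
    Codec("zstd", ratio=3.1, enc_time=0.8, dec_time=0.20),
    Codec("lzma", ratio=4.0, enc_time=9.5, dec_time=1.10),
    Codec("lz4",  ratio=2.1, enc_time=0.1, dec_time=0.05),
])
print(max(scores, key=scores.get))  # best codec under these weights
```

Shifting the weights toward `w_enc` and `w_dec` models the speed-critical settings mentioned above, where classical codecs tend to win.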
Music can evoke various emotions, and with the advancement of technology it has become more accessible to people. Bangla music, which portrays different human emotions, lacks sufficient research. The authors of this article aim to analyze Bangla songs and classify their moods based on the lyrics. To achieve this, the research compiled a dataset of 4,000 Bangla song lyrics with genres and applied Natural Language Processing and the BERT algorithm to analyze the data. Among the 4,000 songs, 1,513 represent the sad mood, 1,362 the romantic mood, 886 happiness, and the remaining 239 relaxation. By embedding the lyrics, the authors classified the songs into four moods: Happy, Sad, Romantic, and Relaxed. This research is significant because it enables multi-class classification of song moods, making music more relatable to people's emotions. The article presents the automated results for the four moods derived accurately from the song lyrics.
The performance of convolutional neural networks (CNNs) depends heavily on their architectures. The transfer learning performance of a CNN relies quite strongly on the selection of its trainable layers. Selecting the most effective update layers for a given target dataset often requires expert knowledge of CNN architecture, which many practitioners do not possess. General users prefer to use an available architecture (e.g., GoogLeNet, ResNet, EfficientNet) developed by domain experts. With the ever-growing number of layers, handpicking the update layers is becoming increasingly difficult and cumbersome. Therefore, in this paper we explore the application of a genetic algorithm to mitigate this problem. The convolutional layers of popular pretrained networks are often grouped into modules that constitute their building blocks. We devise a genetic algorithm to select blocks of layers for updating the parameters. Experimenting with EfficientNetB0 pre-trained on ImageNet and using Food-101, CIFAR-100, and MangoLeafBD as target datasets, we show that our algorithm yields results similar to or better than the baseline in terms of accuracy, while requiring lower training and evaluation time because fewer parameters are learned. We also devise a metric called block importance to measure the efficacy of each block as an update block, and analyze the importance of the blocks selected by our algorithm.
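A minimal sketch of the block-selection idea follows: a genetic algorithm evolving binary masks over pretrained blocks, where a set bit means "update this block during fine-tuning". The population size, mutation rate, and the stubbed fitness function are placeholder assumptions.

```python
# Toy genetic algorithm over binary block masks for choosing which
# pretrained blocks to fine-tune. In a real run, fitness() would
# unfreeze the masked blocks, fine-tune, and return validation
# accuracy; here it is stubbed so the sketch runs standalone.

import random

NUM_BLOCKS = 7  # e.g., the seven main blocks of EfficientNetB0

def fitness(mask):
    """Placeholder objective standing in for validation accuracy."""
    return sum(mask) * 0.01 + random.random() * 0.001

def evolve(pop_size=20, generations=10, mut_rate=0.1):
    pop = [[random.randint(0, 1) for _ in range(NUM_BLOCKS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, NUM_BLOCKS)   # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (random.random() < mut_rate) for g in child]  # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())  # e.g., [0, 1, 1, 0, 0, 1, 1] -> blocks to update
```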
Liver diseases are a serious global health concern that requires precise and timely diagnosis to improve patients' survival chances. The existing literature has applied numerous machine learning and deep learning models to classify liver diseases, but most suffer from issues such as high misclassification error, poor interpretability, prohibitive computational expense, and weak preprocessing strategies. To address these drawbacks, we introduce StackLiverNet, an interpretable stacked ensemble model tailored to liver disease detection. The framework uses advanced data preprocessing and feature selection techniques to increase model robustness and predictive ability. Random undersampling is performed to handle class imbalance and keep training balanced. StackLiverNet is an ensemble of several hyperparameter-optimized base classifiers whose complementary strengths are combined through a LightGBM meta-model. The model demonstrates excellent performance, with a test accuracy of 99.89%, a Cohen's Kappa of 0.9974, and an AUC of 0.9993, with only 5 misclassifications, and training and inference speeds amenable to clinical practice (training time 4.2783 s, inference time 0.1106 s). In addition, Local Interpretable Model-Agnostic Explanations (LIME) are applied to generate transparent explanations of individual predictions, revealing high Alkaline Phosphatase and moderate SGOT levels as important indicators of liver disease. SHAP was also used to rank features by their global contribution to predictions, while the Morris method confirmed the most influential features through sensitivity analysis.
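The stacking idea can be sketched with scikit-learn, imbalanced-learn, and LightGBM as below. Only the LightGBM meta-model and random undersampling are named in the abstract; the particular base learners, hyperparameters, and toy data here are assumptions.

```python
# Sketch of a StackLiverNet-style pipeline: undersample to balance
# classes, then stack base classifiers under a LightGBM meta-model.
# Base learner choices and toy data are assumptions.

from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from lightgbm import LGBMClassifier
from imblearn.under_sampling import RandomUnderSampler

# Toy stand-in for the (imbalanced) liver dataset.
X, y = make_classification(n_samples=1000, weights=[0.8], random_state=42)

# Random undersampling to balance classes, as described in the abstract.
X_bal, y_bal = RandomUnderSampler(random_state=42).fit_resample(X, y)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    final_estimator=LGBMClassifier(),  # meta-model named in the abstract
    cv=5,  # meta-model trains on out-of-fold base predictions
)
print(stack.fit(X_bal, y_bal).score(X_bal, y_bal))
```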
Chronic Kidney Disease (CKD) is a major global health issue affecting millions of people worldwide, with a rising mortality rate. Early detection is required to slow CKD progression and improve patient outcomes. Nevertheless, traditional diagnostic methods have limitations, especially in resource-constrained settings. This study proposes an advanced machine learning approach to enhance CKD detection by evaluating four models: Random Forest (RF), Multi-Layer Perceptron (MLP), Logistic Regression (LR), and a fine-tuned CatBoost algorithm. Among these, the fine-tuned CatBoost model demonstrated the best overall performance, with an accuracy of 98.75%, an AUC of 0.9993, and a Kappa score of 97.35%. The proposed CatBoost model uses the nature-inspired Simulated Annealing algorithm to select the most important features, Cuckoo Search to adjust outliers, and grid search to fine-tune its settings for improved prediction accuracy. Feature significance is explained by SHAP, a well-known XAI technique, to make the proposed model's decision-making process transparent and to build trust in diagnostic systems. Using SHAP, the significant clinical features were identified as specific gravity, serum creatinine, albumin, hemoglobin, and diabetes mellitus. This research demonstrates the potential of advanced machine learning techniques for CKD detection, particularly in low- and middle-income healthcare settings where prompt and correct diagnoses are vital. The study seeks to provide a highly accurate, interpretable, and efficient diagnostic tool to support early intervention and improved healthcare outcomes for all CKD patients.
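The SHAP-based global ranking step might look like the following sketch, using a toy stand-in for the clinical CKD data; the feature names are taken from the abstract, while everything else (data, model settings) is assumed.

```python
# Sketch of ranking features by mean |SHAP| for a CatBoost model.
# The synthetic data below stands in for the real clinical dataset.

import numpy as np
import pandas as pd
import shap
from catboost import CatBoostClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(200, 5)),
                 columns=["specific_gravity", "serum_creatinine",
                          "albumin", "hemoglobin", "diabetes_mellitus"])
# Toy label loosely driven by one feature, for illustration only.
y = (X["serum_creatinine"] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = CatBoostClassifier(iterations=100, verbose=False).fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)

# Global importance = mean absolute SHAP value per feature.
importance = np.abs(shap_values).mean(axis=0)
print(sorted(zip(X.columns, importance), key=lambda t: -t[1]))
```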
Hate speech has spread more rapidly through the daily use of technology and, most notably, through the sharing of negative opinions or feelings on social media. Although numerous works have been carried out on detecting hate speech in English, German, and other languages, very few address the Bengali language, even though millions of people communicate on social media in Bengali. The few existing works need improvement in both accuracy and interpretability. This article proposes an encoder-decoder-based machine learning model, a popular tool in NLP, to classify users' Bengali comments on Facebook pages. A dataset of 7,425 Bengali comments spanning seven distinct categories of hate speech was used to train and evaluate the model. 1D convolutional layers were used to extract and encode local features from the comments. Finally, attention-based, LSTM, and GRU decoders were used to predict the hate speech categories. Among the three encoder-decoder configurations, the attention-based decoder obtained the best accuracy (77%).
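A minimal Keras sketch of the described pipeline is shown below, with a 1D convolutional encoder and an attention layer over the encoded sequence feeding a 7-way classification head; all layer sizes are placeholder assumptions, and the attention head here is a simplified stand-in for the paper's attention-based decoder.

```python
# Simplified sketch: Conv1D encoder + attention over encoded features
# for 7-way hate-speech classification. Sizes are assumptions.

import tensorflow as tf
from tensorflow.keras import layers

VOCAB, MAXLEN, CLASSES = 20000, 100, 7

inp = layers.Input(shape=(MAXLEN,))
x = layers.Embedding(VOCAB, 128)(inp)
x = layers.Conv1D(64, 5, padding="same", activation="relu")(x)  # local features
attn = layers.Attention()([x, x])              # attend over conv features
x = layers.GlobalAveragePooling1D()(attn)
out = layers.Dense(CLASSES, activation="softmax")(x)

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```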
A recommendation system is a platform that helps people easily find what they need within a few seconds, based on the preferences of similar users or items. In this digital era, the internet provides huge opportunities to use a wealth of open resources for our own needs, but with so many resources available, finding the precise one is difficult. Recommendation systems make this easier. A research-paper recommendation system serves people with common research interests using a collaborative filtering recommender. In this paper, coauthor, keyword, reference, and common-citation similarities are calculated using Jaccard similarity to compute the final similarity and find the top-n similar users. Research paper recommendations are then made based on the target user's top-n similar users. Finally, the accuracy of the recommendation system is evaluated, and the proposed system yields promising results.
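The core similarity computation can be sketched directly. Jaccard similarity is standard; the simple averaging of the four components into the final similarity is an assumption, as the abstract does not specify the aggregation.

```python
# Jaccard-based user similarity over coauthor, keyword, reference,
# and citation sets. Averaging the four components is an assumption.

def jaccard(a: set, b: set) -> float:
    """|A ∩ B| / |A ∪ B|, defined as 0 for two empty sets."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def final_similarity(u: dict, v: dict) -> float:
    """u, v: dicts with 'coauthors', 'keywords', 'references',
    'citations' sets for each researcher."""
    parts = ["coauthors", "keywords", "references", "citations"]
    return sum(jaccard(u[p], v[p]) for p in parts) / len(parts)

def top_n_similar(target: dict, users: dict, n: int = 5):
    scored = [(name, final_similarity(target, u))
              for name, u in users.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:n]

alice = {"coauthors": {"bob"}, "keywords": {"nlp", "ir"},
         "references": {"p1", "p2"}, "citations": {"p3"}}
peers = {"carol": {"coauthors": {"bob"}, "keywords": {"nlp"},
                   "references": {"p2"}, "citations": {"p3", "p4"}}}
print(top_n_similar(alice, peers, n=1))
```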
COVID-19 has caused one of the deadliest pandemics in the world today, with the contagious virus spreading like wildfire across the globe. To minimize its spread, the World Health Organization (WHO) has mandated protocols for wearing face masks and maintaining a physical distance of 6 feet. In this paper, we develop a system that detects whether that distance is properly maintained and whether people are wearing masks correctly. We use a customized attention-InceptionV3 model to identify these two components. We used two different datasets totaling 10,800 images, covering faces both with and without masks. Training accuracy reached 98% and validation accuracy 99.5%, with a precision of around 98.2% at 25.0 frames per second (FPS). With this system, we can identify high-risk areas where the virus is most likely to spread, which may help authorities locate risky areas and promptly alert local people to take proper precautions.
Semi-transparent photovoltaic devices for building-integrated applications have the potential to provide simultaneous power generation and natural light penetration. CuIn$_{1-x}$Ga$_x$Se$_2$ (CIGS) is an established, mature technology for thin-film photovoltaics; however, its potential for Semi-Transparent Photovoltaics (STPV) is yet to be explored. In this paper, we present its carrier transport physics, explaining the trends seen in recently published experiments. STPV requires deposition of films only a few hundred nanometers thick to make them transparent, and such devices manifest several unique properties compared to a conventional thin-film solar cell. Our analysis shows that the short-circuit current $J_{sc}$ is dominated by carriers generated in the depletion region, making it nearly independent of bulk and back-surface recombination. The bulk recombination, which limits the open-circuit voltage $V_{oc}$, appears higher than usual, attributable to numerous grain boundaries. When the absorber layer is reduced below 500 nm, the grain size shrinks, resulting in more grain boundaries and higher resistance. This produces an inverse relationship between series resistance and absorber thickness. We also present a thickness-dependent model of shunt resistance, showing its impact in these ultra-thin devices. For various scenarios of bulk and interface recombination, shunt and series resistance, Average Visible Transmittance (AVT), and composition of CuIn$_{1-x}$Ga$_x$Se$_2$, we project the efficiency limit, which for most practical cases is found to be $\leq$ 10% for AVT $\geq$ 25%.
Agriculture is one of the few remaining sectors yet to receive proper attention from the machine learning community. The importance of datasets in machine learning cannot be overemphasized, and the lack of standard, publicly available datasets related to agriculture prevents practitioners from harnessing the full benefit of these powerful computational predictive tools and techniques. To improve this scenario, we develop, to the best of our knowledge, the first standard, ready-to-use, publicly available dataset of mango leaves. The images were collected from four mango orchards in Bangladesh, one of the top mango-growing countries in the world. The dataset contains 4,000 images of about 1,800 distinct leaves covering seven diseases. Although the dataset was developed using mango leaves from Bangladesh only, it deals with diseases common to many countries, so it is likely to be applicable to identifying mango diseases elsewhere as well, thereby boosting mango yield. This dataset is expected to draw wide attention from machine learning researchers and practitioners in the field of automated agriculture.
Federated learning (FL), a form of distributed machine learning, has gained popularity as privacy-aware Machine Learning (ML) systems have emerged: it prevents privacy leakage by building a global model while decentralized edge clients train individually on their own private data. Existing works, however, employ privacy mechanisms such as Secure Multiparty Computation (SMC) and Differential Privacy (DP) that are highly susceptible to interference, incur massive computational overhead, and suffer from low accuracy. With the increasingly broad deployment of FL systems, it is challenging to ensure fairness and maintain active client participation. Very few works ensure reasonably satisfactory performance for the numerous diverse clients, and most fail to prevent potential bias against particular demographics. Current efforts fail to strike a compromise between privacy, fairness, and model performance in FL systems and are vulnerable to a number of additional problems. In this paper, we provide a comprehensive survey of the basic concepts of FL, the existing privacy challenges and techniques, and relevant works concerning privacy in FL. We also provide an extensive overview of the growing fairness challenges, existing fairness notions, and the limited works that address both privacy and fairness in FL. By comprehensively describing existing FL systems, we present potential future directions for privacy-preserving and fairness-aware FL systems.
This study uses machine vision and drone technologies to propose a unique method for diagnosing cucumber diseases in agriculture. The backbone of this research is a painstakingly curated dataset of hyperspectral photographs acquired under genuine field conditions. Unlike earlier datasets, this study includes a wide variety of disease types, allowing for precise early-stage detection. After considerable data augmentation, the model achieves an excellent 87.5% accuracy in distinguishing eight distinct cucumber diseases. The incorporation of drone technology for high-resolution imaging improves disease evaluation. This development has enormous potential for improving crop management, lowering labor costs, and increasing agricultural productivity. By automating disease detection, this research represents a significant step toward a more efficient and sustainable agricultural future.
Monolayer MoS$_2$, MoSe$_2$, MoTe$_2$, WS$_2$, WSe$_2$, and black phosphorus field-effect transistors (FETs) operating in the low-voltage (LV) regime (0.3 V), with geometries from the 2019 and 2028 nodes of the 2013 International Technology Roadmap for Semiconductors (ITRS), are benchmarked along with an ultra-thin-body Si FET. Current can increase or decrease with scaling, and the trend is strongly correlated with the effective mass. For LV operation at the 2028 node, an effective mass of ~0.4 $m_0$, corresponding to that of WSe$_2$, gives the maximum drive current. The short 6 nm gate length combined with LV operation is forgiving in its requirements for material quality and contact resistance. In this LV regime, device and circuit performance are competitive using currently measured values of mobility and contact resistance for the monolayer two-dimensional materials.
Agriculture activity monitoring must deal with large amounts of data originating from various organizations (weather stations, agriculture repositories, field management, farm management, universities, etc.) and from the general public. Therefore, a scalable environment with flexible information access, easy communication, and real-time collaboration from all types of computing devices, including mobile handheld devices (smartphones, PDAs, iPads) and geo-sensor devices, is essential. The system must be accessible and scalable, and it must be transparent with respect to location, migration, and resources. In addition, the framework should support modern information retrieval and management, unstructured-to-structured information processing, task prioritization, task distribution, workflow and task scheduling, processing power, and data storage. Thus, High Scalability Computing (HSC), i.e., cloud-based systems with big data analytics, can be a prominent and convincing solution for this circumstance. In this paper, we propose an integrated geo-information framework (crop model, cloud, and big data analytics) to support agriculture activity monitoring systems.
Sentiment analysis (SA) is the process of identifying the emotional tone or polarity within a given text, aiming to uncover the user's complex emotions and inner feelings. While sentiment analysis has been extensively studied for languages like English, research in Bengali remains limited, particularly for fine-grained sentiment categorization. This work aims to bridge this gap by developing a novel approach that integrates rule-based algorithms with pre-trained language models. We developed a dataset from scratch, comprising over 15,000 manually labeled reviews, and constructed a Lexicon Data Dictionary that assigns polarity scores to the reviews. We then developed a novel rule-based algorithm, Bangla Sentiment Polarity Score (BSPS), capable of generating sentiment scores and classifying reviews into nine distinct sentiment categories. To assess this method's performance, we evaluated the classified sentiments using BanglaBERT, a pre-trained transformer-based language model. We also performed sentiment classification directly with BanglaBERT on the original data and evaluated that model's results. Our analysis revealed that the BSPS + BanglaBERT hybrid approach outperformed the standalone BanglaBERT model, achieving higher accuracy, precision, and more nuanced classification across the nine sentiment categories. These results underscore the value of combining rule-based and pre-trained language model approaches for enhanced sentiment analysis in Bengali and suggest pathways for future research in languages with similar linguistic complexities.
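A toy sketch of a BSPS-style rule-based scorer is given below: lexicon polarity scores are summed over tokens, negation flips the next score, and the total is binned into nine categories. The lexicon entries, negation rule, and bin edges are illustrative assumptions, not the published algorithm.

```python
# Toy BSPS-style scorer: lexicon lookup + negation flip + nine-way
# binning. Lexicon entries, negation handling, and bin edges are
# all illustrative assumptions.

LEXICON = {"ভালো": 2.0, "চমৎকার": 3.0, "খারাপ": -2.0}  # toy entries
NEGATORS = {"না", "নয়"}

BINS = [(-9, "extremely negative"), (-6, "very negative"),
        (-3, "negative"), (-1, "slightly negative"), (1, "neutral"),
        (3, "slightly positive"), (6, "positive"),
        (9, "very positive"), (float("inf"), "extremely positive")]

def bsps_score(tokens):
    score, negate = 0.0, False
    for tok in tokens:
        if tok in NEGATORS:
            negate = True        # flip polarity of the next scored token
            continue
        polarity = LEXICON.get(tok, 0.0)
        score += -polarity if negate else polarity
        negate = False
    return score

def classify(tokens):
    s = bsps_score(tokens)
    return next(label for edge, label in BINS if s <= edge)

print(classify(["খাবারটা", "ভালো"]))  # -> "slightly positive"
```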
The emergence of Software-Defined Networking (SDN) has changed network structure by separating the control plane from the data plane. However, this innovation has also increased susceptibility to DDoS attacks. Existing detection techniques are often ineffective due to data imbalance and accuracy issues; thus, a considerable research gap exists regarding DDoS detection methods suitable for SDN contexts. This research attempts to detect DDoS attacks more effectively using machine learning algorithms: RF, SVC, KNN, MLP, and XGB. Both balanced and imbalanced datasets were used to measure the models' performance in terms of accuracy and AUC. RF and XGB both achieved perfect scores of 1.0000 in accuracy and AUC, but XGB yielded the lowest Brier score, indicating the highest reliability. MLP achieved an accuracy of 99.93%, KNN 97.87%, and SVC 97.65%, making them the next best performers after RF and XGB. These results confirm the suitability of RF and XGB for detecting DDoS attacks in SDN environments and highlight the importance of balanced datasets for improving detection of continually evolving, generative cyber attacks.
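The evaluation loop implied above can be sketched as follows, reporting accuracy, AUC, and Brier score (lower Brier indicates better-calibrated, more reliable probabilities); the synthetic imbalanced data stands in for the SDN traffic dataset.

```python
# Compare classifiers on accuracy, AUC, and Brier score. The synthetic
# imbalanced dataset is a stand-in for the SDN traffic data.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score, brier_score_loss
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, clf in [("RF", RandomForestClassifier()),
                  ("XGB", XGBClassifier(eval_metric="logloss"))]:
    clf.fit(X_tr, y_tr)
    proba = clf.predict_proba(X_te)[:, 1]
    print(name,
          f"acc={accuracy_score(y_te, proba > 0.5):.4f}",
          f"auc={roc_auc_score(y_te, proba):.4f}",
          f"brier={brier_score_loss(y_te, proba):.4f}")
```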
This study explores perceptions of fairness in algorithmic decision-making among users in Bangladesh through a comprehensive mixed-methods approach. By integrating quantitative survey data with qualitative interview insights, we examine how cultural, social, and contextual factors influence users' understanding of fairness, transparency, and accountability in AI systems. Our findings reveal nuanced attitudes toward human oversight, explanation mechanisms, and contestability, highlighting the importance of culturally aware design principles for equitable and trustworthy algorithmic systems. These insights contribute to ongoing discussions on algorithmic fairness by foregrounding perspectives from a non-Western context, thus broadening the global dialogue on ethical AI deployment.
Class imbalance poses a major challenge in classification tasks and frequently occurs in real-world applications. Data resampling is considered the standard approach to address this issue: the goal is to balance the class distribution by generating new samples or eliminating samples from the data. A wide variety of sampling techniques have been proposed over the years to tackle this challenging problem. Sampling techniques can also be incorporated into the ensemble learning framework to obtain more generalized prediction performance; Balanced Random Forest (BRF) and SMOTE-Bagging are popular ensemble approaches. In this study, we propose a modification to the BRF classifier to enhance prediction performance. In the original algorithm, Random Undersampling (RUS) is used to balance the bootstrap samples; however, randomly eliminating too many samples leads to significant data loss and a major decline in performance. We alleviate this by incorporating a novel hybrid sampling approach to balance the uneven class distribution in each bootstrap sub-sample. Our proposed hybrid sampling technique, incorporated into the Random Forest framework and termed iBRF (improved Balanced Random Forest), achieves better prediction performance than other sampling techniques used in imbalanced classification tasks. Experiments were carried out on 44 imbalanced datasets, on which the original BRF classifier produced an average MCC score of 47.03% and an F1 score of 49.09%. Our proposed algorithm outperformed that approach with a far better MCC score of 53.04% and an F1 score of 55%. These results signify the superiority of iBRF and its potential as an effective sampling technique in imbalanced learning.
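The iBRF idea can be illustrated as below: each bootstrap sub-sample is balanced with a hybrid sampler before growing a tree, instead of pure random undersampling. SMOTETomek is used here as a stand-in hybrid; the abstract does not name the exact technique.

```python
# Sketch of an iBRF-style forest: hybrid-balance each bootstrap
# sub-sample, then grow a tree. SMOTETomek is a stand-in for the
# paper's hybrid sampler. Assumes numpy arrays and that each bootstrap
# retains enough minority samples for SMOTE's k nearest neighbors.

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from imblearn.combine import SMOTETomek

class SimpleIBRF:
    def __init__(self, n_estimators=10, random_state=0):
        self.n_estimators = n_estimators
        self.rng = np.random.default_rng(random_state)
        self.trees = []

    def fit(self, X, y):
        n = len(X)
        for _ in range(self.n_estimators):
            idx = self.rng.integers(0, n, n)                    # bootstrap sample
            Xb, yb = SMOTETomek().fit_resample(X[idx], y[idx])  # hybrid balancing
            self.trees.append(DecisionTreeClassifier().fit(Xb, yb))
        return self

    def predict(self, X):
        votes = np.stack([t.predict(X) for t in self.trees])
        return (votes.mean(axis=0) > 0.5).astype(int)           # majority vote
```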
A Wireless Sensor Network (WSN) is a network that does not rely on a fixed infrastructure and consists of numerous sensors (e.g., temperature, humidity, GPS, and cameras) equipped with onboard processors that manage and monitor the environment in a specific area. Building a real sensor network testbed for verifying, validating, or experimenting with a newly designed protocol is therefore difficult to adapt to a laboratory scenario because of significant financial and logistical barriers, such as the need for specialized hardware and large-scale deployments. Additionally, WSNs suffer from severe constraints such as restricted power supply, short communication range, limited bandwidth, and restricted memory storage. Addressing these challenges, this work presents a flexible testbed named STGen that enables researchers to experiment with IoT protocols in a hybrid environment, emulating WSN deployments connected to the physical Internet through a dedicated physical server, the STGen core, which receives sensor traffic and processes it for further actions. The STGen testbed is lightweight in memory usage and easy to deploy. Most importantly, STGen supports large-scale distributed systems, facilitates experimentation with IoT protocols, and enables integration with back-end services for big data analytics and statistical insights. Its key feature is the integration of real-world IoT protocols and their applications with WSN. Its modular and lightweight design makes STGen efficient, enabling it to outperform other popular testbeds such as Gotham and GothX, reducing memory usage by 89%. While GothX takes approximately 26 minutes to establish a large topology with four VM nodes and 498 Docker nodes, STGen requires only 1.645 seconds to initialize the platform with 500 sensor nodes.