BITS Pilani K K Birla Goa Campus
In this paper, we explore the application of Permutation Decision Trees (PDT) and strategic trailing for predicting stock market movements and executing profitable trades in the Indian stock market. We focus on high-frequency data, using 5-minute candlesticks for the top 50 stocks listed in the NIFTY 50 index and Forex pairs such as XAUUSD and EURUSD. We implement a trading strategy that aims to buy stocks at lower prices and sell them at higher prices, capitalizing on short-term market fluctuations. Due to regulatory constraints in India, short selling is not considered in our strategy. The model incorporates various technical indicators and employs hyperparameters such as the trailing stop-loss value and support thresholds to manage risk effectively. We trained and tested the models on a three-month dataset obtained from Yahoo Finance. Our bot based on Permutation Decision Trees achieved a profit of 1.1802\% over the testing period, whereas a bot based on LSTM gave a return of 0.557\% and a bot based on RNN gave a return of 0.5896\% over the same period. All of the bots outperform the buy-and-hold strategy, which resulted in a loss of 2.29\%.
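To make the trailing mechanism concrete, the sketch below shows one plausible way a trailing stop-loss exit can be implemented for a long-only position; the trailing percentage and variable names are illustrative assumptions, not the exact parameters used by the bot.

```python
def trailing_stop_exit(prices, entry_price, trail_pct=0.5):
    """Return the index at which a trailing stop-loss would close a long
    position, or None if the stop is never hit.

    prices      : sequence of candle close prices after entry
    entry_price : price at which the position was opened
    trail_pct   : trailing stop distance in percent (illustrative value)
    """
    peak = entry_price
    for i, p in enumerate(prices):
        peak = max(peak, p)                      # ratchet the peak upward
        stop = peak * (1 - trail_pct / 100.0)    # stop trails the peak
        if p <= stop:
            return i                             # exit on this candle
    return None

# Example: a rally followed by a pullback that triggers the stop
print(trailing_stop_exit([101.0, 102.5, 103.0, 102.4], entry_price=100.0))
```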
Inferring causal relationships in the decision-making processes of machine learning algorithms is a crucial step toward achieving explainable Artificial Intelligence (AI). In this research, we introduce a novel causality measure and a distance metric derived from Lempel-Ziv (LZ) complexity. We explore how the proposed causality measure can be used in decision trees by enabling splits based on features that most strongly \textit{cause} the outcome. We further evaluate the effectiveness of the causality-based decision tree and the distance-based decision tree in comparison to a traditional decision tree using Gini impurity. While the proposed methods demonstrate comparable classification performance overall, the causality-based decision tree significantly outperforms both the distance-based decision tree and the Gini-based decision tree on datasets generated from causal models. This result indicates that the proposed approach can capture insights beyond those of classical decision trees, especially in causally structured data. Based on the features used in the LZ-causality-based decision tree, we introduce a causal strength for each feature in the dataset so as to infer the predominant causal variables behind the occurrence of the outcome.
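As background for the complexity measure underlying these metrics, the sketch below computes the Lempel-Ziv (LZ76) complexity of a symbolic sequence by counting new phrases in a left-to-right parse. It illustrates only the LZ complexity itself, not the proposed causality measure or distance metric, whose exact construction is described in the paper.

```python
def lz_complexity(s):
    """Count the number of distinct phrases in the Lempel-Ziv (LZ76)
    parsing of the sequence s (e.g. a string of '0'/'1' symbols)."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # extend the current phrase while it can still be copied
        # from the already-seen text
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1       # a new phrase ends here
        i += l
    return c

print(lz_complexity("0101010101"))   # low complexity: highly regular
print(lz_complexity("0110100110"))   # higher complexity: less regular
```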
Modern datasets often contain high-dimensional features exhibiting complex dependencies. To effectively analyze such data, dimensionality reduction methods rely on estimating the dataset's intrinsic dimension (id) as a measure of its underlying complexity. However, estimating id is challenging due to its dependence on scale: at very fine scales, noise inflates id estimates, while at coarser scales, estimates stabilize to lower, scale-invariant values. This paper introduces a novel, scalable, and parallelizable method called eDCF, which is based on Connectivity Factor (CF), a local connectivity-based metric, to robustly estimate intrinsic dimension across varying scales. Our method consistently matches leading estimators, achieving comparable values of mean absolute error (MAE) on synthetic benchmarks with noisy samples. Moreover, our approach also attains higher exact intrinsic dimension match rates, reaching up to 25.0% compared to 16.7% for MLE and 12.5% for TWO-NN, particularly excelling under medium to high noise levels and large datasets. Further, we showcase our method's ability to accurately detect fractal geometries in decision boundaries, confirming its utility for analyzing realistic, structured data.
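For context on distance-ratio-based estimation, the sketch below implements the TWO-NN estimator that serves as one of the baselines above; it is not the proposed eDCF/Connectivity Factor method, only an illustration of recovering an intrinsic dimension from nearest-neighbour distance ratios.

```python
import numpy as np

def two_nn_id(X):
    """Maximum-likelihood TWO-NN intrinsic-dimension estimate:
    id ~ N / sum_i log(r2_i / r1_i), where r1 and r2 are the distances
    from point i to its first and second nearest neighbours."""
    X = np.asarray(X, dtype=float)
    # pairwise Euclidean distances
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # ignore self-distances
    d.sort(axis=1)
    mu = d[:, 1] / d[:, 0]               # ratio of 2nd to 1st NN distance
    return len(X) / np.sum(np.log(mu))

# 3-D points lying on a 2-D plane: the estimate should be close to 2
rng = np.random.default_rng(0)
uv = rng.uniform(size=(500, 2))
plane = np.column_stack([uv[:, 0], uv[:, 1], uv[:, 0] + uv[:, 1]])
print(two_nn_id(plane))
```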
Large Language Models (LLMs) have demonstrated remarkable capabilities in understanding and generating human-like content. This has revolutionized various sectors such as healthcare, software development, and education. In education, LLMs offer potential for personalized and interactive learning experiences, especially in regions with limited teaching resources. However, adapting these models effectively to curriculum-specific content, such as the National Council of Educational Research and Training (NCERT) syllabus in India, presents unique challenges in terms of accuracy, alignment, and pedagogical relevance. In this paper, we present the framework "PustakAI"\footnote{Pustak means `book' in many Indian languages.} for the design and evaluation of a novel question-answering dataset "NCERT-QA" aligned with the NCERT curriculum for English and Science subjects of grades 6 to 8. We classify the curated QA pairs as Factoid, Inferential, and Others (evaluative and reasoning). We evaluate the dataset with various prompting techniques, such as meta-prompt, few-shot, and CoT-style prompting, using diverse evaluation metrics to understand which approach aligns more efficiently with the structure and demands of the curriculum. Along with the usability of the dataset, we analyze the strengths and limitations of current open-source LLMs (Gemma3:1b, Llama3.2:3b, and Nemotron-mini:4b) and high-end LLMs (Llama-4-Scout-17B and Deepseek-r1-70B) as AI-based learning tools in formal education systems.
Distributed Machine Learning (DML) on resource-constrained edge devices holds immense potential for real-world applications. However, achieving fast convergence in DML in these heterogeneous environments remains a significant challenge. Traditional frameworks like Bulk Synchronous Parallel and Asynchronous Stochastic Parallel rely on frequent, small updates that incur substantial communication overhead and hinder convergence speed. Furthermore, these frameworks often employ static dataset sizes, neglecting the heterogeneity of edge devices and potentially leading to straggler nodes, i.e., edge devices that take significantly longer to process their assigned data chunk and thus slow down the entire training process. To address these limitations, this paper proposes Hermes, a novel probabilistic framework for efficient DML on edge devices. This framework leverages a dynamic threshold based on recent test loss behavior to identify statistically significant improvements in the model's generalization capability, transmitting updates only when major improvements are detected and thereby significantly reducing communication overhead. Additionally, Hermes employs dynamic dataset allocation to optimize resource utilization and prevent performance degradation caused by straggler nodes. Our evaluations in a real-world heterogeneous resource-constrained environment demonstrate that Hermes achieves faster convergence compared to state-of-the-art methods, resulting in a remarkable 13.22x reduction in training time and a 62.1\% decrease in communication overhead.
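To illustrate the update-gating idea, the sketch below shows one plausible rule a worker could use to decide whether the latest test loss marks a statistically significant improvement over recent history before transmitting an update; the window size and z-score threshold are illustrative assumptions, not Hermes' actual criterion.

```python
from collections import deque
import statistics

class UpdateGate:
    """Transmit a model update only when the newest test loss improves
    markedly on the recent loss history (illustrative gating rule)."""

    def __init__(self, window=10, z_threshold=1.0):
        self.history = deque(maxlen=window)
        self.z_threshold = z_threshold

    def should_transmit(self, test_loss):
        if len(self.history) >= 2:
            mean = statistics.mean(self.history)
            std = statistics.pstdev(self.history) or 1e-12
            significant = (mean - test_loss) / std > self.z_threshold
        else:
            significant = False          # not enough history yet
        self.history.append(test_loss)
        return significant

gate = UpdateGate()
for loss in [0.90, 0.88, 0.87, 0.86, 0.70, 0.69]:
    print(loss, gate.should_transmit(loss))
```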
Urban mobility faces persistent challenges of congestion and fuel consumption, specifically when people choose a private, point-to-point commute option. Profit-driven ride-sharing platforms prioritize revenue over fairness and sustainability. This paper introduces Altruistic Ride-Sharing (ARS), a decentralized, peer-to-peer mobility framework where participants alternate between driver and rider roles based on altruism points rather than monetary incentives. The system integrates multi-agent reinforcement learning (MADDPG) for dynamic ride-matching, game-theoretic equilibrium guarantees for fairness, and a population model to sustain long-term balance. Using real-world New York City taxi data, we demonstrate that ARS reduces travel distance and emissions, increases vehicle utilization, and promotes equitable participation compared to both no-sharing and optimization-based baselines. These results establish ARS as a scalable, community-driven alternative to conventional ride-sharing, aligning individual behavior with collective urban sustainability goals.
Delivery of items from the producer to the consumer has experienced significant growth over the past decade and has been greatly fueled by the recent pandemic. Amazon Fresh, Shopify, UberEats, InstaCart, and DoorDash are rapidly growing and share the same business model of consumer item or food delivery. Existing food delivery methods are sub-optimal because each delivery is individually optimized to go directly from the producer to the consumer via the shortest-time path. We observe a significant scope for reducing the costs associated with completing deliveries under the current model. We model our food delivery problem as a multi-objective optimization, where both consumer satisfaction and delivery costs need to be optimized. Taking inspiration from the success of ride-sharing in the taxi industry, we propose DeliverAI - a reinforcement learning-based path-sharing algorithm. Unlike previous attempts at path-sharing, DeliverAI can provide real-time, time-efficient decision-making using a reinforcement-learning-enabled agent system. Our novel agent interaction scheme leverages path-sharing among deliveries to reduce the total distance traveled while keeping the delivery completion time under check. We generate and test our methodology rigorously on a simulation setup using real data from the city of Chicago. Our results show that DeliverAI can reduce the delivery fleet size by 12\%, the distance traveled by 13\%, and achieve 50\% higher fleet utilization compared to the baselines.
We employ both random forests and LSTM networks (more precisely, CuDNNLSTM) as training methodologies to analyze their effectiveness in forecasting out-of-sample directional movements of constituent stocks of the S&P 500 from January 1993 to December 2018 for intraday trading. We introduce a multi-feature setting consisting not only of the returns with respect to the closing prices, but also with respect to the opening prices and intraday returns. As a trading strategy, we use Krauss et al. (2017) and Fischer & Krauss (2018) as benchmarks. On each trading day, we buy the 10 stocks with the highest probability and sell short the 10 stocks with the lowest probability of outperforming the market in terms of intraday returns -- all with equal monetary weight. Our empirical results show that the multi-feature setting provides a daily return, prior to transaction costs, of 0.64% using LSTM networks, and 0.54% using random forests. Hence, we outperform the single-feature setting in Fischer & Krauss (2018) and Krauss et al. (2017), consisting only of the daily returns with respect to the closing prices, which yields corresponding daily returns of 0.41% and 0.39% for LSTM and random forests, respectively.
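The daily portfolio construction described above reduces to a simple ranking rule; the following sketch is an illustrative reconstruction with made-up probabilities and tickers, not the authors' code.

```python
def daily_positions(prob_outperform, k=10):
    """Given a dict mapping ticker -> predicted probability of
    outperforming the market intraday, go long the k highest-probability
    stocks and short the k lowest, with equal monetary weight."""
    ranked = sorted(prob_outperform, key=prob_outperform.get, reverse=True)
    longs, shorts = ranked[:k], ranked[-k:]
    weight = 1.0 / (2 * k)               # equal weight across all 2k legs
    return {**{t: +weight for t in longs}, **{t: -weight for t in shorts}}

# Toy example with 6 stocks and k=2
probs = {"AAA": 0.71, "BBB": 0.66, "CCC": 0.55,
         "DDD": 0.49, "EEE": 0.41, "FFF": 0.35}
print(daily_positions(probs, k=2))
```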
Towards the goal of quantum computing for lattice quantum chromodynamics, we present a loop-string-hadron (LSH) framework in 1+1 dimensions for describing the dynamics of SU(3) gauge fields coupled to staggered fermions. This novel framework was previously developed for an SU(2) lattice gauge theory in $d\leq 3$ spatial dimensions, and its advantages for classical and quantum algorithms have thus far been demonstrated in $d=1$. The LSH approach uses gauge-invariant degrees of freedom such as loop segments, string ends, and on-site hadrons; it is free of all non-Abelian gauge redundancy; and it is described by a Hamiltonian containing only local interactions. In this work, the SU(3) LSH framework is systematically derived from the reformulation of Hamiltonian lattice gauge theory in terms of irreducible Schwinger bosons, including the addition of staggered quarks. Furthermore, the superselection rules governing the LSH dynamics are identified directly from the form of the Hamiltonian. The SU(3) LSH Hamiltonian with open boundary conditions has been numerically confirmed to agree with the completely gauge-fixed Hamiltonian, which contains long-range interactions and does not generalize to either periodic boundary conditions or to $d>1$.
Parking space occupation detection using deep learning frameworks has seen significant advancements over the past few years. While these approaches effectively detect partial obstructions and adapt to varying lighting conditions, their performance significantly diminishes when haze is present. This paper proposes a novel hybrid model with a pre-trained feature extractor and a Pinball Generalized Twin Support Vector Machine (Pin-GTSVM) classifier, which removes the need for the dehazing system used in current state-of-the-art hazy parking slot classification systems and is also insensitive to atmospheric noise. The proposed system can seamlessly integrate with conventional smart parking infrastructures, leveraging a minimal number of cameras to monitor and manage hundreds of parking spaces efficiently. Its effectiveness has been evaluated against established parking space detection methods using the CNRPark Patches, PKLot, and a custom dataset specific to hazy parking scenarios. Furthermore, empirical results indicate a significant improvement in accuracy on a hazy parking system, thus emphasizing efficient atmospheric noise handling.
A fundamental longstanding problem in studying spin models is the efficient and accurate numerical simulation of the long-time behavior of larger systems. The exponential growth of the Hilbert space and the entanglement accumulation at long times pose major challenges for current methods. To address these issues, we employ the multilayer multiconfiguration time-dependent Hartree (ML-MCTDH) framework to simulate the many-body spin dynamics of the Heisenberg model in various settings, including the Ising and XYZ limits with different interaction ranges and random couplings. Benchmarks with analytical and exact numerical approaches show that ML-MCTDH accurately captures the time evolution of one- and two-body observables in both one- and two-dimensional lattices. A comparison with the discrete truncated Wigner approximation (DTWA) highlights that ML-MCTDH is particularly well-suited for handling anisotropic models and provides more reliable results for two-point observables across all tested cases. The behavior of the corresponding entanglement dynamics is analyzed to reveal the complexity of the quantum states. Our findings indicate that the rate of entanglement growth strongly depends on the interaction range and the presence of disorder. This particular relationship is then used to examine the convergence behavior of ML-MCTDH. Our results indicate that the multilayer structure of ML-MCTDH is a promising numerical framework for handling the dynamics of generic many-body spin systems.
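For reference, the generic XYZ Heisenberg Hamiltonian underlying the settings above (the couplings $J^{\alpha}_{ij}$ may be short- or long-ranged, uniform or random, and a field term $h$ is included here only for generality) takes the form

\[
H \;=\; \sum_{i<j} \left( J^{x}_{ij}\, S^{x}_i S^{x}_j + J^{y}_{ij}\, S^{y}_i S^{y}_j + J^{z}_{ij}\, S^{z}_i S^{z}_j \right) \;+\; h \sum_i S^{z}_i ,
\]

with the Ising limit retaining only the $S^{z}_i S^{z}_j$ couplings and the XYZ limit allowing all three couplings to differ.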
In this short paper, we demonstrate, using irreducible tensorial techniques and in a model-independent way, why tensor interactions and also orbital-angular-momentum-changing vector interactions are absent in the singlet unpolarized differential cross section for the reaction $\overline{\mathrm{p}}\mathrm{p} \rightarrow \bar{\Lambda}\Lambda$.
The decision tree is a well-understood machine learning model based on minimizing impurities in the internal nodes. The most common impurity measures are Shannon entropy and Gini impurity. These impurity measures are insensitive to the order of training data, and hence the final tree obtained is invariant to any permutation of the data. This is a limitation in terms of modeling when there are temporal order dependencies between data instances. In this research, we propose the adoption of Effort-To-Compress (ETC), a complexity measure, for the first time as an alternative impurity measure. Unlike Shannon entropy and Gini impurity, structural impurity based on ETC is able to capture order dependencies in the data, thus obtaining potentially different decision trees for different permutations of the same data instances, a concept we term Permutation Decision Trees (PDT). We then introduce the notion of Permutation Bagging, achieved using permutation decision trees without the need for random feature selection and sub-sampling. We conduct a performance comparison between Permutation Decision Trees and classical decision trees across various real-world datasets, including Appendicitis, Breast Cancer Wisconsin, Diabetes Pima Indian, Ionosphere, Iris, Sonar, and Wine. Our findings reveal that PDT demonstrates comparable performance to classical decision trees across most datasets. Remarkably, in certain instances, PDT even slightly surpasses the performance of classical decision trees. In comparing Permutation Bagging with Random Forest, we attain performance comparable to Random Forest models consisting of 50 to 1000 trees using merely 21 trees. This highlights the efficiency and effectiveness of Permutation Bagging in achieving comparable performance outcomes with significantly fewer trees.
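As background, Effort-To-Compress is commonly computed via Non-Sequential Recursive Pair Substitution (NSRPS): the most frequent adjacent symbol pair is repeatedly replaced by a new symbol until the sequence becomes constant, and ETC is the number of substitution steps. The sketch below is a minimal illustration of that idea, not the authors' implementation.

```python
from collections import Counter

def etc(seq):
    """Effort-To-Compress via NSRPS: repeatedly replace the most frequent
    adjacent pair with a single new symbol and count the steps needed
    until the sequence is constant (or of length 1).  Unlike entropy or
    Gini impurity, the result depends on the order of the symbols."""
    seq = list(seq)
    steps = 0
    while len(seq) > 1 and len(set(seq)) > 1:
        pairs = Counter(zip(seq, seq[1:]))
        target = pairs.most_common(1)[0][0]
        new, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == target:
                new.append(target)       # the pair itself acts as the new symbol
                i += 2
            else:
                new.append(seq[i])
                i += 1
        seq = new
        steps += 1
    return steps

print(etc("0000"))        # already constant: 0 steps
print(etc("010101"))      # regular pattern compresses quickly
print(etc("0110100110"))  # less regular, more steps
```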
Natural language as a medium for human-computer interaction has long been anticipated, and it has been undergoing a sea change with the advent of Large Language Models (LLMs), which have startling capacities for processing and generating language. Many of us now treat LLMs as modern-day oracles, asking them almost any kind of question. Unlike its Delphic predecessor, consulting an LLM does not have to be a single-turn activity (ask a question, receive an answer, leave); and, also unlike the Pythia, it is widely acknowledged that answers from LLMs can be improved with additional context. In this paper, we aim to study when multi-turn interactions with LLMs are needed to successfully answer a question, or to conclude that a question is unanswerable. We present a neural-symbolic framework that models the interactions between human and LLM agents. Through the proposed framework, we define incompleteness and ambiguity in questions as properties deducible from the messages exchanged in the interaction, and provide results from benchmark problems in which answer correctness is shown to depend on whether or not questions exhibit incompleteness or ambiguity (according to the properties we identify). Our results show that multi-turn interactions are usually required for datasets with a high proportion of incomplete or ambiguous questions, and that increasing interaction length has the effect of reducing incompleteness or ambiguity. The results also suggest that our measures of incompleteness and ambiguity can be useful tools for characterising interactions with an LLM on question-answering problems.
Quantum computing offers the potential for computational abilities that can go beyond classical machines. However, they are still limited by several challenges such as noise, decoherence, and gate errors. As a result, efficient classical simulation of quantum circuits is vital not only for validating and benchmarking quantum hardware but also for gaining deeper insights into the behavior of quantum algorithms. A promising framework for classical simulation is provided by tensor networks. Recently, the Density-Matrix Renormalization Group (DMRG) algorithm was developed for simulating quantum circuits using matrix product states (MPS). Although MPS is efficient for representing quantum states with one-dimensional correlation structures, the fixed linear geometry restricts the expressive power of the MPS. In this work, we extend the DMRG algorithm for simulating quantum circuits to tree tensor networks (TTNs). The framework employs a variational compression scheme that optimizes the TTN to approximate the evolved quantum state. To benchmark the method, we simulate random circuits and the quantum approximate optimization algorithm (QAOA) with various two-qubit gate connectivities. For the random circuits, we devise tree-like gate layouts that are suitable for TTN and show that TTN requires less memory than MPS for the simulations. For the QAOA circuits, a naive TTN construction that exploits graph structure significantly improves the simulation fidelities. Our findings show that the DMRG algorithm with TTNs provides a promising framework for simulating quantum circuits, particularly when gate connectivities exhibit clustering or a hierarchical structure.
We propose a theoretical framework for an adaptive learning rate policy for the Mean Absolute Error loss function and Quantile loss function and evaluate its effectiveness for regression tasks. The framework is based on the theory of Lipschitz continuity, specifically utilizing the relationship between learning rate and Lipschitz constant of the loss function. Based on experimentation, we have found that the adaptive learning rate policy enables up to 20x faster convergence compared to a constant learning rate policy.
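A minimal sketch of the idea, assuming a linear model trained with the Mean Absolute Error loss: the norm of the MAE subgradient with respect to the weights is bounded by the mean feature norm of the batch, so the learning rate can be set to the reciprocal of that bound. The exact policy in the paper may differ; this only illustrates tying the step size to a Lipschitz-type constant of the loss.

```python
import numpy as np

def train_mae_lipschitz_lr(X, y, epochs=200):
    """Subgradient descent on MAE loss for a linear model, with the
    learning rate set to 1/K, where K = mean ||x_i|| bounds the norm of
    the MAE subgradient (a Lipschitz constant of the loss in w).
    Illustrative sketch; the paper's policy may differ in detail."""
    n, d = X.shape
    w = np.zeros(d)
    K = np.mean(np.linalg.norm(X, axis=1)) + 1e-12
    lr = 1.0 / K                               # step size tied to K
    for _ in range(epochs):
        residual = y - X @ w
        grad = -(X.T @ np.sign(residual)) / n  # MAE subgradient
        w -= lr * grad
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=200)
print(train_mae_lipschitz_lr(X, y))
```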
Tensor-network methods enable probing the dynamics of strongly interacting quantum many-body systems, including gauge theories, via Hamiltonian simulation, hence bypassing sign problems. They also have the potential to inform efficient quantum-simulation algorithms for the same theories. We develop and benchmark a matrix-product-state ansatz for the SU(2) lattice gauge theory using the loop-string-hadron formulation. This formulation has been demonstrated to be advantageous in Hamiltonian simulation of non-Abelian gauge theories. It is applicable to both SU(2) and SU(3) gauge groups, to periodic and open boundary conditions, and to 1+1 and higher dimensions. In this work, we report on progress in computing static and dynamical observables in an SU(2) gauge theory in (1+1)D, pushing the boundary of existing studies.
Public distrust of self-driving cars is growing. Studies emphasize the need for interpreting the behavior of these vehicles to passengers to promote trust in autonomous systems. Interpreters can enhance trust by improving transparency and reducing perceived risk. However, current solutions often lack a human-centric approach to integrating multimodal interpretations. This paper introduces a novel Human-centered Multimodal Interpreter (HMI) system that leverages human preferences to provide visual, textual, and auditory feedback. The system combines a visual interface with Bird's Eye View (BEV), map, and text display, along with voice interaction using a fine-tuned large language model (LLM). Our user study, involving diverse participants, demonstrated that the HMI system significantly boosts passenger trust in autonomous vehicles (AVs), increasing average trust levels by over 8%, with trust in ordinary environments rising by up to 30%. These results underscore the potential of the HMI system to improve the acceptance and reliability of autonomous vehicles by providing clear, real-time, and context-sensitive explanations of vehicle actions.
The financialization of agricultural commodities and its impact on food security has become an increasing concern. This study empirically investigates the role of financialization in global food markets and its policy implications for a stable and secure food system. Using panel data regression models, moderating effects models, and panel regression with a threshold variable, we analyze wheat, maize, and soybean futures traded on the Chicago Board of Trade. We incorporate data on annual trading volume, open interest contracts, and their ratio. The sample consists of five developed countries (United States, Australia, Canada, France, Germany) and seven developing countries (China, Russia, India, Indonesia, Brazil, Vietnam, Thailand), covering the period 2000 to 2021. The Human Development Index (HDI) serves as a threshold variable to differentiate the impact across countries. Our findings indicate that the financialization of agricultural commodities has negatively affected global food security, with wheat and soybean showing a greater adverse impact than maize. The effects are more pronounced in developing countries. Additionally, we find that monetary policy has the potential to mitigate these negative effects. These results provide insights for policymakers to design strategies that ensure a secure and accessible global food supply.
With data sizes constantly expanding, and with classical machine learning algorithms that analyze such data requiring larger and larger amounts of computation time and storage space, the need to distribute computation and memory requirements among several computers has become apparent. Although substantial work has been done in developing distributed binary SVM algorithms and multi-class SVM algorithms individually, the field of multi-class distributed SVMs remains largely unexplored. This research proposes a novel algorithm that implements the Support Vector Machine over a multi-class dataset and is efficient in a distributed environment (here, Hadoop). The idea is to recursively divide the dataset in half and compute an optimal Support Vector Machine for each half during the training phase, much like a divide-and-conquer approach. During testing, this structure is exploited to significantly reduce the prediction time. Our algorithm shows better computation time during the prediction phase than traditional sequential SVM methods (One vs. One, One vs. Rest) and outperforms them as the size of the dataset grows. This approach also classifies the data with higher accuracy than the traditional multi-class algorithms.
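To illustrate the divide-and-conquer structure (without the Hadoop distribution layer), the sketch below recursively splits the set of classes in half, trains a binary SVM at each internal node to route samples to one half or the other, and predicts by walking the tree, so only about log2(k) SVMs are evaluated per test sample. The way classes are partitioned in half here is an assumption made for illustration.

```python
import numpy as np
from sklearn.svm import SVC

class SVMTree:
    """Binary tree of SVMs: each internal node separates one half of the
    remaining classes from the other half; leaves hold a single class."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.root_ = self._build(X, y, list(self.classes_))
        return self

    def _build(self, X, y, classes):
        if len(classes) == 1:
            return classes[0]                      # leaf: a single class
        left, right = classes[: len(classes) // 2], classes[len(classes) // 2:]
        target = np.isin(y, left).astype(int)      # 1 = left half, 0 = right half
        clf = SVC(kernel="linear").fit(X, target)
        mask = target == 1
        return (clf,
                self._build(X[mask], y[mask], left),
                self._build(X[~mask], y[~mask], right))

    def _predict_one(self, node, x):
        if not isinstance(node, tuple):
            return node                            # reached a leaf class
        clf, left, right = node
        branch = left if clf.predict(x.reshape(1, -1))[0] == 1 else right
        return self._predict_one(branch, x)

    def predict(self, X):
        return np.array([self._predict_one(self.root_, x) for x in X])

# Toy usage with three Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, size=(30, 2)) for c in ((0, 0), (4, 0), (0, 4))])
y = np.repeat([0, 1, 2], 30)
print((SVMTree().fit(X, y).predict(X) == y).mean())
```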