University of Missouri-Columbia
The large and ever-increasing amount of data available on the Internet, coupled with the laborious task of manual claim and fact verification, has sparked interest in the development of automated claim verification systems. Several deep learning and transformer-based models have been proposed for this task over the years. With the introduction of Large Language Models (LLMs) and their superior performance in several NLP tasks, we have seen a surge of LLM-based approaches to claim verification, along with the use of novel methods such as Retrieval Augmented Generation (RAG). In this survey, we present a comprehensive account of recent claim verification frameworks using LLMs. We describe in detail the components of the claim verification pipeline used in these frameworks, including common approaches to retrieval, prompting, and fine-tuning. Finally, we describe publicly available English datasets created for this task.
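As a concrete anchor for the pipeline components the survey covers (retrieval, prompting, verdict generation), here is a minimal, hypothetical sketch of a RAG-style claim verification loop in Python; the retrieve and generate callables are placeholders for whatever retriever and LLM a given framework uses, not components of any specific surveyed system.

    # Minimal RAG-style claim verification sketch (hypothetical components).
    # `retrieve` and `generate` stand in for an arbitrary evidence retriever
    # and LLM API; neither is tied to a specific framework from the survey.
    from typing import Callable, List

    def verify_claim(claim: str,
                     retrieve: Callable[[str, int], List[str]],
                     generate: Callable[[str], str],
                     k: int = 5) -> str:
        """Retrieve top-k evidence passages and prompt an LLM for a verdict."""
        evidence = retrieve(claim, k)
        prompt = (
            "Claim: " + claim + "\n\n"
            "Evidence:\n" + "\n".join("- " + e for e in evidence) + "\n\n"
            "Based only on the evidence, answer with one of: "
            "SUPPORTED, REFUTED, NOT ENOUGH INFO."
        )
        return generate(prompt).strip().upper()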
Here, we present the outcomes from the second Large Language Model (LLM) Hackathon for Applications in Materials Science and Chemistry, which engaged participants across global hybrid locations, resulting in 34 team submissions. The submissions spanned seven key application areas and demonstrated the diverse utility of LLMs for applications in (1) molecular and material property prediction; (2) molecular and material design; (3) automation and novel interfaces; (4) scientific communication and education; (5) research data management and automation; (6) hypothesis generation and evaluation; and (7) knowledge extraction and reasoning from scientific literature. Each team submission is presented in a summary table with links to the code and as brief papers in the appendix. Beyond team results, we discuss the hackathon event and its hybrid format, which included physical hubs in Toronto, Montreal, San Francisco, Berlin, Lausanne, and Tokyo, alongside a global online hub to enable local and virtual collaboration. Overall, the event highlighted significant improvements in LLM capabilities since the previous year's hackathon, suggesting continued expansion of LLMs for applications in materials science and chemistry research. These outcomes demonstrate the dual utility of LLMs as both multipurpose models for diverse machine learning tasks and platforms for rapidly prototyping custom applications in scientific research.
Powerful generative AI models of protein-ligand structure have recently been proposed, but few of these methods support both flexible protein-ligand docking and affinity estimation. Of those that do, none can directly model multiple binding ligands concurrently or have been rigorously benchmarked on pharmacologically relevant drug targets, hindering their widespread adoption in drug discovery efforts. In this work, we propose FlowDock, the first deep geometric generative model based on conditional flow matching that learns to directly map unbound (apo) structures to their bound (holo) counterparts for an arbitrary number of binding ligands. Furthermore, FlowDock provides predicted structural confidence scores and binding affinity values with each of its generated protein-ligand complex structures, enabling fast virtual screening of new (multi-ligand) drug targets. For the well-known PoseBusters Benchmark dataset, FlowDock outperforms single-sequence AlphaFold 3 with a 51% blind docking success rate using unbound (apo) protein input structures and without any information derived from multiple sequence alignments, and for the challenging new DockGen-E dataset, FlowDock outperforms single-sequence AlphaFold 3 and matches single-sequence Chai-1 for binding pocket generalization. Additionally, in the ligand category of the 16th community-wide Critical Assessment of Techniques for Structure Prediction (CASP16), FlowDock ranked among the top-5 methods for pharmacological binding affinity estimation across 140 protein-ligand complexes, demonstrating the efficacy of its learned representations in virtual screening. Source code, data, and pre-trained models are available at this https URL
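As background for the flow-matching terminology above, a generic conditional flow matching setup with a linear interpolation path looks as follows; FlowDock's exact parameterization and conditioning are not reproduced here and may differ.

$$ x_t = (1-t)\,x_{\rm apo} + t\,x_{\rm holo}, \qquad t\sim\mathcal{U}[0,1], \qquad \mathcal{L}_{\rm CFM}(\theta) = \mathbb{E}\,\big\|\,v_\theta(x_t, t \mid c) - (x_{\rm holo} - x_{\rm apo})\,\big\|^2, $$

where $c$ denotes the conditioning information (e.g., protein sequence and the bound ligands). At inference, the learned velocity field is integrated, for instance with Euler steps $x_{t+\Delta t} = x_t + \Delta t\, v_\theta(x_t, t \mid c)$, transporting an unbound (apo) structure toward a bound (holo) prediction.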
Researchers from the University of Missouri-Columbia and New York University developed a multicrossmodal AI agent that integrates diverse materials science data, including videos, images, text, and structured data. The agent achieved an 85% Recall@K in cross-modal retrieval and demonstrated a +35% increase in integrated information coverage, creating comprehensive scientific narratives by unifying insights from disparate sources.
Hidden convexity is a powerful idea in optimization: under the right transformations, nonconvex problems that are seemingly intractable can be solved efficiently using convex optimization. We introduce the notion of a Lagrangian dual section of a nonlinear program defined over a topological space, and we use it to give a sufficient condition for a nonconvex optimization problem to have a natural convex reformulation. We emphasize the topological nature of our framework, using only continuity and connectedness properties of a certain Lagrangian formulation of the problem to prove our results. We demonstrate the practical consequences of our framework in a range of applications and by developing new algorithmic methodology. First, we present families of nonconvex problem instances that can be transformed to convex programs in the context of spectral inverse problems -- which include quadratically constrained quadratic optimization and Stiefel manifold optimization as special cases -- as well as unbalanced Procrustes problems. In each of these applications, we both generalize prior results on hidden convexity and provide unifying proofs. For the case of the spectral inverse problems, we also present a Lie-theoretic approach that illustrates connections with the Kostant convexity theorem. Second, we introduce new algorithmic ideas that can be used to find globally optimal solutions to both Lagrangian forms of an optimization problem as well as constrained optimization problems when the underlying topological space is a Riemannian manifold.
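For readers unfamiliar with the Lagrangian objects referenced above, the standard Lagrangian form and dual function of a nonlinear program $\min_{x\in X} f(x)$ subject to $g_i(x)\le 0$ are (background notation only; the paper's notion of a Lagrangian dual section is a more refined, topological construction)

$$ L(x,\lambda) = f(x) + \sum_{i=1}^{m} \lambda_i\, g_i(x), \qquad d(\lambda) = \inf_{x\in X} L(x,\lambda), $$

where $d$ is concave in $\lambda$ regardless of any convexity of $f$ or the $g_i$; hidden-convexity results ask when the primal problem itself admits an equivalent convex reformulation.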
StockGPT is a generative AI model pretrained directly on numeric stock return data, a departure from text-based approaches in financial AI. It demonstrates strong out-of-sample performance, generating a 119% annualized return with a Sharpe ratio of 6.5 for daily rebalanced equal-weighted portfolios, and identifies a novel AI-driven pricing effect that produces significant alpha unexplained by traditional market factors.
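To make the reported performance figures concrete, here is a minimal sketch of how an annualized return and Sharpe ratio are conventionally computed from daily portfolio returns (generic conventions with roughly 252 trading days and a zero risk-free rate; not necessarily the paper's exact methodology).

    # Annualized return and Sharpe ratio from daily portfolio returns.
    # Generic conventions: ~252 trading days, zero risk-free rate.
    import numpy as np

    def annualized_stats(daily_returns: np.ndarray, periods: int = 252):
        mean, std = daily_returns.mean(), daily_returns.std(ddof=1)
        ann_return = (1.0 + mean) ** periods - 1.0   # compounded annualization
        sharpe = np.sqrt(periods) * mean / std       # annualized Sharpe ratio
        return ann_return, sharpe

    # Illustration with synthetic returns (not StockGPT output):
    rng = np.random.default_rng(0)
    print(annualized_stats(rng.normal(0.001, 0.01, size=252)))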
Virtual Reality (VR) is quickly establishing itself in various industries, including training, education, medicine, and entertainment, in which users are frequently required to carry out multiple complex cognitive and physical activities. However, the relationship between cognitive activities, physical activities, and familiar feelings of cybersickness is not well understood and thus can be unpredictable for developers. Researchers have previously provided labeled datasets for predicting cybersickness while users are stationary, but there have been few labeled datasets on cybersickness while users are physically walking. Thus, from 39 participants, we collected head orientation, head position, eye tracking, images, physiological readings from external sensors, and the self-reported cybersickness severity, physical load, and mental load in VR. Throughout the data collection, participants navigated mazes via real walking and performed tasks challenging their attention and working memory. To demonstrate the dataset's utility, we conducted a case study of training classifiers in which we achieved 95% accuracy for cybersickness severity classification. The noteworthy performance of the straightforward classifiers makes this dataset ideal for future researchers to develop cybersickness detection and reduction models. To better understand the features that helped with classification, we performed SHAP (SHapley Additive exPlanations) analysis, highlighting the importance of eye tracking and physiological measures for cybersickness prediction while walking. This open dataset can allow future researchers to study the connection between cybersickness and cognitive loads and develop prediction models. This dataset will empower future VR developers to design efficient and effective Virtual Environments by improving cognitive load management and minimizing cybersickness.
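As an illustration of the kind of case study described above (straightforward classifiers plus SHAP analysis), here is a minimal sketch using scikit-learn and the shap package on a hypothetical feature table; the DataFrame layout and the "severity" column name are assumptions for illustration, not the dataset's actual schema.

    # Sketch: cybersickness severity classification plus SHAP feature attribution.
    # Assumes a pandas DataFrame `df` whose columns are sensor features plus a
    # categorical "severity" label; names are illustrative, not the real schema.
    import numpy as np
    import pandas as pd
    import shap
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    def train_and_explain(df: pd.DataFrame):
        X, y = df.drop(columns=["severity"]), df["severity"]
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        print("accuracy:", clf.score(X_te, y_te))
        # SHAP values indicate which features (e.g., eye tracking, physiology)
        # drive the severity predictions; handle list or array output from shap.
        sv = shap.TreeExplainer(clf).shap_values(X_te)
        sv = np.stack(sv, axis=-1) if isinstance(sv, list) else np.atleast_3d(sv)
        importance = pd.Series(np.abs(sv).mean(axis=(0, 2)), index=X.columns)
        print(importance.sort_values(ascending=False).head(10))
        return clf, importance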
Researchers at the University of Missouri-Columbia outline a research agenda for developing causal Large Language Model agents in biomedicine. These agents would integrate multimodal data and perform intervention-based reasoning, aiming to overcome the correlational limitations of current LLMs by fostering a deeper understanding of causal relationships in scientific discovery and clinical decision-making.
Microscopy is a primary source of information on materials structure and functionality at nanometer and atomic scales. The data generated is often well-structured, enriched with metadata and sample histories, though not always consistent in detail or format. The adoption of Data Management Plans (DMPs) by major funding agencies promotes preservation and access. However, deriving insights remains difficult due to the lack of standardized code ecosystems, benchmarks, and integration strategies. As a result, data usage is inefficient and analysis time is extensive. In addition to post-acquisition analysis, new APIs from major microscope manufacturers enable real-time, ML-based analytics for automated decision-making and ML-agent-controlled microscope operation. Yet, a gap remains between the ML and microscopy communities, limiting the impact of these methods on physics, materials discovery, and optimization. Hackathons help bridge this divide by fostering collaboration between ML researchers and microscopy experts. They encourage the development of novel solutions that apply ML to microscopy, while preparing a future workforce for instrumentation, materials science, and applied ML. This hackathon produced benchmark datasets and digital twins of microscopes to support community growth and standardized workflows. All related code is available at GitHub: this https URL
Large Language Models (LLMs) have demonstrated remarkable capabilities in various reasoning and generation tasks. However, their proficiency in complex causal reasoning, discovery, and estimation remains an area of active development, often hindered by issues like hallucination, reliance on spurious correlations, and difficulties in handling nuanced, domain-specific, or personalized causal relationships. Multi-agent systems, leveraging the collaborative or specialized abilities of multiple LLM-based agents, are emerging as a powerful paradigm to address these limitations. This review paper explores the burgeoning field of causal multi-agent LLMs. We examine how these systems are designed to tackle different facets of causality, including causal reasoning and counterfactual analysis, causal discovery from data, and the estimation of causal effects. We delve into the diverse architectural patterns and interaction protocols employed, from pipeline-based processing and debate frameworks to simulation environments and iterative refinement loops. Furthermore, we discuss the evaluation methodologies, benchmarks, and diverse application domains where causal multi-agent LLMs are making an impact, including scientific discovery, healthcare, fact-checking, and personalized systems. Finally, we highlight the persistent challenges, open research questions, and promising future directions in this synergistic field, aiming to provide a comprehensive overview of its current state and potential trajectory.
Automated pavement monitoring using computer vision can analyze pavement conditions more efficiently and accurately than manual methods. Accurate segmentation is essential for quantifying the severity and extent of pavement defects and, consequently, the overall condition index used for prioritizing rehabilitation and maintenance activities. Deep learning-based segmentation models are, however, often supervised and require pixel-level annotations, which can be costly and time-consuming. While recent zero-shot segmentation models can generate pixel-wise labels for unseen classes without any training data, they struggle with the irregularities of cracks and textured pavement backgrounds. This research proposes a zero-shot segmentation model, PaveSAM, that can segment pavement distresses using bounding box prompts. By retraining SAM's mask decoder with just 180 images, pavement distress segmentation is revolutionized, enabling efficient distress segmentation using bounding box prompts, a capability not found in current segmentation models. This not only drastically reduces labeling efforts and costs but also showcases our model's high performance with minimal input, establishing the pioneering use of SAM in pavement distress segmentation. Furthermore, researchers can use existing open-source pavement distress images annotated with bounding boxes to create segmentation masks, which increases the availability and diversity of pavement distress segmentation datasets.
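For readers unfamiliar with box-prompted SAM inference, which PaveSAM builds on by retraining the mask decoder, here is a minimal sketch using the open-source segment-anything package; the checkpoint path, image file, and box coordinates are placeholders, and this shows vanilla SAM rather than the retrained PaveSAM decoder.

    # Sketch of box-prompted segmentation with the open-source SAM package
    # (vanilla SAM; PaveSAM retrains the mask decoder for pavement distresses).
    import numpy as np
    import cv2
    from segment_anything import sam_model_registry, SamPredictor

    sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")   # placeholder path
    predictor = SamPredictor(sam)

    image = cv2.cvtColor(cv2.imread("pavement.jpg"), cv2.COLOR_BGR2RGB)  # placeholder image
    predictor.set_image(image)

    box = np.array([120, 80, 540, 310])   # placeholder XYXY crack bounding box
    masks, scores, _ = predictor.predict(box=box, multimask_output=False)
    print(masks.shape, scores)            # (1, H, W) boolean mask and its score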
A plethora of recent research has proposed several automated methods based on machine learning (ML) and deep learning (DL) to detect cybersickness in Virtual Reality (VR). However, these detection methods are perceived as computationally intensive and black-box methods. Thus, those techniques are neither trustworthy nor practical for deploying on standalone VR head-mounted displays (HMDs). This work presents an explainable artificial intelligence (XAI)-based framework, VR-LENS, for developing cybersickness detection ML models, explaining them, reducing their size, and deploying them on a Qualcomm Snapdragon 750G processor-based Samsung A52 device. Specifically, we first develop a novel super learning-based ensemble ML model for cybersickness detection. Next, we employ post-hoc explanation methods, namely SHapley Additive exPlanations (SHAP), Morris Sensitivity Analysis (MSA), Local Interpretable Model-Agnostic Explanations (LIME), and Partial Dependence Plots (PDP), to explain the expected results and identify the most dominant features. The super learner cybersickness model is then retrained using the identified dominant features. Our proposed method identified eye tracking, player position, and galvanic skin/heart rate response as the most dominant features for the integrated sensor, gameplay, and bio-physiological datasets. We also show that the proposed XAI-guided feature reduction significantly reduces the model training and inference time by 1.91X and 2.15X, respectively, while maintaining baseline accuracy. For instance, using the integrated sensor dataset, our reduced super learner model outperforms the state-of-the-art works by classifying cybersickness into four classes (none, low, medium, and high) with an accuracy of 96% and regressing the FMS score (1-10) with a Root Mean Square Error (RMSE) of 0.03.
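As a schematic of the XAI-guided feature-reduction step described above, the sketch below ranks features by mean absolute SHAP value and retrains on the top-k; a gradient-boosting classifier with a binary label stands in for the paper's super-learner ensemble and multi-class/regression targets.

    # Schematic SHAP-guided feature reduction: rank features by mean |SHAP|
    # and retrain on the top-k. A gradient-boosting model with a binary label
    # (assumed for simplicity) stands in for the paper's super-learner ensemble.
    import numpy as np
    import shap
    from sklearn.ensemble import GradientBoostingClassifier

    def reduce_features(X: np.ndarray, y: np.ndarray, feature_names, k: int = 10):
        model = GradientBoostingClassifier().fit(X, y)
        shap_values = shap.TreeExplainer(model).shap_values(X)   # (n_samples, n_features)
        importance = np.abs(shap_values).mean(axis=0)            # mean |SHAP| per feature
        top = np.argsort(importance)[::-1][:k]
        reduced_model = GradientBoostingClassifier().fit(X[:, top], y)
        return reduced_model, [feature_names[i] for i in top]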
Traffic safety is a major global concern. Helmet usage is a key factor in preventing head injuries and fatalities caused by motorcycle accidents. However, helmet usage violations continue to be a significant problem. To identify such violations, automatic helmet detection systems have been proposed and implemented using computer vision techniques. Real-time implementation of such systems is crucial for traffic surveillance and enforcement; however, most of these systems are not real-time. This study proposes a robust real-time helmet violation detection system. The proposed system utilizes a unique data processing strategy, referred to as few-shot data sampling, to develop a robust model with fewer annotations, and a single-stage object detection model, YOLOv8 (You Only Look Once Version 8), for detecting helmet violations in real-time from video frames. Our proposed method won 7th place in the 2023 AI City Challenge, Track 5, with an mAP score of 0.5861 on experimental validation data. The experimental results demonstrate the effectiveness, efficiency, and robustness of the proposed system.
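As context for the detection setup above, here is a minimal sketch of fine-tuning and running YOLOv8 with the ultralytics package; the dataset YAML, video path, and training settings are placeholders, not the configuration used in the challenge entry.

    # Sketch: fine-tune YOLOv8 on a helmet-violation dataset and run it on video.
    # Dataset YAML, video path, and hyperparameters are placeholders.
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")                      # pretrained single-stage detector
    model.train(data="helmet_violations.yaml",      # placeholder dataset config
                epochs=50, imgsz=640)

    for result in model.predict(source="traffic.mp4", stream=True, conf=0.25):
        # result.boxes holds xyxy coordinates, confidences, and class ids per frame
        print(result.boxes.xyxy, result.boxes.cls)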
The vector autoregression model is ubiquitous in classical time series analysis. With the rapid advance of social network sites, time series data over a latent graph are becoming increasingly popular. In this paper, we develop a novel Bayesian grouped network autoregression model to simultaneously estimate group information (number of groups and group configurations) and group-wise parameters. Specifically, a graphically assisted Chinese restaurant process is incorporated under the framework of the network autoregression model to improve statistical inference performance. An efficient Markov chain Monte Carlo sampling algorithm is used to sample from the posterior distribution. Extensive studies are conducted to evaluate the finite-sample performance of our proposed methodology. Additionally, we analyze two real datasets as illustrations of the effectiveness of our approach.
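To give intuition for group-wise network autoregressive dynamics, the sketch below simulates a generic grouped network autoregression; this is a schematic specification for illustration only, and the paper's Bayesian model, graphically assisted Chinese restaurant process prior, and MCMC sampler are not reproduced here.

    # Schematic simulation of a grouped network autoregression: each node i in
    # group g follows
    #   y_{i,t} = b0[g] + b1[g]*(row-normalized neighbor average of y_{t-1})_i
    #           + b2[g]*y_{i,t-1} + noise.
    # Generic specification for intuition, not the paper's exact model.
    import numpy as np

    def simulate_gnar(A, groups, b0, b1, b2, T=100, sigma=0.5, seed=0):
        rng = np.random.default_rng(seed)
        n = A.shape[0]
        W = A / np.maximum(A.sum(axis=1, keepdims=True), 1)   # row-normalized adjacency
        y = np.zeros((T, n))
        for t in range(1, T):
            y[t] = (b0[groups] + b1[groups] * (W @ y[t - 1])
                    + b2[groups] * y[t - 1] + sigma * rng.standard_normal(n))
        return y

    # Tiny example: 6 nodes in 2 groups on a ring network.
    A = np.roll(np.eye(6), 1, axis=1) + np.roll(np.eye(6), -1, axis=1)
    y = simulate_gnar(A, groups=np.array([0, 0, 0, 1, 1, 1]),
                      b0=np.array([0.1, -0.1]), b1=np.array([0.3, 0.1]),
                      b2=np.array([0.2, 0.4]))
    print(y.shape)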
We report an extremely tight, linear relation between $\mathrm{H}\alpha$ and [O III] line fluxes in logarithm, discovered using a large sample of low- and mid-resolution spectra (totaling 563) obtained by the James Webb Space Telescope (JWST) NIRSpec instrument in three widely separated extragalactic fields. While a certain correlation between $\mathrm{H}\alpha$ and [O III] is expected for star-forming galaxies, such a log-linear and tight (dispersion of $\sim$0.1 dex) trend is hard to explain because dust reddening would skew any intrinsic relation between the two. Furthermore, another surprising finding emerges from investigating the dust reddening properties of these galaxies. We find that the classic method of using the Balmer decrements under the standard Case B assumption does not work: a high fraction ($\sim$40%) of our objects have $\mathrm{H}\alpha/\mathrm{H}\beta$ line ratios even smaller than the canonical Case B ratio of 2.86. Such a high fraction of non-Case B Balmer decrements is also present in other JWST and ground-based spectroscopic studies, but the universal applicability of the Case B assumption was not questioned until recently. The mysterious $\mathrm{H}\alpha$--[O III] correlation and the high fraction of non-Case B Balmer decrements, which may or may not be related, should be further investigated to put our spectral analysis onto a more solid footing.
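For context on the Case B reasoning above, the standard conversion of a Balmer decrement into a nebular color excess (a textbook relation, with $k(\lambda)$ the adopted extinction-curve coefficients, not a formula specific to this work) is

$$ E(B-V) = \frac{2.5}{k(\mathrm{H}\beta)-k(\mathrm{H}\alpha)}\, \log_{10}\!\left[\frac{(\mathrm{H}\alpha/\mathrm{H}\beta)_{\rm obs}}{2.86}\right], $$

so an observed ratio below the canonical Case B value of 2.86 formally implies negative reddening, which is why such objects break the classic method.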
In multi-parameter models, reference priors typically depend on the parameter or quantity of interest, and it is well known that this is necessary to produce objective posterior distributions with optimal properties. There are, however, many situations where one is simultaneously interested in all the parameters of the model or, more realistically, in functions of them that include aspects such as prediction, and it would then be useful to have a single objective prior that could safely be used to produce reasonable posterior inferences for all the quantities of interest. In this paper, we consider three methods for selecting a single objective prior and study, in a variety of problems including the multinomial problem, whether or not the resulting prior is a reasonable overall prior.
In this paper we argue that conventional unitary-invariant measures of recommender system (RS) performance based on measuring differences between predicted ratings and actual user ratings fail to assess fundamental RS properties. More specifically, posing the optimization problem as one of predicting exact user ratings provides only an indirect suboptimal approximation for what RS applications typically need, which is an ability to accurately predict user preferences. We argue that scalar measures such as RMSE and MAE with respect to differences between actual and predicted ratings are only proxies for measuring RS ability to accurately estimate user preferences. We propose what we consider to be a measure that is more fundamentally appropriate for assessing RS performance, rank-preference consistency, which simply counts the number of prediction pairs that are inconsistent with the user's expressed product preferences. For example, if an RS predicts the user will prefer product A over product B, but the user's withheld ratings indicate s/he prefers product B over A, then rank-preference consistency has been violated. Our test results conclusively demonstrate that methods tailored to optimize arbitrary measures such as RMSE are not generally effective at accurately predicting user preferences. Thus, we conclude that conventional methods used for assessing RS performance are arbitrary and misleading.
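Below is a minimal sketch of the pairwise count described above: for each pair of items rated by a user, check whether the predicted ordering agrees with the ordering of the withheld ratings. This is an illustrative implementation (with one reasonable handling of ties), not the authors' code.

    # Count prediction pairs whose ordering disagrees with the user's withheld
    # ratings (illustrative rank-preference consistency check; ties ignored).
    from itertools import combinations
    from typing import Sequence

    def rank_preference_violations(predicted: Sequence[float],
                                   actual: Sequence[float]) -> int:
        violations = 0
        for i, j in combinations(range(len(actual)), 2):
            pred_diff = predicted[i] - predicted[j]
            true_diff = actual[i] - actual[j]
            if pred_diff * true_diff < 0:     # orderings disagree
                violations += 1
        return violations

    # Example: the model predicts item A over item B, but the withheld ratings
    # show the user prefers B over A -> one violation.
    print(rank_preference_violations([4.5, 3.0, 2.0], [3.0, 4.0, 2.0]))  # -> 1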
Ephemeral gullies are a primary cause of soil erosion, and their reliable, accurate, and early detection will facilitate significant improvements in the sustainability of global agricultural systems. In our view, prior research has not successfully addressed automated detection of ephemeral gullies from remotely sensed images, so for the first time, we present and evaluate three successful pipelines for ephemeral gully detection. Our pipelines utilize remotely sensed images acquired from specific agricultural areas over a period of time. The pipelines were tested with various choices of Vision-Language Models (VLMs), and they classified the images based on the presence of ephemeral gullies with accuracy higher than 70% and an F1-score close to 80% for positive gully detection. Additionally, we developed the first public dataset for ephemeral gully detection, labeled by a team of soil- and plant-science experts. To evaluate the proposed pipelines, we employed a variety of zero-shot classification methods based on state-of-the-art (SOTA) open-source VLMs. In addition, we compared the same pipelines with a transfer learning approach. Extensive experiments were conducted to validate the detection pipelines and to analyze the impact of hyperparameter changes on their performance. The experimental results demonstrate that the proposed zero-shot classification pipelines are highly effective in detecting ephemeral gullies in a scenario where classification datasets are scarce.
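For context on the zero-shot VLM classification step above, here is a minimal sketch using CLIP via Hugging Face transformers; CLIP is used purely as an example of an open-source vision-language model, and the prompts, model choice, and image path are placeholders rather than the specific VLMs or prompts evaluated in the paper.

    # Sketch: zero-shot classification of a remotely sensed image with CLIP.
    # Model choice, prompts, and image path are placeholders.
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.open("field_tile.jpg")                       # placeholder image
    prompts = ["an aerial image of a farm field with an ephemeral gully",
               "an aerial image of a farm field without gullies"]

    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    probs = model(**inputs).logits_per_image.softmax(dim=-1)
    print(dict(zip(prompts, probs[0].tolist())))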
Cosmic hydrogen reionization and cosmic production of first metals are major phase transitions of the universe occurring during the first billion years after the Big Bang; however, these are still underexplored observationally. Using JWST NIRSpec prism spectroscopy, we report the discovery of a sub-$L_\ast$ galaxy at $z_{\rm spec}=8.1623\pm0.0007$, dubbed RXJ2129-z8HeII, via the detection of a series of strong rest-frame UV/optical nebular emission lines and the clear Lyman break. RXJ2129-z8HeII shows a pronounced UV continuum with an extremely steep (i.e. blue) spectral slope of $\beta=-2.53_{-0.07}^{+0.06}$, the steepest amongst all spectroscopically confirmed galaxies at $z_{\rm spec}\gtrsim7$, in support of its very hard ionizing spectrum that could lead to a significant leakage of its ionizing flux. Therefore, RXJ2129-z8HeII is representative of the key galaxy population driving the cosmic reionization. More importantly, we detect a strong He II $\lambda$1640 emission line in its spectrum, one of the highest redshifts at which such a line is robustly detected. Its high rest-frame equivalent width (${\rm EW}=21\pm4$ Å) and extreme flux ratios with respect to UV metal and Balmer lines raise the possibility that part of RXJ2129-z8HeII's stellar populations could be Pop III-like. Through careful photoionization modeling, we show that the physically calibrated phenomenological models of the ionizing spectra of Pop III stars with strong mass loss can successfully reproduce the emission line flux ratios observed in RXJ2129-z8HeII. Assuming the Eddington limit, the total mass of the Pop III stars within this system is estimated to be $(7.8\pm1.4)\times10^5\,M_\odot$. To date, this galaxy presents the most compelling case in the early universe where trace Pop III stars might coexist with metal-enriched populations.
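For context on the Eddington-limit mass estimate quoted above, the standard Eddington luminosity for electron-scattering opacity is (a textbook relation, not the paper's detailed calculation)

$$ L_{\rm Edd} = \frac{4\pi G M m_p c}{\sigma_T} \simeq 1.26\times10^{38}\left(\frac{M}{M_\odot}\right)\,{\rm erg\,s^{-1}}, $$

so if the Pop III stars radiate near this limit, the luminosity needed to power the observed emission translates directly into a total stellar mass of order $M \simeq L/(1.26\times10^{38}\,{\rm erg\,s^{-1}})\,M_\odot$.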