Kansas State University
The DESI Collaboration's second data release presents precise Baryon Acoustic Oscillation (BAO) measurements from three years of observations, yielding a 4.2σ preference for a dynamical dark energy model over ΛCDM and establishing a stringent upper bound of ∑mν < 0.064 eV on neutrino masses in a flat ΛCDM universe. These findings refine cosmological parameters and highlight potential deviations from the standard model.
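For reference, the dynamical dark energy model in DESI's analyses is conventionally the w0waCDM model with the standard two-parameter (CPL) equation of state, shown below; this parametrization is standard background material, not a detail stated in the summary above:

```latex
% Chevallier-Polarski-Linder (CPL) equation of state used in w0waCDM,
% with a the scale factor (a = 1 today) and z the redshift:
\begin{equation}
  w(a) = w_0 + w_a\,(1 - a), \qquad
  w(z) = w_0 + w_a\,\frac{z}{1+z}
\end{equation}
% LambdaCDM is recovered for w_0 = -1, w_a = 0.
```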
A comparative study evaluated RF-DETR and YOLOv12 for greenfruit detection in commercial apple orchards, revealing that RF-DETR excels in accuracy, especially for distinguishing occluded fruits, while YOLOv12 provides superior speed and efficiency suitable for real-time applications.
YOLO26 introduces an edge-optimized, real-time object detection model that achieves up to 43% faster CPU inference and maintains high accuracy by removing Distribution Focal Loss and implementing end-to-end NMS-free inference. It also integrates multi-task capabilities and demonstrates robust performance under quantization for diverse hardware deployments.
This review surveys how Vision-Language Models are transforming 3D object detection by bridging spatial perception with rich semantic understanding. It explores how these models enable open-vocabulary detection, zero-shot learning, and contextual reasoning, offering a fundamental shift from traditional closed-vocabulary methods.
Recent advances in AI -- including generative approaches -- have resulted in technology that can support humans in scientific discovery and decision-making, but may also disrupt democracies and target individuals. The responsible use of AI and its participation in human-AI teams increasingly highlight the need for AI alignment, that is, for making AI systems act according to our preferences. A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise. In cognitive science, human generalisation commonly involves abstraction and concept learning. In contrast, AI generalisation encompasses out-of-domain generalisation in machine learning, rule-based reasoning in symbolic AI, and abstraction in neurosymbolic AI. In this perspective paper, we combine insights from AI and cognitive science to identify key commonalities and differences across three dimensions: notions of, methods for, and evaluation of generalisation. We map the different conceptualisations of generalisation in AI and cognitive science along these three dimensions and consider their role for alignment in human-AI teaming. This results in interdisciplinary challenges across AI and cognitive science that must be tackled to provide a foundation for effective and cognitively supported alignment in human-AI teaming scenarios.
Machine-learning-based surrogate models offer significant computational efficiency and faster simulations compared to traditional numerical methods, especially for problems requiring repeated evaluations of partial differential equations. This work introduces the Geometry-Informed Neural Operator Transformer (GINOT), which integrates the transformer architecture with the neural operator framework to enable forward predictions on arbitrary geometries. GINOT employs a sampling and grouping strategy together with an attention mechanism to encode surface point clouds that are unordered, exhibit non-uniform point densities, and contain varying numbers of points for different geometries. The geometry information is seamlessly integrated with query points in the solution decoder through the attention mechanism. The performance of GINOT is validated on multiple challenging datasets, showcasing its high accuracy and strong generalization capabilities for complex and arbitrary 2D and 3D geometries.
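As a rough illustration of the decoder-side fusion described above, the sketch below shows query points attending to a geometry encoding via cross-attention in PyTorch; the layer sizes, names, and structure are illustrative assumptions, not the authors' implementation:

```python
# Minimal sketch (illustrative assumptions, not the GINOT code) of fusing a
# geometry encoding with solution query points via cross-attention.
import torch
import torch.nn as nn

class CrossAttentionDecoder(nn.Module):
    def __init__(self, d_model=128, n_heads=4, query_dim=3, out_dim=1):
        super().__init__()
        self.query_proj = nn.Linear(query_dim, d_model)   # embed query coordinates
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, out_dim)           # predict the field value

    def forward(self, queries, geometry_tokens):
        # queries: (B, Nq, 3) solution query points
        # geometry_tokens: (B, Ng, d_model) encoded surface point cloud
        q = self.query_proj(queries)
        # each query point attends to the geometry encoding
        fused, _ = self.attn(q, geometry_tokens, geometry_tokens)
        return self.head(fused)                           # (B, Nq, out_dim)

# toy usage: 2 geometries, 1024 query points, 256 geometry tokens each
decoder = CrossAttentionDecoder()
out = decoder(torch.rand(2, 1024, 3), torch.rand(2, 256, 128))
print(out.shape)  # torch.Size([2, 1024, 1])
```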
Researchers primarily at Kansas State University are developing an interoperable knowledge graph and ontology framework for sustainable wheat production, integrating diverse data sources from farm to table. This modular system aims to provide structured data that enables data-driven insights for optimizing nitrogen and disease management while explicitly linking to global sustainability goals.
Modularity serves as a critical enabler for effectively applying large language models to complex knowledge graph and ontology engineering tasks. This approach, drawing on prior empirical work, enhances LLM reliability and performance across ontology modeling, alignment, and population.
The rapid rise of photorealistic images produced by Generative Adversarial Networks (GANs) poses a serious challenge for image forensics and industrial systems requiring reliable content authenticity. This paper uses frequency-domain analysis combined with deep learning to distinguish StyleGAN-generated images from real ones. Specifically, a two-dimensional Discrete Fourier Transform (2D DFT) is applied to transform images into the Fourier domain, where subtle periodic artifacts become detectable. A ResNet50 neural network is then trained on these transformed images to differentiate between real and synthetic ones. The experiments demonstrate that the frequency-domain model achieves 92.8 percent accuracy and an AUC of 0.95, significantly outperforming the equivalent model trained on raw spatial-domain images. These results indicate that GAN-generated images carry unique frequency-domain signatures or "fingerprints". The proposed method highlights the industrial potential of combining signal processing techniques and deep learning to enhance digital forensics and strengthen the trustworthiness of industrial AI systems.
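A minimal sketch of this pipeline, assuming grayscale input and torchvision's standard ResNet50; the preprocessing details are assumptions, not the paper's exact configuration:

```python
# Sketch (under assumed preprocessing) of the frequency-domain pipeline:
# 2D DFT -> log-magnitude spectrum -> ResNet50 binary classifier.
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import resnet50

def to_log_spectrum(img: np.ndarray) -> np.ndarray:
    """img: (H, W) grayscale array. Returns centered log-magnitude spectrum."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))       # 2D DFT, DC moved to center
    return np.log1p(np.abs(spectrum)).astype(np.float32)

model = resnet50(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)          # real vs. GAN-generated

img = np.random.rand(224, 224)                         # stand-in for a test image
x = torch.from_numpy(to_log_spectrum(img))
x = x.unsqueeze(0).repeat(3, 1, 1).unsqueeze(0)        # (1, 3, 224, 224)
logits = model(x)                                      # train with cross-entropy
```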
Researchers from Kansas State University and Lawrence Livermore National Laboratory developed a multi-stage machine learning pipeline to extract structured procedural 'recipes' from scientific PDFs, achieving an average document similarity of 87.95% to ground truth and a 70.27% F1-score for recipe extraction in nanomaterials synthesis literature.
The EpidemIQs framework introduces a multi-agent Large Language Model system to automate the entire research pipeline for network-based epidemic modeling, generating scientific reports from user prompts within minutes. It achieves a 100% completion rate across diverse scenarios and receives high human expert review scores, while significantly reducing research turnaround time and cost compared to single-agent baselines.
Keren et al. demonstrate that placing the organic superconductor κ-(BEDT-TTF)₂Cu[N(CN)₂]Br in contact with a hexagonal boron nitride (hBN) "dark" optical cavity systematically suppresses its superfluid density. This shows that the ground state properties of materials can be engineered through their electromagnetic environment, specifically via resonant coupling to vacuum electromagnetic fields, without chemical modification.
Knowledge graphs (KGs) are increasingly utilized for data integration, representation, and visualization. While KG population is critical, it is often costly, especially when data must be extracted from unstructured text in natural language, which presents challenges such as ambiguity and complex interpretations. Large Language Models (LLMs) offer promising capabilities for such tasks, excelling in natural language understanding and content generation. However, their tendency to "hallucinate" can produce inaccurate outputs. Despite these limitations, LLMs offer rapid and scalable processing of natural language data, and with prompt engineering and fine-tuning, they can approximate human-level performance in extracting and structuring data for KGs. This study investigates LLM effectiveness for KG population, focusing on the this http URL Hub Ontology. In this paper, we report that, compared to the ground truth, LLMs can extract approximately 90% of triples when provided a modular ontology as guidance in the prompts.
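A model-agnostic sketch of the prompting pattern this describes; the prompt wording, delimiter format, and function names are hypothetical, not the study's actual prompts:

```python
# Hypothetical sketch of ontology-guided triple extraction for KG population:
# the ontology module is supplied in the prompt, and malformed output lines
# are dropped as a simple guard against hallucinated structure.
def build_extraction_prompt(ontology_module: str, text: str) -> str:
    return (
        "You are populating a knowledge graph. Use ONLY the classes and "
        "properties defined in this ontology module:\n"
        f"{ontology_module}\n\n"
        "Extract RDF triples from the text below. Output one triple per "
        "line as: subject | predicate | object\n\n"
        f"Text:\n{text}"
    )

def parse_triples(llm_output: str) -> list[tuple[str, str, str]]:
    triples = []
    for line in llm_output.splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3:              # keep only well-formed triples
            triples.append((parts[0], parts[1], parts[2]))
    return triples
```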
We present updated observational constraints on the spatially flat $\phi$CDM model, where dark energy is described by a minimally coupled scalar field $\phi$ with an inverse power-law potential $V = V_0 \phi^{-\alpha}$. Using Planck 2018 CMB temperature, polarization (P18), and lensing power spectra (lensing), along with a compilation of non-CMB data including baryon acoustic oscillation, type Ia supernova, Hubble parameter, and growth rate measurements, we constrain $\phi$CDM and $\phi$CDM+$A_L$ models, where $A_L$ is the CMB lensing consistency parameter. The scalar field parameter $\alpha$, which governs dark energy dynamics, is more tightly constrained by non-CMB data than by CMB data alone. For the full dataset, we obtain $\alpha = 0.055 \pm 0.041$ in the $\phi$CDM model and $\alpha = 0.095 \pm 0.056$ in the $\phi$CDM+$A_L$ model, mildly favoring evolving dark energy over a cosmological constant by $1.3\sigma$ and $1.7\sigma$, respectively. The Hubble constant is $H_0 = 67.55^{+0.53}_{-0.46}$ km s$^{-1}$ Mpc$^{-1}$ in the $\phi$CDM model, consistent with median statistics and some local determinations, but in tension with other local determinations. The constraints on matter density and clustering amplitude ($\Omega_m = 0.3096 \pm 0.0055$, $\sigma_8 = 0.8013^{+0.0077}_{-0.0067}$) in the flat $\phi$CDM model statistically agree with $\Lambda$CDM model values. Allowing $A_L$ to vary reduces tensions between CMB and non-CMB data, although we find $A_L = 1.105 \pm 0.037$, $2.8\sigma$ higher than unity, consistent with the excess smoothing seen in Planck data. Model comparison using AIC and DIC indicates that the $\phi$CDM model provides a fit comparable to $\Lambda$CDM, with $\phi$CDM+$A_L$ slightly preferred. Overall, while the $\Lambda$CDM model remains an excellent fit, current data leave open the possibility of mildly evolving quintessence-like dynamical dark energy.
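For context, the background dynamics of such a minimally coupled quintessence field follow the standard textbook equations (general forms, not results from this paper):

```latex
% Standard background equations for a minimally coupled scalar field
% with potential V(\phi) = V_0 \phi^{-\alpha}:
\begin{align}
  \ddot{\phi} + 3H\dot{\phi} + \frac{dV}{d\phi} &= 0, \\
  \rho_\phi &= \tfrac{1}{2}\dot{\phi}^2 + V(\phi), \\
  p_\phi &= \tfrac{1}{2}\dot{\phi}^2 - V(\phi),
\end{align}
% so w_\phi = p_\phi / \rho_\phi \to -1 as the field slows, recovering
% cosmological-constant behavior in the \alpha \to 0 limit.
```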
Neuro-Symbolic Artificial Intelligence -- the combination of symbolic methods with methods that are based on artificial neural networks -- has a long-standing history. In this article, we provide a structured overview of current trends, by means of categorizing recent publications from key conferences. The article is meant to serve as a convenient starting point for research on the general topic.
The Cosmological Principle (CP) -- the notion that the Universe is spatially isotropic and homogeneous on large scales -- underlies a century of progress in cosmology. It is conventionally formulated through the Friedmann-Lemaître-Robertson-Walker (FLRW) cosmologies as the spacetime metric, and culminates in the successful and highly predictive $\Lambda$-Cold-Dark-Matter ($\Lambda$CDM) model. Yet, tensions have emerged within the $\Lambda$CDM model, most notably a statistically significant discrepancy in the value of the Hubble constant, $H_0$. Since the notion of cosmic expansion determined by a single parameter is intimately tied to the CP, implications of the $H_0$ tension may extend beyond $\Lambda$CDM to the CP itself. This review surveys current observational hints for deviations from the expectations of the CP, highlighting synergies and disagreements that warrant further study. Setting aside the debate about individual large structures, potential deviations from the CP include variations of cosmological parameters on the sky, discrepancies in the cosmic dipoles, and mysterious alignments in quasar polarizations and galaxy spins. While it is possible that a host of observational systematics are impacting results, it is equally plausible that precision cosmology may have outgrown the FLRW paradigm, an extremely pragmatic but non-fundamental symmetry assumption.
CNRS, University of Pittsburgh, University of Waterloo, SLAC National Accelerator Laboratory, Chinese Academy of Sciences, UC Berkeley, University College London, University of Michigan, Boston University, Kansas State University, Universität Heidelberg, The University of Texas at Dallas, Université Paris-Saclay, Stockholm University, Lawrence Berkeley National Laboratory, Perimeter Institute for Theoretical Physics, Sorbonne Université, Fermi National Accelerator Laboratory, CEA, Princeton University, University of Portsmouth, The Ohio State University, Durham University, Universidad Nacional Autónoma de México, Lawrence Livermore National Laboratory, South African Astronomical Observatory, Universität Potsdam, Instituto de Astrofísica de Andalucía, Institut d'Estudis Espacials de Catalunya, CIEMAT, Leibniz-Institut für Astrophysik Potsdam, Institució Catalana de Recerca i Estudis Avançats, Laboratoire de Physique des 2 Infinis Irène Joliot-Curie, Center for Cosmology and AstroParticle Physics, NOIRLab, The Oskar Klein Centre for Cosmoparticle Physics, National Institute for Theoretical and Computational Sciences, Universidad ECCI, Kavli Institute for Particle Astrophysics and Cosmology, Astroparticule et Cosmologie, Institut de Física d'Altes Energies, Institute of Space Sciences, Universidad Antonio Nariño, Laboratoire de Physique Nucléaire et de Hautes Energies, Corporación Universitaria Unihorizonte, Centro de Investigaciones en Ciencias Básicas y Aplicadas (CIBCIA), Université de Paris, Università degli Studi di Padova, Université Paris Cité, Università di Roma Tor Vergata
We perform a frequentist analysis using the standard profile likelihood method for clustering measurements from Data Release 1 of the Dark Energy Spectroscopic Instrument (DESI). While Bayesian inferences for Effective Field Theory models of galaxy clustering can be highly sensitive to the choice of priors for extended cosmological models, frequentist inferences are not susceptible to such effects. We compare Bayesian and frequentist constraints for the parameter set $\{\sigma_8, H_0, \Omega_{\rm m}, w_0, w_a\}$ when fitting to the full shape of the power spectrum multipoles, the post-reconstruction Baryon Acoustic Oscillation (BAO) measurements, as well as external datasets from the CMB and type Ia supernovae measurements. Bayesian prior effects are very significant for the $w_0w_a$CDM model; while the $1\sigma$ frequentist confidence intervals encompass the maximum a posteriori (MAP), the Bayesian credible intervals almost always exclude the maximum likelihood estimate (MLE) and the MAP, indicating strong prior volume projection effects, unless supernovae data are included. We observe limited prior effects for the $\Lambda$CDM model, due to the reduced number of parameters. When DESI full-shape and BAO data are jointly fit, we obtain the following $1\sigma$ frequentist confidence intervals for $\Lambda$CDM ($w_0w_a$CDM): $\sigma_8 = 0.867^{+0.048}_{-0.041}$, $H_0 = 68.91^{+0.80}_{-0.79}$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\rm m} = 0.3038 \pm 0.0110$ ($\sigma_8 = 0.793^{+0.069}_{-0.048}$, $H_0 = 64.9^{+4.8}_{-2.8}$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\rm m} = 0.369^{+0.029}_{-0.059}$, $w_0 = -0.24^{+0.17}_{-0.64}$, $w_a = -2.5^{+1.9}$), corresponding to $0.7\sigma$, $0.3\sigma$, $0.7\sigma$ ($1.9\sigma$, $3.4\sigma$, $5.6\sigma$, $5.5\sigma$, $5.6\sigma$) shifts of the MLE relative to the Bayesian posterior mean for $\Lambda$CDM ($w_0w_a$CDM), respectively.
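As a concrete illustration of the profile likelihood method, using a toy stand-in chi-squared (not DESI's actual likelihood or pipeline): fix the parameter of interest on a grid, minimize over all remaining parameters at each grid point, and read the confidence interval from the resulting Δχ² curve:

```python
# Toy profile likelihood: profile chi^2 over nuisance parameters at each
# fixed w0, then take the 1-sigma interval from Delta chi^2 <= 1.
import numpy as np
from scipy.optimize import minimize

def chi2(params, w0):
    """Stand-in chi^2 with a mild w0-Omega_m correlation (illustrative only)."""
    om, h0 = params
    return ((w0 + 1.0) / 0.3) ** 2 + ((om - 0.31) / 0.01) ** 2 \
        + ((h0 - 68.0) / 0.8) ** 2 + 2.0 * (w0 + 1.0) * (om - 0.31) / 0.05

w0_grid = np.linspace(-1.6, -0.4, 61)
profile = []
for w0 in w0_grid:
    # profile out (minimize over) every parameter except w0
    res = minimize(chi2, x0=[0.31, 68.0], args=(w0,))
    profile.append(res.fun)

delta = np.array(profile) - min(profile)
# 1-sigma interval for one parameter of interest: Delta chi^2 <= 1
interval = w0_grid[delta <= 1.0]
print(f"w0 in [{interval.min():.2f}, {interval.max():.2f}] (1 sigma)")
```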
Broad absorption line (BAL) quasars are characterized by gas clouds that absorb flux at the wavelength of common quasar spectral features, although blueshifted by velocities that can exceed 0.1c. BAL features are interesting as signatures of significant feedback, yet they can also compromise cosmological studies with quasars by distorting the shape of the most prominent quasar emission lines, impacting redshift accuracy and measurements of the matter density distribution traced by the Lyman-alpha forest. We present a catalog of BAL quasars discovered in the Dark Energy Spectroscopic Instrument (DESI) survey Early Data Release, which were observed as part of DESI Survey Validation, as well as the first two months of the main survey. We describe our method to automatically identify BAL quasars in DESI data, the quantities we measure for each BAL, and investigate the completeness and purity of this method with mock DESI observations. We mask the wavelengths of the BAL features and re-evaluate each BAL quasar redshift, finding new redshifts which are 243 km/s smaller on average for the BAL quasar sample. These new, more accurate redshifts are important to obtain the best measurements of quasar clustering, especially at small scales. Finally, we present some spectra of rarer classes of BALs that illustrate the potential of DESI data to identify such populations for further study.
Epilepsy is a chronic neurological condition characterized by recurrent seizures, with an estimated 50 million people affected worldwide. While progress in high-throughput sequencing has enabled broad-based transcriptomic profiling of brain tissues, deciphering these highly complex datasets remains challenging. To address this issue, in this paper we propose a new analysis pipeline that integrates deep learning strategies with GPU-accelerated computation for investigating gene expression patterns in epilepsy. Specifically, our approach employs GPT-2 XL, a transformer-based Large Language Model (LLM) with 1.5 billion parameters, for genomic sequence analysis on NVIDIA H100 Tensor Core GPUs based on the Hopper architecture. The proposed method enables efficient preprocessing of RNA sequence data, gene sequence encoding, and subsequent pattern identification. We conducted experiments on two epilepsy datasets, GEO accessions GSE264537 and GSE275235. The results reveal several significant transcriptomic modifications, including reduced hippocampal astrogliosis after ketogenic diet treatment and restored excitatory-inhibitory signaling equilibrium in a zebrafish epilepsy model. Moreover, our results highlight the effectiveness of combining LLMs with advanced hardware acceleration for transcriptomic characterization in neurological diseases.
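A minimal sketch of the sequence-encoding step using Hugging Face transformers; the tokenization of RNA text and mean-pooling are assumptions, and the small "gpt2" checkpoint stands in for GPT-2 XL to keep the example light:

```python
# Sketch (assumed preprocessing; "gpt2" stands in for GPT-2 XL) of encoding
# RNA-derived sequences with a transformer LM and pooling hidden states into
# fixed-length embeddings for downstream pattern analysis.
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2").eval()   # move to GPU if available

sequence = "AUGGCGUACGGAUCCUAA"                    # toy RNA sequence
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state     # (1, seq_len, 768)
embedding = hidden.mean(dim=1)                     # mean-pool to (1, 768)
```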
KnowWhereGraph is one of the largest fully publicly available geospatial knowledge graphs. It includes data from 30 layers on natural hazards (e.g., hurricanes, wildfires), climate variables (e.g., air temperature, precipitation), soil properties, crop and land-cover types, demographics, and human health, as well as various place and region identifiers, among other themes. These have been leveraged through the graph by a variety of applications to address challenges in food security and agricultural supply chains; sustainability related to soil conservation practices and farm labor; and delivery of emergency humanitarian aid following a disaster. In this paper, we introduce the ontology that acts as the schema for KnowWhereGraph. This broad overview provides insight into the requirements and design specifications for the graph and its schema, including the development methodology (modular ontology modeling) and the resources utilized to implement, materialize, and deploy KnowWhereGraph with its end-user interfaces and public SPARQL query endpoint.
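To give a sense of how applications reach the graph through its public endpoint, here is a minimal sketch using the SPARQLWrapper library; the endpoint URL is a placeholder and the query uses only generic RDFS terms, not KnowWhereGraph's actual schema IRIs:

```python
# Sketch of querying a public SPARQL endpoint with SPARQLWrapper.
# The endpoint URL below is a placeholder; consult the KnowWhereGraph
# documentation for the real endpoint and its schema terms.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://example.org/knowwheregraph/sparql"  # placeholder URL

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?s ?label WHERE { ?s rdfs:label ?label . } LIMIT 10
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["s"]["value"], "-", row["label"]["value"])
```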