Virtually every biological rate changes with temperature, but the mechanisms underlying these responses differ among processes. Here, we bring together the main theoretical approaches used to describe temperature-rate relationships, ranging from empirical curve shapes to reaction-level kinetics and network-based dynamical frameworks. These models highlight how temperature influences not only the speed of elementary reactions, but also the behavior that emerges when many reactions interact through regulation, feedback, or stochastic transitions. By outlining the assumptions and implications of each perspective, we aim to clarify how different modeling strategies connect molecular processes to physiological temperature response curves and to point toward integrative frameworks that can better explain the diversity of biological thermal responses.
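As a hedged illustration of the simplest kinetic approach this survey covers, the Boltzmann-Arrhenius form gives a baseline temperature-rate relationship from which the familiar Q10 coefficient follows; the parameter values below are generic placeholders, not taken from the paper.

```python
import math

def arrhenius_rate(T_kelvin, A=1.0, E_ev=0.65):
    """Boltzmann-Arrhenius rate: r(T) = A * exp(-E / (k*T)).

    E_ev: activation energy in eV (0.65 eV is a commonly cited
    ballpark for metabolic rates); k is Boltzmann's constant in eV/K.
    """
    k = 8.617e-5  # Boltzmann constant, eV/K
    return A * math.exp(-E_ev / (k * T_kelvin))

# Rates rise steeply with temperature: compare 10 degC and 20 degC
r10 = arrhenius_rate(283.15)
r20 = arrhenius_rate(293.15)
q10 = r20 / r10  # the familiar Q10 temperature coefficient
```

With these placeholder parameters the implied Q10 is roughly 2-3, the range often quoted for biological rates; curve shapes that fall off at high temperature require the richer models the abstract goes on to discuss.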
Neurons, as eukaryotic cells, have powerful internal computation capabilities. One neuron can have many distinct states, and brains can use this capability. Processes of neuron growth and maintenance use chemical signalling between cell bodies and synapses, ferrying chemical messengers along microtubules and actin fibres within cells. These processes are computations which, while slower than neural electrical signalling, could allow any neuron to change its state over intervals of seconds or minutes. Based on its state, a single neuron can selectively de-activate some of its synapses, sculpting a dynamic neural net from the static neural connections of the brain. Without this dynamic selection, the static neural networks in brains are too amorphous and dilute to do the computations of neural cognitive models. The use of multi-state neurons in animal brains is illustrated in hierarchical Bayesian object recognition. Multi-state neurons may support a design which is more efficient than two-state neurons, and scales better as object complexity increases. Brains could have evolved to use multi-state neurons. Multi-state neurons could be used in artificial neural networks, to support a kind of non-Hebbian learning which is faster, more focused, and more controllable than traditional neural net learning. This possibility has not yet been explored in computational models.
Accurately predicting individual neurons' responses and spatial functional properties in complex visual tasks remains a key challenge in understanding neural computation. Existing whole-brain connectome models of Drosophila often rely on parameter assumptions or deep learning approaches, yet remain limited in their ability to reliably predict dynamic neuronal responses. We introduce a Multi-Path Aggregation (MPA) framework, based on neural network steady-state theory, to build whole-brain Visual Function Profiles (VFPs) of Drosophila neurons and predict their responses under diverse visual tasks. Unlike conventional methods relying on redundant parameters, MPA combines visual input features with the whole-brain connectome topology. It uses adjacency matrix powers and finite-path optimization to efficiently predict neuronal function, including ON/OFF polarity, direction selectivity, and responses to complex visual stimuli. Our model achieves a Pearson correlation of 0.84+/-0.12 for ON/OFF responses, outperforming existing methods (0.33+/-0.59), and accurately captures neuron functional properties, including luminance and direction preferences, while allowing single-neuron or population-level blockade simulations. Replacing CNN modules with VFP-derived Lobula Columnar (LC) population responses in a Drosophila simulation enables successful navigation and obstacle avoidance, demonstrating the model's effectiveness in guiding embodied behavior. This study establishes a "connectome-functional profile-behavior" framework, offering a whole-brain quantitative tool to study Drosophila visual computation and a neuron-level guide for brain-inspired intelligence.
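The general idea behind aggregating adjacency matrix powers can be sketched as follows; this is a hedged, minimal illustration of propagating input through paths of bounded length, not the MPA framework's actual formulation, and the function names, the decay factor, and the toy network are assumptions.

```python
import numpy as np

def multipath_response(A, x, K=3, decay=0.5):
    """Aggregate input x through connectome paths of length 1..K.

    A: (n, n) signed synaptic weight matrix (A[i, j] = weight from j to i).
    x: (n,) input drive to each neuron.
    Each extra hop is attenuated by `decay`, so the path series stays finite.
    """
    response = np.zeros_like(x, dtype=float)
    propagated = x.astype(float)
    for _ in range(K):
        propagated = decay * (A @ propagated)  # one more synaptic hop
        response += propagated
    return response

# Toy 3-neuron chain: 0 -> 1 -> 2
A = np.array([[0., 0., 0.],
              [1., 0., 0.],
              [0., 1., 0.]])
r = multipath_response(A, np.array([1., 0., 0.]), K=2)
```

In the toy chain, neuron 1 receives the input after one hop and neuron 2 after two, with geometric attenuation per hop, which is the qualitative behavior a truncated power series of the adjacency matrix produces.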
DEAD-box RNA helicases (DDXs) are essential RNA metabolism regulators that typically unwind dsRNA in an ATP-dependent manner. However, recent studies show some DDXs can also unwind dsRNA without ATP, a phenomenon that remains poorly understood. Here, we developed HelixTriad, a coarse-grained RNA model incorporating Watson-Crick base pairing, base stacking, and electrostatics within a three-bead-per-nucleotide scheme to accurately reproduce experimental RNA melting curves. Molecular dynamics simulations showed that weak, specific DDX3X-dsRNA interactions drive stochastic strand separation without ATP. Free energy analysis revealed that successful unwinding proceeds via high-entropy, strand-displacing intermediates. Furthermore, we introduced Entropy-Unet, a deep learning framework for entropy prediction, which corroborated theoretical estimates and uncovered a hierarchical pattern of entropy contributions. Together, our findings suggest that ATP-independent dsRNA unwinding by DDXs is predominantly entropy-driven, offering new mechanistic insights into RNA helicase versatility.
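Coarse-grained RNA models are typically calibrated against melting curves like those mentioned above; a hedged sketch of the standard two-state van 't Hoff description is below (generic placeholder parameters, not the HelixTriad parameterization).

```python
import math

def fraction_duplex(T, Tm=330.0, dH=-60.0):
    """Two-state van 't Hoff melting curve.

    T, Tm in kelvin; dH in kcal/mol (negative: duplex formation releases
    heat). Returns the fraction of strands in the duplex state; equals
    0.5 exactly at the melting temperature Tm.
    """
    R = 1.987e-3  # gas constant, kcal/(mol*K)
    K_eq = math.exp((dH / R) * (1.0 / Tm - 1.0 / T))
    return K_eq / (1.0 + K_eq)
```

Fitting simulated melting curves of this sigmoidal shape to experimental ones is one common way to tune pairing and stacking strengths in bead-per-nucleotide models.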
The advancement of human healthspan and bioengineering relies heavily on predicting the behavior of complex biological systems. While high-throughput multiomics data is becoming increasingly abundant, converting this data into actionable predictive models remains a bottleneck. High-capacity, data-driven simulation systems are critical in this landscape; unlike classical mechanistic models restricted by prior knowledge, these architectures can infer latent interactions directly from observational data, allowing for the simulation of temporal trajectories and the anticipation of downstream intervention effects in personalized medicine and synthetic biology. To address this challenge, we introduce Neural Ordinary Differential Equations (NODEs) as a dynamic framework for learning the complex interplay between the proteome and metabolome. We applied this framework to time-series data derived from engineered Escherichia coli strains, modeling the continuous dynamics of metabolic pathways. The proposed NODE architecture demonstrates superior performance in capturing system dynamics compared to traditional machine learning pipelines. Our results show a greater than 90% improvement in root mean squared error over baselines across both Limonene (up to 94.38% improvement) and Isopentenol (up to 97.65% improvement) pathway datasets. Furthermore, the NODE models demonstrated a 1000x acceleration in inference time, establishing them as a scalable, high-fidelity tool for the next generation of metabolic engineering and biological discovery.
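The core NODE idea is to learn a vector field dz/dt = f_theta(z) whose integration yields the state trajectory. A minimal, hedged NumPy sketch with untrained random weights and a fixed-step Euler solver is below; production implementations use adaptive solvers and adjoint-based backpropagation (e.g. torchdiffeq), and all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny random MLP standing in for the learned vector field f_theta
W1 = rng.normal(scale=0.1, size=(16, 4))
W2 = rng.normal(scale=0.1, size=(4, 16))

def f_theta(z):
    """Learned right-hand side dz/dt = f_theta(z) (untrained here)."""
    return W2 @ np.tanh(W1 @ z)

def odeint_euler(z0, t0, t1, steps=100):
    """Integrate the neural ODE with fixed-step forward Euler."""
    z, dt = z0.copy(), (t1 - t0) / steps
    for _ in range(steps):
        z = z + dt * f_theta(z)
    return z

z0 = np.ones(4)          # e.g. a 4-dimensional proteome/metabolome state
zT = odeint_euler(z0, 0.0, 1.0)
```

Training would fit the MLP weights so that integrated trajectories match observed time-series; because the model is a continuous flow, predictions at arbitrary intermediate times come for free.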
Enzymes are crucial catalysts that enable a wide range of biochemical reactions. Efficiently identifying specific enzymes from vast protein libraries is essential for advancing biocatalysis. Traditional computational methods for enzyme screening and retrieval are time-consuming and resource-intensive. Recently, deep learning approaches have shown promise. However, these methods focus solely on the interaction between enzymes and reactions, overlooking the inherent hierarchical relationships within each domain. To address these limitations, we introduce FGW-CLIP, a novel contrastive learning framework based on optimizing the fused Gromov-Wasserstein distance. FGW-CLIP incorporates multiple alignments, including inter-domain alignment between reactions and enzymes and intra-domain alignment within enzymes and reactions. By introducing a tailored regularization term, our method minimizes the Gromov-Wasserstein distance between enzyme and reaction spaces, which enhances information integration across these domains. Extensive evaluations demonstrate the superiority of FGW-CLIP in challenging enzyme-reaction tasks. On the widely-used EnzymeMap benchmark, FGW-CLIP achieves state-of-the-art performance in enzyme virtual screening, as measured by BEDROC and EF metrics. Moreover, FGW-CLIP consistently outperforms competing methods across all three splits of ReactZyme, the largest enzyme-reaction benchmark, demonstrating robust generalization to novel enzymes and reactions. These results position FGW-CLIP as a promising framework for enzyme discovery in complex biochemical settings, with strong adaptability across diverse screening scenarios.
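Of the two screening metrics named above, the enrichment factor (EF) is the simpler one; a hedged sketch of one common convention is below (conventions differ in how the top fraction is rounded, and this is not necessarily the benchmark's exact definition).

```python
def enrichment_factor(scores, labels, alpha=0.01):
    """EF@alpha: fraction of actives found in the top-ranked alpha of the
    library, divided by alpha (so random ranking gives EF ~ 1)."""
    ranked = sorted(zip(scores, labels), key=lambda p: p[0], reverse=True)
    n_top = max(1, int(round(alpha * len(ranked))))
    hits_top = sum(lab for _, lab in ranked[:n_top])
    return (hits_top / sum(labels)) / alpha

# 100 candidates, 5 actives all ranked at the top -> maximal enrichment
scores = list(range(100, 0, -1))
labels = [1] * 5 + [0] * 95
ef = enrichment_factor(scores, labels, alpha=0.05)
```

Here the five actives occupy exactly the top 5% of the ranking, so EF@5% reaches its maximum of 1/alpha = 20; BEDROC additionally weights early hits exponentially, which is harder to summarize in a few lines.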
We develop a continuous mathematical model of population dynamics that describes the sequential emergence of new genotypes under limited resources. The framework models genotype density as a nonlinear flow in mutation space, combining transport driven by a time-dependent mutation rate with logistic growth and nonlocal competition. For the advection-reaction regime without reverse mutations, we derive analytical solutions using the method of characteristics and obtain explicit expressions for time-varying carrying capacities and mutation velocities. We analyze how decaying and accelerating mutation rates shape the saturation and propagation of population fronts through level-set geometry. When reverse mutations are included, the system becomes a quasilinear parabolic equation with diffusion in genotype space; numerical experiments show that backward mutation flows stabilize the dynamics and smooth the evolving fronts. The proposed model generalizes classical quasispecies and Crow-Kimura formulations by incorporating logistic regulation, variable mutation rates, and reversible transitions, offering a unified approach to evolutionary processes relevant to virology, bacterial adaptation, and tumor progression.
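The method-of-characteristics step can be illustrated in the simplest constant-coefficient case (the paper itself treats time-dependent mutation rates and carrying capacities); along each characteristic the density obeys a logistic ODE with a closed-form solution. Function names and parameters below are illustrative assumptions.

```python
import math

def logistic_along_characteristic(u0, r, K, t):
    """Closed-form logistic solution u(t) carried along a characteristic,
    for constant growth rate r and carrying capacity K:
        u(t) = K * u0 * e^{rt} / (K + u0 * (e^{rt} - 1)).
    """
    return K * u0 * math.exp(r * t) / (K + u0 * (math.exp(r * t) - 1.0))

def characteristic_position(x0, v, t):
    """Characteristic curve in genotype space for a constant mutation
    velocity v: x(t) = x0 + v*t (time-dependent v replaces v*t by its
    integral)."""
    return x0 + v * t
```

The advection term transports the profile along these curves while the logistic term saturates it toward K, which is the mechanism behind the propagating, saturating fronts described in the abstract.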
Bacteriophage-bacteria interactions are central to microbial ecology, influencing evolution, biogeochemical cycles, and pathogen behavior. Most theoretical models assume static environments and passive bacterial hosts, neglecting the joint effects of bacterial traits and environmental fluctuations on coexistence dynamics. This limitation hinders the prediction of microbial persistence in dynamic ecosystems such as soils. Using a minimal ordinary differential equation framework, we show that the bacterial growth rate and the phage adsorption rate collectively determine three possible ecological outcomes: phage extinction, stable coexistence, or oscillation-induced extinction. Specifically, we demonstrate that environmental fluctuations can suppress destructive oscillations through resonance, promoting coexistence where static models otherwise predict collapse. Counterintuitively, we find that lower bacterial growth rates enhance survival under high infection pressure, elucidating the observed post-infection growth dynamics. These studies reframe bacterial hosts as active builders of ecological dynamics and environmental variation as a potential stabilizing force. Our findings thus bridge a key theory-experiment gap and provide a foundational framework for predicting microbial responses to environmental stress, with potential implications for phage therapy, microbiome management, and climate-impacted community resilience.
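A minimal ODE system of the kind invoked above can be sketched as follows; this is a generic logistic-growth/mass-action model with illustrative parameter values, hedged as a sketch of the modelling style rather than the paper's actual equations.

```python
def simulate(r=0.5, K=1e6, phi=1e-8, beta=50.0, m=0.1,
             B0=1e5, P0=1e4, dt=0.01, T=200.0):
    """Minimal bacteria-phage ODE, integrated with forward Euler:
        dB/dt = r*B*(1 - B/K) - phi*B*P     (bacteria)
        dP/dt = beta*phi*B*P - m*P          (phage)
    r: bacterial growth rate, phi: adsorption rate, beta: burst size,
    m: phage decay rate. Returns final densities (B, P).
    """
    B, P = B0, P0
    for _ in range(int(T / dt)):
        dB = r * B * (1 - B / K) - phi * B * P
        dP = beta * phi * B * P - m * P
        B = max(B + dt * dB, 0.0)
        P = max(P + dt * dP, 0.0)
    return B, P

B_final, P_final = simulate()
```

In this family of models, phage persist only if the bacterial density can exceed the threshold m/(beta*phi); sweeping r and phi reproduces the qualitative outcome classes (phage extinction, coexistence, oscillation-driven collapse) the abstract describes.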
Finite and infinite population models are frequently used in population dynamics. However, their interrelationship is rarely discussed. In this work, we examine the limits of large populations of the Moran process (a finite-population birth-death process) and the replicator equation (an ordinary differential equation) as paradigmatic examples of finite and infinite population models, respectively, both of which are extensively used in population genetics. Except for certain degenerate cases, we completely characterize when these models exhibit similar dynamics, i.e., when there is a one-to-one relation between the stable attractors of the replicator equations and the metastable states of the Moran process. To achieve this goal, we first show that the asymptotic expression for the fixation probability in the Moran process, when the population size is large and individual interaction is almost arbitrary (including cases modeled through d-player game theory), is a convex combination of the asymptotic approximations obtained in the constant fitness case or 2-player game theory. We discuss several examples and the inverse problem, i.e., how to derive a Moran process that is compatible with a given replicator dynamics. In particular, we prove that modeling a Moran process with an inner metastable state may require the use of d-player game theory with possibly large d values, depending on the precise location of the inner equilibrium.
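The constant-fitness fixation probability referenced above has a well-known closed form, which anchors the asymptotic analysis; a short sketch:

```python
def fixation_probability(r, N):
    """Fixation probability of a single mutant with constant relative
    fitness r in a Moran process of population size N (standard formula):
        rho = (1 - 1/r) / (1 - 1/r^N),   rho = 1/N when r = 1.
    """
    if r == 1.0:
        return 1.0 / N
    return (1.0 - 1.0 / r) / (1.0 - 1.0 / r ** N)

# A neutral mutant fixes with probability 1/N; an advantageous mutant
# (r > 1) approaches 1 - 1/r as N grows large.
p_neutral = fixation_probability(1.0, 100)
p_adv = fixation_probability(1.1, 1000)
```

The large-N limit 1 - 1/r for advantageous mutants (and the exponentially small probability for disadvantageous ones) is the constant-fitness building block that the paper's convex-combination result generalizes.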
Background and objective: Spatial transcriptomics provides rich spatial context but lacks sufficient resolution for large-scale causal inference. We developed SpeF-Phixer, a spatially extended phi-mixing framework integrating whole-slide image (WSI)-derived spatial cell distributions with mapped scRNA-seq expression fields to infer directed gene regulatory triplets with spatial coherence. Methods: Using CD103/CD8-immunostained colorectal cancer WSIs and publicly available scRNA-seq datasets, spatial gene fields were constructed around mapped cells and discretized for signed phi-mixing computation. Pairwise dependencies, directional signs, and triplet structures were evaluated through kNN-based neighborhood screening and bootstrap consensus inference. Mediation and convergence were distinguished using generalized additive models (GAMs), with spatial validity assessed by real-null comparisons and database-backed direction checks. Results: Across tissue patches, the pipeline reduced approximately 3.6x10^4 triplet candidates to a reproducible consensus set (approximately 3x10^2 per patch). The downstream edge (Y to Z) showed significant directional bias consistent with curated regulatory databases. Spatial path tracing demonstrated markedly higher coherence for real triplets than for null controls, indicating that inferred chains represent biologically instantiated regulatory flows. Conclusion: SpeF-Phixer extracts spatially coherent, directionally consistent gene regulatory triplets from histological images. This framework bridges single-cell molecular profiles with microenvironmental organization and provides a scalable foundation for constructing spatially informed causal gene networks.
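The phi-mixing dependence at the heart of the pipeline measures how much conditioning on one discretized variable shifts the distribution of another. A hedged, generic sketch of the unsigned quantity is below; the paper's signed estimator, neighborhood screening, and bootstrap layers are not reproduced, and the function name is an assumption.

```python
from collections import Counter

def phi_dependence(x, y):
    """Maximal phi-style dependence between two discretized sequences:
        max over (a, b) of |P(y=b | x=a) - P(y=b)|.
    Returns 0 for empirically independent sequences; larger values mean
    conditioning on x shifts the distribution of y more strongly.
    """
    n = len(x)
    px, py = Counter(x), Counter(y)
    pxy = Counter(zip(x, y))
    best = 0.0
    for a in px:
        for b in py:
            cond = pxy[(a, b)] / px[a]       # P(y=b | x=a)
            best = max(best, abs(cond - py[b] / n))
    return best

# Perfectly coupled binary variables: conditioning pins y down entirely
x = [0, 0, 1, 1] * 25
phi_max = phi_dependence(x, x)
```

For perfectly coupled balanced binary variables this maximum is 0.5 (conditioning moves P(y=b) from 0.5 to 1), while independent sequences give 0; chaining such pairwise dependencies with directional signs is what yields candidate X -> Y -> Z triplets.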
Background: Adolescence is a critical period of brain maturation and heightened vulnerability to cognitive and mental health disorders. Sleep plays a vital role in neurodevelopment, yet the mechanisms linking insufficient sleep to adverse brain and behavioral outcomes remain unclear. The glymphatic system (GS), a brain-wide clearance pathway, may provide a key mechanistic link. Methods: Participants from the Adolescent Brain Cognitive Development (ABCD) Study (n = 6,800; age ~11 years) were categorized into sleep-sufficient (>=9 h/night) and sleep-insufficient (<9 h/night) groups. Linear models tested associations among sleep, perivascular space (PVS) burden, brain volumes, and behavioral outcomes. Mediation analyses evaluated whether PVS burden explained sleep-related effects. Results: Adolescents with insufficient sleep exhibited significantly greater PVS burden, reduced cortical, subcortical, and white matter volumes, poorer cognitive performance across multiple domains (largest effect in crystallized intelligence), and elevated psychopathology (largest effect in general problems). Sleep duration and quality were strongly associated with PVS burden. Mediation analyses revealed that PVS burden partially mediated sleep effects on cognition and mental health, with indirect proportions up to 10.9%. Sequential models suggested a pathway from sleep -> PVS -> brain volume -> behavior as the most plausible route. Conclusions: Insufficient sleep during adolescence is linked to glymphatic dysfunction, reflected by increased PVS burden, which partially accounts for adverse effects on brain structure, cognition, and mental health. These findings highlight the GS as a potential mechanistic pathway and imaging biomarker, underscoring the importance of promoting adequate sleep to support neurodevelopment and mental health.
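The mediation logic above (indirect effect and proportion mediated) can be sketched with the standard product-of-coefficients approach on synthetic data; the variable names and effect sizes below are illustrative assumptions, not ABCD Study estimates.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Synthetic data with a known mediation path X -> M -> Y
X = rng.normal(size=n)                      # e.g. sleep duration
M = 0.5 * X + rng.normal(size=n)            # mediator, e.g. PVS burden
Y = 0.3 * M + 0.2 * X + rng.normal(size=n)  # outcome, e.g. cognition

def ols(y, *cols):
    """Least-squares coefficients (intercept first) via lstsq."""
    A = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(A, y, rcond=None)[0]

a = ols(M, X)[1]                # path X -> M
_, c_prime, b = ols(Y, X, M)    # direct effect c' and path M -> Y
indirect = a * b                # mediated (indirect) effect
prop_mediated = indirect / (indirect + c_prime)
```

With these generating coefficients the true indirect effect is 0.5 * 0.3 = 0.15 and the proportion mediated is 0.15 / 0.35, about 43%; the same arithmetic, applied to fitted linear models, yields the "indirect proportions up to 10.9%" reported in the abstract.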
In any ecosystem, the conditions of the environment and the characteristics of the species that inhabit it are entangled, co-evolving in space and time. We introduce a model that couples active agents with a dynamic environment, interpreted as a nutrient source. Agents are persistent random walkers that gather food from the environment and store it in an inner energy depot. This energy is used for self-propulsion, metabolic expenses, and reproduction. The environment is a two-dimensional surface divided into patches, each of them producing food. Thus, population size and resource distribution become emergent properties of the system. Combining simulations with an analytical treatment of limiting cases, we show that the system exhibits distinct phases separating quasi-static and highly motile regimes. We observe that, in general, population sizes are inversely proportional to the average energy per agent. Furthermore, we find that, counter-intuitively, reduced access to resources or increased metabolic expenditure can lead to a larger population size. The proposed theoretical framework provides a link between active matter and movement ecology, allowing investigation of short- versus long-term strategies for resource exploitation and rationing, as well as sedentary versus wandering strategies. The introduced approach may serve as a tool to describe real-world ecological systems and to test environmental strategies to prevent species extinction.
Lung cancer is a primary contributor to cancer-related mortality globally, highlighting the necessity for precise early detection of pulmonary nodules through low-dose CT (LDCT) imaging. Deep learning methods have improved nodule detection and classification; however, their performance is frequently limited by the availability of annotated data and variability among imaging centers. This research presents a CT-driven, semi-supervised framework utilizing the Inf-Net architecture to enhance lung nodule analysis with minimal annotation. The model incorporates multi-scale feature aggregation, Reverse Attention refinement, and pseudo-labeling to efficiently utilize unlabeled CT slices. Experiments conducted on subsets of the LUNA16 dataset indicate that the supervised Inf-Net attains a score of 0.825 on 10,000 labeled slices. In contrast, the semi-supervised variant achieves a score of 0.784 on 20,000 slices that include both labeled and pseudo-labeled data, thus surpassing its supervised baseline of 0.755. This study presents a conceptual framework for the integration of genomic biomarkers with CT-derived features, facilitating the development of future multimodal, biologically informed CAD systems. The proposed semi-supervised Inf-Net framework improves CT-based lung nodule assessment and lays the groundwork for flexible multi-omics diagnostic models.
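The pseudo-labeling step mentioned above typically keeps only high-confidence predictions on unlabeled slices; a hedged, minimal sketch of that selection rule follows (the threshold and shapes are illustrative, not the paper's settings).

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.95):
    """Keep unlabeled samples whose top predicted class probability
    exceeds `threshold`; return (selected indices, hard labels).

    probs: (n_samples, n_classes) softmax outputs on unlabeled data.
    """
    conf = probs.max(axis=1)
    idx = np.where(conf >= threshold)[0]
    return idx, probs[idx].argmax(axis=1)

probs = np.array([[0.98, 0.02],   # confident class 0 -> kept
                  [0.60, 0.40],   # uncertain     -> discarded
                  [0.01, 0.99]])  # confident class 1 -> kept
idx, labels = select_pseudo_labels(probs)
```

The kept (slice, hard label) pairs are then folded into the next training round alongside the annotated data, which is how the 20,000-slice semi-supervised variant extends its 10,000-slice labeled subset.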
Recent mosquito-borne outbreaks have revealed vulnerabilities in our abatement programmes, raising concerns about how abatement districts should choose optimal future control strategies. Spatial dissemination of vector-borne disease is strongly shaped by the movement of both hosts and mosquitoes, creating substantial overlap between vector activity and pathogen spread. We developed a mathematical model for Culex mosquito dynamics in a patchy landscape, integrating entomological observations, weather-driven factors, and the vector control practices of the Northwest Mosquito Abatement District (NWMAD) in Cook County, Illinois. By coupling a temperature-driven multi-patch ODE model with NWMAD's adulticide and larvicide interventions, we investigated how spatial heterogeneity and control timing influence mosquito abundance. We also evaluated how mosquito dispersal modifies intervention effectiveness by comparing single-patch and two-patch model outcomes. Our results showed that models ignoring spatial connectivity can substantially overestimate the impact of interventions or misidentify the thresholds of vector persistence. Through numerical simulations, we analysed continuous and pulsatile control approaches under varying spatial and temporal configurations. These findings provide insight into optimal strategies for managing Culex populations and mitigating mosquito-borne disease risk in weather-driven, spatially connected environments across Cook County, Illinois.
Intracellular compartmentalization of proteins underpins their function and the metabolic processes they sustain. Various mass spectrometry-based proteomics methods (subcellular spatial proteomics) now allow high-throughput subcellular protein localization. Yet, the curation, analysis and interpretation of these data remain challenging, particularly in non-model organisms where establishing reliable marker proteins is difficult, and in contexts where experimental replication and subcellular fractionation are constrained. Here, we develop FSPmix, a semi-supervised functional clustering method implemented as an open-source R package, which leverages partial annotations from a subset of marker proteins to predict protein subcellular localization from subcellular spatial proteomics data. This method explicitly assumes that protein signatures vary smoothly across subcellular fractions, enabling more robust inference under low signal-to-noise data regimes. We applied FSPmix to a subcellular proteomics dataset from a marine diatom, allowing us to assign probabilistic localizations to proteins and uncover potentially new protein functions. Altogether, this work lays the foundation for more robust statistical analysis and interpretation of subcellular proteomics datasets, particularly in understudied organisms.
Within a continuous-time, stochastic model of single-cell size homeostasis, we study how the structure of feedback from size to growth rates and cell-cycle progression shapes overall size dynamics, both within and across cell cycles. We focus on a model in which the feedback from cell size to these other processes occurs only through the size deviations, defined as the difference between the absolute size and the progression through the cell cycle. In a linear regime of this model, the dynamics reduce to a stochastically forced simple harmonic oscillator, yielding closed-form expressions for mother-daughter size correlations. We compare these to the higher-order regression coefficients that measure the size memory over many generations. Our analysis reveals how the interplay between cell-cycle timing and intrinsic fluctuations shapes the apparent coarse-grained size control strategy, and, in particular, shows that coarse-grained correlations may not reflect the mechanistic feedback structure. We compare this model to a more commonly used approach in which the coarse-grained dynamics are hard-coded into the model, so that a first-order autoregressive model for sizes describes the size dynamics exactly and therefore reflects the feedback structure more directly.
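The hard-coded coarse-grained alternative mentioned at the end is a first-order autoregressive (AR(1)) size model; a hedged sketch shows how mother-daughter regression slopes recover its feedback parameter and decay geometrically over generations (parameters illustrative, not from the paper).

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_sizes(a=0.5, sigma=0.1, n=20000):
    """Coarse-grained AR(1) model for size deviations across generations:
        s_{k+1} = a * s_k + noise.
    a = 0.5 corresponds to the classic 'adder' regression slope."""
    s = np.zeros(n)
    for k in range(n - 1):
        s[k + 1] = a * s[k] + rng.normal(scale=sigma)
    return s

s = simulate_sizes()
# Mother-daughter regression recovers a; the slope over g generations
# decays as a**g, so size memory fades geometrically.
slope1 = np.polyfit(s[:-1], s[1:], 1)[0]
slope2 = np.polyfit(s[:-2], s[2:], 1)[0]
```

In the AR(1) model these multi-generation slopes are exactly powers of the one-generation slope; the oscillator model studied in the paper breaks this pattern, which is why coarse-grained correlations can misrepresent the underlying feedback.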
In an emerging pandemic, policymakers need to make important decisions with limited information, for example choosing between a mitigation, suppression or elimination strategy. These strategies may require trade-offs to be made between the health impact of the pandemic and the economic costs of the interventions introduced in response. Mathematical models are a useful tool that can help understand the consequences of alternative policy options on the future dynamics and impact of the epidemic. Most models have focused on direct health impacts, neglecting the economic costs of control measures. Here, we introduce a model framework that captures both health and economic costs. We use this framework to compare the expected aggregate costs of mitigation, suppression and elimination strategies, across a range of different epidemiological and economic parameters. We find that for diseases with low severity, mitigation tends to be the most cost-effective option. For more severe diseases, suppression tends to be most cost-effective if the basic reproduction number R0 is relatively low, while elimination tends to be more cost-effective if R0 is high. We use the example of New Zealand's elimination response to the Covid-19 pandemic in 2020 to anchor our framework to a real-world case study. We find that parameter estimates for Covid-19 in New Zealand put it close to or above the threshold at which elimination becomes more cost-effective than mitigation. We conclude that our proposed framework holds promise as a decision-support tool for future pandemic threats, although further work is needed to account for population heterogeneity and other factors relevant to decision-making.
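The kind of trade-off described above can be caricatured with the classical final-size relation plus a toy decision rule; this is a deliberately simplified illustration with assumed cost inputs, not the paper's framework.

```python
import math

def final_size(R0, tol=1e-12):
    """Attack rate z of an uncontrolled epidemic from the classical
    final-size relation z = 1 - exp(-R0 * z), via fixed-point iteration."""
    z = 0.9
    for _ in range(1000):
        z_new = 1.0 - math.exp(-R0 * z)
        if abs(z_new - z) < tol:
            break
        z = z_new
    return z

def mitigation_vs_elimination(R0, cost_per_case, elimination_cost):
    """Toy decision rule: compare the health cost of letting the epidemic
    run (attack rate * cost per case) against a fixed economic cost of
    elimination. Both cost inputs are illustrative placeholders."""
    health_cost = final_size(R0) * cost_per_case
    return "elimination" if elimination_cost < health_cost else "mitigation"
```

Even this caricature reproduces the qualitative threshold behavior in the abstract: as R0 or disease severity (cost per case) rises, the health burden of mitigation grows and elimination's fixed economic cost becomes the cheaper option.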
Accurate prediction of protein-protein binding affinity is vital for understanding molecular interactions and designing therapeutics. We adapt Boltz-2, a state-of-the-art structure-based protein-ligand affinity predictor, for protein-protein affinity regression and evaluate it on two datasets, TCR3d and PPB-affinity. Despite high structural accuracy, Boltz-2-PPI underperforms relative to sequence-based alternatives in both small- and larger-scale data regimes. Combining embeddings from Boltz-2-PPI with sequence-based embeddings yields complementary improvements, particularly for weaker sequence models, suggesting different signals are learned by sequence- and structure-based models. Our results echo known biases associated with training with structural data and suggest that current structure-based representations are not primed for performant affinity prediction.
Personalized neoantigen vaccines represent a promising immunotherapy approach that harnesses tumor-specific antigens to stimulate anti-tumor immune responses. However, the design of these vaccines requires sophisticated computational workflows to predict and prioritize neoantigen candidates from patient sequencing data, coupled with rigorous review to ensure candidate quality. While numerous computational tools exist for neoantigen prediction, to our knowledge, there are no established protocols detailing the complete process from raw sequencing data through systematic candidate selection. Here, we present ImmunoNX (Immunogenomics Neoantigen eXplorer), an end-to-end protocol for neoantigen prediction and vaccine design that has supported over 185 patients across 11 clinical trials. The workflow integrates tumor DNA/RNA and matched normal DNA sequencing data through a computational pipeline built with Workflow Definition Language (WDL) and executed via Cromwell on Google Cloud Platform. ImmunoNX employs consensus-based variant calling, in-silico HLA typing, and pVACtools for neoantigen prediction. Additionally, we describe a two-stage immunogenomics review process with prioritization of neoantigen candidates, enabled by pVACview, followed by manual assessment of variants using the Integrative Genomics Viewer (IGV). This workflow enables vaccine design in under three months. We demonstrate the protocol using the HCC1395 breast cancer cell line dataset, identifying 78 high-confidence neoantigen candidates from 322 initial predictions. Although demonstrated here for vaccine development, this workflow can be adapted for diverse neoantigen therapies and experiments. Therefore, this protocol provides the research community with a reproducible, version-controlled framework for designing personalized neoantigen vaccines, supported by detailed documentation, example datasets, and open-source code.
Summary: We present needLR, a structural variant (SV) annotation tool that can be used for filtering and prioritization of candidate pathogenic SVs from long-read sequencing data using population allele frequencies, annotations for genomic context, and gene-phenotype associations. When using population data from 500 presumably healthy individuals to evaluate nine test cases with known pathogenic SVs, needLR assigned allele frequencies to over 97.5% of all detected SVs and reduced the average number of novel genic SVs to 121 per case while retaining all known pathogenic variants. Availability and Implementation: needLR is implemented in bash with dependencies including Truvari v4.2.2, BEDTools v2.31.1, and BCFtools v1.19. Source code, documentation, and pre-computed population allele frequency data are freely available at this https URL under an MIT license.