Geisel School of Medicine at Dartmouth
Understanding the role of human behavior in shaping environmental outcomes is crucial for addressing global challenges such as climate change. Environmental systems are influenced not only by natural factors like temperature, but also by human decisions regarding mitigation efforts, which are often based on forecasts or predictions about future environmental conditions. Over time, different outcomes can emerge, including scenarios where the environment deteriorates despite efforts to mitigate, or where successful mitigation leads to environmental resilience. Additionally, fluctuations in the level of human participation in mitigation can occur, reflecting shifts in collective behavior. In this study, we consider a variety of human mitigation decisions, along with the feedback loop created as environmental changes in turn reshape human behavior. While these outcomes are based on simplified models, they offer important insights into the dynamics of human decision-making and the factors that influence effective action in the context of environmental sustainability. This study aims to examine key social dynamics influencing society's response to a worsening climate. While prior work concludes that homophily prompts greater warming unconditionally, this model finds that homophily can prevent catastrophic effects given a poor initial environmental state. Assuming that poor countries have the resources to do so, a consensus within the poor group to defect from the strategy of the rich group (who are generally incentivized to continue "business as usual") can frequently prevent the vegetation proportion from converging to 0.
Neural synchronization is central to cognition. However, incomplete synchronization often produces chimera states where coherent and incoherent dynamics coexist. While previous studies have explored such patterns using networks of coupled oscillators, it remains unclear why neurons commit to communication or how chimera states persist. Here, we investigate the coevolution of neuronal phases and communication strategies on directed, weighted networks, where interaction payoffs depend on phase alignment and may be asymmetric due to unilateral communication. We find that both connection weights and directionality influence the stability of communicative strategies -- and, consequently, full synchronization -- as well as the strategic nature of neuronal interactions. Applying our framework to the C. elegans connectome, we show that emergent payoff structures, such as the snowdrift game, underpin the formation of chimera states. Our computational results demonstrate a promising neurogame-theoretic perspective, leveraging evolutionary graph theory to shed light on mechanisms of neuronal coordination beyond classical synchronization models.
Using picture description speech for dementia detection has been studied for 30 years. Despite this long history, previous models focus on identifying differences in speech patterns between healthy subjects and patients with dementia but do not utilize the picture information directly. In this paper, we propose the first dementia detection models that take both the picture and the description texts as inputs and incorporate knowledge from large pre-trained image-text alignment models. We observe differences between dementia and healthy samples in terms of the text's relevance to the picture and the focused areas of the picture. We thus hypothesize that such differences can be used to enhance dementia detection accuracy. Specifically, we use the text's relevance to the picture to rank and filter the sentences of the samples. We also identify focused areas of the picture as topics and categorize the sentences according to those focused areas. We propose three advanced models that pre-process the samples based on their relevance to the picture, sub-images, and focused areas. The evaluation results show that our advanced models, with knowledge of the picture and large image-text alignment models, achieve state-of-the-art performance with a best detection accuracy of 83.44%, higher than the text-only baseline model at 79.91%. Lastly, we visualize the sample and picture results to explain the advantages of our models.
Researchers developed an annotation system leveraging Large Language Models (LLMs) to analyze technology acceptance from unstructured text, demonstrating its high consistency across multiple runs. The system achieved accuracy comparable to, and for some variables even surpassing, agreement among PhD-level human experts, providing a scalable and cost-effective approach to evaluating user attitudes towards technology.
The adequate use of information measured continuously over a period of time represents a methodological challenge. In recent decades, many traditional statistical procedures have been extended to accommodate such functional data. The binary classification problem, which aims to correctly identify units as positive or negative based on marker values, is no exception. The crucial point for making binary classifications based on a marker is to establish an order in the marker values, which is not immediate when these values are presented as functions. Here, we argue that if the marker is related to the characteristic under study, a trajectory from a positive participant should be more similar to trajectories from the positive population than to those drawn from the negative one. With this criterion, a classification procedure based on the distance between the involved functions is proposed. In addition, we propose a fully non-parametric estimator for this so-called probability-based criterion, PBC. We explore its asymptotic properties and its finite-sample behavior through an extensive Monte Carlo study. The observed results suggest that the proposed methodology works adequately, and frequently better than its competitors, across a wide variety of situations when the sample sizes in both the training and the testing cohorts are adequate. The practical use of the proposal is illustrated with a real-world dataset. As online supplementary material, the manuscript includes a document with further simulations and additional comments. An R function wrapping up the implemented routines is also provided.
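A minimal sketch of this distance-based classification rule, assuming trajectories sampled on a common grid and an L2 distance between sampled curves (illustrative choices, not the paper's exact PBC estimator):

```python
import numpy as np

def classify_trajectory(x, positives, negatives):
    """Label a trajectory as positive (1) if it is, on average, closer to the
    positive training curves than to the negative ones. Distances here are L2
    on a shared sampling grid -- an illustrative simplification."""
    d_pos = np.mean([np.linalg.norm(x - p) for p in positives])
    d_neg = np.mean([np.linalg.norm(x - n) for n in negatives])
    return 1 if d_pos < d_neg else 0

# toy functional data: noisy sine curves (positive) vs. cosine curves (negative)
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
positives = [np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(50) for _ in range(20)]
negatives = [np.cos(2 * np.pi * t) + 0.1 * rng.standard_normal(50) for _ in range(20)]

new_curve = np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(50)
print(classify_trajectory(new_curve, positives, negatives))  # → 1
```

A fully non-parametric estimator would replace the plain averages above with estimated probabilities of being closer to each population, but the ordering idea is the same.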
Feedback loops between the population dynamics of individuals and their ecological environment are ubiquitous in nature, and have shown profound effects on the resulting eco-evolutionary dynamics. By incorporating a linear environmental feedback law into the replicator dynamics of two-player games, recent theoretical studies have shed light on the oscillating dynamics of social dilemmas. However, the detailed effects of more general nonlinear feedback loops in multi-player games, which are more common especially in microbial systems, remain unclear. Here, we focus on ecological public goods games with environmental feedbacks driven by a nonlinear selection gradient. Unlike in previous models, multiple segments of stable and unstable equilibrium manifolds can emerge from the population dynamical system. We find that a larger relative asymmetrical feedback speed for group interactions centered on cooperators not only accelerates the convergence of stable manifolds, but also increases the attraction basins of these stable manifolds. Furthermore, our work offers an innovative manifold control approach: by designing appropriate switching control laws, we are able to steer the eco-evolutionary dynamics to any desired population state. Our mathematical framework is an important generalization of and complement to coevolutionary game dynamics, and also fills a theoretical gap in guiding the widespread problem of population state control in microbial experiments.
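A minimal Euler-integration sketch of a coupled strategy-environment system of the kind described, where x is the cooperator fraction and n the environmental state (the linear feedback law and all parameter values below are illustrative assumptions, not the paper's nonlinear model):

```python
import numpy as np

def simulate(x0=0.2, n0=0.6, theta=2.0, eps=0.1, dt=0.01, steps=20000):
    """Euler integration of a toy replicator-environment feedback system.
    Cooperation is favored when the environment n is degraded (n < 1/2), and
    the environment improves in proportion to the cooperator fraction x."""
    x, n = x0, n0
    xs, ns = [x], [n]
    for _ in range(steps):
        gain = 1.0 - 2.0 * n                      # payoff advantage of cooperating
        x += dt * x * (1 - x) * gain              # replicator dynamics
        n += dt * eps * n * (1 - n) * (theta * x - (1 - x))  # environmental feedback
        xs.append(x)
        ns.append(n)
    return np.array(xs), np.array(ns)

xs, ns = simulate()
print(xs[-1], ns[-1])  # final cooperator fraction and environmental state
```

With a feedback speed eps much smaller than the strategy dynamics, this class of system can exhibit persistent oscillations between cooperative and degraded regimes.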
Colonoscopy screening effectively identifies and removes polyps before they progress to colorectal cancer (CRC), but current follow-up guidelines rely primarily on histopathological features, overlooking other important CRC risk factors. Variability in polyp characterization among pathologists also hinders consistent surveillance decisions. Advances in digital pathology and deep learning enable the integration of pathology slides and medical records for more accurate CRC risk prediction. Using data from the New Hampshire Colonoscopy Registry, including longitudinal follow-up, we adapted a transformer-based model for histopathology image analysis to predict 5-year CRC risk. We further explored multi-modal fusion strategies to combine clinical records with deep learning-derived image features. Training the model to predict intermediate clinical variables improved 5-year CRC risk prediction (AUC = 0.630) compared to direct prediction (AUC = 0.615, p = 0.013). Incorporating both imaging and non-imaging data, without requiring manual slide review, further improved performance (AUC = 0.674) compared to traditional features from colonoscopy and microscopy reports (AUC = 0.655, p = 0.001). These results highlight the value of integrating diverse data modalities with computational methods to enhance CRC risk stratification.
Objective: Currently, a major limitation for natural language processing (NLP) analyses in clinical applications is that a concept can be referenced in various forms across different texts. This paper introduces Multi-Ontology Refined Embeddings (MORE), a novel hybrid framework for incorporating domain knowledge from multiple ontologies into a distributional semantic model learned from a corpus of clinical text. Materials and Methods: We use the RadCore and MIMIC-III free-text datasets for the corpus-based component of MORE. For the ontology-based part, we use the Medical Subject Headings (MeSH) ontology and three state-of-the-art ontology-based similarity measures. In our approach, we propose a new learning objective, modified from the sigmoid cross-entropy objective function. Results and Discussion: We evaluate the quality of the generated word embeddings using two established datasets of semantic similarities among biomedical concept pairs. On the first dataset, with 29 concept pairs and similarity scores established by physicians and medical coders, MORE's similarity scores have the highest combined correlation (0.633), which is 5.0% higher than that of the baseline model and 12.4% higher than that of the best ontology-based similarity measure. On the second dataset, with 449 concept pairs, MORE's similarity scores have a correlation of 0.481 with the average of four medical residents' similarity ratings, which outperforms the skip-gram model by 8.1% and the best ontology measure by 6.9%.
This paper investigates the evolution of strategic play where players drawn from a finite well-mixed population are offered the opportunity to play in a public goods game. All players accept the offer. However, due to the possibility of unforeseen circumstances, each player has a fixed probability of being unable to participate in the game, unlike similar models which assume voluntary participation. We first study how prescribed stochastic opting-out affects cooperation in finite populations. In this model, cooperation is favored by natural selection over both neutral drift and defection if the return on investment exceeds a threshold value defined solely by the population size, game size, and a player's probability of opting out. Notably, increasing the probability that each player is unable to fulfill her promise of participating in the public goods game facilitates natural selection of cooperators. We also use adaptive dynamics to study the coevolution of cooperation and opting-out behavior. Assuming rare mutations that differ only minutely from the resident population, an analysis based on adaptive dynamics suggests that over time the population will tend towards complete defection and non-participation, from which participating cooperators stand a chance to emerge by neutral drift. Nevertheless, increasing the probability of non-participation decreases the rate at which the population tends towards defection when participating. Our work sheds light on how stochastic opting-out emerges in the first place and on its role in the evolution of cooperation.
Renal cell carcinoma (RCC) is the most common renal cancer in adults. The histopathologic classification of RCC is essential for diagnosis, prognosis, and management of patients. Recognition and classification of complex histologic patterns of RCC on biopsy and surgical resection slides under a microscope remains a heavily specialized, error-prone, and time-consuming task for pathologists. In this study, we developed a deep neural network model that can accurately classify digitized surgical resection slides and biopsy slides into five related classes: clear cell RCC, papillary RCC, chromophobe RCC, renal oncocytoma, and normal. In addition to the whole-slide classification pipeline, we visualized the identified indicative regions and features on slides by reprocessing patch-level classification results to ensure the explainability of our diagnostic model. We evaluated our model on independent test sets of 78 surgical resection whole slides and 79 biopsy slides from our tertiary medical institution, and 69 randomly selected surgical resection slides from The Cancer Genome Atlas (TCGA) database. The average area under the curve (AUC) of our classifier on the internal resection slides, internal biopsy slides, and external TCGA slides is 0.98, 0.98, and 0.99, respectively. Our results suggest the high generalizability of our approach across different data sources and specimen types. More importantly, our model has the potential to assist pathologists by (1) automatically pre-screening slides to reduce false-negative cases, (2) highlighting regions of importance on digitized slides to accelerate diagnosis, and (3) providing objective and accurate diagnoses as a second opinion.
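Whole-slide pipelines of this kind typically aggregate patch-level predictions into a slide-level call; a minimal sketch of one common aggregation rule, probability averaging (an illustrative choice, not necessarily this model's exact rule, and the class names are taken from the abstract):

```python
import numpy as np

CLASSES = ["clear_cell", "papillary", "chromophobe", "oncocytoma", "normal"]

def slide_prediction(patch_probs):
    """Aggregate per-patch softmax outputs (n_patches x 5) into a slide-level
    label by averaging class probabilities across all patches."""
    mean_probs = np.asarray(patch_probs).mean(axis=0)
    return CLASSES[int(np.argmax(mean_probs))], mean_probs

# three toy patches: two strongly clear-cell-like, one ambiguous
patches = np.array([[0.70, 0.10, 0.10, 0.05, 0.05],
                    [0.60, 0.20, 0.10, 0.05, 0.05],
                    [0.20, 0.10, 0.10, 0.10, 0.50]])
label, probs = slide_prediction(patches)
print(label)  # → clear_cell
```

Keeping the per-patch probabilities around (rather than only the slide label) is also what enables the heatmap-style visualization of indicative regions mentioned above.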
Background Brain tumours are the most common solid malignancies in children, encompassing diverse histological and molecular subtypes, imaging features, and outcomes. Paediatric brain tumours (PBTs), including high- and low-grade gliomas (HGG, LGG), medulloblastomas (MB), ependymomas, and rarer forms, pose diagnostic and therapeutic challenges. Deep learning (DL)-based segmentation offers promising tools for tumour delineation, yet its performance across heterogeneous PBT subtypes and MRI protocols remains uncertain. Methods A retrospective single-centre cohort of 174 paediatric patients with HGG, LGG, MB, ependymomas, and other rarer subtypes was used. MRI sequences included T1, T1 post-contrast (T1-C), T2, and FLAIR. Manual annotations were provided for four tumour subregions: whole tumour (WT), T2-hyperintensity (T2H), enhancing tumour (ET), and cystic component (CC). A 3D nnU-Net model was trained and tested (121/53 split), with segmentation performance assessed using the Dice similarity coefficient (DSC) and compared against intra- and inter-rater variability. Results The model achieved robust performance for WT and T2H (mean DSC: 0.85), comparable to human annotator variability (mean DSC: 0.86). ET segmentation was moderately accurate (mean DSC: 0.75), while CC performance was poor. Segmentation accuracy varied by tumour type, MRI sequence combination, and location. Notably, T1, T1-C, and T2 alone produced results nearly equivalent to the full protocol. Conclusions DL-based segmentation is feasible for PBTs, particularly for T2H and WT. Challenges remain for ET and CC segmentation, highlighting the need for further refinement. These findings support the potential of protocol simplification and automation to enhance volumetric assessment and streamline paediatric neuro-oncology workflows.
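The Dice similarity coefficient (DSC) used for evaluation has a standard definition, DSC = 2|A∩B| / (|A| + |B|); a minimal sketch for binary segmentation masks:

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |pred ∩ truth| / (|pred| + |truth|).
    eps guards against division by zero when both masks are empty."""
    pred = np.asarray(pred, bool)
    truth = np.asarray(truth, bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

a = np.zeros((4, 4), int); a[1:3, 1:3] = 1   # 4 foreground voxels
b = np.zeros((4, 4), int); b[1:3, 1:4] = 1   # 6 foreground voxels, 4 overlapping
print(round(dice(a, b), 3))  # → 0.8
```

In 3D the same formula applies voxel-wise per tumour subregion (WT, T2H, ET, CC), and the reported means are averages over test cases.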
The rise and spread of antibiotic resistance drives up medical costs and mortality, especially for life-threatening bacterial infections, thereby posing a major threat to global health. The prescribing behavior of physicians is one of the important factors impacting the underlying dynamics of resistance evolution. It remains unclear when individual prescribing decisions can lead to the overuse of antibiotics at the population level, and whether the population optimum of antibiotic use can be reached through an adaptive social learning process that governs the evolution of prescribing norms. Here we study a behavior-disease interaction model that specifically incorporates a feedback loop between prescribing behavior and resistance evolution. We identify the conditions under which antibiotic resistance can evolve as a result of the tragedy of the commons in antibiotic overuse. Furthermore, we show that fast social learning, which adjusts prescribing behavior in prompt response to resistance evolution, can quickly steer the system away from cyclic oscillations of antibiotic usage and towards the stable population optimum of prescribing. Our work demonstrates that providing prescribers with prompt feedback on the collective consequences of treatment decisions and the costs associated with resistance helps curb the overuse of antibiotics.
Mediation analyses play important roles in making causal inference in biomedical research to examine causal pathways that may be mediated by one or more intermediate variables (i.e., mediators). Although mediation frameworks such as counterfactual-outcomes (i.e., potential-outcomes) models and traditional linear mediation models are well established, little effort has been devoted to dealing with mediators with zero-inflated structures due to challenges associated with excessive zeros. We develop a novel mediation modeling approach to address zero-inflated mediators containing true zeros and false zeros. The new approach can decompose the total mediation effect into two components induced by zero-inflated structures: the first component is attributable to the change in the mediator on its numerical scale, which is a sum of two causal pathways, and the second component is attributable only to its binary change from zero to a non-zero status. An extensive simulation study is conducted to assess the performance of the approach, showing that it outperforms existing standard causal mediation analysis approaches. We also showcase the application of the proposed approach to a real study in comparison with a standard causal mediation analysis approach.
Understanding how cooperation evolves in structured populations remains a fundamental question across diverse disciplines. The problem of cooperation typically involves pairwise or group interactions among individuals. While prior studies have extensively investigated the role of networks in shaping cooperative dynamics, the influence of tie or connection strengths between individuals has not been fully understood. Here, we introduce a quenched mean-field based framework for analyzing both pairwise and group dilemmas on any weighted network, providing interpretable conditions required for favoring cooperation. Building on these theoretical advances, we find that degree-inverse weighted social ties -- reinforcing tie strengths between peripheral nodes while weakening those between hubs -- robustly promote cooperation in both pairwise and group dilemmas. Importantly, this configuration enables heterogeneous networks to outperform homogeneous ones in the fixation of cooperation, thereby adding nuance to the conventional view that degree heterogeneity inhibits cooperative behavior under local stochastic strategy updates. We further test the generality of degree-inverse weighted social ties in promoting cooperation on 30,000 random networks and 13 empirical networks drawn from real-world systems. Finally, we unveil the underlying mechanism by examining the formation and evolution of cooperative ties under social ties with degree-inverse weights. Our systematic analyses provide new insights into how network adjustment of tie strengths can effectively steer structured populations toward cooperative outcomes in biological and social systems.
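One natural reading of "degree-inverse weighted social ties" is to set the strength of an edge (i, j) proportional to 1/(k_i k_j); a minimal sketch under that assumption (the paper's exact weighting rule may differ):

```python
import numpy as np

def degree_inverse_weights(adj):
    """Assign each edge (i, j) of an unweighted network a weight proportional
    to 1/(k_i * k_j), so ties between peripheral (low-degree) nodes are strong
    while ties between hubs are weak; rescaled so the strongest tie is 1."""
    adj = np.asarray(adj, float)
    k = adj.sum(axis=1)                               # node degrees
    w = np.where(adj > 0, 1.0 / np.outer(k, k), 0.0)  # degree-inverse weights
    return w / w.max()

# path network 0-1-2-3: peripheral end ties outweigh the central hub-hub tie
path = np.array([[0, 1, 0, 0],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [0, 0, 1, 0]])
w = degree_inverse_weights(path)
print(w[0, 1], w[1, 2])  # → 1.0 0.5
```

On a heterogeneous network this rule concentrates interaction weight on the periphery, which is precisely the configuration the study reports as cooperation-promoting.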
National surveys of the healthcare system in the United States were conducted to characterize the structure of the healthcare system and investigate the impact of evidence-based innovations in healthcare systems on healthcare services. Administrative data are additionally available to researchers, raising the question of whether inferences about healthcare organizations based on the survey data can be enhanced by incorporating information from auxiliary data. Administrative data can provide information for dealing with under-coverage bias and non-response in surveys and for capturing more sub-populations. In this study, we focus on the use of administrative claims data to improve estimates of finite-population means of survey items. Auxiliary information from the claims data is incorporated using multiple imputation to impute values for non-responding or non-surveyed organizations. We derive multiple versions of the imputation strategy and compare this methodological development to two incumbent approaches: a naïve analysis that ignores the sampling probabilities, and a traditional survey analysis that weights by the inverses of the sampling probabilities. We illustrate the methods using data from The National Survey of Healthcare Organizations and Systems and Centers for Medicare & Medicaid Services Medicare claims data to make inferences about relationships between the characteristics of healthcare organizations and the healthcare services they provide.
Using time-to-event methods such as Cox proportional hazards models, it is well established that unmeasured heterogeneity in exposure or infection risk can lead to downward bias in point estimates of the per-contact vaccine efficacy (VE) in infectious disease trials. In this study, we explore an unreported source of bias, arising from temporally correlated exposure status, that is typically unmeasured and overlooked in standard analyses. Although this form of bias can plausibly affect a wide range of VE trials, it has received limited empirical attention. We develop a mathematical framework to characterize the mechanism of this bias and derive a closed-form approximation to quantify its magnitude without requiring direct measurement of exposure. Our findings show that, under realistic parameter settings, the resulting bias can be substantial. These results suggest that temporally correlated exposure should be recognized as a potentially important factor in the design and analysis of infectious disease vaccine trials.
Speech pause is an effective biomarker in dementia detection. Recent deep learning models have exploited speech pauses to achieve highly accurate dementia detection, but have not addressed the interpretability of speech pauses, i.e., how the positions and lengths of speech pauses affect the result of dementia detection. In this paper, we study the positions and lengths of dementia-sensitive pauses using adversarial learning approaches. Specifically, we first utilize an adversarial attack approach by adding perturbations to the speech pauses of the testing samples, aiming to reduce the confidence levels of the detection model. Then, we apply an adversarial training approach to evaluate the impact of the perturbations in training samples on the detection model. We examine the interpretability from the perspectives of model accuracy, pause context, and pause length. We found that some pauses are more sensitive to dementia than others from the model's perspective, e.g., speech pauses near the verb "is". Increasing the lengths of sensitive pauses or adding sensitive pauses shifts the model's inference towards Alzheimer's Disease (AD), while decreasing the lengths of sensitive pauses or deleting sensitive pauses shifts it towards non-AD.
In this paper, we analyze Twitter signals as a medium for user sentiment to predict the price fluctuations of a small-cap alternative cryptocurrency called ZClassic. We extracted tweets on an hourly basis for a period of 3.5 weeks, classifying each tweet as positive, neutral, or negative. We then compiled these tweets into an hourly sentiment index, creating an unweighted and a weighted index, with the latter giving larger weight to retweets. These two indices, alongside the raw summations of positive, negative, and neutral sentiment, were set against approximately 400 data points of hourly pricing data to train an Extreme Gradient Boosting regression tree model. Price predictions produced by this model were compared to historical price data, with the resulting predictions having a 0.81 correlation with the testing data. Our model's predictions were statistically significant at the p < 0.0001 level. Our model is the first academic proof of concept that social media platforms such as Twitter can serve as powerful social signals for predicting price movements in the highly speculative alternative cryptocurrency, or "alt-coin", market.
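A minimal sketch of the weighted hourly index construction, with scikit-learn's GradientBoostingRegressor standing in for the XGBoost model used in the paper; the (1 + retweets) weighting rule and the tuple layout are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def hourly_index(tweets, weighted=True):
    """Collapse labelled tweets into one sentiment score per hour.
    Each tweet is (hour, label, retweets) with label in {-1, 0, +1}.
    The weighted variant counts a tweet (1 + retweets) times (an assumed rule)."""
    hours = sorted({h for h, _, _ in tweets})
    index = []
    for h in hours:
        scores = [(lab, 1 + rt if weighted else 1) for hh, lab, rt in tweets if hh == h]
        total = sum(wt for _, wt in scores)
        index.append(sum(lab * wt for lab, wt in scores) / total)
    return np.array(index)

tweets = [(0, 1, 3), (0, -1, 0), (1, 1, 0), (1, 0, 2)]
idx = hourly_index(tweets)
print(idx)  # hour 0: (4 - 1) / 5 = 0.6 ; hour 1: 1 / 4 = 0.25

# fit a boosted regression tree on toy features (stand-in for the price model)
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 1))
y = 2.0 * X[:, 0] + 0.1 * rng.standard_normal(200)
model = GradientBoostingRegressor().fit(X, y)
```

In the actual study the feature matrix would hold the lagged sentiment indices and raw sentiment counts, with the next-hour price as the regression target.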
We performed a large lab-in-the-field experiment (2,591 participants across 134 Honduran villages; ten rounds) and tracked how contribution behavior unfolds in fixed, anonymous groups of size five. Contributions separate early into two durable paths, one low and one high, with rare convergence thereafter. High-path players can be identified with strong accuracy early on. Groups that begin with an early majority of above-norm contributors (about 60%) are very likely to finish high. The empirical finding of a bifurcation, consistent with the theory, shows that early, high contributions by socially central people steer groups onto, and help keep them on, a high-cooperation path.
Social distancing, as one of the main non-pharmaceutical interventions, can help slow down the spread of diseases, as in the COVID-19 pandemic. Effective social distancing, unless enforced as drastic lockdowns or mandatory cordon sanitaire, requires consistent, strict collective adherence. However, it remains unknown what determines compliance with social distancing and how that compliance in turn affects disease mitigation. Here, we couple the epidemiological process with an evolutionary game theory model that governs the evolution of social distancing behavior. In our model, we assume individuals act in their own best interest and that their decisions are driven by adaptive social learning of the real-time risk of infection in comparison with the cost of social distancing. We find interesting oscillatory dynamics of social distancing accompanied by waves of infection. Moreover, the oscillatory dynamics are dampened with a nontrivial dependence on model parameters governing decision-making, and gradually cease when cumulative infections exceed the herd immunity threshold. Compared to the scenario without social distancing, we quantify the degree to which social distancing mitigates the epidemic and its dependence on individuals' responsiveness and rationality in their behavior changes. Our work offers new insights into leveraging human behavior in support of pandemic response.
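A minimal sketch of such a coupling: a toy SIR model in which the distancing fraction x evolves by replicator-style social learning of risk versus cost (all functional forms and parameter values are illustrative assumptions, not the paper's model):

```python
import numpy as np

def simulate(beta=0.4, gamma=0.1, cost=0.02, kappa=10.0, dt=0.1, steps=3000):
    """Toy SIR dynamics coupled to an imitation dynamic for the fraction x of
    individuals practicing social distancing. Distancing halves one's exposure;
    its perceived payoff is the infection risk avoided minus a fixed cost."""
    S, I, R, x = 0.99, 0.01, 0.0, 0.05
    hist = []
    for _ in range(steps):
        eff_beta = beta * (1 - 0.5 * x)        # distancing reduces transmission
        gain = 0.5 * beta * I - cost           # risk avoided minus distancing cost
        dS = -eff_beta * S * I
        dI = eff_beta * S * I - gamma * I
        dR = gamma * I
        dx = kappa * x * (1 - x) * gain        # social-learning (imitation) update
        S += dt * dS; I += dt * dI; R += dt * dR; x += dt * dx
        hist.append((S, I, R, x))
    return np.array(hist)

hist = simulate()
print(hist[-1])  # final (S, I, R, distancing fraction)
```

Because distancing only pays off while prevalence is high, x rises during waves and decays between them, which is the qualitative mechanism behind the oscillatory dynamics described above.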