Toursun Synbio
The parallels between protein sequences and natural language in their sequential structures have inspired the application of large language models (LLMs) to protein understanding. Despite the success of LLMs in NLP, their effectiveness in comprehending protein sequences remains an open question, largely due to the absence of datasets linking protein sequences to descriptive text. Researchers have therefore attempted to adapt LLMs for protein understanding by integrating a protein sequence encoder with a pre-trained LLM. However, this adaptation raises a fundamental question: "Can LLMs, originally designed for NLP, effectively comprehend protein sequences as a form of language?" Current datasets fall short of answering this question because they lack direct pairings of protein sequences with text descriptions, which limits the ability to effectively train and evaluate LLMs for protein understanding. To bridge this gap, we introduce ProteinLMDataset, a dataset specifically designed for further self-supervised pretraining and supervised fine-tuning (SFT) of LLMs to enhance their capability for protein sequence comprehension. Specifically, ProteinLMDataset includes 17.46 billion tokens for pretraining and 893,000 instructions for SFT. Additionally, we present ProteinLMBench, the first benchmark dataset consisting of 944 manually verified multiple-choice questions for assessing the protein understanding capabilities of LLMs. ProteinLMBench incorporates protein-related details and sequences in multiple languages, establishing a new standard for evaluating LLMs' abilities in protein comprehension. The large language model InternLM2-7B, pretrained and fine-tuned on ProteinLMDataset, outperforms GPT-4 on ProteinLMBench, achieving the highest accuracy score.
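To make the SFT format concrete, the following is a minimal sketch of how a single instruction record might be flattened into a training prompt. The JSON field names ("instruction", "input", "output"), the prompt template, and the toy sequence are illustrative assumptions; ProteinLMDataset's actual schema may differ.

```python
# Minimal sketch of one SFT instruction record and its flattening into a
# training string. Field names and template are assumptions, not the
# dataset's published schema.

record = {
    "instruction": "Describe the likely function of the following protein sequence.",
    "input": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",  # toy sequence, not a real protein
    "output": "A short description of the protein's function would go here.",
}

def to_training_text(rec: dict) -> str:
    """Flatten an instruction record into a single prompt/response string."""
    return (
        f"### Instruction:\n{rec['instruction']}\n\n"
        f"### Input:\n{rec['input']}\n\n"
        f"### Response:\n{rec['output']}"
    )

print(to_training_text(record))
```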
Protein engineering is important for biomedical applications, but conventional approaches are often inefficient and resource-intensive. While deep learning (DL) models have shown promise, training and applying them to protein engineering remains challenging for biologists without specialized computational expertise. To address this gap, we propose AutoProteinEngine (AutoPE), an agent framework that leverages large language models (LLMs) for multimodal automated machine learning (AutoML) in protein engineering. AutoPE allows biologists without DL backgrounds to interact with DL models using natural language, lowering the entry barrier for protein engineering tasks. AutoPE uniquely integrates LLMs with AutoML to handle model selection for both protein sequence and graph modalities, automatic hyperparameter optimization, and automated data retrieval from protein databases. We evaluated AutoPE on two real-world protein engineering tasks, demonstrating substantial performance improvements over traditional zero-shot and manual fine-tuning approaches. By bridging the gap between DL and biologists' domain expertise, AutoPE empowers researchers to leverage DL without extensive programming knowledge. Our code is available at this https URL.
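As a rough illustration of the LLM-plus-AutoML pattern described above, the sketch below stubs the natural-language interface with a keyword heuristic and runs a random hyperparameter search. The search space, model names, and objective are placeholders, not AutoPE's actual API.

```python
# Illustrative AutoML loop in the spirit of AutoPE: map a natural-language
# request to a modality, then search hyperparameters. Everything here is a
# placeholder; a real system would train and validate actual models.
import random

SEARCH_SPACE = {
    "model": ["sequence_transformer", "structure_gnn"],  # sequence vs. graph modality
    "learning_rate": [1e-5, 3e-5, 1e-4],
    "batch_size": [16, 32, 64],
}

def parse_task(request: str) -> str:
    """Crude stand-in for the LLM that maps a user request to a modality."""
    return "structure_gnn" if "structure" in request.lower() else "sequence_transformer"

def evaluate(config: dict) -> float:
    """Placeholder objective; a real run would return validation performance."""
    return random.random()

def auto_tune(request: str, trials: int = 10) -> dict:
    model = parse_task(request)
    best, best_score = None, float("-inf")
    for _ in range(trials):
        config = {
            "model": model,
            "learning_rate": random.choice(SEARCH_SPACE["learning_rate"]),
            "batch_size": random.choice(SEARCH_SPACE["batch_size"]),
        }
        score = evaluate(config)
        if score > best_score:
            best, best_score = config, score
    return best

print(auto_tune("Predict the effect of point mutations on protein structure"))
```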
R-Search is a framework that enables Large Language Models to answer time-sensitive queries by unifying reasoning, planning, execution, and synthesis within a single forward pass, leveraging a Natural-Language Directed Acyclic Graph for multi-source search. It demonstrates improved accuracy (e.g., 78.13% on FinSearchBench-24) and efficiency (e.g., 70% token reduction) compared to prior search-augmented LLM methods.
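The mechanism can be pictured as executing a DAG of natural-language sub-queries in dependency order. The sketch below is a loose reading of that idea; the node contents and the dispatch stub are assumptions, not R-Search's implementation.

```python
# Executing a natural-language DAG of search steps in topological order,
# loosely in the spirit of R-Search's NL-DAG. Node names and the dispatch
# stub are illustrative.
from graphlib import TopologicalSorter

# Each node is a natural-language sub-query; edges encode "needs results of".
dag = {
    "synthesize answer": {"search news", "search filings"},
    "search news": {"decompose query"},
    "search filings": {"decompose query"},
    "decompose query": set(),
}

def run_node(name: str, context: dict) -> str:
    """Stand-in for dispatching a node to an LLM or a search backend."""
    return f"result({name})"

context = {}
for node in TopologicalSorter(dag).static_order():  # dependencies run first
    context[node] = run_node(node, context)
print(context["synthesize answer"])
```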
The structural similarities between protein sequences and natural languages have led to parallel advancements in deep learning across both domains. While large language models (LLMs) have made significant progress in natural language processing, their potential in protein engineering remains largely unexplored. Previous approaches have equipped LLMs with protein understanding by attaching external protein encoders, but this fails to fully exploit the inherent similarities between protein sequences and natural languages, resulting in sub-optimal performance and increased model complexity. To address this gap, we present TourSynbio-7B, the first multimodal large model designed for protein engineering tasks without an external protein encoder. TourSynbio-7B demonstrates that LLMs can inherently learn to understand proteins as a language. The model is post-trained and instruction fine-tuned from InternLM2-7B on ProteinLMDataset, which comprises 17.46 billion tokens of text and protein sequence data for self-supervised pretraining and 893K instructions for supervised fine-tuning. TourSynbio-7B outperforms GPT-4 on ProteinLMBench, a benchmark of 944 manually verified multiple-choice questions, with 62.18% accuracy. Leveraging TourSynbio-7B's enhanced protein sequence understanding, we introduce TourSynbio-Agent, an innovative framework capable of performing various protein engineering tasks, including mutation analysis, inverse folding, protein folding, and visualization. TourSynbio-Agent integrates previously disconnected deep learning models in the protein engineering domain behind a unified conversational user interface for improved usability. Finally, we demonstrate the efficacy of TourSynbio-7B and TourSynbio-Agent through two wet-lab case studies on vanilla key enzyme modification and steroid compound catalysis.
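The headline 62.18% figure is plain multiple-choice accuracy. A minimal evaluation harness in that spirit might look like the following, where the question format and the model call are assumptions rather than the benchmark's actual interface.

```python
# Sketch of multiple-choice scoring in the style of ProteinLMBench. The
# question schema and the model stub are assumptions; only the accuracy
# arithmetic mirrors the reported metric.
questions = [
    {
        "question": "Which residue is most commonly found in transmembrane helices?",
        "choices": {"A": "Leucine", "B": "Aspartate", "C": "Lysine", "D": "Histidine"},
        "answer": "A",
    },
    # ... the real benchmark contains 944 manually verified questions
]

def ask_model(question: str, choices: dict) -> str:
    """Placeholder for a call to a fine-tuned LLM; returns a choice letter."""
    return "A"

correct = sum(ask_model(q["question"], q["choices"]) == q["answer"] for q in questions)
print(f"accuracy: {correct / len(questions):.2%}")
```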
Recent advancements in large language models (LLMs) have enhanced efficiency across various domains, including protein engineering, where they offer promising opportunities to automate both dry-lab and wet-lab experimental workflows. Previous work, TourSynbio-Agent, integrates a protein-specialized multimodal LLM (TourSynbio-7B) with domain-specific deep learning (DL) models to streamline both computational and experimental protein engineering tasks. While initial validation demonstrated TourSynbio-7B's fundamental understanding of protein properties, the practical effectiveness of the complete TourSynbio-Agent framework in real-world applications remained unexplored. This study presents a comprehensive validation of TourSynbio-Agent through five diverse case studies spanning both computational (dry-lab) and experimental (wet-lab) protein engineering. In three computational case studies, we evaluate TourSynbio-Agent's capabilities in mutation prediction, protein folding, and protein design. Two wet-lab validations then demonstrate its practical utility: engineering P450 proteins with up to 70% improved selectivity for steroid 19-hydroxylation, and developing reductases with 3.7-fold enhanced catalytic efficiency for alcohol conversion. Our findings across the five case studies establish that TourSynbio-Agent can effectively automate complex protein engineering workflows through an intuitive conversational interface, potentially accelerating scientific discovery in protein engineering.
Protein inverse folding aims to identify viable amino acid sequences that can fold into given protein structures, enabling the design of novel proteins with desired functions for applications in drug discovery, enzyme engineering, and biomaterial development. Diffusion probabilistic models have emerged as a promising approach in inverse folding, offering both feasible and diverse solutions compared to traditional energy-based methods and more recent protein language models. However, existing diffusion models for protein inverse folding operate in discrete data spaces, which requires prior distributions for transition matrices and forgoes the smooth transitions and gradients inherent to continuous spaces, leading to suboptimal performance. Drawing inspiration from the success of diffusion models in continuous domains, we introduce the Latent Graph Diffusion Model for Protein Inverse Folding (LaGDif). LaGDif bridges the discrete and continuous realms through an encoder-decoder architecture, transforming protein graph data distributions into random noise within a continuous latent space. Our model then reconstructs protein sequences by considering the spatial configuration, biochemical attributes, and environmental factors of each node. Additionally, we propose a novel inverse folding self-ensemble method that stabilizes predictions and further improves performance by aggregating multiple denoised output protein sequences. Empirical results on the CATH dataset demonstrate that LaGDif outperforms existing state-of-the-art techniques, achieving up to a 45.55% improvement in sequence recovery rate for single-chain proteins while maintaining an average RMSD of 1.96 Å between generated and native structures. The code is publicly available at this https URL.
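The self-ensemble step can be read as a per-position vote over several denoised candidates. The sketch below implements the simplest version of that reading with a plurality vote; LaGDif's actual aggregation may be weighted differently.

```python
# Per-position plurality vote over equal-length denoised candidate sequences,
# the simplest reading of the self-ensemble idea described above.
from collections import Counter

def self_ensemble(candidates: list[str]) -> str:
    """Combine equal-length denoised sequences by per-position plurality."""
    assert len({len(s) for s in candidates}) == 1, "candidates must align"
    return "".join(
        Counter(column).most_common(1)[0][0] for column in zip(*candidates)
    )

samples = ["MKTAYIA", "MKSAYIA", "MKTAYLA", "MKTAYIA"]  # toy denoised outputs
print(self_ensemble(samples))  # -> "MKTAYIA"
```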
The exponential growth in protein-related databases and scientific literature, combined with increasing demands for efficient biological information retrieval, has created an urgent need for unified and accessible search methods in protein engineering research. We present TourSynbio-Search, a novel bioinformatics search agent framework powered by the TourSynbio-7B protein multimodal large language model (LLM), designed to address the growing challenges of information retrieval across rapidly expanding protein databases and the corresponding online research literature. The agent's dual-module architecture consists of PaperSearch and ProteinSearch components, enabling comprehensive exploration of both scientific literature and protein data across multiple biological databases. At its core, TourSynbio-Search employs an intelligent agent system that interprets natural language queries, optimizes search parameters, and executes search operations across major platforms including UniProt, PDB, arXiv, and bioRxiv. The agent's ability to process intuitive natural language queries reduces technical barriers, allowing researchers to efficiently access and analyze complex biological data without extensive bioinformatics expertise. Through detailed case studies in literature retrieval and protein structure visualization, we demonstrate TourSynbio-Search's effectiveness in streamlining biological information retrieval and enhancing research productivity. This framework represents an advancement in bridging the accessibility gap between complex biological databases and researchers, potentially accelerating progress in protein engineering applications. Our code is available at this https URL.
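The dual-module routing can be pictured as below: a keyword heuristic stands in for the LLM's query interpretation, while the UniProt and arXiv endpoints shown are real public APIs. The routing logic and result handling are assumptions, not TourSynbio-Search's implementation.

```python
# Routing a natural-language query to either a protein database or a
# literature source, in the spirit of ProteinSearch/PaperSearch.
import requests

def protein_search(query: str) -> list[str]:
    """Query UniProt's public REST API and return matching accessions."""
    r = requests.get(
        "https://rest.uniprot.org/uniprotkb/search",
        params={"query": query, "format": "json", "size": 3},
        timeout=30,
    )
    return [e["primaryAccession"] for e in r.json().get("results", [])]

def paper_search(query: str) -> str:
    """Query the arXiv API; returns Atom XML a real agent would parse."""
    r = requests.get(
        "http://export.arxiv.org/api/query",
        params={"search_query": f"all:{query}", "max_results": 3},
        timeout=30,
    )
    return r.text

def route(query: str):
    # Keyword heuristic standing in for the LLM's query interpretation.
    if any(k in query.lower() for k in ("sequence", "structure", "uniprot")):
        return protein_search(query)
    return paper_search(query)

print(route("P450 steroid hydroxylase sequence"))
```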
Enzymes are biological catalysts that accelerate chemical reactions relative to the uncatalyzed reactions in aqueous environments. Their catalytic efficiency is quantified by the turnover number (kcat), a key parameter in enzyme kinetics. Enhancing enzyme activity is important for optimizing slow chemical reactions, with far-reaching implications for both research and industrial applications. However, traditional wet-lab methods for measuring and optimizing enzyme activity are often resource-intensive and time-consuming. To address these limitations, we introduce kcatDiffuser, a novel regressor-guided diffusion model designed to predict and improve enzyme turnover numbers. Our approach reformulates enzyme mutation prediction as a protein inverse folding task, thereby establishing a direct link between structural prediction and functional optimization. kcatDiffuser is a graph diffusion model guided by a regressor, enabling the prediction of amino acid mutations at multiple random positions simultaneously. Evaluations on the BRENDA dataset show that kcatDiffuser achieves a Δlog kcat of 0.209, outperforming state-of-the-art methods such as ProteinMPNN, PiFold, and GraDe-IF in improving enzyme turnover numbers. Additionally, kcatDiffuser maintains high structural fidelity with a recovery rate of 0.716, a pLDDT score of 92.515, an RMSD of 3.764 Å, and a TM-score of 0.934, demonstrating its ability to generate enzyme variants with enhanced activity while preserving essential structural properties. Overall, kcatDiffuser represents a more efficient and targeted approach to enhancing enzyme activity. The code is available at this https URL.
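For scale, a Δlog kcat of 0.209 corresponds to roughly a 1.6-fold increase in turnover number if the logarithm is base 10 (10^0.209 ≈ 1.62). The guidance mechanics can be sketched as follows: at each reverse-diffusion step, the denoiser's output is nudged by the gradient of a differentiable kcat regressor. Both networks below are untrained placeholders; only the guidance pattern, not kcatDiffuser's graph architecture, is shown.

```python
# Toy sketch of regressor-guided denoising: nudge denoiser logits along the
# gradient of a predicted-kcat regressor. Untrained placeholder networks.
import torch

VOCAB, LENGTH = 20, 8          # 20 amino acids, toy sequence length
denoiser = torch.nn.Linear(VOCAB * LENGTH, VOCAB * LENGTH)
regressor = torch.nn.Linear(VOCAB * LENGTH, 1)  # predicts log kcat

def guided_step(x: torch.Tensor, scale: float = 1.0) -> torch.Tensor:
    """One reverse step: denoise, then push logits toward higher predicted kcat."""
    x = x.requires_grad_(True)
    logits = denoiser(x)
    kcat = regressor(logits)
    grad = torch.autograd.grad(kcat.sum(), x)[0]
    return (logits + scale * grad).detach()

x = torch.randn(1, VOCAB * LENGTH)
for _ in range(10):               # a handful of guided denoising steps
    x = guided_step(x)
probs = x.view(LENGTH, VOCAB).softmax(dim=-1)
print(probs.argmax(dim=-1))       # favored amino acid index per position
```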