Florida State University
This survey provides a comprehensive analysis of model extraction attacks and defenses for Large Language Models, establishing a detailed taxonomy for both attack methodologies and protection strategies. It systematically examines how LLMs are vulnerable to functionality, data, and prompt extraction, and analyzes various defense mechanisms while proposing a specialized evaluation framework for the field.
Advances in artificial intelligence (AI) are fueling a new paradigm of discoveries in natural sciences. Today, AI has started to advance natural sciences by improving, accelerating, and enabling our understanding of natural phenomena at a wide range of spatial and temporal scales, giving rise to a new area of research known as AI for science (AI4Science). Being an emerging research paradigm, AI4Science is unique in that it is an enormous and highly interdisciplinary area. Thus, a unified and technical treatment of this field is needed yet challenging. This work aims to provide a technically thorough account of a subarea of AI4Science; namely, AI for quantum, atomistic, and continuum systems. These areas aim at understanding the physical world from the subatomic (wavefunctions and electron density), atomic (molecules, proteins, materials, and interactions), to macro (fluids, climate, and subsurface) scales and form an important subarea of AI4Science. A unique advantage of focusing on these areas is that they largely share a common set of challenges, thereby allowing a unified and foundational treatment. A key common challenge is how to capture physics first principles, especially symmetries, in natural systems by deep learning methods. We provide an in-depth yet intuitive account of techniques to achieve equivariance to symmetry transformations. We also discuss other common technical challenges, including explainability, out-of-distribution generalization, knowledge transfer with foundation and large language models, and uncertainty quantification. To facilitate learning and education, we provide categorized lists of resources that we found to be useful. We strive to be thorough and unified and hope this initial effort may trigger more community interests and efforts to further advance AI4Science.
We propose an ensemble score filter (EnSF) for solving high-dimensional nonlinear filtering problems with superior accuracy. A major drawback of existing filtering methods, e.g., particle filters or ensemble Kalman filters, is the low accuracy in handling high-dimensional and highly nonlinear problems. EnSF addresses this challenge by exploiting a score-based diffusion model, defined in a pseudo-temporal domain, to characterize the evolution of the filtering density. EnSF stores the information of the recursively updated filtering density function in the score function, instead of storing it in a finite set of Monte Carlo samples (as particle filters and ensemble Kalman filters do). Unlike existing diffusion models that train neural networks to approximate the score function, we develop a training-free score estimation that uses a mini-batch-based Monte Carlo estimator to directly approximate the score function at any pseudo-spatial-temporal location, which provides sufficient accuracy for high-dimensional nonlinear problems while avoiding the substantial cost of training neural networks. High-dimensional Lorenz-96 systems are used to demonstrate the performance of our method. EnSF delivers remarkable performance, compared with the state-of-the-art Local Ensemble Transform Kalman Filter method, in reliably and efficiently tracking extremely high-dimensional Lorenz systems (up to 1,000,000 dimensions) with highly nonlinear observation processes.
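To make the training-free estimator described above concrete, the following minimal Python sketch computes the score of a Gaussian-mixture approximation to the diffused filtering density, centered on forward-diffused ensemble members, via a mini-batch Monte Carlo average. The function name, the schedule parameters `alpha_t` and `beta_t`, and the isotropic-noise assumption are illustrative choices for this sketch, not the authors' implementation.

```python
import numpy as np

def score_estimate(x, ensemble, alpha_t, beta_t, batch_size=64, rng=None):
    """Hedged sketch: training-free score estimate at pseudo-time t.

    Treats the diffused filtering density as a mixture of Gaussians
    centred at forward-diffused ensemble members and returns the
    gradient of its log-density at x. `ensemble` is an (N, d) array;
    `alpha_t`, `beta_t` follow a generic diffusion schedule.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Mini-batch Monte Carlo: subsample ensemble members.
    idx = rng.choice(len(ensemble), size=min(batch_size, len(ensemble)), replace=False)
    batch = ensemble[idx]                         # (B, d)
    diff = x[None, :] - alpha_t * batch           # (B, d)
    # Softmax-normalised mixture weights of each component at x.
    logw = -0.5 * np.sum(diff**2, axis=1) / beta_t**2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Score of a Gaussian mixture: weighted sum of component scores.
    return -(w[:, None] * diff).sum(axis=0) / beta_t**2
```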
ToMoE is a method for converting dense Large Language Models into efficient Mixture-of-Experts (MoE) architectures by identifying and selectively activating existing parameters through dynamic structural pruning, rather than requiring retraining. It consistently outperformed other structural pruning methods on various LLMs (Phi-2, LLaMA-2/3, Qwen-2.5) without additional fine-tuning.
Researchers from the University of Connecticut and collaborators developed TreeDiff, a syntax-aware diffusion framework that leverages Abstract Syntax Trees for guided masking in code generation. This approach yields improved syntactic correctness and reconstruction accuracy on benchmarks like HumanEval and MBPP by respecting the structural integrity of programming languages.
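As a rough illustration of syntax-aware masking (not the TreeDiff pipeline itself), the sketch below uses Python's standard `ast` module to collect character spans of complete subtrees, so a masker can hide whole syntactic units rather than arbitrary token windows. The chosen node types and the single-span masking helper are assumptions made for this example.

```python
import ast

def subtree_spans(source: str, node_types=(ast.FunctionDef, ast.If, ast.Return, ast.Call)):
    """Hedged sketch: collect character spans of whole AST subtrees,
    the kind of units a syntax-aware masker could hide instead of
    random tokens. Node types and granularity are illustrative."""
    tree = ast.parse(source)
    lines = source.splitlines(keepends=True)
    offsets = [0]
    for line in lines:
        offsets.append(offsets[-1] + len(line))
    spans = []
    for node in ast.walk(tree):
        if isinstance(node, node_types) and getattr(node, "end_lineno", None):
            start = offsets[node.lineno - 1] + node.col_offset
            end = offsets[node.end_lineno - 1] + node.end_col_offset
            spans.append((start, end))
    return spans

def mask_subtree(source: str, span, mask_token="<MASK>"):
    """Replace one subtree span with a mask token."""
    start, end = span
    return source[:start] + mask_token + source[end:]
```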
Rhombohedral multilayer graphene has recently emerged as a rich platform for studying correlation driven magnetic, topological and superconducting states. While most experimental efforts have focused on devices with $N \leq 9$ layers, the electronic structure of thick rhombohedral graphene features flat-band surface states even in the infinite layer limit. Here, we use layer resolved capacitance measurements to directly detect these surface states for $N \approx 13$ layer rhombohedral graphene devices. Using electronic transport and local magnetometry, we find that the surface states host a variety of ferromagnetic phases, including both valley imbalanced quarter metals and broad regimes of density in which the system spontaneously spin polarizes. We observe several superconducting states localized to a single surface state. These superconductors appear on the unpolarized side of the density-tuned spin transitions, and show strong violations of the Pauli limit consistent with a dominant attractive interaction in the spin-triplet, valley-singlet pairing channel. In contrast to previous studies of rhombohedral multilayers, however, we find that superconductivity can persist to zero displacement field where the system is inversion symmetric. Energetic considerations suggest that superconductivity in this regime is described by the existence of two independent surface superconductors coupled via tunneling through the insulating single crystal graphite bulk.
Recent normalization-based methods have shown great success in tackling the distribution shift issue, facilitating non-stationary time series forecasting. Since these methods operate in the time domain, they may fail to fully capture the dynamic patterns that are more apparent in the frequency domain, leading to suboptimal results. This paper first theoretically analyzes how normalization methods affect frequency components. We prove that the current normalization methods that operate in the time domain uniformly scale non-zero frequencies, and thus, they struggle to determine components that contribute to more robust forecasting. Therefore, we propose FredNormer, which observes datasets from a frequency perspective and adaptively up-weights the key frequency components. To this end, FredNormer consists of two components: a statistical metric that normalizes the input samples based on their frequency stability and a learnable weighting layer that adjusts stability and introduces sample-specific variations. Notably, FredNormer is a plug-and-play module, which does not compromise the efficiency compared to existing normalization methods. Extensive experiments show that FredNormer improves the averaged MSE of backbone forecasting models by 33.3% and 55.3% on the ETTm2 dataset. Compared to the baseline normalization methods, FredNormer achieves 18 top-1 results and 6 top-2 results out of 28 settings.
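A minimal PyTorch sketch of the general idea, assuming a simple mean-over-std stability statistic and a sigmoid-gated learnable per-frequency weight; the actual FredNormer statistic and layer design may differ.

```python
import torch

class FrequencyStabilityNorm(torch.nn.Module):
    """Hedged sketch in the spirit of FredNormer: weight the frequency
    components of each input window by a stability statistic plus a
    learnable per-frequency adjustment. Illustrative, not the paper's
    exact formulation."""

    def __init__(self, num_freqs: int, eps: float = 1e-5):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.zeros(num_freqs))  # learnable adjustment
        self.eps = eps

    def forward(self, x):                # x: (batch, length, channels)
        spec = torch.fft.rfft(x, dim=1)  # frequency representation
        amp = spec.abs()
        # Stability: mean / std of each frequency's amplitude over the batch.
        stability = amp.mean(dim=0) / (amp.std(dim=0) + self.eps)
        gate = stability * torch.sigmoid(self.weight)[:, None]
        return torch.fft.irfft(spec * gate, n=x.shape[1], dim=1)
```

For inputs of shape (batch, length, channels), `num_freqs` should equal `length // 2 + 1`; the reweighted series can then be passed to any backbone forecaster, consistent with the plug-and-play usage described above.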
Standard first-order Langevin algorithms such as the unadjusted Langevin algorithm (ULA) are obtained by discretizing the Langevin diffusion and are widely used for sampling in machine learning because they scale to high dimensions and large datasets. However, they face two key limitations: (i) they require differentiable log-densities, excluding targets with non-differentiable components; and (ii) they generally fail to sample heavy-tailed targets. We propose anchored Langevin dynamics, a unified approach that accommodates non-differentiable targets and certain classes of heavy-tailed distributions. The method replaces the original potential with a smooth reference potential and modifies the Langevin diffusion via multiplicative scaling. We establish non-asymptotic guarantees in the 2-Wasserstein distance to the target distribution and provide an equivalent formulation derived via a random time change of the Langevin diffusion. We provide numerical experiments to illustrate the theory and practical performance of our proposed approach.
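For reference, the standard ULA step discretizes the Langevin diffusion as
$$x_{k+1} = x_k - \eta\,\nabla U(x_k) + \sqrt{2\eta}\,\xi_k, \qquad \xi_k \sim \mathcal{N}(0, I),$$
which requires a differentiable potential $U = -\log \pi$. One standard way to realize the kind of multiplicative scaling described above (an illustrative construction, not necessarily the paper's exact formulation) is a random time change: choosing a smooth reference potential $\tilde U$ and the scaling $\phi = e^{\,U - \tilde U}$, the diffusion
$$dX_t = -\,\phi(X_t)\,\nabla \tilde U(X_t)\,dt + \sqrt{2\,\phi(X_t)}\,dW_t$$
still has $\pi \propto e^{-U}$ as its invariant distribution, while its drift involves only the gradient of the smooth reference.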
Multi-institutional author list, including ETH Zurich, University of Washington, CNRS, University of Pittsburgh, University of Cambridge, Heidelberg University, Harvard Medical School, the German Cancer Research Center (DKFZ), Florida State University, and many other collaborating universities, hospitals, and research institutes.
Gliomas are the most common primary brain malignancies, with different degrees of aggressiveness, variable prognosis and various heterogeneous histologic sub-regions, i.e., peritumoral edematous/invaded tissue, necrotic core, active and non-enhancing core. This intrinsic heterogeneity is also portrayed in their radio-phenotype, as their sub-regions are depicted by varying intensity profiles disseminated across multi-parametric magnetic resonance imaging (mpMRI) scans, reflecting varying biological properties. Their heterogeneous shape, extent, and location are some of the factors that make these tumors difficult to resect, and in some cases inoperable. The amount of resected tumor is a factor also considered in longitudinal scans, when evaluating the apparent tumor for potential diagnosis of progression. Furthermore, there is mounting evidence that accurate segmentation of the various tumor sub-regions can offer the basis for quantitative image analysis towards prediction of patient overall survival. This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018. Specifically, we focus on i) evaluating segmentations of the various glioma sub-regions in pre-operative mpMRI scans, ii) assessing potential tumor progression by virtue of longitudinal growth of tumor sub-regions, beyond use of the RECIST/RANO criteria, and iii) predicting the overall survival from pre-operative mpMRI scans of patients that underwent gross total resection. Finally, we investigate the challenge of identifying the best ML algorithms for each of these tasks, considering that apart from being diverse on each instance of the challenge, the multi-institutional mpMRI BraTS dataset has also been a continuously evolving/growing dataset.
Researchers from Drexel and Florida State University developed a hierarchical knowledge injection framework that systematically augments Large Language Models (LLMs) with multi-level context for automated program repair. This approach achieved a 79% bug fix rate with Llama 3.3 on the BugsInPy dataset, marking a 23% improvement over previous methods, and showed that incremental context provision outperforms a monolithic 'all-at-once' injection strategy.
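The incremental strategy can be pictured with a short, hypothetical Python loop: query the model with the buggy code alone, and only when the candidate patch fails the tests append the next level of context and retry. The level names, prompt layout, and callables here are illustrative placeholders, not the authors' templates.

```python
def incremental_repair(llm, bug_report, context_levels, passes_tests):
    """Hedged sketch of incremental context provision for program repair.

    `context_levels` is an ordered list of (name, text) pairs, e.g.
    [("function", ...), ("file", ...), ("project", ...)]; `llm` and
    `passes_tests` are caller-supplied callables.
    """
    prompt = f"Fix the following bug:\n{bug_report}"
    patch = llm(prompt)
    for name, context in context_levels:
        if passes_tests(patch):
            return patch
        # Only add more context when the current patch still fails.
        prompt += f"\n\nAdditional {name}-level context:\n{context}"
        patch = llm(prompt)
    return patch if passes_tests(patch) else None
```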
Scientific English is currently undergoing rapid change, with words like "delve," "intricate," and "underscore" appearing far more frequently than just a few years ago. It is widely assumed that scientists' use of large language models (LLMs) is responsible for such trends. We develop a formal, transferable method to characterize these linguistic changes. Application of our method yields 21 focal words whose increased occurrence in scientific abstracts is likely the result of LLM usage. We then pose "the puzzle of lexical overrepresentation": WHY are such words overused by LLMs? We fail to find evidence that lexical overrepresentation is caused by model architecture, algorithm choices, or training data. To assess whether reinforcement learning from human feedback (RLHF) contributes to the overuse of focal words, we undertake comparative model testing and conduct an exploratory online study. While the model testing is consistent with RLHF playing a role, our experimental results suggest that participants may be reacting differently to "delve" than to other focal words. With LLMs quickly becoming a driver of global language change, investigating these potential sources of lexical overrepresentation is important. We note that while insights into the workings of LLMs are within reach, a lack of transparency surrounding model development remains an obstacle to such research.
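One simple way to picture this kind of analysis (a hedged sketch, not the paper's exact statistical procedure) is to rank words by the ratio of their relative frequencies in abstracts written before versus after widespread LLM adoption:

```python
from collections import Counter

def overrepresentation_scores(pre_corpus, post_corpus, min_count=100):
    """Hedged sketch: rank words by their frequency ratio between two
    corpora of abstracts (before vs. after LLM uptake). The paper's
    actual criteria for selecting focal words may be more involved."""
    pre = Counter(w.lower() for doc in pre_corpus for w in doc.split())
    post = Counter(w.lower() for doc in post_corpus for w in doc.split())
    pre_total, post_total = sum(pre.values()), sum(post.values())
    scores = {}
    for word, c_post in post.items():
        if c_post < min_count:
            continue
        rate_post = c_post / post_total
        rate_pre = (pre.get(word, 0) + 1) / (pre_total + 1)  # add-one smoothing
        scores[word] = rate_post / rate_pre
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```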
This research models the peculiar Type Ic supernova SN2022jli, demonstrating that a supernova-induced binary interaction, where a newly formed neutron star orbits within the inflated envelope of a companion star, can explain its sustained ~12.5-day periodic undulations and luminosity range of $10^{42}$–$10^{43}$ erg s$^{-1}$. The study highlights the critical role of bipolar kinetic feedback from super-Eddington accretion and a high orbital eccentricity ($0.8 \lesssim e \lesssim 0.9$) to match observations.
Fractionally filled Chern bands with strong interactions may give rise to fractional Chern insulator (FCI) states, the zero-field analogue of the fractional quantum Hall effect. Recent experiments have demonstrated the existence of FCIs in twisted bilayer MoTe$_2$ without external magnetic fields -- most robust at $\nu=-2/3$ -- as well as Chern insulators (CIs) at $\nu=-1$. Although the appearance of both of these states is theoretically natural in an interacting topological system, experiments repeatedly observe nonmagnetic states (lacking FCIs) at $\nu=-1/3$ and $-4/3$, a puzzling result which has not been fully theoretically explained. In this work, we perform Hartree-Fock and exact diagonalization calculations to test whether the standard MoTe$_2$ moiré model with the (greatly varying) parameter values available in the literature can reproduce the non-magnetic states at $\nu=-1/3$ and $-4/3$ in unison with the FCI at $\nu=-2/3$ and CI state at $\nu=-1$. We focus on the experimentally relevant twist angles and, crucially, include remote bands. We find that the parameters proposed in [Wang et al. (2023)] can nearly capture the experimental phenomena at $\nu=-1/3,-2/3,-1,-4/3$ simultaneously, though the predicted ground states at $\nu=-1/3$ are still mostly fully-spin-polarized and a larger dielectric constant $\epsilon>10$ than is typical of the hexagonal boron nitride (h-BN) substrate ($\epsilon\sim 6$) is required. Our results show the importance of remote bands in identifying the competing magnetic orders and lay the groundwork for further study of the realistic phase diagram.
LLM-as-a-Judge has emerged as a promising tool for automatically evaluating generated outputs, but its reliability is often undermined by potential biases in judgment. Existing efforts to mitigate these biases face key limitations: in-context learning-based methods fail to address rooted biases due to the evaluator's limited capacity for self-reflection, whereas fine-tuning is not applicable to all evaluator types, especially closed-source models. To address this challenge, we introduce the Reasoning-based Bias Detector (RBD), which is a plug-in module that identifies biased evaluations and generates structured reasoning to guide evaluator self-correction. Rather than modifying the evaluator itself, RBD operates externally and engages in an iterative process of bias detection and feedback-driven revision. To support its development, we design a complete pipeline consisting of biased dataset construction, supervision collection, distilled reasoning-based fine-tuning of RBD, and integration with LLM evaluators. We fine-tune four sizes of RBD models, ranging from 1.5B to 14B, and observe consistent performance improvements across all scales. Experimental results on 4 bias types--verbosity, position, bandwagon, and sentiment--evaluated using 8 LLM evaluators demonstrate RBD's strong effectiveness. For example, the RBD-8B model improves evaluation accuracy by an average of 18.5% and consistency by 10.9%, and surpasses prompting-based baselines and fine-tuned judges by 12.8% and 17.2%, respectively. These results highlight RBD's effectiveness and scalability. Additional experiments further demonstrate its strong generalization across biases and domains, as well as its efficiency.
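A schematic of the plug-in detect-and-revise loop, with hypothetical callables standing in for the LLM evaluator and the fine-tuned RBD model; the real pipeline's prompts, output schema, and stopping rules may differ.

```python
def judge_with_rbd(evaluator, rbd, prompt, response_a, response_b, max_rounds=3):
    """Hedged sketch of the external bias-detection loop: the evaluator
    (any LLM judge, including closed-source ones) is never modified; a
    reasoning-based bias detector inspects each verdict and, when it
    flags a bias, its structured reasoning is fed back for revision.
    Callables and dictionary keys are illustrative placeholders."""
    verdict = evaluator(prompt, response_a, response_b)
    for _ in range(max_rounds):
        report = rbd(prompt, response_a, response_b, verdict)  # {"biased": bool, "reasoning": str}
        if not report["biased"]:
            break
        verdict = evaluator(prompt, response_a, response_b, feedback=report["reasoning"])
    return verdict
```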
In universal fault-tolerant quantum computing, implementing logical non-Clifford gates often demands substantial spacetime resources for many error-correcting codes, including the high-threshold surface code. A critical mission for realizing large-scale quantum computing is to develop simple and resource-efficient implementations of logical non-Clifford gates. We propose a novel way of implementing non-Clifford operations in the standard surface code based on hybrid lattice surgery. First we generalize the standard lattice surgery to hybrid lattice surgery, where operations of rough merge and rough split happen across different topological codes. Then we apply such procedures between Abelian and non-Abelian codes and show that this can provide non-Clifford operations in the standard surface code, in the form of a magic state or a non-Clifford gate teleportation. Complementing this, we provide a continuum topological field theory description of this hybrid lattice surgery utilizing interfaces between (2+1)d topological orders. From these considerations, we can generalize our protocol to non-Clifford gates and magic states at all finite levels of the Clifford hierarchy, as well as gates beyond the hierarchy. We also discuss protocols extending this framework to qutrits.
This Letter reports measurements of muon-neutrino disappearance and electron-neutrino appearance and the corresponding antineutrino processes between the two NOvA detectors in the NuMI neutrino beam. These measurements use a dataset with double the neutrino mode beam exposure that was previously analyzed, along with improved simulation and analysis techniques. A joint fit to these samples in the three-flavor paradigm results in the most precise single-experiment constraint on the atmospheric neutrino mass-splitting, $\Delta m^2_{32} = 2.431^{+0.036}_{-0.034}\;(-2.479^{+0.036}_{-0.036}) \times 10^{-3}$ eV$^2$ if the mass ordering is Normal (Inverted). In both orderings, a region close to maximal mixing with $\sin^2\theta_{23} = 0.55^{+0.06}_{-0.02}$ is preferred. The NOvA data show a mild preference for the Normal mass ordering with a Bayes factor of 2.4 (corresponding to 70% of the posterior probability), indicating that the Normal ordering is 2.4 times more probable than the Inverted ordering. When incorporating a 2D $\Delta m^2_{32}$--$\sin^2 2\theta_{13}$ constraint based on Daya Bay data, this preference strengthens to a Bayes factor of 6.6 (87%).
As powerful Large Language Models (LLMs) are now widely used for numerous practical applications, their safety is of critical importance. While alignment techniques have significantly improved overall safety, LLMs remain vulnerable to carefully crafted adversarial inputs. Consequently, adversarial attack methods are extensively used to study and understand these vulnerabilities. However, current attack methods face significant limitations. Those relying on optimizing discrete tokens suffer from limited efficiency, while continuous optimization techniques fail to generate valid tokens from the model's vocabulary, rendering them impractical for real-world applications. In this paper, we propose a novel technique for adversarial attacks that overcomes these limitations by leveraging regularized gradients with continuous optimization methods. Our approach is two orders of magnitude faster than the state-of-the-art greedy coordinate gradient-based method, significantly improving the attack success rate on aligned language models. Moreover, it generates valid tokens, addressing a fundamental limitation of existing continuous optimization methods. We demonstrate the effectiveness of our attack on five state-of-the-art LLMs using four datasets.
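To illustrate the flavor of continuous, regularized optimization over token choices (a generic sketch under assumed components, not the paper's specific objective or regularizer), one can optimize relaxed token distributions for an adversarial suffix and add an entropy penalty so that the final rounding to vocabulary tokens remains valid:

```python
import torch

def continuous_suffix_attack(model, embed_matrix, loss_fn, suffix_len=20,
                             steps=200, lr=0.1, reg_weight=1.0):
    """Hedged sketch of a continuous adversarial-suffix search with a
    regulariser that keeps relaxed tokens close to the vocabulary.
    `loss_fn(model, suffix_emb)` is a caller-supplied adversarial
    objective (e.g. target-string NLL); all names are illustrative."""
    vocab_size, d = embed_matrix.shape
    logits = torch.zeros(suffix_len, vocab_size, requires_grad=True)
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(steps):
        probs = torch.softmax(logits, dim=-1)     # relaxed one-hot tokens
        suffix_emb = probs @ embed_matrix         # (suffix_len, d) soft embeddings
        adv_loss = loss_fn(model, suffix_emb)
        # Entropy penalty: low entropy keeps distributions near valid tokens.
        entropy = -(probs * torch.log(probs + 1e-9)).sum(dim=-1).mean()
        loss = adv_loss + reg_weight * entropy
        opt.zero_grad()
        loss.backward()
        opt.step()
    return logits.argmax(dim=-1)                  # project to discrete tokens
```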
Foundation models, such as large language models, have demonstrated success in addressing various language and image processing tasks. In this work, we introduce a multi-modal foundation model for scientific problems, named PROSE-PDE. Our model, designed for bi-modality to bi-modality learning, is a multi-operator learning approach which can predict future states of spatiotemporal systems while concurrently learning the underlying governing equations of the physical system. Specifically, we focus on multi-operator learning by training on distinct one-dimensional, time-dependent, nonlinear, constant-coefficient partial differential equations, with potential applications across physics, geology, and biology. More importantly, we provide three extrapolation studies to demonstrate that PROSE-PDE can generalize physical features through the robust training of multiple operators and that the proposed model can extrapolate to predict PDE solutions whose models or data were unseen during the training. Furthermore, we show through systematic numerical experiments that the utilization of the symbolic modality in our model effectively resolves the well-posedness problems that arise when training multiple operators and thus enhances our model's predictive capabilities.
The supermoiré lattice, arising from the interference of multiple moiré patterns, dramatically reshapes the electronic band structure by introducing new minibands and modifying band dispersion. Concurrently, strong electronic interactions within moiré flat bands lead to the emergence of various correlated states. However, the impact of the supermoiré lattice on the flat band systems with strong interactions remains largely unexplored. Here, we report the existence of the supermoiré lattice in the mirror-symmetry-broken twisted trilayer graphene, elucidating its role in generating mini-flat bands and mini-Dirac bands. Furthermore, we demonstrate interaction-induced symmetry-broken phases in the supermoiré mini-flat bands alongside the cascade of superconductor-insulator transitions enabled by the supermoiré lattice. Our work shows that robust superconductivity can exist in the mirror-symmetry-broken TTG and underscores the significance of the supermoiré lattice as an additional degree of freedom for tuning the electronic properties in twisted multilayer systems, sheds light on the correlated quantum phases such as superconductivity in the original moiré flat bands, and highlights the potential of using the supermoiré lattice to design and simulate novel quantum phases.