Goethe Universität Frankfurt am Main
Long-lived, hot and dense plasmas generated by ultra-intense laser beams are of critical importance for laser-driven nuclear physics, bright hard X-ray sources, and laboratory astrophysics. We report the experimental observation of plasmas with nanosecond-scale lifetimes, near-solid density, and keV-level temperatures, produced by irradiating periodic arrays of composite nanowires with ultra-high contrast, relativistically intense femtosecond laser pulses. Jet-like plasma structures extending up to 1 mm from the nanowire surface were observed, emitting K-shell radiation from He-like Ti$^{20+}$ ions. High-resolution X-ray spectra were analyzed using 3D Particle-in-Cell (PIC) simulations of the laser-plasma interaction combined with collisional-radiative modeling (FLYCHK). The results indicate that the jets consist of plasma with densities of $10^{20}$-$10^{22}$ cm$^{-3}$ and keV-scale temperatures, persisting for several nanoseconds. We attribute the formation of these jets to the generation of kilotesla-scale global magnetic fields during the laser interaction, as predicted by PIC simulations. These fields may drive long-timescale current instabilities that sustain magnetic fields of several hundred tesla, sufficient to confine hot, dense plasma over nanosecond durations.
The nature of the QCD chiral phase transition in the limit of vanishing quark masses has remained elusive for a long time, since it cannot be simulated directly on the lattice and is strongly cutoff-dependent. We report on a comprehensive ongoing study using unimproved staggered fermions with $N_\text{f}\in[2,8]$ mass-degenerate flavours on $N_\tau\in\{4,6,8\}$ lattices, in which we locate the chiral critical surface separating regions with first-order transitions from crossover regions in the bare parameter space of the lattice theory. Employing the fact that it terminates in a tricritical line, this surface can be extrapolated to the chiral limit using tricritical scaling with known exponents. Knowing the order of the transitions in the lattice parameter space, conclusions for approaching the continuum chiral limit in the proper order can be drawn. While a narrow first-order region cannot be ruled out, we find initial evidence consistent with a second-order chiral transition in all massless theories with $N_\text{f}\leq 6$, and possibly up to the onset of the conformal window at $9\lesssim N_\text{f}^*\lesssim 12$. A reanalysis of already published $\mathcal{O}(a)$-improved $N_\text{f}=3$ Wilson data on $N_\tau\in[4,12]$ is also consistent with tricritical scaling, and with the associated change from first to second order on the way to the continuum chiral limit. We discuss a modified Columbia plot and a phase diagram for many-flavour QCD that reflect these possible features.
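Concretely, tricritical scaling here means fitting the Z(2) critical boundary near a tricritical point with the known mean-field exponent 5/2; a schematic form of such an extrapolation ansatz (the precise scaling variable used in the study is an assumption here) is
\[
a m^{Z_2}(x) \;=\; C\,\big(x - x_{\text{tric}}\big)^{5/2}, \qquad x \in \{N_\text{f},\, 1/N_\tau^2,\, \dots\},
\]
so that plotting the critical bare mass to the power 2/5 against $x$ linearizes the boundary and allows an extrapolation to the chiral limit $am^{Z_2}=0$.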
Working in a quenched setup with Wilson twisted mass valence fermions, we explore the possibility of computing the step scaling function non-perturbatively in the coordinate-space (X-space) renormalization scheme. This scheme has the advantage of being on-shell and gauge invariant. The step scaling method allows us to calculate the running of the renormalization constants of quark bilinear operators. We describe here the details of this calculation. The aim of this exploratory study is to assess the feasibility of the X-space scheme in the small-volume simulations required by the step scaling technique. Finally, we translate our results to the continuum $\overline{\text{MS}}$ scheme and compare against four-loop analytic formulae, finding satisfactory agreement.
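For orientation, the X-space scheme typically imposes a renormalization condition directly on a gauge-invariant position-space correlator at a fixed separation $x_0$; a schematic version (the exact operators and normalizations used in the study are assumptions here) reads
\[
Z^{X}_{\mathcal{O}}(x_0,\mu)\,\big\langle \mathcal{O}(x)\,\mathcal{O}^\dagger(0)\big\rangle\Big|_{x^2=x_0^2} \;=\; \big\langle \mathcal{O}(x)\,\mathcal{O}^\dagger(0)\big\rangle^{\text{free}}\Big|_{x^2=x_0^2},
\]
with $\mathcal{O}$ a quark bilinear and $x_0$ kept in the window $a \ll |x_0| \ll \Lambda_{\text{QCD}}^{-1}$, which is what makes small-volume, fine-lattice simulations attractive for this scheme.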
Controlling and understanding electron correlations in quantum matter is one of the most challenging tasks in materials engineering. In recent years a plethora of new puzzling correlated states has been found by carefully stacking and twisting two-dimensional van der Waals materials of different kinds. Unique to these stacked structures is the emergence of correlated phases not foreseeable from the single layers alone. In Ta-dichalcogenide heterostructures made of a good metallic 1H- and a Mott-insulating 1T-layer, recent reports have evidenced a cross-breed itinerant and localized nature of the electronic excitations, similar to what is typically found in heavy-fermion systems. Here, we put forward a new interpretation based on first-principles calculations, which indicate a sizeable charge transfer of electrons (0.4-0.6 e) from the 1T to the 1H layer at an elevated interlayer distance. We accurately quantify the strength of the interlayer hybridization, which allows us to unambiguously determine that the system is much closer to a doped Mott insulator than to a heavy-fermion scenario. Ta-based heterolayers therefore provide a new ground for quantum-materials engineering in the regime of heavily doped Mott insulators hybridized with metallic states at a van der Waals distance.
We provide an analysis of the x-dependence of the bare unpolarized, helicity and transversity isovector parton distribution functions (PDFs) from lattice calculations employing (maximally) twisted mass fermions. The x-dependence of the calculated PDFs resembles that of the phenomenological parameterizations, a feature that makes this approach very promising. Furthermore, we apply momentum smearing to the relevant matrix elements used to compute the lattice PDFs and find a large improvement factor compared to conventional Gaussian smearing. This allows us to extend the lattice computation of the distributions to higher values of the nucleon momentum, which is essential for a reliable extraction of the PDFs in the future.
We present our recent results for the tensor network (TN) approach to lattice gauge theories. TN methods provide an efficient approximation of quantum many-body states. We employ the TN ansatz for one-dimensional systems, Matrix Product States, to investigate the one-flavour Schwinger model. In this study, we compute the chiral condensate at finite temperature. From the continuum extrapolation, we obtain a chiral condensate in the high-temperature region consistent with the analytical calculation by Sachs and Wipf.
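As a reminder of the variational class used here, a Matrix Product State for a chain of $N$ sites with local basis states $|s_i\rangle$ and bond dimension $D$ takes the form
\[
|\Psi\rangle \;=\; \sum_{s_1,\dots,s_N} \operatorname{Tr}\!\big[A^{s_1}_1 A^{s_2}_2 \cdots A^{s_N}_N\big]\, |s_1 s_2 \dots s_N\rangle ,
\]
where each $A^{s_i}_i$ is a $D\times D$ matrix (a vector at open boundaries); the approximation is systematically improved by increasing $D$.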
Gas hydrates are considered fundamental building blocks of giant icy planets like Neptune and similar exoplanets. The existence of these materials in the interiors of giant icy planets, which are subject to high pressures and temperatures, depends on their stability relative to their constituent components. In this study, we reexamine the structural stability and hydrogen content of hydrogen hydrates, (H$_2$O)(H$_2$)$_n$, up to 104 GPa, focusing on hydrogen-rich materials. Using synchrotron single-crystal X-ray diffraction, Raman spectroscopy, and first-principles theoretical calculations, we find that the C2 filled-ice phase transforms to the C3 filled-ice phase over a broad pressure range of 47-104 GPa at room temperature. The C3 phase contains twice as much molecular H$_2$ as the C2 phase. Heating the C2 filled ice above approximately 1500 K induces the transition to the C3 phase at pressures as low as 47 GPa. Upon decompression, this phase remains metastable down to 40 GPa. These findings establish new stability limits for hydrates, with implications for hydrogen storage and the interiors of planetary bodies.
In this work we study the $3+1$-dimensional Nambu-Jona-Lasinio (NJL) model in the mean-field approximation. We carry out calculations using five different regularization schemes (two continuum and three lattice regularization schemes) with particular focus on inhomogeneous phases and condensates. The regularization schemes lead to drastically different inhomogeneous regions. We provide evidence that inhomogeneous condensates appear for all regularization schemes almost exclusively at values of the chemical potential and wave numbers that are of the order of, or even larger than, the corresponding regulators. This can be interpreted as an indication that inhomogeneous phases in the $3+1$-dimensional NJL model are rather artifacts of the regularization and not a consequence of the NJL Lagrangian and its symmetries.
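For concreteness, a commonly studied inhomogeneous configuration in such mean-field analyses (whether this particular ansatz is the one used in the study is an assumption here) is the chiral density wave, in which the scalar and pseudoscalar condensates rotate with a single wave number $q$,
\[
\langle\bar\psi\psi\rangle(z) \;=\; \Delta\cos(qz), \qquad \langle\bar\psi\, i\gamma^5\tau^3\psi\rangle(z) \;=\; \Delta\sin(qz),
\]
so the statement above concerns the magnitude of $q$ (and of the chemical potential $\mu$) relative to the regulator scale.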
Lattice QCD with heavy quarks reduces to a three-dimensional effective theory of Polyakov loops, which is amenable to series expansion methods. We analyse the effective theory in the cold and dense regime for a general number of colours, $N_c$. In particular, we investigate the transition from a hadron gas to baryon condensation. For any finite lattice spacing, we find the transition to become stronger, i.e. ultimately first-order, as $N_c$ is made large. Moreover, in the baryon-condensed regime, we find the pressure to scale as $p\sim N_c$ through three orders in the hopping expansion. Such a phase differs from a hadron gas with $p\sim N_c^0$, or a quark-gluon plasma with $p\sim N_c^2$, and was termed quarkyonic in the literature, since it shows both baryon-like and quark-like aspects. A lattice filling with baryon number shows a rapid and smooth transition from condensing baryons to a crystal of saturated quark matter, due to the Pauli principle, and is consistent with this picture. For continuum physics, the continuum limit needs to be taken before the large-$N_c$ limit, which is not yet possible in practice. However, in the controlled range of lattice spacings and $N_c$ values, our results are stable when the limits are approached in this order. We discuss possible implications for physical QCD.
The ALICE Collaboration measures the production of low-mass dielectrons in pp, p-Pb and Pb-Pb collisions at the LHC. The main detectors used in the analyses are the Inner Tracking System, the Time Projection Chamber and the Time-Of-Flight detector, all located around mid-rapidity. The production of virtual photons relative to the inclusive yield in pp collisions is determined by analyzing the dielectron excess with respect to the expected hadronic sources. The direct photon cross section is then calculated and found to be in agreement with NLO pQCD calculations. Results from the invariant mass analysis in p-Pb collisions show an overall agreement between data and the hadronic cocktail. In Pb-Pb collisions, uncorrected background-subtracted yields have been extracted in two centrality classes. A feasibility study for LHC Run 3 after the ALICE upgrade indicates the possibility of a future measurement of the early effective temperature.
Understanding how multicellular organisms reliably orchestrate cell-fate decisions is a central challenge in developmental biology. This is particularly intriguing in early mammalian development, where early cell-lineage differentiation arises from processes that initially appear cell-autonomous but later materialize reliably at the tissue level. In this study, we develop a multi-scale, spatial-stochastic simulator of mouse embryogenesis, focusing on inner-cell mass (ICM) differentiation in the blastocyst stage. Our model features biophysically realistic regulatory interactions and accounts for the innate stochasticity of the biological processes driving cell-fate decisions at the cellular scale. We advance event-driven simulation techniques to incorporate relevant tissue-scale phenomena and integrate them with Simulation-Based Inference (SBI), building on a recent AI-based parameter learning method: the Sequential Neural Posterior Estimation (SNPE) algorithm. Using this framework, we carry out a large-scale Bayesian inferential analysis and determine parameter sets that reproduce the experimentally observed system behavior. We elucidate how autocrine and paracrine feedbacks via the signaling protein FGF4 orchestrate the inherently stochastic expression of fate-specifying genes at the cellular level into reproducible ICM patterning at the tissue scale. This mechanism is remarkably independent of the system size. FGF4 not only ensures correct cell lineage ratios in the ICM, but also enhances its resilience to perturbations. Intriguingly, we find that high variability in intracellular initial conditions does not compromise, but rather can enhance the accuracy and precision of tissue-level dynamics. Our work provides a genuinely spatial-stochastic description of the biochemical processes driving ICM differentiation and the necessary conditions under which it can proceed robustly.
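As an illustration of how the SNPE step can look in practice, below is a minimal sketch in Python using the open-source sbi package; the prior bounds, the dummy summary statistics, and the simulator stand-in embryo_simulator are hypothetical placeholders, not the actual pipeline of the study.

import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

# Hypothetical prior over a few kinetic parameters (e.g. FGF4 secretion and
# binding rates); the bounds are placeholders.
prior = BoxUniform(low=torch.tensor([0.1, 0.01, 1.0]),
                   high=torch.tensor([10.0, 1.0, 100.0]))

def embryo_simulator(theta):
    # Stand-in for the spatial-stochastic ICM simulation: returns noisy
    # summary statistics (e.g. EPI/PRE ratios) so the sketch runs end to end.
    return theta + 0.1 * torch.randn_like(theta)

# Draw parameters, simulate, and train the neural posterior estimator.
theta = prior.sample((1000,))
x = torch.stack([embryo_simulator(t) for t in theta])

inference = SNPE(prior=prior)
density_estimator = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior(density_estimator)

# Condition on (here: synthetic) observed summary statistics and sample
# parameter sets consistent with the observed system behavior.
x_obs = embryo_simulator(torch.tensor([1.0, 0.1, 10.0]))
samples = posterior.sample((10_000,), x=x_obs)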
Tensor network (TN) methods, in particular the Matrix Product States (MPS) ansatz, have proven to be a useful tool for analyzing the properties of lattice gauge theories. They allow for very good precision, much better than standard Monte Carlo (MC) techniques for the models studied so far, owing to the possibility of reaching much smaller lattice spacings. The real reason for the interest in the TN approach, however, is its ability, demonstrated so far in several condensed matter models, to deal with theories which exhibit the notorious sign problem in MC simulations. This makes it a promising tool for dealing with non-zero chemical potential in QCD and other lattice gauge theories, as well as with real-time simulations. In this paper, using matrix product operators, we extend our analysis of the Schwinger model at zero temperature to show the feasibility of this approach also at finite temperature. This is an important step towards dealing with the sign problem of QCD. We analyze in detail the chiral symmetry breaking in the massless and massive cases and show that the method works very well and gives good control over a broad range of temperatures, essentially from zero to infinite temperature.
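Concretely, in such MPO-based finite-temperature studies the (unnormalized) thermal state is usually built by Trotterized imaginary-time evolution starting from the identity operator, schematically
\[
\rho(\beta) \;=\; e^{-\beta H} \;=\; \Big(e^{-\delta\beta\, H}\Big)^{N_\beta}\,\mathbb{1}, \qquad N_\beta = \beta/\delta\beta ,
\]
where each factor $e^{-\delta\beta H}$ is approximated by a Trotter decomposition, applied to the MPO representation of $\mathbb{1}$, and truncated back to a manageable bond dimension after every step; this form only illustrates the idea and is not the specific scheme of the paper.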
We review recent progress on ancestral processes related to mutation-selection models, both in the deterministic and the stochastic setting. We mainly rely on two concepts, namely, the killed ancestral selection graph and the pruned lookdown ancestral selection graph. The killed ancestral selection graph gives a representation of the type of a random individual from a stationary population, based upon the individual's potential ancestry traced back to the mutations that define the individual's type. The pruned lookdown ancestral selection graph allows one to trace the ancestry of individuals from a stationary distribution back into the distant past, thus leading to the stationary distribution of ancestral types. We illustrate the results by applying them to a prototype model for the error threshold phenomenon.
Since Tinhofer proposed the MinGreedy algorithm for maximum cardinality matching in 1984, several experimental studies have found the randomized algorithm to perform excellently for various classes of random graphs and benchmark instances. In contrast, only a few analytical results are known. We show that MinGreedy cannot improve on the trivial approximation ratio of 1/2 with high probability, even for bipartite graphs. Our hard inputs seem to require a small number of high-degree nodes. This motivates an investigation of greedy algorithms on graphs with maximum degree D: we show that MinGreedy achieves a (D-1)/(2D-3)-approximation for graphs with D=3 and for D-regular graphs, and a guarantee of (D-1/2)/(2D-2) for graphs with maximum degree D. Interestingly, our bounds even hold for the deterministic MinGreedy that breaks all ties arbitrarily. Moreover, we investigate the limitations of the greedy paradigm, using the model of priority algorithms introduced by Borodin, Nielsen, and Rackoff. We study deterministic priority algorithms and prove a (D-1)/(2D-3)-inapproximability result for graphs with maximum degree D; thus, these greedy algorithms do not achieve a (1/2+eps)-approximation, and in particular the 2/3-approximation obtained by the deterministic MinGreedy for D=3 is optimal in this class. For k-uniform hypergraphs we show a tight 1/k-inapproximability bound. We also study fully randomized priority algorithms and give a 5/6-inapproximability bound. Thus, they cannot compete with matching algorithms of other paradigms.
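For readers unfamiliar with the algorithm: in one common formulation, MinGreedy repeatedly picks a vertex of minimum degree in the remaining graph, matches it to a randomly chosen neighbour, and deletes both endpoints. A minimal Python sketch of this variant (a plain adjacency-set representation, not code from the paper) is:

import random

def min_greedy_matching(adj):
    """MinGreedy: repeatedly match a minimum-degree vertex to a random neighbour.

    adj: dict mapping each vertex to a set of neighbours (undirected graph).
    Returns a list of matched edges (u, w).
    """
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    matching = []
    while True:
        live = [v for v, ns in adj.items() if ns]  # vertices with a neighbour left
        if not live:
            break
        u = min(live, key=lambda v: len(adj[v]))   # minimum-degree vertex
        w = random.choice(tuple(adj[u]))           # random neighbour
        matching.append((u, w))
        # Remove both endpoints and all edges incident to them.
        for x in (u, w):
            for y in adj.pop(x):
                if y in adj:
                    adj[y].discard(x)
    return matching

# Example: on a path of 4 vertices, MinGreedy returns a maximum matching.
print(min_greedy_matching({0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}))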
Spontaneous symmetry breaking in quantum field theories at non-zero temperature still poses fundamental open questions, in particular what happens to vacuum Goldstone bosons when the temperature is increased. By investigating a complex scalar field theory on the lattice we demonstrate that Goldstone bosons at non-zero temperature behave like screened massless particle-like excitations, so-called thermoparticles, which continue to exist even in the symmetry-restored phase of the theory. We provide non-perturbative evidence for the functional form of the Goldstone mode's dissipative behaviour, which is distinct from standard perturbative expectations, and determine its corresponding spectral properties.
Self-organization is a fundamental process of complex biological systems, particularly during the early stages of development. In the mammalian embryo, blastocyst formation exemplifies a self-organized system, involving the correct spatio-temporal segregation of three distinct cell fates: trophectoderm (TE), epiblast (EPI), and primitive endoderm (PRE). Despite the significance of this class of processes, quantifying the information content of self-organizing patterning systems remains challenging due to the complexity and the qualitative diversity of developmental mechanisms. In this study, we applied a recently proposed information-theoretical framework which quantifies the self-organization potential of cell-fate patterning systems, employing a utility function that integrates (local) positional information and (global) correlational information extracted from developmental pattern ensembles. Specifically, we examined a stochastic and spatially resolved simulation model of EPI-PRE lineage proportioning, evaluating its information content across various simulation scenarios with different numbers of system cells. To overcome the computational challenges hindering the application of this novel framework, we developed a mathematical strategy that indirectly maps the low-dimensional cell-fate counting probability space to the high-dimensional cell-fate patterning probability space, enabling the estimation of self-organization potential for general cell-fate proportioning processes. Overall, this novel information-theoretical framework provides a promising, universal approach for quantifying self-organization in developmental biology. By formalizing measures of self-organization, the employed quantification framework offers a valuable tool for uncovering insights into the underlying principles of cell-fate specification and the emergence of complexity in early developmental systems.
We present results for the $x$ dependence of the unpolarized, helicity, and transversity isovector quark distributions in the proton using lattice QCD, employing the method of quasi-distributions proposed by Ji in 2013. Compared to our previous calculation, the errors are reduced by a factor of about 2.5. Moreover, we present our first results for the polarized sector of the proton, which indicate an asymmetry in the proton sea in favor of the $u$ antiquarks for the helicity distributions, and an asymmetry in favor of the $d$ antiquarks for the transversity distributions.
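For context, the quasi-distribution method evaluates an equal-time, purely spatial correlator in a boosted nucleon; schematically (conventions, Dirac structure and normalization factors vary between papers and are assumptions here),
\[
\tilde{q}(x,P_3) \;=\; \int \frac{dz}{4\pi}\, e^{-i x P_3 z}\, \big\langle P \big| \bar\psi(z)\,\gamma^3\, W(z,0)\,\psi(0) \big| P \big\rangle ,
\]
where $W(z,0)$ is a straight Wilson line along the boost direction; $\tilde q$ approaches the light-cone PDF as $P_3\to\infty$ after a perturbative matching, which is why reaching higher nucleon momenta is so important.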
Lattice QCD simulations tend to become stuck in a single topological sector at fine lattice spacing or when using chirally symmetric overlap quarks. In such cases physical observables differ from their full QCD counterparts by finite volume corrections. These systematic errors need to be understood on a quantitative level and possibly be removed. In this paper we extend an existing relation from the literature between two-point correlation functions at fixed topology and the corresponding hadron masses at unfixed topology by calculating all terms proportional to $1/V^2$ and $1/V^3$, where $V$ is the spacetime volume. Since parity is not a symmetry at fixed topology, parity mixing is comprehensively discussed. In the second part of this work we apply our equations to a simple model, quantum mechanics on a circle, both for a free particle and for a square-well potential, where we demonstrate in detail how to extract physically meaningful masses from computations or simulations at fixed topology.
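The existing relation referred to here is, at leading order in $1/V$, the well-known expansion of a hadron mass $M_Q$ extracted at fixed topological charge $Q$ around the mass at unfixed topology (the higher-order terms are what the paper computes); schematically,
\[
M_Q(V) \;=\; M(0) \;+\; \frac{M''(0)}{2\,\chi_t V}\left(1-\frac{Q^2}{\chi_t V}\right) \;+\; \mathcal{O}\!\left(\frac{1}{V^2}\right),
\]
where $M(\theta)$ is the mass at vacuum angle $\theta$, primes denote $\theta$-derivatives at $\theta=0$, and $\chi_t$ is the topological susceptibility.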
It has long been understood that the inclusion of temperature in the perturbative treatment of quantum field theories leads to complications that are not present at zero temperature. In these proceedings we report on the non-perturbative obstructions that arise, and how these lead to deviations in the predictions of lattice scalar correlation functions in massive $\phi^{4}$ theory. Using the known non-perturbative spectral constraints satisfied by finite-temperature correlation functions, we outline why the presence of distinct particle-like excitations could provide a resolution to these issues.
The intention of these lecture notes is to outline the basics of lattice hadron spectroscopy to students from other fields of physics, e.g. from experimental particle physics, who do not necessarily have a background in quantum field theory. After a brief motivation and discussion of QCD, it is explained how QCD can in principle be solved numerically using lattice QCD. The main part of these lecture notes is concerned with quantum numbers of hadrons, the corresponding hadron creation operators, and how the mass of a hadron can be determined from a temporal correlation function of such operators. Finally, three recent lattice hadron spectroscopy examples from the literature are discussed at an elementary level.
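The central relation behind such mass determinations is the spectral decomposition of the Euclidean two-point function of a hadron creation operator $O$; schematically,
\[
C(t) \;=\; \big\langle O(t)\, O^\dagger(0) \big\rangle \;=\; \sum_n \big|\langle 0| O |n\rangle\big|^2\, e^{-E_n t} \;\xrightarrow{\ t\ \text{large}\ }\; A\, e^{-m t},
\qquad
m_{\text{eff}}(t) \;=\; \frac{1}{a}\,\ln\frac{C(t)}{C(t+a)} \;\longrightarrow\; m ,
\]
so that a plateau in the effective mass $m_{\text{eff}}(t)$ at large $t$ (with $a$ the lattice spacing) yields the ground-state hadron mass $m$.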