VU University Amsterdam
We study critical percolation on a regular planar lattice. Let $E_G(n)$ be the expected number of open clusters intersecting or hitting the line segment $[0,n]$. (For the subscript $G$ we either take $\mathbb{H}$, when we restrict to the upper half-plane, or $\mathbb{C}$, when we consider the full lattice.) Cardy (2001) (see also Yu, Saleur and Haas (2008)) derived heuristically that $E_{\mathbb{H}}(n) = An + \frac{\sqrt{3}}{4\pi}\log(n) + o(\log(n))$, where $A$ is some constant. Recently Kovács, Iglói and Cardy (2012) derived heuristically (as a special case of a more general formula) that a similar result holds for $E_{\mathbb{C}}(n)$, with the constant $\frac{\sqrt{3}}{4\pi}$ replaced by $\frac{5\sqrt{3}}{32\pi}$. In this paper we give, for site percolation on the triangular lattice, a rigorous proof of the formula for $E_{\mathbb{H}}(n)$ above, and a rigorous upper bound for the prefactor of the logarithm in the formula for $E_{\mathbb{C}}(n)$.
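For reference, the two expansions above can be written side by side (here $A'$ denotes the corresponding, possibly different, linear-term constant in the full-plane case; this notation is not used in the abstract itself):

$$E_{\mathbb{H}}(n) = A\,n + \frac{\sqrt{3}}{4\pi}\log(n) + o(\log(n)), \qquad E_{\mathbb{C}}(n) = A'\,n + \frac{5\sqrt{3}}{32\pi}\log(n) + o(\log(n)),$$

where the first is proved rigorously in the paper and the second is the heuristic prediction whose logarithmic prefactor the paper bounds from above.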
Language models trained on large amounts of data are known to produce inappropriate content in some cases and require careful tuning to be used in the real world. We revisit an effective and modular approach for controllability of language models, in which an external expert model guides the decoding. In particular, we zoom in on the parametrization choice of the external expert, highlighting the difference between low-rank and higher-rank parametrizations. Higher-rank experts are designed to support high flexibility when representing the rewards, leading to higher computational costs during decoding. However, we demonstrate that they might not use their full flexibility. By analyzing the recently proposed reward-augmented decoding approach (RAD), which uses a higher-rank expert model, we introduce a simpler but more efficient low-rank parametrization of the expert model, enabling fast and effective guided decoding. We empirically show that the low-rank RAD performs on par with the more flexible RAD on a detoxification and a sentiment control task, while requiring only a single reward model call per generated token.
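A schematic contrast between the two decoding loops described above is sketched below; the models are random stand-ins and the names (expert_score, expert_hidden, W_reward, BETA, TOPK) are illustrative assumptions, not the actual RAD or low-rank RAD parametrization. The point is only the number of expert calls per generated token: top-k calls for the higher-rank expert versus a single call for the low-rank one.

```python
# Schematic comparison of higher-rank vs. low-rank reward-guided decoding.
# All models below are random stand-ins; only the control flow and the
# number of expert (reward model) calls per step are meaningful.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, HIDDEN, TOPK, BETA = 1000, 64, 20, 1.0

def base_lm_logits(prefix_ids):
    """Stand-in for the frozen base LM: next-token logits for a prefix."""
    h = rng.standard_normal(HIDDEN)
    return rng.standard_normal((VOCAB, HIDDEN)) @ h

def expert_score(prefix_ids):
    """Stand-in for one reward-model forward pass returning a scalar reward."""
    return float(rng.standard_normal())

def expert_hidden(prefix_ids):
    """Stand-in for one reward-model forward pass returning a hidden state."""
    return rng.standard_normal(HIDDEN)

W_reward = rng.standard_normal((VOCAB, HIDDEN))  # low-rank head, reused every step

def step_higher_rank(prefix_ids):
    # Higher-rank (RAD-style): re-score each of the top-k candidate
    # continuations with a separate expert call -> k expert calls per token.
    logits = base_lm_logits(prefix_ids)
    topk = np.argsort(logits)[-TOPK:]
    rewards = np.array([expert_score(prefix_ids + [int(t)]) for t in topk])
    scores = logits[topk] + BETA * rewards
    return int(topk[np.argmax(scores)])

def step_low_rank(prefix_ids):
    # Low-rank: one expert call yields a hidden state; a fixed linear head
    # maps it to reward adjustments for the whole vocabulary.
    logits = base_lm_logits(prefix_ids)
    rewards = W_reward @ expert_hidden(prefix_ids)
    return int(np.argmax(logits + BETA * rewards))

prefix = [1, 2, 3]
print(step_higher_rank(prefix), step_low_rank(prefix))
```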
We introduce natural strategic games on graphs, which capture the idea of coordination in a local setting. We study the existence of equilibria that are resilient to coalitional deviations of unbounded and bounded size (i.e., strong equilibria and k-equilibria, respectively). We show that pure Nash equilibria and 2-equilibria exist, and give an example in which no 3-equilibrium exists. Moreover, we prove that strong equilibria exist for various special cases. We also study the price of anarchy (PoA) and price of stability (PoS) for these solution concepts. We show that the PoS for strong equilibria is 1 in almost all of the special cases for which we have proven strong equilibria to exist. The PoA for pure Nash equilibria turns out to be unbounded, even when we fix the graph on which the coordination game is to be played. For k-equilibria, we show that the PoA lies between 2(n-1)/(k-1) - 1 and 2(n-1)/(k-1). The latter upper bound is tight for $k = n$ (i.e., strong equilibria). Finally, we consider the problems of computing strong equilibria and of determining whether a joint strategy is a k-equilibrium or strong equilibrium. We prove that, given a coordination game, a joint strategy s, and a number k as input, it is co-NP-complete to determine whether s is a k-equilibrium. On the positive side, we give polynomial-time algorithms to compute strong equilibria for various special cases.
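As a quick sanity check of the stated bounds, specializing to $k = n$ (strong equilibria) gives

$$\frac{2(n-1)}{k-1}\bigg|_{k=n} = 2, \qquad \frac{2(n-1)}{k-1} - 1\bigg|_{k=n} = 1,$$

so the PoA for strong equilibria lies between 1 and 2, and, as stated above, the upper bound of 2 is tight.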
This report synthesizes current understanding of hadron structure, outlining progress in determining parton distribution functions (PDFs), Generalized Parton Distributions (GPDs), and Transverse Momentum Dependent (TMD) PDFs using both global QCD analysis and lattice QCD. It aims to bridge these communities, detailing the state-of-the-art in extracting multi-dimensional descriptions of quarks and gluons within hadrons.
Consider critical bond percolation on a large $2n$ by $2n$ box on the square lattice. It is well-known that the size (i.e. number of vertices) of the largest open cluster is, with high probability, of order $n^2 \pi(n)$, where $\pi(n)$ denotes the probability that there is an open path from the center to the boundary of the box. The same result holds for the second-largest cluster, the third-largest cluster, and so on. Járai showed that the differences between the sizes of these clusters are, with high probability, at least of order $\sqrt{n^2 \pi(n)}$. Although this bound was enough for his applications (to incipient infinite clusters), he believed, but had no proof, that the differences are in fact of the same order as the cluster sizes themselves, i.e. $n^2 \pi(n)$. Our main result is a proof that this is indeed the case.
In this section, we discuss some basic features of transverse momentum dependent, or unintegrated, parton distribution functions. In particular, when these correlation functions are combined in factorization formulas with hard processes beyond the simplest cases, there are basic problems with universality and factorization. We discuss some of these problems as well as the opportunities that they offer.
Economists and social scientists have debated the relative importance of nature (one's genes) and nurture (one's environment) for decades, if not centuries. This debate can now be informed by the ready availability of genetic data in a growing number of social science datasets. This paper explores the potential uses of genetic data in economics, with a focus on estimating the interplay between nature (genes) and nurture (environment). We discuss how economists can benefit from incorporating genetic data into their analyses even when they do not have a direct interest in estimating genetic effects. We argue that gene-environment (GxE) studies can be instrumental for (i) testing economic theory, (ii) uncovering economic or behavioral mechanisms, and (iii) analyzing treatment effect heterogeneity, thereby improving the understanding of how (policy) interventions affect population subgroups. We introduce the reader to essential genetic terminology, develop a conceptual economic model to interpret gene-environment interplay, and provide practical guidance to empirical researchers.
From human crowds to cells in tissue, the detection and efficient tracking of multiple objects in dense configurations is an important and unsolved problem. In the past, limitations of image analysis have restricted studies of dense groups to tracking a single or a subset of marked individuals, or to coarse-grained group-level dynamics, all of which yield incomplete information. Here, we combine convolutional neural networks (CNNs) with the model environment of a honeybee hive to automatically recognize all individuals in a dense group from raw image data. We create a new, adapted individual labeling scheme and use the segmentation architecture U-Net with a loss function that depends on both object identity and orientation. We additionally exploit temporal regularities of the video recording in a recurrent manner and achieve near human-level performance while reducing the network size by 94% compared to the original U-Net architecture. Given our novel application of CNNs, we generate extensive problem-specific image data in which labeled examples are produced through a custom interface with Amazon Mechanical Turk. This dataset contains over 375,000 labeled bee instances across 720 video frames at 2 FPS, representing an extensive resource for the development and testing of tracking methods. We correctly detect 96% of individuals with a location error of ~7% of a typical body dimension and an orientation error of 12 degrees, approximating the variability of human raters. Our results provide an important step towards efficient image-based dense object tracking by allowing for the accurate determination of object location and orientation across time-series image data within one network architecture.
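A minimal sketch of a joint loss of the kind described (per-pixel identity classification plus orientation regression) is given below; the channel layout, the unit-vector encoding of orientation, and the weighting are illustrative assumptions, not the paper's exact loss.

```python
# Illustrative joint loss for dense per-pixel identity (class) and orientation.
# Orientation is regressed as a unit vector (cos t, sin t) so that the loss is
# smooth across the 0/360-degree wrap-around; weights are arbitrary here.
import torch
import torch.nn.functional as F

def identity_orientation_loss(logits, angle_pred, labels, angle_true, fg_mask, w_orient=1.0):
    """
    logits:     (B, C, H, W) per-pixel class scores (e.g. background / body / head)
    angle_pred: (B, 2, H, W) predicted (cos, sin) of body orientation
    labels:     (B, H, W)    integer class labels
    angle_true: (B, H, W)    ground-truth orientation in radians
    fg_mask:    (B, H, W)    1.0 where an individual is present, 0.0 elsewhere
    """
    cls_loss = F.cross_entropy(logits, labels)
    target = torch.stack([torch.cos(angle_true), torch.sin(angle_true)], dim=1)
    unit_pred = F.normalize(angle_pred, dim=1, eps=1e-6)
    # Penalize orientation only on pixels that belong to an individual.
    orient_err = ((unit_pred - target) ** 2).sum(dim=1) * fg_mask
    orient_loss = orient_err.sum() / fg_mask.sum().clamp(min=1.0)
    return cls_loss + w_orient * orient_loss
```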
Dynamical systems with a network structure can display collective behaviour such as synchronisation. Golubitsky and Stewart observed that all the robustly synchronous dynamics of a network is contained in the dynamics of its quotient networks. DeVille and Lerman have recently shown that the original network and its quotients are related by graph fibrations and hence their dynamics are conjugate. This paper demonstrates the importance of self-fibrations of network graphs. Self-fibrations give rise to symmetries in the dynamics of a network. We show that every homogeneous network admits a lift with self-fibrations and that every robust synchrony in this lift is determined by the symmetries of its dynamics. These symmetries moreover impact the global dynamics of network systems and can be used to explain and predict generic scenarios for synchrony breaking. We also discuss networks with interior symmetries and nonhomogeneous networks.
The advanced interferometer network will herald a new era in observational astronomy. There is a very strong science case to go beyond the advanced detector network and build detectors that operate in a frequency range from 1 Hz to 10 kHz, with a sensitivity a factor of ten better in amplitude. Such detectors will be able to probe a range of topics in nuclear physics, astronomy, cosmology and fundamental physics, providing insights into many unsolved problems in these areas.
This paper develops a mathematical framework to study signal networks, in which nodes can be active or inactive, and their activation or deactivation is driven by external signals and the states of the nodes to which they are connected via links. The focus is on determining the optimal number of key nodes (i.e., highly connected and structurally important nodes) required to represent the global activation state of the network accurately. Motivated by examples from neuroscience, medical science, and social science, we describe the node dynamics as a continuous-time inhomogeneous Markov process. Under mean-field and homogeneity assumptions, appropriate for large scale-free and disassortative signal networks, we derive differential equations characterising the global activation behaviour and compute the expected hitting time to network triggering. Analytical and numerical results show that two or three key nodes are typically sufficient to approximate the overall network state well, balancing sensitivity and robustness. Our findings provide insight into how natural systems can efficiently aggregate information by exploiting minimal structural components.
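The paper's equations are not reproduced here; purely as an illustration of the general setup, the sketch below integrates a generic mean-field activation equation (external signal plus neighbour-driven activation, constant deactivation) and reads off an approximate hitting time for a trigger threshold. The functional form, all rates, and the threshold are illustrative assumptions.

```python
# Illustrative mean-field dynamics for the fraction rho(t) of active nodes:
# activation driven by an external signal s and by already-active neighbours
# (coupling c), deactivation at rate d. Not the paper's exact model.

def hitting_time(s=0.3, c=2.0, d=1.0, rho0=0.0, threshold=0.5, dt=1e-3, t_max=50.0):
    rho, t = rho0, 0.0
    while t < t_max:
        drho = (s + c * rho) * (1.0 - rho) - d * rho  # mean-field rate equation
        rho += dt * drho                              # forward Euler step
        t += dt
        if rho >= threshold:
            return t                                  # approximate trigger time
    return float("inf")                               # threshold never reached

print("approximate hitting time of the trigger threshold:", hitting_time())
```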
We describe a search for gravitational waves from compact binaries with at least one component with mass $0.2\,M_\odot$--$1.0\,M_\odot$ and mass ratio $q \geq 0.1$ in Advanced LIGO and Advanced Virgo data collected between 1 November 2019, 15:00 UTC and 27 March 2020, 17:00 UTC. No signals were detected. The most significant candidate has a false alarm rate of $0.2\,\mathrm{yr}^{-1}$. We estimate the sensitivity of our search over the entirety of Advanced LIGO's and Advanced Virgo's third observing run, and present the most stringent limits to date on the merger rate of binary black holes with at least one subsolar-mass component. We use the upper limits to constrain two fiducial scenarios that could produce subsolar-mass black holes: primordial black holes (PBH) and a model of dissipative dark matter. The PBH model uses recent prescriptions for the merger rate of PBH binaries that include a rate suppression factor to effectively account for PBH early binary disruptions. If the PBHs are monochromatically distributed, we can exclude a dark matter fraction in PBHs $f_\mathrm{PBH} \gtrsim 0.6$ (at 90% confidence) in the probed subsolar-mass range. However, if we allow for broad PBH mass distributions we are unable to rule out $f_\mathrm{PBH} = 1$. For the dissipative model, where the dark matter has chemistry that allows a small fraction to cool and collapse into black holes, we find an upper bound $f_{\mathrm{DBH}} < 10^{-5}$ on the fraction of atomic dark matter collapsed into black holes.
In this paper we introduce a novel Bayesian data augmentation approach for estimating the parameters of the generalised logistic regression model. We propose a Pólya-Gamma sampler algorithm that allows us to sample from the exact posterior distribution, rather than relying on approximations. A simulation study illustrates the flexibility and accuracy of the proposed approach to capture heavy and light tails in binary response data of different dimensions. The methodology is applied to two different real datasets, where we demonstrate that the Pólya-Gamma sampler provides more precise estimates than the empirical likelihood method, outperforming approximate approaches.
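For orientation, a minimal sketch of the classical Pólya-Gamma Gibbs sampler for ordinary Bayesian logistic regression (the Polson-Scott-Windle augmentation) is given below; the paper's sampler for the generalised logistic model extends this scheme, and the truncated gamma-series draw of PG(1, z) used here is a crude stand-in for an exact sampler.

```python
# Minimal Polya-Gamma Gibbs sampler for ordinary Bayesian logistic regression.
# The PG(1, z) draw uses a truncated version of the infinite gamma-series
# representation; a production sampler would use an exact method instead.
import numpy as np

rng = np.random.default_rng(1)

def sample_pg1(z, n_terms=200):
    """Approximate draw from PG(1, z) via its (truncated) gamma-series form."""
    k = np.arange(1, n_terms + 1)
    g = rng.gamma(1.0, 1.0, size=n_terms)
    return np.sum(g / ((k - 0.5) ** 2 + (z / (2.0 * np.pi)) ** 2)) / (2.0 * np.pi ** 2)

def gibbs_logistic(X, y, n_iter=2000, b0=None, B0=None):
    """Gibbs sampler for beta in y_i ~ Bernoulli(logit^{-1}(x_i' beta))."""
    n, p = X.shape
    b0 = np.zeros(p) if b0 is None else b0
    B0inv = np.eye(p) if B0 is None else np.linalg.inv(B0)  # prior precision
    beta, kappa = np.zeros(p), y - 0.5
    draws = np.empty((n_iter, p))
    for it in range(n_iter):
        omega = np.array([sample_pg1(xi @ beta) for xi in X])   # omega_i | beta
        V = np.linalg.inv(X.T @ (omega[:, None] * X) + B0inv)   # beta | omega, y
        m = V @ (X.T @ kappa + B0inv @ b0)
        beta = rng.multivariate_normal(m, V)
        draws[it] = beta
    return draws

# Tiny synthetic check.
X = np.column_stack([np.ones(200), rng.standard_normal(200)])
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-X @ np.array([-0.5, 1.5])))).astype(float)
print(gibbs_logistic(X, y, n_iter=500)[250:].mean(axis=0))  # posterior mean estimate
```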
The AI community is increasingly putting its attention towards combining symbolic and neural approaches, as it is often argued that the strengths and weaknesses of these approaches are complementary. One recent trend in the literature is weakly supervised learning techniques that employ operators from fuzzy logics. In particular, these use prior background knowledge described in such logics to help the training of a neural network from unlabeled and noisy data. By interpreting logical symbols using neural networks, this background knowledge can be added to regular loss functions, hence making reasoning a part of learning. We study, both formally and empirically, how a large collection of logical operators from the fuzzy logic literature behave in a differentiable learning setting. We find that many of these operators, including some of the most well-known, are highly unsuitable in this setting. A further finding concerns the treatment of implication in these fuzzy logics, and shows a strong imbalance between gradients driven by the antecedent and the consequent of the implication. Furthermore, we introduce a new family of fuzzy implications (called sigmoidal implications) to tackle this phenomenon. Finally, we empirically show that it is possible to use Differentiable Fuzzy Logics for semi-supervised learning, and compare how different operators behave in practice. We find that, to achieve the largest performance improvement over a supervised baseline, we have to resort to non-standard combinations of logical operators which perform well in learning, but no longer satisfy the usual logical laws.
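To make the gradient behaviour concrete, the snippet below differentiates two standard fuzzy implications at a "violated rule" point (high antecedent, low consequent); how the gradient is split between antecedent and consequent depends strongly on the operator, which is the imbalance phenomenon referred to above. The sigmoidal implications introduced in the paper are not reproduced here.

```python
# Gradients of two standard fuzzy implications I(a, c) with respect to the
# antecedent a and consequent c, at a "violated rule" point (a high, c low).
import torch

def grads(implication, a_val=0.8, c_val=0.1):
    a = torch.tensor(a_val, requires_grad=True)
    c = torch.tensor(c_val, requires_grad=True)
    implication(a, c).backward()
    return a.grad.item(), c.grad.item()

reichenbach = lambda a, c: 1 - a + a * c               # dI/da = c - 1, dI/dc = a
kleene_dienes = lambda a, c: torch.maximum(1 - a, c)   # max-based S-implication

print("Reichenbach   (dI/da, dI/dc):", grads(reichenbach))    # (-0.9, 0.8)
print("Kleene-Dienes (dI/da, dI/dc):", grads(kleene_dienes))  # (-1.0, 0.0): all on a
```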
A polarized $ep/eA$ collider (Electron-Ion Collider, or EIC), with polarized proton and light-ion beams and unpolarized heavy-ion beams with a variable center-of-mass energy $\sqrt{s} \sim 20$ to $\sim 100$ GeV (upgradable to $\sim 150$ GeV) and a luminosity up to $\sim 10^{34}\,\textrm{cm}^{-2}\,\textrm{s}^{-1}$, would be uniquely suited to address several outstanding questions of Quantum Chromodynamics, and thereby lead to new qualitative and quantitative information on the microscopic structure of hadrons and nuclei. During this meeting at Jefferson Lab we addressed recent theoretical and experimental developments in the spin and the three-dimensional structure of the nucleon (sea quark and gluon spatial distributions, orbital motion, polarization, and their correlations). This mini-review contains a short update on progress in these areas since the EIC White Paper~\cite{Accardi:2012qut}.
Applying natural language processing for mining and intelligent information access to tweets (a form of microblog) is a challenging, emerging research area. Unlike carefully authored news text and other longer content, tweets pose a number of new challenges, due to their short, noisy, context-dependent, and dynamic nature. Information extraction from tweets is typically performed in a pipeline, comprising consecutive stages of language identification, tokenisation, part-of-speech tagging, named entity recognition and entity disambiguation (e.g. with respect to DBpedia). In this work, we describe a new Twitter entity disambiguation dataset, and conduct an empirical analysis of named entity recognition and disambiguation, investigating how robust a number of state-of-the-art systems are on such noisy texts, what the main sources of error are, and which problems should be further investigated to improve the state of the art.
Spontaneous Rayleigh-Brillouin scattering experiments in air, N2 and O2 have been performed for a wide range of temperatures and pressures at a wavelength of 403 nm and at a 90-degree scattering angle. Measurements of the Rayleigh-Brillouin spectral scattering profile were conducted at high signal-to-noise ratio for all three species, yielding high-quality spectra that unambiguously show the small differences between scattering in air and in its constituents N2 and O2. Comparison of the experimental spectra with calculations using the Tenti S6 model, developed in the 1970s on the basis of linearized kinetic equations for molecular gases, demonstrates that this model is valid to high accuracy. After previous measurements performed at 366 nm, the Tenti S6 model is here verified for a second wavelength of 403 nm. Values of the bulk viscosity for the gases are derived by optimizing the model to the measurements, and it is verified that the bulk viscosity parameters obtained from the previous experiments at 366 nm remain valid at 403 nm. Also for air, which is treated as a single-component gas with effective gas transport coefficients, the Tenti S6 treatment is validated at 403 nm as it was at the previously used wavelength of 366 nm, yielding an accurate description of the scattering profiles for a range of temperatures and pressures, including those of relevance for atmospheric studies. It is concluded that the Tenti S6 model, further verified in the present study, is applicable to LIDAR studies of the wind velocity and temperature profile distributions of the Earth's atmosphere. Based on the present findings, predictions can be made for the spectral profiles in a typical LIDAR backscatter geometry, which deviate by some 7 percent from purely Gaussian profiles at the realistic sub-atmospheric pressures occurring at 3-5 km altitude in the Earth's atmosphere.
We review recent progress in the description of unpolarized transverse-momentum-dependent (TMD) gluon distributions at small $x$ in the color glass condensate (CGC) effective theory. We discuss the origin of the non-universality of TMD gluon distributions in the TMD factorization framework and in the CGC theory, and the equivalence of the two approaches in their overlapping domain of validity. We show some applications of this equivalence, including recent results on the behavior of TMD gluon distributions at small $x$ and on the study of gluon saturation. We discuss recent advances in the unification of the TMD evolution and the non-linear small-$x$ evolution of gluon distributions.
It is rare that texts or entire books written in a Controlled Natural Language (CNL) become very popular, but exactly this has happened with a book published last year. Randall Munroe's Thing Explainer uses only the 1,000 most frequently used words of the English language, together with drawn pictures, to explain complicated things such as nuclear reactors, jet engines, the solar system, and dishwashers. This restricted language is a very interesting new case for the CNL community. I describe here its place in the context of existing approaches to Controlled Natural Languages, and I provide a first analysis from a scientific perspective, covering the word production rules and word distributions.
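As a toy illustration of the restriction, the snippet below flags words of a text that fall outside a given allow-list; the word list here is a tiny placeholder, not Munroe's actual list of the 1,000 most frequent words.

```python
# Toy checker: report words of a text that are not in an allow-list
# (a stand-in for the "ten hundred" most common English words).
import re

def disallowed_words(text, allowed):
    tokens = re.findall(r"[a-z']+", text.lower())
    return sorted({t for t in tokens if t not in allowed})

allowed = {"the", "a", "up", "goer", "space", "car", "that", "goes", "to", "this", "is"}  # placeholder
print(disallowed_words("This is the up goer five, a space car that goes to space.", allowed))
# -> ['five']
```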