We give an arithmetic count of the lines on a smooth cubic surface over an arbitrary field $k$, generalizing the counts that over $\mathbb{C}$ there are 27 lines, and over $\mathbb{R}$ the number of hyperbolic lines minus the number of elliptic lines is 3. In general, the lines are defined over a field extension $L$ and have an associated arithmetic type $\alpha$ in $L^*/(L^*)^2$. There is an equality in the Grothendieck-Witt group $\operatorname{GW}(k)$ of $k$: $$\sum_{\text{lines}} \operatorname{Tr}_{L/k} \langle \alpha \rangle = 15 \cdot \langle 1 \rangle + 12 \cdot \langle -1 \rangle,$$ where $\operatorname{Tr}_{L/k}$ denotes the trace $\operatorname{GW}(L) \to \operatorname{GW}(k)$. Taking the rank and signature recovers the results over $\mathbb{C}$ and $\mathbb{R}$. To do this, we develop an elementary theory of the Euler number in $\mathbb{A}^1$-homotopy theory for algebraic vector bundles. We expect that further arithmetic counts generalizing enumerative results in complex and real algebraic geometry can be obtained with similar methods.
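The two classical specializations of the count $15 \cdot \langle 1 \rangle + 12 \cdot \langle -1 \rangle$ can be checked with simple arithmetic: the class $\langle a \rangle$ in $\operatorname{GW}(k)$ has rank 1, and over $\mathbb{R}$ its signature is $\operatorname{sign}(a)$. A minimal numeric sketch (not part of the paper, purely illustrative):

```python
# Diagonal entries of the form 15<1> + 12<-1> in GW(k).
form = [1] * 15 + [-1] * 12

# Rank homomorphism GW(C) -> Z: each <a> contributes 1.
rank = len(form)

# Signature homomorphism GW(R) -> Z: each <a> contributes sign(a).
signature = sum(1 if a > 0 else -1 for a in form)

print(rank, signature)  # 27 3
```

Rank 27 recovers the count of lines over $\mathbb{C}$; signature 3 recovers hyperbolic minus elliptic lines over $\mathbb{R}$.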
The LIGO/Virgo collaboration published the catalogs GWTC-1, GWTC-2.1 and GWTC-3, containing candidate gravitational-wave (GW) events detected during its observing runs O1, O2 and O3. These GW events are possible sites of neutrino emission. In this paper, we present a search for neutrino counterparts of 90 GW candidates using IceCube DeepCore, the low-energy infill array of the IceCube Neutrino Observatory. The search is conducted using an unbinned maximum likelihood method within a time window of 1000 s, and uses the spatial and timing information from the GW events. The neutrinos used for the search have energies ranging from a few GeV to several tens of TeV. We do not find any significant emission of neutrinos, and place upper limits on the flux and the isotropic-equivalent energy emitted in low-energy neutrinos. We also conduct a binomial test to search for source populations potentially contributing to neutrino emission. We report a non-detection of a significant neutrino-source population with this test.
Comets have compositions similar to interstellar medium ices, suggesting that at least some of their molecules may be inherited from an earlier stage of evolution. To investigate the degree to which this might have occurred, we compare the composition of individual comets to that of the well-studied protostellar region IRAS 16293-2422B. We show that the observed molecular abundance ratios in several comets correlate well with those observed in the protostellar source. However, this does not necessarily mean that the cometary abundances are identical to the protostellar ones. We find that the abundance ratios of many molecules present in comets are enhanced compared to their protostellar counterparts. For COH-molecules, the data suggest that more complex species, e.g. HCOOH, CH3CHO, and HCOOCH3, have higher abundances relative to methanol in comets. For N-bearing molecules, the ratios of nitriles relative to CH3CN -- HC3N/CH3CN and HCN/CH3CN -- tend to be enhanced. The abundances of cometary SO and SO2 relative to H2S are enhanced, whereas OCS/H2S is reduced. Using a subset of comets with a common set of observed molecules, we suggest a possible means of determining the relative degree to which they retain interstellar ices. This analysis suggests that over 84% of COH-bearing molecules can be explained by the protostellar composition. The possible fraction inherited from the protostellar region is lower for N-molecules, at only 26--74%. While this is still speculative, especially since few comets have large numbers of observed molecules, it provides a possible route for determining the relative degree to which comets contain disk-processed material.
Large particle sorters have potential applications in sorting microplastics and large biomaterials (>50 micrometer), such as tissues, spheroids, organoids, and embryos. Though great advancements have been made in image-based sorting of cells and particles (<50 micrometer), their translation for high-throughput sorting of larger biomaterials and particles (>50 micrometer) has been more limited. An image-based detection technique is highly desirable due to richness of the data (including size, shape, color, morphology, and optical density) that can be extracted from live images of individualized biomaterials or particles. Such a detection technique is label-free and can be integrated with a contact-free actuation mechanism such as one based on traveling surface acoustic waves (TSAWs). Recent advances in using TSAWs for sorting cells and particles (<50 micrometer) have demonstrated short response times (<1 ms), high biocompatibility, and reduced energy requirements to actuate. Additionally, TSAW-based devices are miniaturized and easier to integrate with an image-based detection technique. In this work, a high-throughput image-detection based large particle microfluidic sorting technique is implemented. The technique is used to separate binary mixtures of small and large polyethylene particles (ranging between ~45-180 micrometer in size). All particles in flow were first optically interrogated for size, followed by actuations using momentum transfer from TSAW pulses, if they satisfied the size cutoff criterion. The effect of control parameters such as duration and power of TSAW actuation pulse, inlet flow rates, and sample dilution on sorting efficiency and throughput was observed. At the chosen conditions, this sorting technique can sort on average ~4.9-34.3 particles/s (perform ~2-3 actuations/s), depending on the initial sample composition and concentration.
Network models are increasingly vital in psychometrics for analyzing relational data, which are often accompanied by high-dimensional node attributes. Joint latent space models (JLSM) provide an elegant framework for integrating these data sources by assuming a shared underlying latent representation; however, a persistent methodological challenge is determining the dimension of the latent space, as existing methods typically require pre-specification or rely on computationally intensive post-hoc procedures. We develop a novel Bayesian joint latent space model that incorporates a cumulative ordered spike-and-slab (COSS) prior. This approach enables the latent dimension to be inferred automatically and simultaneously with all model parameters. We develop an efficient Markov Chain Monte Carlo (MCMC) algorithm for posterior computation. Theoretically, we establish that the posterior distribution concentrates on the true latent dimension and that parameter estimates achieve Hellinger consistency at a near-optimal rate that adapts to the unknown dimensionality. Through extensive simulations and two real-data applications, we demonstrate the method's superior performance in both dimension recovery and parameter estimation. Our work offers a principled, computationally efficient, and theoretically grounded solution for adaptive dimension selection in psychometric network models.
We continue the study of operator algebras over the $p$-adic integers, initiated in our previous work [1]. In this sequel, we develop further structural results and provide new families of examples. We introduce the notion of $p$-adic von Neumann algebras, and analyze those with trivial center, which we call "factors". In particular, we show that ICC groups provide examples of factors. We then establish a characterization of $p$-simplicity for groupoid operator algebras, showing its relation to effectiveness and minimality. A central part of the paper is devoted to a $p$-adic analogue of the GNS construction, leading to a representation theorem for Banach $*$-algebras over $\mathbb{Z}_p$. As applications, we exhibit large classes of $p$-adic operator algebras, including residually finite-rank algebras and affinoid algebras with the spectral norm. Finally, we investigate the $K$-theory of $p$-adic operator algebras, including the computation of the homotopy analytic $K$-theory of continuous $\mathbb{Z}_p$-valued functions on a compact Hausdorff space and the analytic (non-homotopy-invariant) $K$-theory of certain $p$-adically complete Banach algebras in terms of continuous $K$-theory. Together, these results extend the foundations of the emerging theory of $p$-adic operator algebras.
Motivated by the delivery of drugs and vaccines through subcutaneous (SC) injection in human bodies, a theoretical investigation is performed using a two-dimensional mathematical model in Cartesian coordinates. In general, a large variety of biological tissues behave as deformable porous materials with anisotropic hydraulic conductivity. Consequently, one can adopt the field equations of mixture theory to describe the behavior of the interstitial fluid and adipose cells present in the subcutaneous layer of skin. During the procedure, a medical practitioner takes a large pinch of the skin at the injection site between the thumb and index finger and holds it. This pulls the fatty tissue away from the muscle and makes the injection process easier. In this situation, the small aspect ratio (denoted $\delta$) of the subcutaneous layer (SCL), i.e., $\delta^2 \sim 0.01$, simplifies the governing equation for tissue dynamics, as it becomes a perturbation parameter. This study highlights the mechanical response of the adipose tissue in terms of the variation in anisotropic hydraulic conductivity, the viscosity of the injected drug, the mean depth of subcutaneous tissue, etc. In particular, the computed stress fields can measure the intensity of pain to be experienced by a patient after this procedure. This study also discusses the biomechanical impact of the creation of one or more eddy structures near the area of injection, which is due to the high pressure developed there, increased tissue anisotropy, fluid viscosity, etc.
Many research questions -- particularly those in environmental health -- do not involve binary exposures. In environmental epidemiology, this includes multivariate exposure mixtures with nondiscrete components. Causal inference estimands and estimators to quantify the relationship between an exposure mixture and an outcome are relatively few. We propose an approach to quantify a relationship between a shift in the exposure mixture and the outcome -- either in the single timepoint or longitudinal setting. The shift in the exposure mixture can be defined flexibly in terms of shifting one or more components, including examining interaction between mixture components, and in terms of shifting the same or different amounts across components. The estimand we discuss has a similar interpretation as a main effect regression coefficient. First, we focus on choosing a shift in the exposure mixture supported by observed data. We demonstrate how to assess extrapolation and modify the shift to minimize reliance on extrapolation. Second, we propose estimating the relationship between the exposure mixture shift and outcome completely nonparametrically, using machine learning in model-fitting. This is in contrast to other current approaches, which employ parametric modeling for at least some relationships, which we would like to avoid because parametric modeling assumptions in complex, nonrandomized settings are tenuous at best. We are motivated by longitudinal data on pesticide exposures among participants in the CHAMACOS Maternal Cognition cohort. We examine the relationship between longitudinal exposure to agricultural pesticides and risk of hypertension. We provide step-by-step code to facilitate the easy replication and adaptation of the approaches we use.
Recent advancements in artificial intelligence (AI) have seen the emergence of smart video surveillance (SVS) in many practical applications, particularly for building safer and more secure communities in our urban environments. Cognitive tasks, such as identifying objects, recognizing actions, and detecting anomalous behaviors, can produce data capable of providing valuable insights to the community through statistical and analytical tools. However, the design of artificially intelligent surveillance systems requires special consideration of ethical challenges and concerns. The use and storage of personally identifiable information (PII) commonly pose an increased risk to personal privacy. To address these issues, this paper identifies the privacy concerns and requirements that must be addressed when designing AI-enabled smart video surveillance. Further, we propose the first end-to-end AI-enabled privacy-preserving smart video surveillance system that holistically combines computer vision analytics, statistical data analytics, cloud-native services, and end-user applications. Finally, we propose quantitative and qualitative metrics to evaluate intelligent video surveillance systems. The system achieves 17.8 frame-per-second (FPS) processing in extreme video scenes. However, considering privacy in the design of such a system leads to preferring the pose-based algorithm over the pixel-based one. This choice reduced accuracy in both the action and anomaly detection tasks: results drop from 97.48 to 73.72 for anomaly detection and from 96 to 83.07 for action detection. On average, the latency of the end-to-end system is 36.1 seconds.
A hereditary property of graphs is a collection of graphs which is closed under taking induced subgraphs. The speed of $\mathcal{P}$ is the function $n \mapsto |\mathcal{P}_n|$, where $\mathcal{P}_n$ denotes the graphs of order $n$ in $\mathcal{P}$. It was shown by Alekseev, and by Bollobás and Thomason, that if $\mathcal{P}$ is a hereditary property of graphs then $|\mathcal{P}_n| = 2^{(1 - 1/r + o(1))n^2/2}$, where $r = r(\mathcal{P}) \in \mathbb{N}$ is the so-called `colouring number' of $\mathcal{P}$. However, their results tell us very little about the structure of a typical graph $G \in \mathcal{P}$. In this paper we describe the structure of almost every graph in a hereditary property of graphs, $\mathcal{P}$. As a consequence, we derive essentially optimal bounds on the speed of $\mathcal{P}$, improving the Alekseev--Bollobás--Thomason Theorem, and also generalizing results of Balogh, Bollobás and Simonovits.
In this paper, we establish the invertibility of the Berezin transform of the symbol as a necessary and sufficient condition for the invertibility of the Toeplitz operator on the Bergman space $L^2_a(\mathbb{D})$. More precisely, if $\phi = c g + d \bar{g}$, where $c, d \in \mathbb{C}$ and $g \in H^{\infty}(\mathbb{D})$, the space of all bounded analytic functions, then $T_{\phi}$ is invertible on $L^2_a(\mathbb{D})$ if and only if $\inf_{z \in \mathbb{D}} |\widetilde{\phi}(z)| = \inf_{z \in \mathbb{D}} |\phi(z)| > 0$, where $\widetilde{\phi}$ is the Berezin transform of $\phi$.
Let $\Omega$ be a Lipschitz domain in $\mathbb{R}^d$, and let $\mathcal{A}^\varepsilon = -\operatorname{div} A(x, x/\varepsilon)\nabla$ be a strongly elliptic operator on $\Omega$. We suppose that $\varepsilon$ is small and the function $A$ is Lipschitz in the first variable and periodic in the second, so the coefficients of $\mathcal{A}^\varepsilon$ are locally periodic and rapidly oscillate. Given $\mu$ in the resolvent set, we are interested in finding the rates of approximations, as $\varepsilon \to 0$, for $(\mathcal{A}^\varepsilon - \mu)^{-1}$ and $\nabla(\mathcal{A}^\varepsilon - \mu)^{-1}$ in the operator topology on $L_p$ for suitable $p$. It is well known that the rates depend on the regularity of the effective operator $\mathcal{A}^0$. We prove that if $(\mathcal{A}^0 - \mu)^{-1}$ and its adjoint are bounded from $L_p(\Omega)^n$ to the Lipschitz--Besov space $\Lambda_p^{1+s}(\Omega)^n$ with $s \in (0,1]$, then the rates are, respectively, $\varepsilon^s$ and $\varepsilon^{s/p}$. The results are applied to the Dirichlet, Neumann and mixed Dirichlet--Neumann problems for strongly elliptic operators with uniformly bounded and $\operatorname{VMO}$ coefficients.
We study the matrix completion problem that leverages hierarchical similarity graphs as side information in the context of recommender systems. Under a hierarchical stochastic block model that well respects practically-relevant social graphs and a low-rank rating matrix model, we characterize the exact information-theoretic limit on the number of observed matrix entries (i.e., optimal sample complexity) by proving sharp upper and lower bounds on the sample complexity. In the achievability proof, we demonstrate that the probability of error of the maximum likelihood estimator vanishes for a sufficiently large number of users and items, if all sufficient conditions are satisfied. On the other hand, the converse (impossibility) proof is based on the genie-aided maximum likelihood estimator. Under each necessary condition, we present examples of a genie-aided estimator to prove that the probability of error does not vanish for a sufficiently large number of users and items. One important consequence of this result is that exploiting the hierarchical structure of social graphs yields a substantial gain in sample complexity relative to an approach that simply identifies different groups without resorting to the relational structure across them. More specifically, we analyze the optimal sample complexity and identify different regimes whose characteristics rely on quality metrics of side information of the hierarchical similarity graph. Finally, we present simulation results to corroborate our theoretical findings and show that the characterized information-theoretic limit can be asymptotically achieved.
We study the algebraic and geometric properties of stated skein algebras of surfaces with punctured boundary. We prove that the skein algebra of the bigon is isomorphic to the quantum group $\mathcal{O}_{q^2}(\mathrm{SL}(2))$, providing a topological interpretation for its structure morphisms. We also show that its stated skein algebra lifts, in a suitable sense, the Reshetikhin-Turaev functor, and in particular we recover the dual $R$-matrix for $\mathcal{O}_{q^2}(\mathrm{SL}(2))$ in a topological way. We deduce that the skein algebra of a surface with $n$ boundary components is an algebra-comodule over $\mathcal{O}_{q^2}(\mathrm{SL}(2))^{\otimes n}$ and prove that cutting along an ideal arc corresponds to Hochschild cohomology of bicomodules. We give a topological interpretation of the braided tensor product of stated skein algebras of surfaces as "gluing on a triangle"; we then recover topologically some braided bialgebras in the category of $\mathcal{O}_{q^2}(\mathrm{SL}(2))$-comodules, among which the "transmutation" of $\mathcal{O}_{q^2}(\mathrm{SL}(2))$. We also provide an operadic interpretation of stated skein algebras as an example of a "geometric non-symmetric modular operad". In the last part of the paper we define a reduced version of stated skein algebras and prove that it allows one to recover Bonahon-Wong's quantum trace map and to interpret skein algebras in the classical limit, as $q \to 1$, as regular functions over a suitable version of moduli spaces of twisted bundles.
Recent measurements of the Sunyaev-Zel'dovich (SZ) angular power spectrum from the South Pole Telescope (SPT) and the Atacama Cosmology Telescope (ACT) demonstrate the importance of understanding baryon physics when using the SZ power spectrum to constrain cosmology. This is challenging since roughly half of the SZ power at l=3000 is from low-mass systems with 10^13 h^-1 M_sun < M_500 < 1.5x10^14 h^-1 M_sun, which are more difficult to study than systems of higher mass. We present a study of the thermal pressure content for a sample of local galaxy groups from Sun et al. (2009). The group Y_{sph, 500} - M_500 relation agrees with the one for clusters derived by Arnaud et al. (2010). The group median pressure profile also agrees with the universal pressure profile for clusters derived by Arnaud et al. (2010). With this in mind, we briefly discuss several ways to alleviate the tension between the measured low SZ power and the predictions from SZ templates.
Many astrophysical disks, such as protoplanetary disks, are in a regime where non-ideal, plasma-specific magnetohydrodynamic (MHD) effects can significantly influence the behavior of the magnetorotational instability (MRI). The possibility of studying these effects in the Plasma Couette Experiment (PCX) is discussed. An incompressible, dissipative global stability analysis is developed to include plasma-specific two-fluid effects and neutral collisions, which are inherently absent in analyses of Taylor-Couette flows (TCFs) in liquid metal experiments. It is shown that with boundary-driven flows, an ion-neutral collision drag body force significantly affects the azimuthal velocity profile, thus limiting the flows to a regime where the MRI is not present. Electrically driven flow (EDF) is proposed as an alternative body-force flow drive with which the MRI can be destabilized at more easily achievable plasma parameters. Scenarios for reaching MRI-relevant parameter space and necessary hardware upgrades are described.
In this article, we provide explicit bounds for the prime counting function $\theta(x)$ in all ranges of $x$. The bounds for the error term $\theta(x) - x$ are of the shape $\epsilon x$ and $\frac{c_k x}{(\log x)^k}$, for $k = 1, \ldots, 5$. Tables of values for $\epsilon$ and $c_k$ are provided.
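The function in question is Chebyshev's $\theta(x) = \sum_{p \le x} \log p$, and the bounded quantity is the relative error $|\theta(x) - x|/x$. A small numerical sketch of this behavior (sympy is assumed available for prime generation; the specific $\epsilon$ and $c_k$ values come from the paper's tables and are not reproduced here):

```python
from math import log
from sympy import primerange  # assumption: sympy is installed

def theta(x):
    """Chebyshev's theta function: sum of log p over primes p <= x."""
    return sum(log(p) for p in primerange(2, x + 1))

# The relative error |theta(x) - x| / x shrinks as x grows, consistent
# with bounds of the shape eps * x.
for x in (10**3, 10**4, 10**5):
    print(x, abs(theta(x) - x) / x)
```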
The critical group of a graph is a finite abelian group whose order is the number of spanning forests of the graph. This paper provides three basic structural results on the critical group of a line graph. The first deals with connected graphs containing no cut-edge. Here the number of independent cycles in the graph, which is known to bound the number of generators for the critical group of the graph, is shown also to bound the number of generators for the critical group of its line graph. The second gives, for each prime p, a constraint on the p-primary structure of the critical group, based on the largest power of p dividing all sums of degrees of two adjacent vertices. The third deals with connected graphs whose line graph is regular. Here known results relating the number of spanning trees of the graph and of its line graph are sharpened to exact sequences which relate their critical groups. The first two results interact extremely well with the third. For example, they imply that in a regular nonbipartite graph, the critical group of the graph and that of its line graph determine each other uniquely in a simple fashion.
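Concretely, the critical group is the torsion part of the cokernel of a reduced Laplacian (delete one row and column), read off from its Smith normal form over $\mathbb{Z}$. A hedged sketch of this computation, using $K_4$ as a hypothetical example (not taken from the paper) and assuming sympy is available:

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# Laplacian of the complete graph K4 (chosen purely for illustration).
L = Matrix([[ 3, -1, -1, -1],
            [-1,  3, -1, -1],
            [-1, -1,  3, -1],
            [-1, -1, -1,  3]])

L_red = L[1:, 1:]  # reduced Laplacian: delete one row and one column
D = smith_normal_form(L_red, domain=ZZ)

# The diagonal gives the invariant factors; their product is the number
# of spanning trees (Matrix-Tree theorem).
invariants = [int(D[i, i]) for i in range(3)]
print(invariants)  # [1, 4, 4]: critical group Z/4 x Z/4, 16 spanning trees
```

This matches the known fact that the critical group of $K_n$ is $(\mathbb{Z}/n)^{n-2}$.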
The input in the Minimum-Cost Constraint Satisfaction Problem (MinCSP) over the Point Algebra contains a set of variables, a collection of constraints of the form $x < y$, $x = y$, $x \leq y$ and $x \neq y$, and a budget $k$. The goal is to check whether it is possible to assign rational values to the variables while breaking constraints of total cost at most $k$. This problem generalizes several prominent graph separation and transversal problems: MinCSP($<$) is equivalent to Directed Feedback Arc Set, MinCSP($<, \leq$) is equivalent to Directed Subset Feedback Arc Set, MinCSP($=, \neq$) is equivalent to Edge Multicut, and MinCSP($\leq, \neq$) is equivalent to Directed Symmetric Multicut. Apart from trivial cases, MinCSP($\Gamma$) for $\Gamma \subseteq \{<, =, \leq, \neq\}$ is NP-hard even to approximate within any constant factor under the Unique Games Conjecture. Hence, we study the parameterized complexity of this problem under a natural parameterization by the solution cost $k$. We obtain a complete classification: if $\Gamma \subseteq \{<, =, \leq, \neq\}$ contains both $\leq$ and $\neq$, then MinCSP($\Gamma$) is W[1]-hard; otherwise it is fixed-parameter tractable. For the positive cases, we solve MinCSP($<, =, \neq$), generalizing the FPT results for Directed Feedback Arc Set and Edge Multicut as well as their weighted versions. Our algorithm works by reducing the problem into a Boolean MinCSP, which is in turn solved by flow augmentation. For the lower bounds, we prove that Directed Symmetric Multicut is W[1]-hard, solving an open problem.