Technical University of Vienna
A vision-based system called GraspCheckNet uses head-mounted cameras and a two-stage architecture to verify successful robotic grasps, especially for deformable objects. The system achieves effective sim2real transfer by training on the HSR-GraspSynth synthetic dataset, demonstrating high accuracy in detecting grasped objects and identifying empty grippers in real-world scenarios.
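As a rough sketch of what such a two-stage verification pipeline can look like (a minimal illustration under assumed design choices; the shared backbone and the two heads below are hypothetical, not GraspCheckNet's published architecture):

```python
import torch.nn as nn
import torchvision.models as models

class TwoStageGraspVerifier(nn.Module):
    """Hypothetical sketch: stage 1 decides grasped vs. empty gripper,
    stage 2 identifies the grasped object. Not the published model."""
    def __init__(self, num_object_classes: int):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()                  # shared 512-d features
        self.backbone = backbone
        self.grasp_head = nn.Linear(512, 2)          # stage 1: success / empty
        self.object_head = nn.Linear(512, num_object_classes)  # stage 2

    def forward(self, image):
        feat = self.backbone(image)
        return self.grasp_head(feat), self.object_head(feat)
```

In a sim2real setting like the one described, such a model would be trained entirely on synthetic renderings (here, HSR-GraspSynth) and evaluated on real head-mounted camera images.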
Leveraging the vast genetic diversity within microbiomes offers unparalleled insights into complex phenotypes, yet accurately predicting and understanding such traits from genomic data remains challenging. We propose a framework that takes advantage of existing large models for gene vectorization to predict habitat specificity from entire microbial genome sequences. Based on our model, we develop attribution techniques to elucidate gene interaction effects that drive microbial adaptation to diverse environments. We train and validate our approach on a large dataset of high-quality microbiome genomes from different habitats. We demonstrate not only solid predictive performance, but also how sequence-level information on entire genomes allows us to identify gene associations underlying complex phenotypes. Our attribution recovers known important interaction networks and proposes new candidates for experimental follow-up.
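A toy sketch of the occlusion-style attribution such a framework could use to surface gene interactions (illustrative only; `predict`, the mean-pooled genome representation, and the occlusion scheme are assumptions, not the paper's actual model or attribution method):

```python
import numpy as np

def genome_vector(gene_embeddings):
    # Mean-pool per-gene embeddings (e.g., from a pretrained gene
    # language model) into one genome-level representation.
    return gene_embeddings.mean(axis=0)

def drop_effect(predict, emb, idx, habitat):
    """Change in the habitat probability when genes `idx` are occluded."""
    base = predict(genome_vector(emb))[habitat]
    reduced = np.delete(emb, idx, axis=0)
    return base - predict(genome_vector(reduced))[habitat]

def interaction_effect(predict, emb, i, j, habitat):
    # Deviation from additivity: a nonzero value suggests genes i and j
    # influence the prediction jointly, beyond their individual effects.
    return drop_effect(predict, emb, [i, j], habitat) - (
        drop_effect(predict, emb, [i], habitat)
        + drop_effect(predict, emb, [j], habitat))
```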
Do the laws of quantum physics still hold for macroscopic objects (this is at the heart of Schrödinger's cat paradox), or do gravitation or yet-unknown effects set a limit for massive particles? What is the fundamental relation between quantum physics and gravity? Ground-based experiments addressing these questions may soon be limited by the achievable free-fall times and by the quality of vacuum and microgravity. The proposed mission MAQRO may overcome these limitations and allow these fundamental questions to be addressed. MAQRO harnesses recent developments in quantum optomechanics, high-mass matter-wave interferometry, and state-of-the-art space technology to push macroscopic quantum experiments towards their ultimate performance limits and to open new horizons for applying quantum technology in space. The main scientific goal of MAQRO is to probe the vastly unexplored "quantum-classical" transition for increasingly massive objects, testing the predictions of quantum theory for truly macroscopic objects in a size and mass regime unachievable in ground-based experiments. The hardware for the mission will largely be based on available space technology. Here, we present the MAQRO proposal submitted in response to the (M4) Cosmic Vision call of the European Space Agency for a medium-size mission opportunity with a possible launch in 2025.
We present an analytical model for the Seebeck coefficient $S$ of superlattice materials that explicitly takes into account the energy relaxation due to electron-optical phonon (e-ph) scattering. In such materials, the Seebeck coefficient is not only determined by the bulk Seebeck values of the materials but, in addition, is dependent on the energy relaxation process of charge carriers as they propagate from the less-conductive barrier region into the more-conductive well region. We calculate $S$ as a function of the well size $d$, where carrier energy becomes increasingly relaxed within the well for $d$ greater than $l$, where $l$ is the energy relaxation length. We validate the model against more advanced quantum transport simulations based on the nonequilibrium Green function (NEGF) method and also with experiment, and we find very good agreement. In the case in which no energy relaxation is taken into account, the results deviate substantially from the NEGF results. The model also yields accurate results with only a small deviation (up to ~3%) when varying the optical phonon energy $\hbar\omega$ or the e-ph coupling strength $D_0$, physical parameters that would determine $l$. As a first-order approximation, the model is valid for nanocomposite materials, and it could prove useful in the identification of material combinations and in the estimation of ideal sizes in the design of nanoengineered thermoelectric materials with enhanced power factor performance.
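To illustrate the qualitative trend (a toy closed form built on the assumption of simple exponential relaxation, not the paper's derivation): if carriers enter the well with the barrier-like Seebeck value and relax toward the well value over the length $l$, the spatial average over a well of width $d$ interpolates between the two limits:

```python
import numpy as np

def seebeck_well_average(d, l, s_barrier, s_well):
    """Toy spatial average of S(x) = s_well + (s_barrier - s_well)*exp(-x/l)
    over 0 <= x <= d. Illustrative assumption, not the paper's model."""
    return s_well + (s_barrier - s_well) * (l / d) * (1.0 - np.exp(-d / l))

# For d >> l the average approaches s_well (carriers fully relaxed);
# for d << l it stays near s_barrier, matching the trend described above.
print(seebeck_well_average(d=40e-9, l=10e-9, s_barrier=400e-6, s_well=150e-6))
```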
Conway hemirings are Conway semirings without a multiplicative unit. We also define iteration hemirings as Conway hemirings satisfying certain identities associated with the finite groups. Iteration hemirings are iteration semirings without a multiplicative unit. We provide an analysis of the relationship between Conway hemirings and (partial) Conway semirings and describe several free constructions. In the second part of the paper we define and study hemimodules of Conway and iteration hemirings, and show their applicability in the analysis of quantitative aspects of the infinitary behavior of weighted transition systems. These include discounted and average computations of weights.
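For background (a standard fact about the unital case; the hemiring setting adapts these laws to structures without a multiplicative unit), the defining Conway identities for a semiring with a star operation are the sum-star and product-star laws:
\[
(a+b)^{*} = (a^{*}b)^{*}a^{*}, \qquad (ab)^{*} = 1 + a(ba)^{*}b.
\]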
We present a method to approximate post-Hartree-Fock correlation energies by using approximate natural orbitals obtained from the random phase approximation (RPA). We demonstrate the method by applying it to the helium atom, the hydrogen and fluorine molecules, and to diamond as an example of a periodic system. For these benchmark systems, we show that RPA natural orbitals converge the MP2 correlation energy rapidly. Additionally, we calculated full configuration interaction energies for He and H$_2$, which are in excellent agreement with the literature and experimental values. We conclude that the proposed method may serve as a compromise to reach good approximations to correlation energies at moderate computational cost, and we expect the method to be especially useful for theoretical studies of surface chemistry by providing an efficient basis for correlated wave-function-based methods.
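A schematic of the natural-orbital truncation step (a minimal numpy sketch under assumed inputs; `dm_vv`, the virtual-virtual block of an RPA-level one-particle density matrix, is a placeholder, and this is not the paper's code):

```python
import numpy as np

def natural_orbital_basis(dm_vv, n_keep):
    """Rotate and truncate the virtual space using approximate natural orbitals.

    dm_vv  : virtual-virtual block of a correlated 1-RDM (assumed from RPA)
    n_keep : number of natural orbitals retained for the MP2 step
    """
    occ, vecs = np.linalg.eigh(dm_vv)      # occupation numbers and orbitals
    order = np.argsort(occ)[::-1]          # most-occupied natural orbitals first
    return occ[order][:n_keep], vecs[:, order][:, :n_keep]

# The MP2 correlation energy is then evaluated in the retained natural-orbital
# basis, which typically converges far faster than canonical virtuals.
```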
We present a first measurement study on the adoption and actual privacy of two popular decentralized CoinJoin implementations, Wasabi and Samourai, in the broader Bitcoin ecosystem. By applying highly accurate (> 99%) algorithms, we can effectively detect 30,251 Wasabi and 223,597 Samourai transactions within the block range 530,500 to 725,348 (2018-07-05 to 2022-02-28). We also find steady adoption of these services, with a total value of mixed coins of ca. 4.74 B USD and average monthly mixing amounts of ca. 172.93 M USD for Wasabi and ca. 41.72 M USD for Samourai. Furthermore, we can trace ca. 322 M USD directly received by cryptoasset exchanges and ca. 1.16 B USD indirectly received via two hops. Our analysis further shows that the traceability of addresses during the pre-mixing and post-mixing phases narrows down the anonymity set provided by these coin mixing services. It also shows that the selection of addresses for the CoinJoin transaction can harm anonymity. Overall, this is the first paper to provide a comprehensive picture of the adoption and privacy of distributed CoinJoin transactions. Understanding this picture is particularly interesting in light of ongoing regulatory efforts that will, on the one hand, affect compliance measures implemented in cryptocurrency exchanges and, on the other hand, the privacy of end-users.
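To give a flavor of structural CoinJoin detection (a deliberately simplified heuristic; the paper's detection algorithms are far more careful, and the fixed 5-in/5-out pattern and pool denominations below reflect Samourai Whirlpool's publicly documented design, stated here as an assumption):

```python
# Whirlpool pool denominations in BTC (publicly documented; assumption here).
WHIRLPOOL_POOLS = {0.001, 0.01, 0.05, 0.5}

def looks_like_whirlpool(tx):
    """Rough structural check: 5 inputs and 5 equal outputs at a pool size.

    tx: dict with 'inputs' and 'outputs' lists of BTC values. Real code
    would compare integer satoshi amounts, not floats.
    """
    outs = tx["outputs"]
    return (len(tx["inputs"]) == 5
            and len(outs) == 5
            and len(set(outs)) == 1
            and outs[0] in WHIRLPOOL_POOLS)
```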
A planetary microlensing signal is generally characterized by a short-term perturbation to the standard single-lens light curve. A subset of binary-source events can produce perturbations that mimic planetary signals, thereby introducing an ambiguity between the planetary and binary-source interpretations. In this paper, we present an analysis of the microlensing event MOA-2012-BLG-486, for which the light curve exhibits a short-lived perturbation. Routine modeling that does not consider data taken in different passbands yields a best-fit planetary model that is slightly preferred over the best-fit binary-source model. However, when we allow for a change in color during the perturbation, we find that the binary-source model yields a significantly better fit, and thus the degeneracy is clearly resolved. This event not only signifies the importance of considering various interpretations of short-term anomalies, but also demonstrates the importance of multi-band data for checking the possibility of false-positive planetary signals.
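To make the color argument concrete (toy code, not the paper's modeling pipeline): in a binary-source event each passband sees a different flux ratio between the two sources, so the blended magnification, built here from the standard single-lens Paczyński formula, changes shape with wavelength, whereas a genuine planetary perturbation is achromatic:

```python
import numpy as np

def paczynski(u):
    # Standard single-lens magnification A(u).
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

def binary_source_mag(t, t0_1, t0_2, u0_1, u0_2, tE, q_band):
    """Blended magnification of two sources; q_band is the band-dependent
    flux ratio of the secondary to the primary source."""
    u1 = np.hypot(u0_1, (t - t0_1) / tE)
    u2 = np.hypot(u0_2, (t - t0_2) / tE)
    return (paczynski(u1) + q_band * paczynski(u2)) / (1.0 + q_band)

t = np.linspace(-20.0, 20.0, 500)
# A redder secondary (larger flux ratio in I than in V) makes the
# short-lived bump chromatic; all parameter values are illustrative.
mag_V = binary_source_mag(t, 0.0, 5.0, 0.1, 0.01, 10.0, q_band=0.002)
mag_I = binary_source_mag(t, 0.0, 5.0, 0.1, 0.01, 10.0, q_band=0.005)
```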
Background: Technical systems are growing in complexity, with more components and functions across various disciplines. Model-Driven Engineering (MDE) helps manage this complexity by using models as key artifacts. Domain-Specific Languages (DSL) supported by MDE facilitate modeling. As data generation in product development increases, there is a growing demand for AI algorithms, which can be challenging to implement. Integrating AI algorithms with DSL and MDE can streamline this process. Objective: This study aims to investigate the existing model-driven approaches relying on DSL in support of the engineering of AI software systems, to sharpen future research and define the current state of the art. Method: We conducted a Systematic Literature Review (SLR), collecting papers from five major databases, resulting in 1335 candidate studies and eventually retaining 18 primary studies. Each primary study is evaluated and discussed with respect to the adoption of MDE principles and practices and the phases of AI development support, aligned with the stages of the CRISP-DM methodology. Results: The study's findings show that language workbenches are of paramount importance in dealing with all aspects of modeling language development and are leveraged to define DSLs explicitly addressing AI concerns. The most prominent AI-related concerns are training and modeling of the AI algorithm, while minor emphasis is given to the time-consuming preparation of data. Early project phases that support interdisciplinary communication of requirements, e.g., the CRISP-DM Business Understanding phase, are rarely reflected. Conclusion: The study found that the use of MDE for AI is still in its early stages and that no single tool or method is widely used. Additionally, current approaches tend to focus on specific stages of development rather than providing support for the entire development process.
Brain development in the first few months of human life is a critical phase characterized by rapid structural growth and functional organization. Accurately predicting developmental outcomes during this time is crucial for identifying delays and enabling timely interventions. This study introduces the SwiFT (Swin 4D fMRI Transformer) model, designed to predict Bayley-III composite scores using neonatal fMRI from the Developing Human Connectome Project (dHCP). To enhance predictive accuracy, we apply dimensionality reduction via group independent component analysis (ICA) and pretrain SwiFT on large adult fMRI datasets to address the challenges of limited neonatal data. Our analysis shows that SwiFT significantly outperforms baseline models in predicting cognitive, motor, and language outcomes, leveraging both single-label and multi-label prediction strategies. The model's attention-based architecture processes spatiotemporal data end-to-end, delivering superior predictive performance. Additionally, we use Integrated Gradients with SmoothGrad sQuare (IG-SQ) to interpret predictions, identifying neural spatial representations linked to early cognitive and behavioral development. These findings underscore the potential of Transformer models to advance neurodevelopmental research and clinical practice.
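For reference, a compact generic implementation of Integrated Gradients combined with SmoothGrad-squared averaging (the general published attribution techniques, not the study's code; we assume `model` maps a batch of inputs to scalar scores):

```python
import torch

def integrated_gradients(model, x, baseline, steps=32):
    """Integrated Gradients: average gradients along the straight path
    from `baseline` to `x`, scaled by the input difference."""
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x.dim()))
    path = (baseline + alphas * (x - baseline)).requires_grad_(True)
    grads = torch.autograd.grad(model(path).sum(), path)[0]
    return (x - baseline) * grads.mean(dim=0)

def ig_smoothgrad_sq(model, x, baseline, n_samples=16, sigma=0.1):
    """IG-SQ: average the squared IG maps over noisy copies of the input."""
    acc = torch.zeros_like(x)
    for _ in range(n_samples):
        noisy = x + sigma * torch.randn_like(x)
        acc += integrated_gradients(model, noisy, baseline) ** 2
    return acc / n_samples
```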
Filter pruning of a CNN is typically achieved by applying discrete masks on the CNN's filter weights or activation maps post-training. Here, we present a new filter-importance-scoring concept named pruning by active attention manipulation (PAAM) that sparsifies the CNN's set of filters through a particular attention mechanism during training. PAAM learns analog filter scores from the filter weights by optimizing a cost function regularized by an additive term in the scores. As the filters are not independent, we use attention to dynamically learn their correlations. Moreover, by training the pruning scores of all layers simultaneously, PAAM can account for layer inter-dependencies, which is essential to finding a performant sparse sub-network. PAAM can also train and generate a pruned network from scratch in a straightforward, one-stage training process without requiring a pre-trained network. Finally, PAAM does not need layer-specific hyperparameters or pre-defined layer budgets, since it can implicitly determine the appropriate number of filters in each layer. Our experimental results on different network architectures suggest that PAAM outperforms state-of-the-art (SOTA) structured-pruning methods. On the CIFAR-10 dataset, without requiring a pre-trained baseline network, we obtain accuracy gains of 1.02% and 1.19% and parameter reductions of 52.3% and 54% on ResNet56 and ResNet110, respectively. Similarly, on the ImageNet dataset, PAAM achieves a 1.06% accuracy gain while pruning 51.1% of the parameters on ResNet50. On CIFAR-10, these parameter reductions exceed the SOTA by margins of 9.5% and 6.6%, respectively, and on ImageNet by a margin of 11%.
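A minimal sketch of score-based filter gating with an additive sparsity regularizer (illustrative only; PAAM's actual attention mechanism, which learns correlations between filters, is more elaborate than this toy):

```python
import torch
import torch.nn as nn

class ScoredConv(nn.Module):
    """Conv layer whose output channels are gated by learnable filter scores."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        self.scores = nn.Parameter(torch.zeros(out_ch))   # analog filter scores

    def forward(self, x):
        gate = torch.sigmoid(self.scores).view(1, -1, 1, 1)
        return self.conv(x) * gate      # near-zero gates mark prunable filters

def sparsity_penalty(model, lam=1e-4):
    # Additive term in the scores, added to the task loss during training.
    return lam * sum(torch.sigmoid(m.scores).sum()
                     for m in model.modules() if isinstance(m, ScoredConv))
```

Because the scores of all layers enter one loss, the per-layer budget emerges implicitly rather than being fixed in advance, mirroring the property claimed above.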
We measure the two-dimensional elastic modulus $E^\text{2D}$ of atomically clean, defect-engineered graphene with a known defect distribution and density in correlated ultra-high-vacuum experiments. The vacancies are introduced via low-energy (< 200 eV) Ar ion irradiation, and the atomic structure is obtained via semi-autonomous scanning transmission electron microscopy and image analysis. Based on atomic force microscopy nanoindentation measurements, a decrease of $E^\text{2D}$ from 286 to 158 N/m is observed when measuring the same graphene membrane before and after introducing an ion irradiation-induced vacancy density of $1.0\times 10^{13}$ cm$^{-2}$. This decrease is significantly greater than what is predicted by most theoretical studies and in stark contrast to some measurements presented in the literature. With the assistance of atomistic simulations, we show that this softening is mostly due to corrugations caused by local strain at vacancies with two or more missing atoms, while the influence of single vacancies is negligible. We further demonstrate that the opposite effect can be measured when surface contamination is not removed before defect engineering.
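For context, $E^\text{2D}$ is commonly extracted from AFM nanoindentation data by fitting the point-load membrane model of Lee et al. (Science, 2008); a sketch of such a fit (the standard model with placeholder numbers, not necessarily this paper's exact analysis):

```python
import numpy as np
from scipy.optimize import curve_fit

a = 0.5e-6                    # membrane radius (m), placeholder value
nu = 0.165                    # Poisson ratio of graphene
q = 1.0 / (1.05 - 0.15 * nu - 0.16 * nu**2)

def force(delta, sigma2d, e2d):
    # Lee et al. model for a point-loaded clamped circular membrane:
    # linear pretension term + cubic stretching term.
    return sigma2d * np.pi * a * delta + e2d * (q**3 / a**2) * delta**3

# delta (m) and F (N) would come from AFM force-deflection curves:
# popt, _ = curve_fit(force, delta, F, p0=[0.1, 300.0])
# sigma2d, e2d = popt         # pretension and 2D modulus, both in N/m
```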
The Stochastic Burgers Equation (SBE) is a singular, non-linear Stochastic Partial Differential Equation (SPDE) that describes, on mesoscopic scales, the fluctuations of stochastically driven diffusive systems with a conserved scalar quantity. In space dimension $d = 2$, the SBE is critical, being formally scale invariant under diffusive scaling. As such, it falls outside the domain of applicability of the theories of Regularity Structures and paracontrolled calculus. In apparent contrast with the formal scale invariance, we fully prove the conjecture that first appeared in [H. van Beijeren, R. Kutner, & H. Spohn, Phys. Rev. Lett., 1986], according to which the 2d-SBE is logarithmically superdiffusive, i.e. its diffusion coefficient diverges like $(\log t)^{2/3}$ as $t\to\infty$, thus removing the subleading diverging multiplicative corrections in [D. De Gaspari & L. Haunschmid-Sibitz, Electron. J. Probab., 2024] and in [H.-T. Yau, Ann. of Math., 2004] for 2d-ASEP. We precisely identify the constant prefactor of the logarithm and show it is proportional to $\lambda^{4/3}$, for $\lambda>0$ the coupling constant, which, intriguingly, turns out to be exactly the same as for the one-dimensional Stochastic Burgers/KPZ equation. More importantly, we prove that, under superdiffusive space-time rescaling, the SBE has an explicit Gaussian fixed point in the Renormalization Group sense, by deriving a superdiffusive central-limit-type theorem for its solution. This is the first scaling-limit result for a critical singular SPDE beyond the weak coupling regime, and it is obtained via a refined control, on all length scales, of the resolvent of the generator of the SBE. We believe our methods are well suited to studying other out-of-equilibrium driven diffusive systems at the critical dimension, such as 2d-ASEP, which, we conjecture, has the same large-scale fixed point as the SBE.
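For orientation (our sketch of the commonly studied anisotropic form; the paper's precise conventions may differ), the SBE in $d=2$ can be written as
\[
\partial_t \eta = \tfrac{1}{2}\Delta \eta + \lambda\,\mathfrak{w}\cdot\nabla \eta^{2} + \nabla\cdot\xi,
\]
with $\xi$ space-time white noise and $\mathfrak{w}$ a fixed unit vector; the quoted result then states that the diffusion coefficient grows like a constant times $\lambda^{4/3}(\log t)^{2/3}$.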
We study the dynamics of gravitationally collapsing massive shells in AdS spacetime and show in detail how one can determine extremal surfaces traversing them. The results are used to solve the time evolution of the holographic entanglement entropy in a strongly coupled dual conformal gauge theory, which is seen to exhibit a regime of linear growth independent of the shape of the boundary entangling region and of the equation of state of the shell. Our exact results are finally compared to those of two commonly used approximation schemes, the Vaidya metric and the quasistatic limit, whose respective regions of validity are quantitatively determined.
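Recall the covariant holographic prescription underlying this computation (the standard HRT form of the Ryu-Takayanagi formula): the entanglement entropy of a boundary region $A$ is given by the area of the bulk extremal surface $\gamma_A$ anchored on $\partial A$,
\[
S_A = \frac{\mathrm{Area}(\gamma_A)}{4 G_N},
\]
which is why determining extremal surfaces through the collapsing shell is the central technical task.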
In the past decades, many density-functional theory methods and codes adopting periodic boundary conditions have been developed and are now extensively used in condensed matter physics and materials science research. Only in 2016, however, was their precision (i.e., the extent to which properties computed with different codes agree with each other) systematically assessed on elemental crystals: a crucial first step to evaluate the reliability of such computations. We discuss here general recommendations for verification studies aiming to further test the precision and transferability of density-functional-theory computational approaches and codes. We illustrate these recommendations using a greatly expanded protocol covering the whole periodic table from Z=1 to 96 and characterizing 10 prototypical cubic compounds for each element: 4 unaries and 6 oxides, spanning a wide range of coordination numbers and oxidation states. The primary outcome is a reference dataset of 960 equations of state cross-checked between two all-electron codes, then used to verify and improve nine pseudopotential-based approaches. This effort is facilitated by deploying AiiDA common workflows that perform automatic input parameter selection, provide identical input/output interfaces across codes, and ensure full reproducibility. Finally, we discuss the extent to which the current results for total energies can be reused for different goals (e.g., obtaining formation energies).
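Equations of state in such verification efforts are typically obtained by fitting computed energy-volume points to a standard form such as the third-order Birch-Murnaghan EOS; a generic sketch of the fit (standard formula with placeholder data handling, not the AiiDA workflow code itself):

```python
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan(V, E0, V0, B0, Bp):
    """Third-order Birch-Murnaghan energy-volume equation of state."""
    eta = (V0 / V) ** (2.0 / 3.0)
    return E0 + 9.0 * V0 * B0 / 16.0 * (
        (eta - 1.0) ** 3 * Bp + (eta - 1.0) ** 2 * (6.0 - 4.0 * eta))

# volumes (A^3/atom) and energies (eV/atom) would come from the DFT runs:
# popt, _ = curve_fit(birch_murnaghan, volumes, energies,
#                     p0=[energies.min(), volumes[np.argmin(energies)], 1.0, 4.0])
# E0, V0, B0, Bp = popt      # fitted parameters compared across codes
```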
A famous theorem of Weyl states that if $M$ is a compact submanifold of Euclidean space, then the volumes of small tubes about $M$ are given by a polynomial in the radius $r$, with coefficients that are expressible as integrals of certain scalar invariants of the curvature tensor of $M$ with respect to the induced metric. It is natural to interpret this phenomenon in terms of curvature measures and smooth valuations, in the sense of Alesker, canonically associated to the Riemannian structure of $M$. This perspective yields a fundamental new structure in Riemannian geometry, in the form of a certain abstract module over the polynomial algebra $\mathbb{R}[t]$ that reflects the behavior of Alesker multiplication. This module encodes a key piece of the array of kinematic formulas of any Riemannian manifold on which a group of isometries acts transitively on the sphere bundle. We illustrate this principle in precise terms in the case where $M$ is a complex space form.
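Schematically (up to dimensional normalizing constants, which we suppress), Weyl's theorem says that for a compact $m$-dimensional $M \subset \mathbb{R}^n$,
\[
\mathrm{Vol}\bigl(\{x : \mathrm{dist}(x,M) \le r\}\bigr) \;=\; \sum_{i=0}^{\lfloor m/2 \rfloor} c_{n,m,i}\, k_{2i}(M)\, r^{\,n-m+2i},
\]
where each coefficient $k_{2i}(M)$ is an integral over $M$ of a scalar invariant of the curvature tensor of the induced metric (a Lipschitz-Killing curvature) and is therefore independent of the embedding.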
We revisit the classical problem of designing optimally efficient cryptographically secure hash functions. Hash functions are traditionally designed by applying modes of operation to primitives with smaller domains. The results of Shrimpton and Stam (ICALP 2008), Rogaway and Steinberger (CRYPTO 2008), and Mennink and Preneel (CRYPTO 2012) show how to achieve optimally efficient designs of $2n$-to-$n$-bit compression functions from non-compressing primitives with asymptotically optimal $2^{n/2-\epsilon}$-query collision resistance. Designing optimally efficient and secure hash functions for larger domains ($> 2n$ bits) is still an open problem. In this work we propose the new \textit{compactness} efficiency notion. It allows us to focus on asymptotically optimally collision-resistant hash functions and normalize their parameters based on Stam's bound from CRYPTO 2008 to obtain maximal efficiency. We then present two tree-based modes of operation:
- Our first construction is an \underline{A}ugmented \underline{B}inary T\underline{r}ee (ABR) mode. The design is a $(2^{\ell}+2^{\ell-1}-1)n$-to-$n$-bit hash function making a total of $(2^{\ell}-1)$ calls to $2n$-to-$n$-bit compression functions for any $\ell\geq 2$. Our construction is optimally compact with asymptotically optimal $2^{n/2-\epsilon}$-query collision resistance in the ideal model. For a tree of height $\ell$, in comparison with a Merkle tree, the ABR mode processes an additional $(2^{\ell-1}-1)$ data blocks while making the same number of internal compression function calls.
- While the ABR mode achieves collision resistance, it fails to achieve indifferentiability from a random oracle within $2^{n/3}$ queries. $ABR^{+}$ compresses only one less data block than ABR with the same number of compression calls and additionally achieves indifferentiability up to $2^{n/2-\epsilon}$ queries.
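The block and call counts quoted above can be checked with trivial arithmetic (using only the numbers stated in the abstract):

```python
# A height-l Merkle tree hashes 2**l blocks with 2**l - 1 compression calls;
# the ABR mode hashes 2**l + 2**(l-1) - 1 blocks with the same number of calls.
for l in range(2, 6):
    calls = 2**l - 1
    merkle_blocks = 2**l
    abr_blocks = 2**l + 2**(l - 1) - 1
    print(f"l={l}: calls={calls}, Merkle={merkle_blocks} blocks, "
          f"ABR={abr_blocks} blocks (+{abr_blocks - merkle_blocks})")
```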
We investigate the structure of the Medvedev lattice as a partial order. We prove that every interval in the lattice is either finite, in which case it is isomorphic to a finite Boolean algebra, or contains an antichain of size $2^{2^{\aleph_0}}$, the size of the lattice itself. We also prove that it is consistent that the lattice has chains of this size, and in fact that these big chains occur in every interval that has a big antichain. We also study embeddings of lattices and algebras. We show that large Boolean algebras can be embedded into the Medvedev lattice as upper semilattices, but that a Boolean algebra can be embedded as a lattice only if it is countable. Finally we discuss which of these results hold for the closely related Muchnik lattice.
The 3-Decomposition Conjecture states that every connected cubic graph can be decomposed into a spanning tree, a 2-regular subgraph and a matching. We show that this conjecture holds for the class of connected plane cubic graphs.