Establishing image-to-3D correspondences has long been a key task in 6DoF object pose estimation. To predict poses more accurately, densely learned correspondence maps have replaced sparse templates, and dense methods have also improved pose estimation under occlusion. More recently, researchers have shown further improvements by learning object surface fragments as a segmentation task. In this work, we present a discrete descriptor that can densely represent the object surface. By incorporating a hierarchical binary grouping, we can encode the object surface very efficiently. Moreover, we propose a coarse-to-fine training strategy that enables fine-grained correspondence prediction. Finally, by matching the predicted codes with the object surface and applying a PnP solver, we estimate the 6DoF pose. Results on the public LM-O and YCB-V datasets show a major improvement over the state of the art with respect to the ADD(-S) metric, even surpassing RGB-D-based methods in some cases.
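As an illustration of the hierarchical binary idea (our own sketch, not the paper's implementation), each surface vertex can receive a d-bit code by recursively splitting the vertex set into two balanced halves, so every additional bit refines the localization on the surface; predicted per-pixel codes then match vertex codes to give 2D-3D correspondences for a PnP solver such as OpenCV's solvePnPRansac.

```python
# Hedged sketch: balanced hierarchical binary codes for surface vertices.
import numpy as np

def binary_codes(vertices, depth):
    """Assign each vertex a `depth`-bit code by recursively splitting the
    vertex set into two balanced halves along its widest coordinate axis."""
    codes = np.zeros(len(vertices), dtype=np.int64)

    def split(idx, level):
        if level == depth or len(idx) < 2:
            return
        pts = vertices[idx]
        axis = int(np.argmax(np.ptp(pts, axis=0)))  # widest extent
        order = np.argsort(pts[:, axis])
        half = len(idx) // 2
        left, right = idx[order[:half]], idx[order[half:]]
        codes[right] |= 1 << (depth - 1 - level)    # set this level's bit
        split(left, level + 1)
        split(right, level + 1)

    split(np.arange(len(vertices)), 0)
    return codes

verts = np.random.rand(4096, 3)          # sampled object surface points
codes = binary_codes(verts, depth=12)    # coarse-to-fine 12-bit codes
```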
From the vasculature of animals to the porous media making up batteries, the core task of flow networks is to transport solutes and to perfuse all cells or media equally with resources. Yet living flow networks have a key advantage over porous media: they are adaptive and self-organize their geometry for homogeneous perfusion throughout the network. Here we show that artificial flow networks, too, can self-organize toward homogeneous perfusion, via the versatile mechanism of controlled erosion. Flowing a pulse of cleaving enzyme through a network patterned into an erodible hydrogel, with initial channels of disparate widths, we observe a homogenization of channel resistances. The experimental observations are matched by numerical simulations of the diffusion-advection-sorption dynamics of an eroding enzyme within a network. Analyzing the transport dynamics theoretically, we show that homogenization only occurs if the pulse of eroding enzyme lasts longer than the time it takes any channel to equilibrate to the pulse concentration. The equilibration time scale derived analytically agrees with the simulations. Lastly, we show both numerically and experimentally that erosion homogenizes complex networks containing loops. Since erosion is an omnipresent reaction, our results pave the way for a versatile, self-organized increase in the performance of porous media.
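A toy illustration of the pulse-duration condition (a drastic simplification, not the paper's diffusion-advection-sorption model): each channel erodes at a rate set by its wall enzyme concentration, which relaxes to the pulse value on a channel-dependent time scale; only when the pulse outlasts the slowest equilibration do both widths grow comparably, shrinking the relative resistance difference.

```python
# Hedged toy model: two parallel channels, Poiseuille resistance ~ r^-4.
import numpy as np

def erode(r0, pulse_T, tau0=5.0, k=0.05, dt=0.01):
    r = np.array(r0, dtype=float)
    for t in np.arange(0.0, pulse_T, dt):
        tau = tau0 / r**2                  # assumed equilibration time scale
        c_wall = 1.0 - np.exp(-t / tau)    # approach to pulse concentration
        r += dt * k * c_wall               # near-uniform erosion once saturated
    return 1.0 / r**4                      # channel resistances

for T in (1.0, 100.0):                     # short vs. long enzyme pulse
    R = erode([1.0, 2.0], T)
    print(f"pulse duration {T:5.1f}: resistance ratio {R[0] / R[1]:.1f}")
```

With the short pulse the resistance ratio stays near its initial value of 16; with the long pulse it drops substantially, mirroring the homogenization criterion derived in the paper.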
We propose a novel model for 3D semantic completion from a single depth image, based on a single encoder and three separate generators that reconstruct different geometric and semantic representations of the original and completed scene, all sharing the same latent space. To transfer information between the geometric and semantic branches of the network, we introduce paths between them that concatenate features at corresponding network layers. Motivated by the limited number of training samples from real scenes, an interesting attribute of our architecture is its capacity to supplement the existing dataset by generating a new training dataset of high-quality, realistic scenes that even include occlusion and realistic noise. We build the new dataset by sampling features directly from the latent space, which generates pairs of a partial volumetric surface and the corresponding completed volumetric semantic surface. Moreover, we utilize multiple discriminators to increase the accuracy and realism of the reconstructions. We demonstrate the benefits of our approach on standard benchmarks for the two most common completion tasks: semantic 3D scene completion and 3D object completion.
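A skeletal PyTorch version of the shared-latent-space layout (our sketch; channel counts, volume sizes, and the number of semantic classes are placeholders, and the cross-branch feature-concatenation paths and discriminators are omitted):

```python
# Hedged sketch: one encoder, three generators sharing one latent space.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, z_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 4, 2, 1), nn.ReLU(),
            nn.Conv3d(16, 32, 4, 2, 1), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(z_dim))

    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    def __init__(self, z_dim=256, out_ch=1):
        super().__init__()
        self.fc = nn.Linear(z_dim, 32 * 8 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose3d(16, out_ch, 4, 2, 1))

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 32, 8, 8, 8))

enc = Encoder()
gen_partial, gen_complete = Generator(), Generator()
gen_semantic = Generator(out_ch=12)              # assumed class count

x = torch.randn(2, 1, 32, 32, 32)                # partial input volume
z = enc(x)                                       # shared latent code
partial, completed, sem = gen_partial(z), gen_complete(z), gen_semantic(z)
new_z = torch.randn(2, 256)                      # sample the latent space to
pair = gen_partial(new_z), gen_semantic(new_z)   # generate new training pairs
```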
Compositional Zero-Shot Learning (CZSL) requires recognizing state-object compositions unseen during training. In this work, instead of assuming prior knowledge about the unseen compositions, we operate in the open-world setting, where the search space includes a large number of unseen compositions, some of which may be infeasible. In this setting, we start from the cosine similarity between visual features and compositional embeddings. After estimating the feasibility score of each composition, we use these scores either to directly mask the output space or as a margin on the cosine similarity between visual features and compositional embeddings during training. Our experiments on two standard CZSL benchmarks show that all methods suffer severe performance degradation when applied in the open-world setting. While our simple CZSL model achieves state-of-the-art performance in the closed-world scenario, our feasibility scores boost the performance of our approach in the open-world setting, clearly outperforming the previous state of the art.
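Both uses of the feasibility scores can be sketched in a few lines (our illustration; the threshold, margin weight, and embeddings below are placeholders):

```python
# Hedged sketch: feasibility scores as a hard mask or as a training margin.
import torch
import torch.nn.functional as F

img = F.normalize(torch.randn(4, 512), dim=-1)       # visual features
comp = F.normalize(torch.randn(1000, 512), dim=-1)   # composition embeddings
feas = torch.rand(1000)                              # feasibility per composition

scores = img @ comp.t()                              # cosine similarities
# (a) open-world inference: mask out compositions deemed infeasible
pred = scores.masked_fill(feas < 0.3, float('-inf')).argmax(dim=-1)
# (b) training: shift similarities of less feasible compositions downward,
#     acting as a margin before the classification loss
margin_scores = scores - 0.5 * (1.0 - feas)
```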
Predicting the future can significantly improve the safety of intelligent vehicles and is a key component of autonomous driving. 3D point clouds accurately model the 3D information of the surrounding environment and are crucial for intelligent vehicles to perceive the scene. Therefore, predicting 3D point clouds is of great significance for intelligent vehicles and can be utilized for numerous further applications. However, because point clouds are unordered and unstructured, point cloud prediction is challenging and has not been deeply explored in the current literature. In this paper, we propose a novel motion-based neural network named MoNet. The key idea of MoNet is to integrate motion features between two consecutive point clouds into the prediction pipeline. The introduction of motion features enables the model to capture the variations of motion across frames more accurately and thus make better predictions of future motion. In addition, content features are introduced to model the spatial content of individual point clouds. A recurrent neural network named MotionRNN is proposed to capture the temporal correlations of both feature types. Furthermore, we propose an attention-based motion-align module to address the problem of missing motion features in the inference pipeline. Extensive experiments on two large-scale outdoor LiDAR datasets demonstrate the performance of the proposed MoNet. Moreover, we perform experiments on applications using the predicted point clouds, and the results indicate the great application potential of the proposed method.
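The core idea of combining content and motion features in a recurrent cell can be sketched as follows (a hypothetical stand-in, not the paper's MotionRNN or point cloud encoder):

```python
# Hedged sketch: content features per frame, motion features between frames,
# and a recurrent cell that fuses both over time.
import torch
import torch.nn as nn

class MotionRNNCell(nn.Module):                  # simplified placeholder cell
    def __init__(self, dim):
        super().__init__()
        self.gru = nn.GRUCell(2 * dim, dim)

    def forward(self, content, motion, h):
        return self.gru(torch.cat([content, motion], dim=-1), h)

dim, T, B = 128, 5, 2
encode = nn.Linear(3, dim)                       # stand-in point encoder
clouds = torch.randn(T, B, 1024, 3)              # T consecutive point clouds
feats = [encode(c).max(dim=1).values for c in clouds]   # content features

cell, h = MotionRNNCell(dim), torch.zeros(B, dim)
for t in range(1, T):
    motion = feats[t] - feats[t - 1]             # crude inter-frame motion cue
    h = cell(feats[t], motion, h)                # temporal state for prediction
```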
Deep unsupervised approaches are gathering increased attention for applications such as pathology detection and segmentation in medical images, since they promise to alleviate the need for large labeled datasets and are more generalizable than their supervised counterparts in detecting any kind of rare pathology. As the Unsupervised Anomaly Detection (UAD) literature continuously grows and new paradigms emerge, it is vital to continuously evaluate and benchmark new methods in a common framework, in order to reassess the state of the art (SOTA) and identify promising research directions. To this end, we evaluate a diverse selection of cutting-edge UAD methods on multiple medical datasets, comparing them against the established SOTA in UAD for brain MRI. Our experiments demonstrate that newly developed feature-modeling methods from the industrial and medical literature achieve increased performance compared to previous work and set a new SOTA across a variety of modalities and datasets. Additionally, we show that such methods are capable of benefiting from recently developed self-supervised pre-training algorithms, further increasing their performance. Finally, we perform a series of experiments to gain further insights into some unique characteristics of selected models and datasets. Our code can be found at this https URL
Climate change is increasing the occurrence of extreme precipitation events, threatening infrastructure, agriculture, and public safety. Ensemble prediction systems provide probabilistic forecasts but exhibit biases and difficulties in capturing extreme weather. While post-processing techniques aim to enhance forecast accuracy, they rarely focus on precipitation, which exhibits complex spatial dependencies and tail behavior. Our novel framework leverages graph neural networks to post-process ensemble forecasts, specifically modeling the extremes of the underlying distribution. This allows us to capture spatial dependencies and improves forecast accuracy for extreme events, leading to more reliable forecasts and mitigating the risks of extreme precipitation and flooding.
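A minimal graph-based post-processing layer might look as follows (our sketch with a synthetic station graph; the distributional head, e.g. parameters of a tail distribution, is an assumption):

```python
# Hedged sketch: message passing over weather stations to map raw ensemble
# members to parameters of a predictive (tail) distribution per station.
import torch
import torch.nn as nn

class GraphLayer(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)

    def forward(self, x, adj):                   # mean aggregation over neighbors
        deg = adj.sum(-1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.lin(adj @ x / deg))

n_stations, n_members = 50, 20
x = torch.rand(n_stations, n_members)            # raw ensemble at each station
adj = (torch.rand(n_stations, n_stations) < 0.1).float()
adj = (adj + adj.t() + torch.eye(n_stations)).clamp(max=1.0)

g1, g2 = GraphLayer(n_members, 64), GraphLayer(64, 2)
tail_params = g2(g1(x, adj), adj)                # e.g. scale/shape per station
```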
In this work, we study optimal transmit strategies for minimizing the positioning error bound in a line-of-sight scenario, under different levels of prior knowledge of the channel parameters. For the case of perfect prior knowledge, we prove that two beams are optimal, and we determine their beam directions and the optimal power allocation. For the case of imperfect prior knowledge, we compute the optimal power allocation among the beams of a codebook for two different robustness-related objectives, namely minimization of the average or of the maximum squared position error bound. Our numerical results show that our low-complexity approach can outperform existing methods that entail higher signaling and computational overhead.
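For intuition, a power split between two fixed beams that minimizes a CRB-style objective can be found numerically (a toy with synthetic per-beam Fisher information contributions, not the paper's derivation):

```python
# Hedged toy: the Fisher information is linear in the power allocated to each
# beam, so the position error bound sqrt(tr J(p)^-1) can be scanned over p.
import numpy as np

rng = np.random.default_rng(0)
J1 = rng.normal(size=(2, 2)); J1 = J1 @ J1.T + 0.1 * np.eye(2)
J2 = rng.normal(size=(2, 2)); J2 = J2 @ J2.T + 0.1 * np.eye(2)

splits = np.linspace(0.01, 0.99, 99)
peb = [np.sqrt(np.trace(np.linalg.inv(p * J1 + (1 - p) * J2))) for p in splits]
print(f"best power fraction on beam 1: {splits[int(np.argmin(peb))]:.2f}")
```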
For the first time, a nonlinear interface problem on an unbounded domain with nonmonotone set-valued transmission conditions is analyzed. The investigated problem involves a nonlinear monotone partial differential equation in the interior domain and the Laplacian in the exterior domain. Such a scalar interface problem models nonmonotone frictional contact of elastic infinite media. The variational formulation of the interface problem leads to a hemivariational inequality (HVI), which lives on the unbounded domain and therefore cannot be treated numerically in a direct way. By boundary integral methods the problem is transformed, and a novel HVI is obtained that lives only on the interior domain and the coupling boundary. Thus, for discretization, the coupling of finite elements and boundary elements is the method of choice. In addition, smoothing techniques from nondifferentiable optimization are adapted, and the nonsmooth part of the HVI is regularized. We thereby reduce the original variational problem to a finite-dimensional problem that can be solved by standard optimization tools. We establish not only convergence results for the total approximation procedure, but also an asymptotic error estimate for the regularized HVI.
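As a generic example of such smoothing (not necessarily the regularization used in the paper), a nonsmooth term like $|t|$ can be replaced by the smooth approximation $\varphi_\varepsilon(t) = \sqrt{t^2 + \varepsilon^2}$, which satisfies $0 \le \varphi_\varepsilon(t) - |t| \le \varepsilon$; the regularized problem is then differentiable, and the approximation error is controlled uniformly by $\varepsilon$, which is the kind of control an asymptotic error estimate for the regularized HVI quantifies.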
The ability to sparsely represent a certain class of signals has many applications in data analysis, image processing, and other research fields. Among sparse representations, the cosparse analysis model has recently gained increasing interest. Many signals exhibit a multidimensional structure, e.g. images or three-dimensional MRI scans. Most data analysis and learning algorithms use vectorized signals and thereby do not account for this underlying structure; the drawback of ignoring the inherent structure is a dramatic increase in computational cost. We propose an algorithm for learning a cosparse analysis operator that adheres to the preexisting structure of the data and thus allows for a very efficient implementation. This is achieved by enforcing a separable structure on the learned operator. Our learning algorithm is able to deal with multidimensional data of arbitrary order. We evaluate our method on volumetric data, using three-dimensional MRI scans as an example.
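The computational benefit of separability can be seen directly (our illustration with small sizes; the actual learning algorithm is described in the paper): a separable analysis operator acts mode-wise on the tensor, whereas the equivalent unstructured operator on the vectorized signal is a Kronecker product whose size grows with the cube of the per-mode dimensions.

```python
# Hedged sketch: separable (mode-wise) analysis vs. one big vectorized operator.
import numpy as np

n, m = 8, 12                               # per-mode signal / operator sizes
X = np.random.randn(n, n, n)               # e.g. a 3D MRI volume patch
A1, A2, A3 = (np.random.randn(m, n) for _ in range(3))

# Separable analysis: three small mode products
Y = np.einsum('ia,jb,kc,abc->ijk', A1, A2, A3, X)

# Equivalent unstructured operator on vec(X): an (m^3 x n^3) matrix,
# prohibitively large for realistic dimensions
A = np.kron(A1, np.kron(A2, A3))
Y_vec = (A @ X.ravel()).reshape(m, m, m)
print(np.allclose(Y, Y_vec))               # True: identical analysis result
```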
The ubiquitous presence of dark matter in the universe is today a central tenet of modern cosmology and astrophysics. From the smallest galaxies to the observable universe, the evidence for dark matter is compelling in dwarfs, spiral galaxies, and galaxy clusters, as well as at cosmological scales. However, it has historically been difficult to pin down the dark matter contribution to the total mass density of the Milky Way, particularly in the innermost regions of the Galaxy and in the solar neighbourhood. Here we present an up-to-date compilation of Milky Way rotation curve measurements and compare it with state-of-the-art baryonic mass distribution models. We show that current data strongly disfavour baryons as the sole contribution to the galactic mass budget, even inside the solar circle. Our findings demonstrate the existence of dark matter in the inner Galaxy while making no assumptions on its distribution. We anticipate that this result will enable new model-independent constraints on the dark matter local density and profile, reducing uncertainties on direct and indirect dark matter searches, and will shed new light on the structure and evolution of the Galaxy.
We present baryon acoustic oscillation (BAO) scale measurements determined from the clustering of 1.2 million massive galaxies with redshifts 0.2 < z < 0.75 distributed over 9300 square degrees, as quantified by their redshift-space correlation function. In order to facilitate these measurements, we define, describe, and motivate the selection function for galaxies in the final data release (DR12) of the SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS). This includes the observational footprint, masks for image quality and Galactic extinction, and weights to account for density relationships intrinsic to the imaging and spectroscopic portions of the survey. We simulate the observed systematic trends in mock galaxy samples and demonstrate that they impart no bias on BAO scale measurements and have only a minor impact on the recovered statistical uncertainty. We obtain transverse and radial BAO distance measurements in 0.2 < z < 0.5, 0.5 < z < 0.75, and (overlapping) 0.4 < z < 0.6 redshift bins. In each redshift bin, we obtain a precision of 2.7 per cent or better on the radial distance and 1.6 per cent or better on the transverse distance. The combination of the redshift bins represents 1.8 per cent precision on the radial distance and 1.1 per cent precision on the transverse distance. This paper is part of a set that analyses the final galaxy clustering dataset from BOSS. The measurements and likelihoods presented here are combined with others in Alam et al. (2016) to produce the final cosmological constraints from BOSS.
We analyse the broad-range shape of the monopole and quadrupole correlation functions of the BOSS Data Release 12 (DR12) CMASS and LOWZ galaxy samples to obtain constraints on the Hubble expansion rate $H(z)$, the angular-diameter distance $D_A(z)$, the normalised growth rate $f(z)\sigma_8(z)$, and the physical matter density $\Omega_m h^2$. We adopt wide, flat priors on all model parameters in order to ensure that the results are those of a 'single-probe' galaxy clustering analysis. We also marginalise over three nuisance terms that account for potential observational systematics affecting the measured monopole. However, such a Markov chain Monte Carlo analysis is computationally expensive for advanced theoretical models, so we develop a new methodology to speed up our analysis. We obtain $\{D_A(z)\,r_{s,\mathrm{fid}}/r_s\,[\mathrm{Mpc}],\ H(z)\,r_s/r_{s,\mathrm{fid}}\,[\mathrm{km\,s^{-1}\,Mpc^{-1}}],\ f(z)\sigma_8(z),\ \Omega_m h^2\} = \{956\pm28,\ 75.0\pm4.0,\ 0.397\pm0.073,\ 0.143\pm0.017\}$ at $z=0.32$ and $\{1421\pm23,\ 96.7\pm2.7,\ 0.497\pm0.058,\ 0.137\pm0.015\}$ at $z=0.59$, where $r_s$ is the comoving sound horizon at the drag epoch and $r_{s,\mathrm{fid}}=147.66$ Mpc for the fiducial cosmology of this study. In addition, we divide the galaxy sample into four redshift bins to increase the sensitivity to redshift evolution; however, we do not find improvements in the constraints on dark energy model parameters. Combining our measurements with Planck data, we obtain $\Omega_m=0.306\pm0.009$, $H_0=67.9\pm0.7\,\mathrm{km\,s^{-1}\,Mpc^{-1}}$, and $\sigma_8=0.815\pm0.009$ assuming $\Lambda$CDM; $\Omega_k=0.000\pm0.003$ assuming oCDM; $w=-1.01\pm0.06$ assuming $w$CDM; and $w_0=-0.95\pm0.22$ and $w_a=-0.22\pm0.63$ assuming $w_0w_a$CDM. Our results show no tension with the flat $\Lambda$CDM cosmological paradigm. This paper is part of a set that analyses the final galaxy clustering dataset from BOSS.
Despite numerous countermeasures proposed by practitioners and researchers, remote control-flow alteration of programs with memory-safety vulnerabilities continues to be a realistic threat. Guaranteeing that complex software is completely free of memory-safety vulnerabilities is extremely expensive. Probabilistic countermeasures that depend on random secret keys are interesting because they are an inexpensive way to raise the bar for attackers who aim to exploit memory-safety vulnerabilities; moreover, some countermeasures even support legacy systems. However, it is unclear how to quantify and compare the effectiveness of different probabilistic countermeasures or combinations of such countermeasures. In this paper we propose a methodology to rigorously derive security bounds for probabilistic countermeasures. We argue that by representing security notions in this setting as events in probabilistic games, as is done for cryptographic security definitions, concrete and asymptotic guarantees can be obtained against realistic attackers. These guarantees shed light on the effectiveness of single countermeasures and of their composition, and allow practitioners to more precisely gauge the risk of an attack.
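As a flavor of the game-based bounds in question (a textbook-style toy, not a result from the paper): an attacker making $q$ guesses at a uniformly random $k$-bit key wins with probability $1 - (1 - 2^{-k})^q \le q\,2^{-k}$, a concrete guarantee that degrades gracefully in the attacker's effort.

```python
# Hedged toy: exact winning probability of a q-query guessing game vs. the
# union bound q * 2^-k used in concrete security statements.
k, q = 32, 10_000
exact = 1 - (1 - 2.0**-k) ** q
union = q * 2.0**-k
print(f"exact: {exact:.3e}  union bound: {union:.3e}")
```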
We consider the automatic verification of information flow security policies of web-based workflows, such as conference submission systems like EasyChair. Our workflow description language allows for loops, non-deterministic choice, and an unbounded number of participating agents. The information flow policies are specified in a temporal logic for hyperproperties. We show that the verification problem can be reduced to the satisfiability of a formula of first-order linear-time temporal logic, and provide decidability results for relevant classes of workflows and specifications. We report on experimental results obtained with an implementation of our approach on a series of benchmarks.
Converting angular momentum between different degrees of freedom within a magnetic material results from a dynamic interplay between electrons, magnons, and phonons. This interplay is pivotal to implementing spintronic device concepts that rely on spin angular momentum transport. We establish a new concept for long-range angular momentum transport that further allows us to address and isolate the magnonic contribution to angular momentum transport in a nanostructured metallic ferromagnet. To this end, we electrically excite and detect spin transport between two parallel, electrically insulated ferromagnetic metal strips on top of a diamagnetic substrate. Charge-to-spin current conversion within the first ferromagnetic strip generates electronic spin angular momentum that is transferred to magnons via electron-magnon coupling. We observe a finite angular momentum flow to the second ferromagnetic strip across the diamagnetic substrate over micron distances, which is electrically detected in the second strip by the inverse charge-to-spin current conversion process. We discuss phononic and dipolar interactions as the likely cause of angular momentum transfer between the two strips. Moreover, our work provides the experimental basis for separating electronic and magnonic spin transport and thereby paves the way towards magnonic device concepts that do not rely on magnetic insulators.
So-called transition discs provide an important tool to probe various mechanisms that might influence the evolution of protoplanetary discs and therefore the formation of planetary systems. One of these mechanisms is photoevaporation due to energetic radiation from the central star, which can in principle explain the occurrence of discs with inner cavities, such as transition discs. Current models, however, fail to reproduce a subset of the observed transition discs, namely objects with large measured cavities and vigorous accretion. For these objects the presence of (multiple) giant planets is often invoked to explain the observations. In our work we explore the possibility of X-ray photoevaporation operating in discs with different gas-phase depletions of carbon, and show that the influence of photoevaporation is extended in such low-metallicity discs. As carbon is one of the main contributors to the X-ray opacity, its depletion leads to larger penetration depths of X-rays in the disc and results in higher gas temperatures and stronger photoevaporative winds. We present radiation-hydrodynamical models of discs irradiated by internal X-ray+EUV radiation, assuming carbon gas-phase depletion by factors of 3, 10, and 100, and derive realistic mass-loss rates and profiles. Our analysis yields robust temperature prescriptions as well as photoevaporative mass-loss rates and profiles that may be able to explain a larger fraction of the observed diversity of transition discs.
We study spatial photoluminescence characteristics of neutral and charged excitons across extended monolayer MoS$_2$ synthesized by chemical vapor deposition. Using two-dimensional hyperspectral photoluminescence mapping at cryogenic temperatures, we identify regions with increased emission from charged excitons associated with both spin-orbit-split valence subbands. Such regions are attributed to unintentional doping at defect sites, where excess charge binds neutral excitons to form defect-pinned trions. Our findings imply comparable timescales for the formation, relaxation, and radiative decay of $B$ trions, and add defect-localized $A$ and $B$ trions to the realm of photoexcited quasiparticles in layered semiconductors.
The redshifts of galaxies are a key attribute needed for nearly all extragalactic studies. Since spectroscopic redshifts require additional telescope and human resources, millions of galaxies are known without spectroscopic redshifts. It is therefore crucial to have methods for estimating the redshift of a galaxy from its photometric properties, the so-called photo-$z$. We developed NetZ, a new method using a Convolutional Neural Network (CNN) to predict the photo-$z$ from galaxy images, in contrast to previous methods, which often used only the integrated photometry of galaxies without their images. We use data from the Hyper Suprime-Cam Subaru Strategic Program (HSC SSP) in five different filters as training data. Over the whole redshift range between 0 and 4, the network performs well overall, and in the high-$z$ range it performs better than other methods on the same data. We obtain an accuracy in $|z_\text{pred}-z_\text{ref}|$ of $\sigma = 0.12$ (68% confidence interval) with a single CNN working for all galaxy types, averaged over all galaxies in the redshift range of 0 to $\sim$4. By restricting to smaller redshift ranges or to Luminous Red Galaxies (LRGs), we find further notable improvement. We publish more than 34 million new photo-$z$ values predicted with NetZ here. The method is simple and fast to apply and, importantly, covers a wide redshift range limited only by the available training data. It is broadly applicable and beneficial to imaging surveys, particularly upcoming surveys like the Rubin Observatory Legacy Survey of Space and Time, which will provide images of billions of galaxies with image quality similar to HSC.
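A minimal image-to-redshift regression setup in this spirit might look as follows (our sketch; NetZ's actual architecture, loss, and training procedure differ and are described in the paper):

```python
# Hedged sketch: a small CNN mapping 5-band galaxy cutouts to a photo-z.
import torch
import torch.nn as nn

net = nn.Sequential(                      # 5 input channels = 5 HSC filters
    nn.Conv2d(5, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.LazyLinear(128), nn.ReLU(), nn.Linear(128, 1))

imgs = torch.randn(16, 5, 64, 64)         # batch of 5-band cutouts (synthetic)
z_ref = torch.rand(16) * 4.0              # reference redshifts in [0, 4]
loss = nn.functional.l1_loss(net(imgs).squeeze(-1), z_ref)
loss.backward()                           # one training step's gradients
```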
We present near-infrared (NIR) spectroscopy of the nearby supernova 2014J obtained $\sim$450 d after explosion. We detect the [Ni II] 1.939 $\mu$m line in the spectra, indicating the presence of stable $^{58}$Ni in the ejecta. The stable nickel is not centrally concentrated but rather distributed like the iron. The spectra are dominated by forbidden [Fe II] and [Co II] lines. We use lines in the NIR spectra arising from the same upper energy levels to place constraints on the extinction from host galaxy dust, and find that our data are in agreement with the high $A_V$ and low $R_V$ found in earlier studies of data near maximum light. Using a $^{56}$Ni mass prior from near-maximum-light $\gamma$-ray observations, we find $\sim$0.05 $M_\odot$ of stable nickel to be present in the ejecta. We find that the iron-group features are redshifted from the host galaxy rest frame by $\sim$600 km s$^{-1}$.