Siléane
While 3D object detection and pose estimation have been studied for a long time, their evaluation is not yet completely satisfactory. Indeed, existing datasets typically consist of numerous acquisitions of only a few scenes, because pose annotation is tedious, and existing evaluation protocols cannot properly handle objects with symmetries. This work addresses both points. We first present automatic techniques to produce fully annotated RGBD data of many object instances in arbitrary poses, with which we build a dataset of thousands of independent scenes of parts in bulk, composed of both real and synthetic images. We then propose a consistent evaluation methodology suitable for any rigid object, regardless of its symmetries. We illustrate it with two reference object detection and pose estimation methods on different objects, and show that incorporating symmetry considerations into the pose estimation methods themselves can lead to significant performance gains. The proposed dataset is available at this http URL
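The evaluation methodology hinges on a pose distance that accounts for an object's proper symmetries. As a rough illustration of the idea, the sketch below scores an estimated pose against the ground truth by taking the minimum mean point-to-point distance over a set of symmetry transforms. The function and parameter names are ours, the symmetry set is assumed finite here (the paper's formulation also handles continuous symmetries such as surfaces of revolution), and this is not the paper's exact metric.

```python
import numpy as np

def symmetry_aware_pose_distance(T_est, T_gt, symmetries, model_points):
    """Illustrative symmetry-aware pose distance (assumed, not the paper's exact metric).

    T_est, T_gt  : (4, 4) homogeneous object poses (estimate and ground truth).
    symmetries   : iterable of (4, 4) transforms mapping the object onto
                   itself, identity included; assumed finite in this sketch.
    model_points : (N, 3) points sampled on the object model.
    """
    # Homogeneous coordinates for the model points: (N, 4)
    pts_h = np.hstack([model_points, np.ones((len(model_points), 1))])
    gt = (T_gt @ pts_h.T).T[:, :3]           # points under the ground-truth pose
    best = np.inf
    for S in symmetries:
        # The estimated pose composed with a symmetry is an equivalent pose
        est = (T_est @ S @ pts_h.T).T[:, :3]
        best = min(best, np.linalg.norm(est - gt, axis=1).mean())
    return best
```

With this distance, two pose hypotheses that differ only by a symmetry of the object evaluate identically, which is what makes the protocol consistent for symmetric parts.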
Grasping objects is one of the most important abilities a robot needs in order to interact with its environment. Current state-of-the-art methods rely on deep neural networks trained to jointly predict a graspability score and a regressed offset with respect to reference grasp parameters. However, these two predictions are performed independently, so applying the predicted offset can actually decrease the graspability of the resulting grasp. In this paper, we therefore extend a state-of-the-art neural network with a scorer that evaluates the graspability of a given position, and introduce a novel loss function that correlates the regression of grasp parameters with the graspability score. We show that this architecture improves performance from 82.13% for a state-of-the-art grasp detection network to 85.74% on the Jacquard dataset. When the learned model is transferred to a real robot, the proposed method correlating graspability and grasp regression achieves a 92.4% success rate, compared to 88.1% for the baseline trained without the correlation.
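The core idea is to couple the graspability classification with the offset regression, so that the grasp obtained after applying the predicted offset is itself judged graspable by the scorer. The sketch below is a hypothetical rendering of such a coupled loss in PyTorch; `scorer_fn`, `anchor`, `alpha`, and the exact form of the correlation term are our assumptions, not the loss published in the paper.

```python
import torch
import torch.nn.functional as F

def correlated_grasp_loss(score, offset, gt_label, gt_offset,
                          scorer_fn, anchor, alpha=1.0):
    """Illustrative loss coupling graspability with grasp regression (assumed form).

    score     : (B,) predicted graspability logits for the anchor grasps.
    offset    : (B, D) predicted offsets w.r.t. the anchor grasp parameters.
    gt_label  : (B,) 1 where the anchor corresponds to a feasible grasp.
    gt_offset : (B, D) ground-truth offsets, defined where gt_label == 1.
    scorer_fn : module that returns a graspability logit for explicit
                grasp parameters (hypothetical interface).
    anchor    : (B, D) reference grasp parameters.
    """
    # Standard graspability classification term
    cls_loss = F.binary_cross_entropy_with_logits(score, gt_label.float())

    pos = gt_label.bool()
    zero = score.new_zeros(())

    # Standard offset regression term, on positive anchors only
    reg_loss = F.smooth_l1_loss(offset[pos], gt_offset[pos]) if pos.any() else zero

    # Correlation term: the refined grasp (anchor + predicted offset) should
    # itself be scored as graspable for positive anchors.
    refined_score = scorer_fn(anchor + offset)
    corr_loss = (F.binary_cross_entropy_with_logits(
                     refined_score[pos], torch.ones_like(refined_score[pos]))
                 if pos.any() else zero)

    return cls_loss + reg_loss + alpha * corr_loss
```

The design choice worth noting is that the regression head is no longer supervised in isolation: gradients from the correlation term push the predicted offsets toward regions the scorer considers graspable, which is what the reported gains on Jacquard and on the real robot are attributed to.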