W. Wang, H. Zhu and M. H. Ang, "SGSIN: Simultaneous Grasp and Suction Inference Network via Attention-Based Affordance Learning," in IEEE Transactions on Industrial Electronics, vol. 72, no. 5, pp. 4990-5000, May 2025.
Abstract:
Universal handling of a wide and diverse range of objects is a long-standing challenge for robot manipulation. While recent methods have achieved competitive results in grasping a specific set of objects, they still perform unsatisfactorily when faced with cluttered scenes of diverse objects differing in material, shape, texture, and other characteristics. One critical bottleneck stems from the limited applicability of a single gripper modality, which cannot handle the full range of complex real-world tasks. In this article, we propose a unified grasping inference framework for multimodal grasping, i.e., the simultaneous grasp and suction inference network (SGSIN). SGSIN takes a 3-D scene as input to simultaneously predict feasible grasp poses for multiple gripper modalities and to determine the optimal primitive via a gripper selection module. SGSIN leverages a novel point-level affordance metric that indicates, in a unified manner, the probability of grasping success for each gripper. A novel backbone network is developed to extract strong 3-D feature representations, built on a proposed residual block with point and channel attention. Our multimodal framework is trained and evaluated on the GraspNet-1B dataset and achieves state-of-the-art performance on the respective grasp and suction tasks. Furthermore, we evaluate the developed pipeline on real-world tasks in cluttered environments. The state-of-the-art performance demonstrates the effectiveness of our proposed framework.
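To make the backbone idea concrete, below is a minimal sketch of a residual block that combines point-wise and channel attention over per-point features, in the spirit of the block the abstract names. The tensor layout, layer sizes, gating scheme, and all module names (e.g., PointChannelAttentionResBlock) are illustrative assumptions, not the authors' published architecture.

```python
# A minimal sketch (assumed, not the authors' code) of a residual block with
# point and channel attention over per-point features of shape (B, C, N).
import torch
import torch.nn as nn


class PointChannelAttentionResBlock(nn.Module):
    """Residual block over per-point features x of shape (B, C, N)."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Shared per-point MLP (1x1 convolutions over the point dimension).
        self.mlp = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=1),
            nn.BatchNorm1d(channels),
            nn.ReLU(inplace=True),
            nn.Conv1d(channels, channels, kernel_size=1),
            nn.BatchNorm1d(channels),
        )
        # Channel attention: squeeze over points, excite over channels.
        self.channel_gate = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )
        # Point attention: one scalar gate per point from its feature vector.
        self.point_gate = nn.Sequential(
            nn.Conv1d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.mlp(x)                         # (B, C, N)
        w_c = self.channel_gate(h.mean(dim=2))  # (B, C), pooled over points
        h = h * w_c.unsqueeze(-1)               # reweight channels
        w_p = self.point_gate(h)                # (B, 1, N)
        h = h * w_p                             # reweight points
        return self.act(x + h)                  # residual connection


if __name__ == "__main__":
    block = PointChannelAttentionResBlock(channels=64)
    feats = torch.randn(2, 64, 1024)            # 2 scenes, 1024 points each
    print(block(feats).shape)                   # torch.Size([2, 64, 1024])
```

Under this reading, stacking such blocks yields per-point features from which a per-point, per-gripper affordance score can be regressed, and the gripper selection step reduces to comparing these scores across modalities.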
License type:
Publisher Copyright
Funding Info:
No specific funding was received for this research.