CAP-Net: A Unified Network for 6D Pose and Size Estimation of Categorical Articulated Parts from a Single RGB-D Image

CVPR 2025

1Fudan University 2Huawei, Noah’s Ark Lab

* equal contributions   corresponding author  

CAP-Net is a unified approach for estimating the 6D pose and size of all articulated parts from RGB-D images, requiring only object-level masks instead of part-level ones. The realistic training images in our RGBD-Art dataset allow this synthetically trained model to adapt effectively to real-world visual perception tasks for robotic manipulation using an affordable RealSense camera.

Abstract

This paper tackles category-level pose estimation of articulated objects in robotic manipulation tasks and introduces a new benchmark dataset. While recent methods estimate part poses and sizes at the category level, they often rely on geometric cues and complex multi-stage pipelines that first segment parts from the point cloud, followed by Normalized Part Coordinate Space (NPCS) estimation for 6D poses. These approaches overlook dense semantic cues from RGB images, leading to suboptimal accuracy, particularly for objects with small parts. To address these limitations, we propose a single-stage Network, CAP-Net, for estimating the 6D poses and sizes of Categorical Articulated Parts. This method combines RGB-D features to generate instance segmentation and NPCS representations for each part in an end-to-end manner. CAP-Net uses a unified network to simultaneously predict point-wise class labels, centroid offsets, and NPCS maps. A clustering algorithm then groups points of the same predicted class based on their estimated centroid distances to isolate each part. Finally, the NPCS region of each part is aligned with the point cloud to recover its final pose and size. To bridge the sim-to-real domain gap, we introduce the RGBD-Art dataset, the largest RGB-D articulated dataset to date, featuring photorealistic RGB images and depth noise simulated from real sensors. Experimental evaluations on the RGBD-Art dataset demonstrate that our method significantly outperforms the state-of-the-art approach. Real-world deployments of our model in robotic tasks underscore its robustness and exceptional sim-to-real transfer capabilities, confirming its substantial practical utility.
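
For reference, the final step described above (aligning each part's predicted NPCS coordinates with the observed point cloud) amounts to fitting a similarity transform. A sketch of the standard formulation is given below, where $\mathbf{q}_i$ are a part's predicted NPCS coordinates and $\mathbf{p}_i$ the corresponding camera-frame points; the exact solver used in the paper (e.g., whether RANSAC is added for robustness) may differ.

$$(s^{*}, R^{*}, \mathbf{t}^{*}) \;=\; \arg\min_{s>0,\; R\in SO(3),\; \mathbf{t}\in\mathbb{R}^{3}} \sum_i \big\lVert \mathbf{p}_i - \big(s\,R\,\mathbf{q}_i + \mathbf{t}\big) \big\rVert_2^{2}$$

The recovered $R^{*}$ and $\mathbf{t}^{*}$ give the part's 6D pose, while $s^{*}$ scales the part's tight bounding box in NPCS to yield its metric size.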

Method Overview

Architecture overview of the proposed CAP-Net framework. CAP-Net uses pretrained vision backbones, SAM2 and FeatUp, to extract dense semantic features, which are then fused with the point cloud in a point-wise manner. The enriched point cloud features are passed into PointNet++ for further processing. These features are then used by three parallel modules to predict semantic labels, centroid offsets, and NPCS maps. A clustering algorithm groups points with the same semantic label based on centroid distances to isolate each possible part. Finally, an alignment algorithm matches the predicted NPCS map with the real point cloud to estimate each part's pose and size. A minimal code sketch of these two post-processing steps follows below.
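
To make the two post-processing steps concrete, here is a minimal NumPy/scikit-learn sketch of (1) grouping points into part instances by clustering them after shifting by the predicted centroid offsets, and (2) aligning a part's predicted NPCS coordinates to the observed points with a similarity transform. This is not the released CAP-Net code; the function names, the DBSCAN clusterer, the thresholds, and the use of the closed-form Umeyama solution are assumptions made for illustration.

```python
# Illustrative sketch, not the authors' implementation.
import numpy as np
from sklearn.cluster import DBSCAN


def cluster_parts(points, sem_labels, centroid_offsets, eps=0.05, min_points=50):
    """Group points into part instances: points of the same semantic class that
    vote for nearby centroids are assumed to belong to the same part."""
    instance_ids = -np.ones(len(points), dtype=int)
    next_id = 0
    for cls in np.unique(sem_labels):
        if cls == 0:  # assume label 0 is background
            continue
        idx = np.where(sem_labels == cls)[0]
        shifted = points[idx] + centroid_offsets[idx]  # points moved toward their predicted centroids
        labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(shifted)
        for k in np.unique(labels):
            if k == -1:  # DBSCAN noise
                continue
            instance_ids[idx[labels == k]] = next_id
            next_id += 1
    return instance_ids


def umeyama_alignment(npcs, points):
    """Closed-form similarity transform (scale s, rotation R, translation t)
    minimizing sum_i ||points_i - (s * R @ npcs_i + t)||^2 (Umeyama, 1991)."""
    mu_q, mu_p = npcs.mean(0), points.mean(0)
    q, p = npcs - mu_q, points - mu_p
    cov = p.T @ q / len(npcs)                  # cross-covariance of target and source
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                           # guard against reflections
    R = U @ S @ Vt
    var_q = (q ** 2).sum() / len(npcs)
    s = np.trace(np.diag(D) @ S) / var_q
    t = mu_p - s * R @ mu_q
    return s, R, t
```

Under these assumptions, each recovered (R, t) is the part's 6D pose, and the size follows by scaling the part's tight NPCS bounding box by s.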

Exemplars of our RGBD-Art dataset.

Building on the objects used in the GAPartNet dataset, we incorporate existing resources from PartNet-Mobility that provide URDF models of articulated objects with unified annotations for each part type. Our dataset contains 9 categories of articulated part types, including: line fixed handle, round fixed handle, hinge handle, hinge lid, slider lid, slider button, slider drawer, hinge door, and hinge knob. Each instance contains multiple classes of parts. To enhance realism, we place each instance on a flat surface or a desk, simulating typical backgrounds found in household settings. For each instance, we place it in two random backgrounds and, for each background, randomly sample 60 camera views to render RGB-D images. In total, we generate 63K images, complete with semantic labels and pose annotations. A sketch of one possible view-sampling scheme follows below.
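
For intuition, the sketch below shows one way the 60 random camera views per placement could be sampled: look-at poses drawn on a partial sphere around the placed object. The radius and elevation ranges, the world-up convention, and the target height are illustrative assumptions rather than the dataset's actual rendering parameters.

```python
# Illustrative view sampling, not the dataset's exact rendering setup.
import numpy as np


def sample_lookat_pose(target, radius_range=(0.8, 1.5), elev_range_deg=(15, 60)):
    """Return a 4x4 camera-to-world matrix looking at `target`
    (OpenGL-style: camera looks down its -z axis, world up is +z)."""
    radius = np.random.uniform(*radius_range)
    azim = np.random.uniform(0, 2 * np.pi)
    elev = np.radians(np.random.uniform(*elev_range_deg))
    eye = target + radius * np.array([np.cos(elev) * np.cos(azim),
                                      np.cos(elev) * np.sin(azim),
                                      np.sin(elev)])
    forward = target - eye
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, np.array([0.0, 0.0, 1.0]))
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)
    pose = np.eye(4)
    pose[:3, 0], pose[:3, 1], pose[:3, 2], pose[:3, 3] = right, up, -forward, eye
    return pose


# e.g., 60 views for one (instance, background) placement
views = [sample_lookat_pose(target=np.array([0.0, 0.0, 0.4])) for _ in range(60)]
```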

Qualitative Results

Sim-to-Real Video

More Qualitative Results on Datasets

BibTeX

@inproceedings{

}

Acknowledgements