* equal contributions
† corresponding author
Architecture overview of our proposed CAP-Net framework. CAP-Net uses pretrained vision backbones, SAM2 and FeatUp, to extract dense semantic features, which are then fused with the point cloud in a point-wise manner. The enriched point cloud features are passed to PointNet++ for further processing and then fed to three parallel modules that predict semantic labels, centroid offsets, and NPCS maps. A clustering algorithm groups points sharing a semantic label according to their distances to the predicted centroids, isolating each candidate part. Finally, an alignment algorithm matches the predicted NPCS map to the observed point cloud to estimate each part's pose and size.
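As a rough illustration of the three parallel prediction modules, the sketch below shows one possible head layout in PyTorch; the backbone feature dimension, layer widths, class count, and the sigmoid on the NPCS output are assumptions for illustration, not the authors' exact design.

```python
# Minimal sketch of CAP-Net-style per-point prediction heads (assumed layout).
import torch
import torch.nn as nn

class CAPNetHeads(nn.Module):
    def __init__(self, feat_dim=128, num_classes=10):  # e.g. 9 part types + background (assumption)
        super().__init__()
        # Shared per-point features are assumed to come from a PointNet++-style
        # backbone operating on points fused with 2D semantic features.
        self.sem_head = nn.Sequential(      # per-point semantic logits
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, num_classes))
        self.offset_head = nn.Sequential(   # per-point offset to the part centroid
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 3))
        self.npcs_head = nn.Sequential(     # per-point NPCS coordinates
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 3))

    def forward(self, point_feats):               # point_feats: (B, N, feat_dim)
        sem_logits = self.sem_head(point_feats)   # (B, N, num_classes)
        offsets = self.offset_head(point_feats)   # (B, N, 3)
        npcs = torch.sigmoid(self.npcs_head(point_feats))  # (B, N, 3), canonical coords in [0, 1]
        return sem_logits, offsets, npcs
```

At inference, the predicted offsets shift each point toward its part centroid before clustering, and the NPCS output provides the canonical coordinates used in the subsequent pose and size alignment.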
Building on the objects used in the GAPartNet dataset, we incorporate existing resources from PartNet-Mobility that provide URDF models of articulated objects with unified annotations for each part type. Our dataset covers 9 categories of articulated parts: line fixed handle, round fixed handle, hinge handle, hinge lid, slider lid, slider button, slider drawer, hinge door, and hinge knob. Each instance contains multiple classes of parts. To enhance realism, we place each instance on a flat surface or a desk, simulating typical backgrounds found in household settings. Each instance is placed in two random backgrounds, and for each background we randomly sample 60 camera views to render RGB-D images. In total, we generate 63K images, complete with semantic labels and pose annotations.
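The following is a minimal sketch of the rendering loop described above; the scene and render helpers are hypothetical placeholders for the actual simulator and renderer, and only the looping structure (two backgrounds and 60 views per instance) is taken from the text.

```python
# Illustrative data-generation loop; helper bodies are dummy placeholders.
import random

def load_instance(urdf_path):
    """Placeholder: load an articulated object from its PartNet-Mobility URDF."""
    return {"urdf": urdf_path}

def place_in_background(instance, background):
    """Placeholder: put the instance on a flat surface or desk in the background."""
    return {"instance": instance, "background": background}

def render_rgbd(scene):
    """Placeholder: render one RGB-D frame with semantic labels and part poses
    from a randomly sampled camera view."""
    return {"rgb": None, "depth": None, "semantics": None, "poses": None}

def generate_samples(instance_urdfs, backgrounds,
                     backgrounds_per_instance=2, views_per_background=60):
    samples = []
    for urdf_path in instance_urdfs:
        instance = load_instance(urdf_path)
        for bg in random.sample(backgrounds, backgrounds_per_instance):
            scene = place_in_background(instance, bg)
            for _ in range(views_per_background):
                samples.append(render_rgbd(scene))
    return samples
```

Under this budget, each instance contributes 2 × 60 = 120 RGB-D images toward the 63K total.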