Task-aligned Part-aware Panoptic Segmentation
through Joint Object-Part Representations
Daan de Geus
Gijs Dubbelman
[Paper]
[GitHub]

Abstract

Part-aware panoptic segmentation (PPS) requires (a) that each foreground object and background region in an image is segmented and classified, and (b) that all parts within foreground objects are segmented, classified and linked to their parent object. Existing methods approach PPS by separately conducting object-level and part-level segmentation. However, their part-level predictions are not linked to individual parent objects. Therefore, their learning objective is not aligned with the PPS task objective, which harms the PPS performance. To solve this, and make more accurate PPS predictions, we propose Task-Aligned Part-aware Panoptic Segmentation (TAPPS). This method uses a set of shared queries to jointly predict (a) object-level segments, and (b) the part-level segments within those same objects. As a result, TAPPS learns to predict part-level segments that are linked to individual parent objects, aligning the learning objective with the task objective, and allowing TAPPS to leverage joint object-part representations. With experiments, we show that TAPPS considerably outperforms methods that predict objects and parts separately, and achieves new state-of-the-art PPS results.
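To make the shared-query idea concrete, below is a minimal, hypothetical PyTorch-style sketch of joint object-part prediction. It is not the authors' implementation: the module names, dimensions, and the fixed number of part slots per query are all illustrative assumptions. The point it shows is that object masks and part masks are decoded from the same query embedding, so each part mask is tied to its parent object by construction.

```python
import torch
import torch.nn as nn

class SharedQueryHeads(nn.Module):
    """Hypothetical sketch of shared-query joint object-part prediction.
    Each query embedding yields one object-level mask and the part-level
    masks inside that same object, linking parts to their parent object.
    Names, sizes, and the fixed part-slot count are illustrative only."""

    def __init__(self, num_queries=100, embed_dim=256,
                 num_obj_classes=80, max_parts=10):
        super().__init__()
        self.queries = nn.Embedding(num_queries, embed_dim)  # shared queries
        self.obj_class_head = nn.Linear(embed_dim, num_obj_classes + 1)  # +1: "no object"
        self.obj_mask_head = nn.Linear(embed_dim, embed_dim)
        # One mask embedding per candidate part slot, from the same query.
        self.part_mask_head = nn.Linear(embed_dim, max_parts * embed_dim)
        self.max_parts = max_parts

    def forward(self, pixel_features):
        # pixel_features: (B, C, H, W) per-pixel embeddings from a pixel decoder.
        B, C, H, W = pixel_features.shape
        q = self.queries.weight.unsqueeze(0).expand(B, -1, -1)       # (B, Q, C)
        class_logits = self.obj_class_head(q)                        # (B, Q, K+1)
        obj_embed = self.obj_mask_head(q)                            # (B, Q, C)
        part_embed = self.part_mask_head(q).view(B, -1, self.max_parts, C)  # (B, Q, P, C)
        # Dot-product mask prediction, as in query-based segmentation models.
        obj_masks = torch.einsum("bqc,bchw->bqhw", obj_embed, pixel_features)
        part_masks = torch.einsum("bqpc,bchw->bqphw", part_embed, pixel_features)
        return class_logits, obj_masks, part_masks
```

Because both mask heads read from the same query, the training objective can supervise an object segment and its parts jointly, which is the task alignment the abstract describes; see the paper and repository for the actual architecture and losses.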


Code

The code is publicly available at the link below.

[GitHub]


Paper

D. de Geus and G. Dubbelman.
Task-aligned Part-aware Panoptic Segmentation through Joint Object-Part Representations.
In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024.
(PDF)


[Bibtex]


Acknowledgements

This template was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project; the code can be found here.