Prototype-oriented contrastive mean-teacher for unsupervised domain adaptive object detection
Abstract
Unsupervised domain adaptive object detection (UDA-OD) aims to deploy a detector trained on source domain(s) to a new, unlabeled target domain. Carrying out mean-teacher self-training for UDA-OD poses a significant challenge, since its success depends heavily on the quality of pseudo boxes. While much earlier research has centered on cross-domain transferability, it often neglects the rich intra- and inter-domain semantic structures, which empirically restricts the discriminative ability of the learned model. In our study, we find a notable alignment and synergy across contrastive learning, prototype learning, and mean-teacher self-training. Building on this insight, we introduce the Prototype-oriented Contrastive Mean Teacher (PoCoMT) for UDA-OD, a thorough and flexible framework that seamlessly integrates these three techniques to extract the most beneficial learning signals. Specifically, PoCoMT first generates more diverse and reliable probabilistic outputs from self-training by maximizing information entropy and maintaining semantic consistency; second, it reduces both intra-domain and inter-domain prototypical contrastive learning losses through a carefully designed Prototype Alignment Network (ProtoAN) module, which fosters intra-domain feature aggregation, aligns inter-domain class structures, and reduces semantic loss between weak and strong augmentations of target-domain data. ProtoAN can serve as a plug-in module for traditional self-training frameworks to tackle the key problem of semantic loss in UDA-OD. Extensive experiments demonstrate that PoCoMT attains new state-of-the-art performance.
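The abstract's notion of prototypical contrastive learning can be illustrated with a minimal sketch: class prototypes are the (normalized) mean features of each class, and an InfoNCE-style loss pulls each feature toward its own class prototype while pushing it away from the others. This is a generic, assumed formulation for illustration only; the function names, the temperature value, and the exact losses used by PoCoMT/ProtoAN are not specified in the abstract.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    """Normalize vectors to unit length so dot products are cosine similarities."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def class_prototypes(features, labels, num_classes):
    """Prototype of class c = normalized mean of the features labeled c."""
    protos = np.stack([features[labels == c].mean(axis=0)
                       for c in range(num_classes)])
    return l2_normalize(protos)

def proto_contrastive_loss(features, labels, prototypes, temperature=0.1):
    """InfoNCE-style prototypical contrastive loss (illustrative, not PoCoMT's exact loss):
    each feature is attracted to its class prototype and repelled from the rest."""
    feats = l2_normalize(features)
    logits = feats @ prototypes.T / temperature            # (N, C) scaled cosine sims
    logits -= logits.max(axis=1, keepdims=True)            # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```

On well-clustered features the loss is near zero, while mislabeled features incur a large penalty; in a cross-domain setting the same machinery can be applied with prototypes from one domain and features from another to align inter-domain class structures, as the abstract describes.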
Authors (4)
Qi Cao
Jianwen Tao
Yufang Dan
Di Zhou
Quick Access
- Publication Year
- 2026
- Source Database
- DOAJ
- DOI
- 10.1038/s41598-026-44991-7
- Access
- Open Access ✓