W2F: A Weakly-Supervised to Fully-Supervised Framework for Object Detection

Yongqiang Zhang, Yancheng Bai, Mingli Ding, Yongqiang Li, Bernard Ghanem

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

38 Scopus citations

Abstract

Weakly-supervised object detection has attracted much attention lately, since it does not require bounding box annotations for training. Although significant progress has been made, there is still a large performance gap between weakly-supervised and fully-supervised object detection. Recently, some works have used pseudo ground-truths generated by a weakly-supervised detector to train a supervised detector. Such approaches tend to find only the most representative parts of objects, and seek only one ground-truth box per class even when many same-class instances exist. To overcome these issues, we propose a weakly-supervised to fully-supervised framework, in which a weakly-supervised detector is implemented using multiple instance learning. We then propose a pseudo ground-truth excavation (PGE) algorithm to find a pseudo ground-truth for each instance in the image. Moreover, a pseudo ground-truth adaptation (PGA) algorithm is designed to further refine the pseudo ground-truths from PGE. Finally, we use these pseudo ground-truths to train a fully-supervised detector. Extensive experiments on the challenging PASCAL VOC 2007 and 2012 benchmarks strongly demonstrate the effectiveness of our framework. We obtain 52.4% and 47.8% mAP on VOC2007 and VOC2012 respectively, a significant improvement over previous state-of-the-art methods.
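The abstract describes the PGE step only at a high level. The following Python snippet is a minimal, hypothetical sketch of the general idea of excavating multiple per-class pseudo ground-truth boxes from a weakly-supervised detector's scored proposals; the function names, thresholds, and greedy IoU-based selection are illustrative assumptions, not the paper's actual PGE/PGA algorithms.

    import numpy as np

    def iou(box_a, box_b):
        # Boxes given as [x1, y1, x2, y2].
        x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
        x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        return inter / (area_a + area_b - inter + 1e-8)

    def excavate_pseudo_gt(boxes, scores, image_labels,
                           score_thresh=0.1, iou_thresh=0.3):
        """Greedy per-class selection of pseudo ground-truth boxes (illustrative only).

        boxes:        (N, 4) proposal boxes from a weakly-supervised detector
        scores:       (N, C) per-class scores for each proposal
        image_labels: class indices known (from image-level labels) to be present
        """
        pseudo_gt = []
        for c in image_labels:
            order = np.argsort(-scores[:, c])  # highest-scoring proposals first
            kept = []
            for i in order:
                if scores[i, c] < score_thresh:
                    break
                # Keep a proposal only if it does not overlap a box already chosen
                # for this class, so multiple same-class instances can be recovered
                # instead of a single box per class.
                if all(iou(boxes[i], boxes[j]) < iou_thresh for j in kept):
                    kept.append(i)
            pseudo_gt.extend((c, boxes[i].tolist()) for i in kept)
        return pseudo_gt

In such a setup, the returned (class, box) pairs would be written out as ordinary bounding-box annotations and used to train a standard fully-supervised detector, which is the final step the abstract describes.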
Original language: English (US)
Title of host publication: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Pages: 928-936
Number of pages: 9
ISBN (Print): 9781538664209
DOIs
State: Published - Dec 18 2018

