RefineLoc: Iterative Refinement for Weakly-Supervised Action Localization

Alejandro Pardo, Humam Alwassel, Fabian Caba Heilbron, Ali Kassem Thabet, Bernard Ghanem

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Video action detectors are usually trained on datasets with fully-supervised temporal annotations. Building such datasets is an expensive task. To alleviate this problem, recent methods have tried to leverage weak labeling, where videos are untrimmed and only a video-level label is available. In this paper, we propose RefineLoc, a novel weakly-supervised temporal action localization method. RefineLoc uses an iterative refinement approach, estimating and training on snippet-level pseudo ground truth at every iteration. We show the benefit of this iterative approach and present an extensive analysis of five different pseudo ground truth generators. We demonstrate the effectiveness of our model on two standard action datasets, ActivityNet v1.2 and THUMOS14. RefineLoc shows competitive results with the state-of-the-art in weakly-supervised temporal localization. Additionally, our iterative refinement process is able to significantly improve the performance of two state-of-the-art methods, setting a new state-of-the-art on THUMOS14.
Original language: English (US)
Title of host publication: 2021 IEEE Winter Conference on Applications of Computer Vision (WACV)
ISBN (Print): 978-1-6654-4640-2
State: Published - 2021


