LMVP: Video Predictor with Leaked Motion Information

Dong Wang, Yitong Li, Wei Cao, Liqun Chen, Qi Wei, Lawrence Carin

Research output: Contribution to journal › Article › peer-review

Abstract

We propose a Leaked Motion Video Predictor (LMVP) to predict future frames by capturing the spatial and temporal dependencies of given inputs. Motion is modeled by a newly proposed component, the motion guider, which plays the role of both learner and teacher: it learns temporal features from real data and guides the generator in predicting future frames. Spatial consistency in the video is modeled by an adaptive filtering network. To further ensure the spatio-temporal consistency of the prediction, a discriminator is adopted to distinguish real from generated frames. The discriminator also leaks information to the motion guider and the generator to aid the learning of motion. The proposed LMVP can effectively learn static and temporal features in videos without the need for human labeling. Experiments on synthetic and real data demonstrate that LMVP yields state-of-the-art results.
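As a rough illustration of the data flow the abstract describes (not the authors' implementation), the sketch below uses toy NumPy stand-ins: the motion guider extracts a temporal feature from past frames, the discriminator scores a frame and "leaks" an intermediate feature, and the generator combines the last frame, the motion guidance, and the leaked feature to predict the next frame. All functions, shapes, and operations here are hypothetical placeholders for the learned networks in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 8  # hypothetical frame size

def motion_guider(frames):
    """Stand-in temporal feature: difference of the last two frames.
    In LMVP this is a learned module that both learns from real data
    and guides the generator."""
    return frames[-1] - frames[-2]

def discriminator(frame):
    """Scores a frame as real/fake and returns an intermediate
    feature map that is 'leaked' to the other components."""
    leaked = np.tanh(frame)   # hypothetical hidden activation
    score = leaked.mean()     # scalar real/fake score
    return score, leaked

def generator(last_frame, motion_feat, leaked_feat):
    """Predicts the next frame from the spatial input, the motion
    guidance, and the feature leaked by the discriminator."""
    return last_frame + motion_feat + 0.1 * leaked_feat

frames = rng.standard_normal((3, H, W))  # toy clip of 3 frames
motion = motion_guider(frames)
_, leaked = discriminator(frames[-1])
next_frame = generator(frames[-1], motion, leaked)
print(next_frame.shape)
```

The point of the sketch is only the wiring: the discriminator's intermediate representation feeds back into both the motion guider's training signal and the generator's input, which is the "leaked motion information" of the title.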
Original language: English (US)
Journal: arXiv preprint
State: Published - Jun 24 2019
Externally published: Yes

Keywords

  • cs.CV
  • cs.AI
