ImageSpirit: Verbal guided image parsing

Ming Ming Cheng*, Shuai Zheng, Wen Yan Lin, Vibhav Vineet, Paul Sturgess, Nigel Crook, Niloy J. Mitra, Philip Torr

*Corresponding author for this work

Research output: Contribution to journal › Review article › peer-review

34 Scopus citations


Humans describe images in terms of nouns and adjectives, while algorithms operate on images represented as sets of pixels. Bridging this gap between how humans would like to access images and how they are typically represented is the goal of image parsing, which involves assigning object and attribute labels to pixels. In this article we propose treating nouns as object labels and adjectives as visual attribute labels. This allows us to formulate image parsing as the problem of jointly estimating per-pixel object and attribute labels from a set of training images. We propose an efficient (interactive-time) solution. Using the extracted labels as handles, our system empowers a user to verbally refine the results. This enables hands-free parsing of an image into pixel-wise object/attribute labels that correspond to human semantics. Verbally selecting objects of interest enables a novel and natural interaction modality that could be used to interact with new-generation devices (e.g., smartphones, Google Glass, living-room devices). We demonstrate our system on a large number of real-world images of varying complexity. To help understand the trade-offs compared to traditional mouse-based interactions, we report results for both a large-scale quantitative evaluation and a user study.
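The output representation described above — every pixel receiving exactly one object label (a noun) plus a set of visual attribute labels (adjectives) — can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's method: it ignores the multi-label CRF's pairwise and higher-order terms and makes independent per-pixel decisions. The label sets and scores are hypothetical, chosen only to show the structure of the result.

```python
import numpy as np

# Hypothetical label sets (illustrative only, not from the paper).
OBJECTS = ["bed", "cabinet", "wall", "floor"]
ATTRIBUTES = ["wooden", "painted", "textured", "glass"]

def parse_image(object_scores, attribute_scores, attr_threshold=0.5):
    """Assign each pixel one object label and a set of attribute labels.

    object_scores:    (H, W, num_objects) per-pixel object scores
    attribute_scores: (H, W, num_attributes) per-pixel attribute
                      probabilities in [0, 1]

    Object labels are mutually exclusive (argmax over classes), while
    attribute labels are independent binary decisions (thresholding) --
    the key structural difference between the two label types.
    """
    object_map = object_scores.argmax(axis=-1)         # (H, W) class ids
    attribute_map = attribute_scores > attr_threshold  # (H, W, A) booleans
    return object_map, attribute_map

# Tiny 1x2 "image": pixel 0 scores highest as a wooden, textured bed;
# pixel 1 scores highest as a painted wall.
obj = np.array([[[0.90, 0.05, 0.03, 0.02],
                 [0.10, 0.10, 0.70, 0.10]]])
att = np.array([[[0.8, 0.2, 0.6, 0.1],
                 [0.1, 0.9, 0.3, 0.0]]])

objects, attrs = parse_image(obj, att)
print([OBJECTS[i] for i in objects[0]])
print([[ATTRIBUTES[j] for j in range(len(ATTRIBUTES)) if attrs[0, p, j]]
       for p in range(2)])
```

Because a pixel can carry several attributes at once ("wooden" and "textured" simultaneously), the attribute channel is multi-label rather than multi-class — this is what motivates the multi-label CRF formulation listed in the keywords below.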

Original language: English (US)
Article number: a3
Journal: ACM Transactions on Graphics
Issue number: 1
State: Published - Dec 29 2014


Keywords

  • Image parsing
  • Multilabel CRF
  • Natural language control
  • Object class segmentation
  • Speech interface
  • Visual attributes

ASJC Scopus subject areas

  • Computer Graphics and Computer-Aided Design


