Importance-driven feature enhancement in volume visualization

Ivan Viola*, Armin Kanitsar, M. Eduard Gröller

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

123 Scopus citations

Abstract

This paper presents importance-driven feature enhancement as a technique for the automatic generation of cut-away and ghosted views out of volumetric data. The presented focus+context approach removes or suppresses less important parts of a scene to reveal more important underlying information. Less important parts remain fully visible only in those regions where important visual information is not lost, i.e., where more relevant features are not occluded. Features within the volumetric data are first classified according to a new dimension, denoted as object importance. This property determines which structures should be readily discernible and which structures are less important. Next, for each feature, various representations (levels of sparseness) from a dense to a sparse depiction are defined. Levels of sparseness define a spectrum of optical properties or rendering styles. The resulting image is generated by ray-casting and combining the intersected features proportionally to their importance (importance compositing). The paper includes an extended discussion of several possible schemes for specifying levels of sparseness. Furthermore, different approaches to importance compositing are treated.
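The abstract's core idea can be illustrated with a minimal sketch of importance compositing along a single ray. This is a hypothetical simplification, not the paper's implementation: each ray sample carries a color, a dense opacity, and the importance of the feature it belongs to; a sample's opacity is pushed toward a sparser level in proportion to its importance relative to the maximum importance found along the ray, and the modulated samples are then combined by standard front-to-back compositing. All names and the linear sparseness mapping are assumptions for illustration.

```python
import numpy as np

def composite_ray(colors, opacities, importances):
    """Sketch of importance compositing for one ray (illustrative only).

    colors      -- (n, 3) RGB of the samples, in front-to-back order
    opacities   -- (n,) dense opacity of each sample
    importances -- (n,) object importance of the feature each sample belongs to
    """
    colors = np.asarray(colors, dtype=float)
    opacities = np.asarray(opacities, dtype=float)
    importances = np.asarray(importances, dtype=float)

    # Level of sparseness (assumed linear here): the most important
    # feature along the ray keeps its dense opacity, less important
    # features are rendered proportionally more transparent so they
    # do not occlude it.
    alpha = opacities * (importances / importances.max())

    # Standard front-to-back alpha compositing of the modulated samples.
    acc_color = np.zeros(3)
    acc_alpha = 0.0
    for c, a in zip(colors, alpha):
        acc_color += (1.0 - acc_alpha) * a * c
        acc_alpha += (1.0 - acc_alpha) * a
        if acc_alpha >= 0.99:  # early ray termination
            break
    return acc_color, acc_alpha
```

For example, a low-importance sample in front of a high-importance one is ghosted (its opacity suppressed), so the more important feature behind it dominates the final pixel color.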

Original language: English (US)
Pages (from-to): 408-417
Number of pages: 10
Journal: IEEE Transactions on Visualization and Computer Graphics
Volume: 11
Issue number: 4
DOIs
State: Published - Jul 1 2005

Keywords

  • Focus+context techniques
  • Illustrative techniques
  • Level-of-detail techniques
  • View-dependent visualization
  • Volume rendering

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Computer Graphics and Computer-Aided Design

