
Visual Embedding: A Model for Visualization

Çağatay Demiralp, Carlos Scheidegger, Gordon Kindlmann, David Laidlaw, Jeffrey Heer. IEEE Computer Graphics and Applications, 2014
Figure: Neural tracts colored by visual embedding of shape distances into CIELAB color space.
Abstract
We propose visual embedding as a model for automatically generating and evaluating visualizations. A visual embedding is a function from data points to a space of visual primitives that measurably preserves structures in the data (domain) within the mapped perceptual space (range). Visual embedding can serve as both a generative and an evaluative model. We demonstrate its use with three examples: coloring of neural tracts, scatter plots with icons, and evaluation of alternative diffusion tensor glyphs. We discuss several techniques for generating visual embedding functions, including probabilistic graphical models for embedding within discrete visual spaces. We also describe two complementary approaches - crowdsourcing and visual product spaces - for building visual spaces with associated perceptual distance measures. Finally, we present future research directions for further developing the visual embedding model.
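Illustrative Sketch
For intuition, the following is a minimal, hypothetical sketch of the embedding idea and not the paper's implementation. It maps pairwise data distances into CIELAB color space with metric MDS from scikit-learn, under the simplifying assumption that CIELAB distance is approximately Euclidean; the function name embed_in_cielab, its parameters, and the rescaling constants are choices made for this example only.

# Illustrative sketch (not the paper's method): embed pairwise data
# distances into CIELAB color space so that perceptual color differences
# approximate the original data-space distances.
import numpy as np
from sklearn.manifold import MDS
from skimage.color import lab2rgb

def embed_in_cielab(distances, seed=0):
    """Map items with the given pairwise distances to CIELAB-derived colors.

    distances : (n, n) symmetric matrix of data-space dissimilarities.
    Returns an (n, 3) array of sRGB colors in [0, 1].
    """
    # 3-D metric MDS: find coordinates whose Euclidean distances
    # approximate the input distances (CIELAB is treated as Euclidean here).
    mds = MDS(n_components=3, dissimilarity="precomputed", random_state=seed)
    coords = mds.fit_transform(distances)

    # Rescale the embedding into a bounded slice of CIELAB:
    # L* roughly in [30, 80], a* and b* roughly in [-60, 60].
    coords -= coords.mean(axis=0)
    coords /= np.abs(coords).max(axis=0) + 1e-9
    lab = np.empty_like(coords)
    lab[:, 0] = 55 + 25 * coords[:, 0]   # lightness axis
    lab[:, 1:] = 60 * coords[:, 1:]      # chroma axes

    # Convert to displayable sRGB (lab2rgb expects an image-shaped array).
    rgb = lab2rgb(lab[np.newaxis, :, :])[0]
    return np.clip(rgb, 0, 1)

# Example: three clusters of 2-D points; nearby points receive similar colors.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = np.concatenate([rng.normal(c, 0.1, (5, 2)) for c in (0, 1, 2)])
    D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    print(embed_in_cielab(D))

The rescaling step keeps the embedded colors inside a displayable region of CIELAB; the underlying idea it illustrates is the model's requirement that distances in the data are measurably preserved as perceptual distances in the visual range.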
BibTeX
@article{2014-visual-embedding,
  title = {Visual Embedding: A Model for Visualization},
  author = {Demiralp, \c{C}a\u{g}atay and Scheidegger, Carlos and Kindlmann, Gordon and Laidlaw, David and Heer, Jeffrey},
  journal = {IEEE Computer Graphics and Applications},
  year = {2014},
  url = {https://uwdata.github.io/papers/visual-embedding},
  doi = {10.1109/MCG.2014.18}
}