Poster: A User Study to Understand Motion Visualization in Virtual Reality

Research Publication

Poster: A User Study to Understand Motion Visualization in Virtual Reality. Dane Coffey, Fedor Korsakov, Haleh Hagh-Shenas, Lauren Thorson, Daniel F. Keefe. IEEE Virtual Reality Conference (2012), pp. 63-64.

Abstract

Studies of motion are fundamental to science. For centuries, pictures of motion (e.g., the revolutionary photographs by Marey and Muybridge of galloping horses and other animals, da Vinci's detailed drawings of hydrodynamics) have factored importantly in making scientific discoveries possible. Today, there is perhaps no tool more powerful than interactive virtual reality (VR) for conveying complex space-time data to scientists, doctors, and others; however, relatively little is known about how to design virtual environments in order to best facilitate these analyses. In designing virtual environments for presenting scientific motion data (e.g., 4D data captured via medical imaging or motion tracking), our intuition is most often to "reanimate" these data in VR, displaying moving virtual bones and other 3D structures in virtual space as if the viewer were watching the data being collected in a biomechanics lab. However, recent research in other contexts suggests that although animated displays are effective for presenting known trends, static displays are more effective for data analysis. Applied to the problem of analyzing motion, it could well be the case that VR environments that freeze time, for example, using depictions of motion inspired by the traditional stroboscopic photography of Marey, could enhance users' abilities to accurately analyze motion. Also, outside of virtual reality, it has been shown for scenes that are spatially complex but do not include motion that various combinations of automatic camera control, static imagery, and user interaction dramatically affect the utility of 3D data visualizations. Thus, as we strive to harness the power of virtual reality as a data analysis tool, fundamental questions remain as to how best to visually depict and interact with motion datasets, especially in situations where the data require intricate analysis of both spatial and temporal relationships. To investigate the space of visualization design permutations, we introduce a taxonomy of fundamental design variables for depicting these data. Based on this taxonomy, a formal user experiment is presented that evaluates the strengths and weaknesses of each combination. Finally, based on the results of this experiment and our own insights from iterative visualization development, we present a set of guidelines for designing virtual environments to depict motion.
