Photo-Realistic Representation of Anatomical Structures for Medical Education by Fusion of Volumetric and Surface Image Data
We have produced improved photo-realistic views of anatomical structures for medical education by combining photographic images of anatomical surfaces with optical, CT, and MRI volumetric data such as that provided by the NLM Visible Human Project. Volumetric data contains the information needed to construct 3D geometric models of anatomical structures but cannot provide a realistic surface appearance. Nieder has captured high-quality photographic sequences of anatomy specimens over a range of rotational angles; these have been assembled into QuickTime VR Object movies that can be viewed statically or dynamically. We reuse this surface imagery to produce textures and surface reflectance maps for 3D anatomy models, allowing viewing from any orientation and under any lighting condition. Because the volumetric data come from different individuals than the surface images, the data must be warped into alignment. We do not currently use structured lighting or other direct 3D surface information, so surface shape is recovered from the rotational sequences using silhouettes and texture correlations. The results of this work improve the appearance and generality of the models used for anatomy instruction with the PSC Volume Browser.
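The silhouette-based shape recovery mentioned above can be sketched as a space-carving (visual hull) pass: a voxel is kept only if its projection falls inside the object silhouette in every view. The grid size, view geometry, and function names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def carve_visual_hull(silhouettes, projections, grid_shape):
    """Keep a voxel only if it projects inside every silhouette.

    silhouettes: list of 2D boolean arrays, one per view
    projections: list of functions mapping a voxel index (i, j, k)
                 to a pixel coordinate (u, v) in that view
    (Hypothetical helper; real systems use calibrated camera matrices.)
    """
    hull = np.ones(grid_shape, dtype=bool)
    for sil, proj in zip(silhouettes, projections):
        h, w = sil.shape
        for idx in np.ndindex(grid_shape):
            u, v = proj(idx)
            # Carve away any voxel that lands outside this silhouette.
            if not (0 <= u < h and 0 <= v < w and sil[u, v]):
                hull[idx] = False
    return hull

# Toy example: a 4x4x4 grid observed from two orthogonal directions.
# View 0 looks down the k-axis; view 1 looks down the i-axis.
sil0 = np.zeros((4, 4), dtype=bool); sil0[1:3, 1:3] = True  # (i, j) mask
sil1 = np.zeros((4, 4), dtype=bool); sil1[1:3, 1:3] = True  # (j, k) mask
hull = carve_visual_hull(
    [sil0, sil1],
    [lambda ijk: (ijk[0], ijk[1]), lambda ijk: (ijk[1], ijk[2])],
    (4, 4, 4),
)
print(hull.sum())  # a 2x2x2 block of 8 voxels survives the carve
```

With enough rotational views, the intersection of back-projected silhouette cones converges on the visual hull of the specimen; texture correlations then refine concavities that silhouettes alone cannot recover.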
Wetzel, A. W., Nieder, G. L., Gest, T. R., Pomerantz, S. M., Wagner, L. A., & Deerfield, D. W. (2003). Photo-Realistic Representation of Anatomical Structures for Medical Education by Fusion of Volumetric and Surface Image Data. Applied Imagery Pattern Recognition Workshop, 131-138.