Publication Date
2014
Document Type
Dissertation
Committee Members
Arthur Goshtasby (Advisor), Lang Hong (Committee Member), Jack Jean (Committee Member), Vincent Schmidt (Committee Member), Thomas Wischgoll (Committee Member)
Degree Name
Doctor of Philosophy (PhD)
Abstract
To construct a complete representation of a scene in the presence of environmental obstacles such as fog, smoke, darkness, or textural homogeneity, multisensor video streams captured in different modalities are considered. A computational method for automatically fusing multimodal image streams into a highly informative and unified stream is proposed. The method consists of the following steps:
1. Image registration is performed to align video frames in the visible band over time, adapting to the nonplanarity of the scene by automatically subdividing the image domain into regions approximating planar patches.
2. Wavelet coefficients are computed for each of the input frames in each modality.
3. Corresponding regions and points are compared using spatial and temporal information across various scales.
4. Decision rules based on the results of multimodal image analysis are used to combine the wavelet coefficients from different modalities.
5. The combined wavelet coefficients are inverted to produce an output frame containing useful information gathered from the available modalities.
Experiments show that the proposed system produces fused output that preserves the characteristics of color visible-spectrum imagery while adding information exclusive to infrared imagery, with attractive visual and informational properties.
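For a concrete picture of steps 2, 4, and 5, the following is a minimal sketch of wavelet-domain fusion in Python. It assumes the visible and infrared frames are already registered (step 1), uses the PyWavelets library, and substitutes a simple choose-max-magnitude rule for the dissertation's multimodal decision rules; it is an illustrative sketch, not the author's implementation.

    import numpy as np
    import pywt

    def fuse_frames(visible, infrared, wavelet="db2", level=3):
        """Fuse two registered, same-size, single-channel frames."""
        # Step 2: compute wavelet coefficients for each input frame.
        cv = pywt.wavedec2(np.asarray(visible, dtype=float), wavelet, level=level)
        ci = pywt.wavedec2(np.asarray(infrared, dtype=float), wavelet, level=level)

        # Step 4 (simplified): average the coarse approximation band; for
        # every detail band, keep whichever modality has the larger-magnitude
        # coefficient -- a common stand-in for richer decision rules.
        fused = [(cv[0] + ci[0]) / 2.0]
        for details_v, details_i in zip(cv[1:], ci[1:]):
            fused.append(tuple(
                np.where(np.abs(a) >= np.abs(b), a, b)
                for a, b in zip(details_v, details_i)
            ))

        # Step 5: invert the combined coefficients to obtain the fused frame.
        return pywt.waverec2(fused, wavelet)

For color visible-band input, a typical arrangement is to fuse only the luminance channel this way and reattach the chrominance channels afterward, which matches the abstract's goal of keeping the color character of the visible stream while importing infrared detail.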
Page Count
97
Department or Program
Department of Computer Science and Engineering
Year Degree Awarded
2014
Copyright
Copyright 2014, all rights reserved. This open access ETD is published by Wright State University and OhioLINK.