Publication Date

2019

Document Type

Dissertation

Committee Members

Mateen Rizki (Advisor), John Gallagher (Committee Member), Michael Raymer (Committee Member), Fred Garber (Committee Member), Bernard Abayowa (Committee Member)

Degree Name

Doctor of Philosophy (PhD)

Abstract

Current commercial tracking systems do not process images fast enough to perform target tracking in real time. State-of-the-art methods locate objects frame by frame using entire scenes and are computationally expensive because they rely on image convolutions. Alternatively, attention mechanisms track more efficiently by mimicking human visual attention to process only small portions of an image. In this work we use an attention-based approach to create a model called C-DATM (Conditional Dilated Attention Tracking Model) that learns to compare target features across a sequence of image frames using dilated convolutions. C-DATM is tested on the Modified National Institute of Standards and Technology (MNIST) handwritten digit dataset. We also compare the results achieved by C-DATM with those of other attention-based networks in the literature, such as the Deep Recurrent Attentive Writer and the Recurrent Attention Tracking Model. C-DATM builds on previous attention principles to achieve generic, efficient, recurrence-free object tracking. The GOTURN (Generic Object Tracking Using Regression Networks) model, which won the VOT 2014 dataset challenge, operates on principles similar to those of C-DATM and is used as an exemplar to explore the advantages and disadvantages of C-DATM. The results of this comparison demonstrate that C-DATM has a number of significant advantages over GOTURN, including faster processing of image sequences and the ability to generalize to tracking new targets without retraining the system.
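The dilated convolutions mentioned above enlarge a filter's receptive field by spacing its taps apart, without adding weights. The following is a minimal illustrative sketch in one dimension; it is not the dissertation's actual model, and the function name and kernel are hypothetical examples.

```python
def dilated_conv1d(signal, kernel, dilation=1):
    """Valid-mode 1-D convolution whose kernel taps are `dilation` samples apart."""
    # Effective receptive field: (K - 1) * dilation + 1 samples for K weights.
    span = (len(kernel) - 1) * dilation + 1
    out = []
    for start in range(len(signal) - span + 1):
        out.append(sum(kernel[k] * signal[start + k * dilation]
                       for k in range(len(kernel))))
    return out

x = [1, 2, 3, 4, 5, 6, 7, 8]
k = [1, 0, -1]  # a simple difference kernel
# dilation=1 behaves like an ordinary convolution (3-sample field);
# dilation=2 doubles the receptive field to 5 samples with the same 3 weights.
print(dilated_conv1d(x, k, dilation=1))  # → [-2, -2, -2, -2, -2, -2]
print(dilated_conv1d(x, k, dilation=2))  # → [-4, -4, -4, -4]
```

Stacking such layers with growing dilation rates is how convolutional models cover large image regions cheaply, which is the efficiency argument made for C-DATM.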

Page Count

157

Department or Program

Department of Computer Science and Engineering

Year Degree Awarded

2019

