Publication Date

2015

Document Type

Thesis

Committee Members

John Gallagher (Committee Member), Michael Raymer (Committee Member), Mateen Rizki (Advisor), Andres Rodriguez (Committee Member)

Degree Name

Master of Science (MS)

Abstract

Convolutional neural networks (CNNs) are currently state-of-the-art for various classification tasks, but they are computationally expensive. Propagating through the convolutional layers is very slow: each kernel in each layer must sequentially compute many inner products for a single forward and backward propagation, which costs O(N^2 n^2) per kernel per layer when the inputs are N x N arrays and the kernels are n x n arrays. Convolution can be performed efficiently as a Hadamard product in the frequency domain; the bottleneck is then the transformation itself, which costs O(N^2 log_2 N) using the fast Fourier transform (FFT). However, the gain in efficiency is less significant when N >> n, as is the case in CNNs. We mitigate this by using the "overlap-and-add" technique, reducing the computational complexity to O(N^2 log_2 n) per kernel. This method speeds up both the forward and backward propagation, significantly reducing the training and testing time for CNNs. Our empirical results show that our method reduces computational time by a factor of up to 50.4 compared to the traditional convolution implementation.
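
As a rough illustration of the overlap-and-add idea the abstract describes, here is a minimal NumPy sketch: the input is split into blocks, each block is convolved with the kernel as a Hadamard product in the frequency domain, and the overlapping partial results are summed. The function name, block-size parameter, and use of real FFTs are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def overlap_add_conv2d(x, k, block=None):
    """2-D linear convolution of x (N x N) with kernel k (n x n) via
    overlap-and-add. Output shape is (N + n - 1, N + n - 1), matching a
    full linear convolution. Illustrative sketch, not the thesis code."""
    N, n = x.shape[0], k.shape[0]
    B = block or n                           # block size; ~n keeps each FFT small
    fft_size = B + n - 1                     # linear convolution of a block needs B + n - 1 points
    Kf = np.fft.rfft2(k, s=(fft_size, fft_size))   # transform the kernel once
    out = np.zeros((N + n - 1, N + n - 1))
    for i in range(0, N, B):
        for j in range(0, N, B):
            blk = x[i:i + B, j:j + B]        # edge blocks may be smaller; rfft2 zero-pads
            Bf = np.fft.rfft2(blk, s=(fft_size, fft_size))
            y = np.fft.irfft2(Bf * Kf, s=(fft_size, fft_size))  # Hadamard product, back to spatial domain
            h, w = blk.shape[0] + n - 1, blk.shape[1] + n - 1
            out[i:i + h, j:j + w] += y[:h, :w]                  # overlap-and-add the partial results
    return out

# Quick sanity check against a direct implementation (assumes SciPy is available):
# from scipy.signal import convolve2d
# x, k = np.random.randn(32, 32), np.random.randn(5, 5)
# assert np.allclose(overlap_add_conv2d(x, k), convolve2d(x, k, mode='full'))
```

With a block size on the order of n, each FFT is roughly 2n x 2n points regardless of N, which is the source of the O(N^2 log_2 n) per-kernel cost stated in the abstract.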

Page Count

51

Department or Program

Department of Computer Science

Year Degree Awarded

2015

Creative Commons License

This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 License.

