Publication Date

2018

Document Type

Dissertation

Committee Members

Jack Jean (Committee Member), Michael Raymer (Advisor), Krishnaprasad Thirunarayan (Committee Member), Shaojun Wang (Advisor), Xinhui Zhang (Committee Member)

Degree Name

Doctor of Philosophy (PhD)

Abstract

Recurrent neural network (RNN) language models are the state-of-the-art method for language modeling. When the vocabulary size is large, the space required to store the model parameters becomes the bottleneck for using this type of model. We introduce a simple space compression method that stochastically shares structured parameters at both the input and output embedding layers of RNN models, significantly reducing the number of model parameters while still compactly representing the original input and output embedding layers. The method is easy to implement and tune. Experiments on several data sets show that the new method achieves perplexity and BLEU score results comparable to the best existing methods, while using only a tiny fraction of the parameters required by other approaches.
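The sketch below illustrates one way stochastic structured parameter sharing in an embedding layer might be realized: each word's embedding is assembled from a fixed number of sub-vectors drawn from a small shared pool via a fixed random assignment, so the trainable parameter count depends on the pool size rather than the vocabulary size. The class name, parameter names, and the specific assignment scheme are illustrative assumptions, not the dissertation's exact formulation.

```python
import torch
import torch.nn as nn

class StochasticallySharedEmbedding(nn.Module):
    """Illustrative sketch (not the dissertation's exact method): each word
    embedding is the concatenation of num_blocks sub-vectors taken from a
    shared pool through a fixed random mapping, so trainable embedding
    parameters number pool_size * (dim // num_blocks) instead of
    vocab_size * dim."""

    def __init__(self, vocab_size, dim, num_blocks, pool_size, seed=0):
        super().__init__()
        assert dim % num_blocks == 0
        self.block_dim = dim // num_blocks
        self.num_blocks = num_blocks
        # Shared pool of sub-vectors: the only trainable embedding parameters.
        self.pool = nn.Parameter(torch.randn(pool_size, self.block_dim) * 0.01)
        # Fixed random (word, block) -> pool index assignment, not trained.
        g = torch.Generator().manual_seed(seed)
        assignment = torch.randint(pool_size, (vocab_size, num_blocks), generator=g)
        self.register_buffer("assignment", assignment)

    def forward(self, word_ids):
        # word_ids: (...,) -> embeddings: (..., num_blocks * block_dim)
        idx = self.assignment[word_ids]          # (..., num_blocks)
        blocks = self.pool[idx]                  # (..., num_blocks, block_dim)
        return blocks.reshape(*word_ids.shape, -1)

# Example: a 100k-word vocabulary with 512-dimensional embeddings stored
# using a pool of only 2,000 sub-vectors of dimension 64.
emb = StochasticallySharedEmbedding(vocab_size=100_000, dim=512,
                                    num_blocks=8, pool_size=2_000)
vectors = emb(torch.tensor([[1, 42, 99_999]]))   # shape (1, 3, 512)
```

An analogous shared pool could serve as the output embedding (softmax weight) matrix, which is where the abstract's reduction at both the input and output layers would come from.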

Page Count

75

Department or Program

Department of Computer Science and Engineering

Year Degree Awarded

2018
