Methods for Prediction Optimization of the Constrained State-Preserved Extreme Learning Machine

Document Type

Article

Publication Date

11-1-2020

Abstract

Finding the maximum testing accuracy has been a goal of Machine Learning since its conception, and neural networks have been the primary source of continual improvements in prediction performance. Traditionally, backpropagation has been the primary way of training neural networks, and Levenberg-Marquardt (LM) backpropagation has become the fastest such method. More recently, the Extreme Learning Machine was introduced, which randomizes the weights and biases of the hidden layer and uses the Moore-Penrose generalized inverse of a matrix to calculate the output weights and biases, providing competitive results at significantly faster training times. In this study, we continue our work on the Constrained State-Preserved Extreme Learning Machine (CSPELM) with a Forest optimization (CSPELMF) and an ε-constraint Rangefinder (CSPELMR). Furthermore, we provide hyper-parameter settings for the CSPELM that optimize accuracy against training time. Our results show that our methods outperformed LM backpropagation on a majority of the 13 tested datasets, and that the CSPELMF and CSPELMR matched or outperformed the CSPELM on all classification datasets.
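The basic Extreme Learning Machine training step described above (random hidden weights, output weights solved via the Moore-Penrose pseudoinverse) can be sketched as follows. This is a minimal illustrative sketch using NumPy, not the paper's CSPELM implementation; the function names, activation choice (tanh), and shapes are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, T, n_hidden=20):
    """Train a basic ELM: randomize hidden-layer parameters, then solve
    the output weights with the Moore-Penrose generalized inverse.
    (Illustrative sketch only; not the CSPELM from the paper.)"""
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                     # pseudoinverse solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: fit a simple 1-D regression target.
X = rng.uniform(-1.0, 1.0, size=(100, 1))
T = X.copy()  # target: identity function on [-1, 1]
W, b, beta = elm_train(X, T, n_hidden=20)
pred = elm_predict(X, W, b, beta)
mse = float(np.mean((pred - T) ** 2))
```

Because only `beta` is learned (in closed form), there is no iterative backpropagation loop, which is the source of the training-time advantage noted in the abstract.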

DOI

10.1109/ICTAI50040.2020.00103
