Document Type
Conference Proceeding
Publication Date
8-2003
Abstract
We present a new approach to estimating mixture models based on the latent maximum entropy (LME) principle, an inference principle we have proposed. LME differs both from Jaynes' maximum entropy principle and from standard maximum likelihood estimation. We demonstrate the LME principle by deriving new algorithms for mixture model estimation, and show how robust new variants of the EM algorithm can be developed. Our experiments show that estimation based on LME generally yields better results than maximum likelihood estimation, particularly when inferring latent variable models from small amounts of data.
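For context, the maximum likelihood baseline that the abstract contrasts with LME is the standard EM algorithm for mixture models. The sketch below is a minimal maximum-likelihood EM for a one-dimensional Gaussian mixture; it is the conventional estimator the paper compares against, not the paper's LME variant (the function name and quantile-based initialization are illustrative choices, not from the paper).

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=100):
    """Standard maximum-likelihood EM for a 1-D Gaussian mixture.

    This is the conventional ML estimator; the paper's LME-based
    variants modify this estimation principle, not reproduced here.
    """
    n = len(x)
    w = np.full(k, 1.0 / k)                       # mixing weights
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)  # spread initial means over the data
    var = np.full(k, np.var(x))                   # component variances
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        dens = (w / np.sqrt(2 * np.pi * var)) * np.exp(
            -0.5 * (x[:, None] - mu) ** 2 / var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from responsibility-weighted statistics
        nk = r.sum(axis=0)
        w = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var
```

On small samples this ML estimator is prone to overfitting, which is the regime where the abstract reports LME-based estimation performing better.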
Repository Citation
Wang, S., Schuurmans, D., Peng, F., & Zhao, Y. (2003). Learning Mixture Models with the Latent Maximum Entropy Principle. Proceedings of the 20th International Conference on Machine Learning, 784-791.
https://corescholar.libraries.wright.edu/knoesis/1011
Included in
Bioinformatics Commons, Communication Technology and New Media Commons, Databases and Information Systems Commons, OS and Networks Commons, Science and Technology Studies Commons
Comments
Presented at the 20th International Conference on Machine Learning, Washington, DC, August 21-24, 2003.