Document Type
Conference Proceeding
Publication Date
2003
Abstract
We present a class of unsupervised statistical learning algorithms formulated in terms of minimizing Bregman divergences, a family of generalized entropy measures defined by convex functions. We obtain novel training algorithms that extract latent structure by minimizing a Bregman divergence on training data, subject to a set of non-linear constraints that involve the hidden variables. An alternating minimization procedure with nested iterative scaling is proposed to find feasible solutions for the resulting constrained optimization problem. The convergence of this algorithm, along with its information-geometric properties, is characterized.
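For reference, the Bregman divergence generated by a strictly convex, differentiable function F is given by the standard definition below; this is the general definition only, not the specific convex functions or constraint sets developed in the paper.

% Standard definition of the Bregman divergence induced by a strictly
% convex, differentiable function F:
\[
  D_F(p, q) \;=\; F(p) - F(q) - \langle \nabla F(q),\, p - q \rangle .
\]
% Familiar special cases:
%   F(x) = \|x\|_2^2 yields the squared Euclidean distance \|p - q\|_2^2;
%   F(x) = \sum_i x_i \log x_i yields the generalized KL divergence
%   \sum_i \bigl( p_i \log (p_i / q_i) - p_i + q_i \bigr).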
Repository Citation
Wang, S., & Schuurmans, D. (2003). Learning Continuous Latent Variable Models with Bregman Divergences. Lecture Notes in Computer Science, 2842, 190-204.
https://corescholar.libraries.wright.edu/knoesis/101
DOI
10.1007/978-3-540-39624-6_16
Included in
Bioinformatics Commons, Communication Technology and New Media Commons, Databases and Information Systems Commons, OS and Networks Commons, Science and Technology Studies Commons
Comments
This paper was presented at the 14th International Conference on Algorithmic Learning Theory (ALT 2003), Sapporo, Japan, October 17-19, 2003.
The author's final version of the article is posted. The Definitive Version of Record can be found at http://link.springer.com/chapter/10.1007%2F978-3-540-39624-6_16.