Document Type

Conference Proceeding

Publication Date

2003

Abstract

We present a class of unsupervised statistical learning algorithms formulated in terms of minimizing Bregman divergences, a family of generalized entropy measures defined by convex functions. We obtain novel training algorithms that extract latent structure by minimizing a Bregman divergence on training data, subject to a set of non-linear constraints that involve the hidden variables. An alternating minimization procedure with nested iterative scaling is proposed to find feasible solutions for the resulting constrained optimization problem. We characterize the convergence of this algorithm along with its information-geometric properties.
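
For reference, the central object in the abstract is the Bregman divergence. A standard textbook definition (not quoted from the paper itself) for a strictly convex, differentiable function $F$ is

    $D_F(p \,\|\, q) = F(p) - F(q) - \langle \nabla F(q),\, p - q \rangle,$

which recovers the squared Euclidean distance for $F(x) = \|x\|^2$ and the Kullback-Leibler divergence for $F(x) = \sum_i x_i \log x_i$. The paper's exact formulation may differ in detail.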

Comments

This paper was presented at the 14th International Conference on Algorithmic Learning Theory (ALT 2003), Sapporo, Japan, October 17-19, 2003.

The author's final version of the article is posted. The definitive Version of Record can be found at http://link.springer.com/chapter/10.1007%2F978-3-540-39624-6_16.

DOI

10.1007/978-3-540-39624-6_16

