
Independent Component Analysis. Aapo Hyvärinen, Juha Karhunen, Erkki Oja.
Copyright 2001 John Wiley & Sons, Inc.
ISBNs: 0-471-40540-X (Hardback); 0-471-22131-7 (Electronic)

10 ICA by Minimization of Mutual Information

An important approach for independent component analysis (ICA) estimation, inspired by information theory, is minimization of mutual information. The motivation of this approach is that it may not be very realistic in many cases to assume that the data follows the ICA model. Therefore, we would like to develop an approach that does not assume anything about the data. What we want is a general-purpose measure of the dependence of the components of a random vector. Using such a measure, we could define ICA as a linear decomposition that minimizes that dependence measure.

Such an approach can be developed using mutual information, which is a well-motivated information-theoretic measure of statistical dependence. One of the main utilities of mutual information is that it serves as a unifying framework for many estimation principles, in particular maximum likelihood (ML) estimation and maximization of nongaussianity. In particular, this approach gives a rigorous justification for the heuristic principle of nongaussianity.

DEFINING ICA BY MUTUAL INFORMATION

Information-theoretic concepts

The information-theoretic concepts needed in this chapter were explained in Chapter 5. Readers not familiar with information theory are advised to read that chapter before this one.

We recall here very briefly the basic definitions of information theory. The differential entropy $H$ of a random vector $\mathbf{y}$ with density $p_{\mathbf{y}}(\cdot)$ is defined as

$$H(\mathbf{y}) = -\int p_{\mathbf{y}}(\boldsymbol{\eta}) \log p_{\mathbf{y}}(\boldsymbol{\eta}) \, d\boldsymbol{\eta}$$

Entropy is closely related to the code length of the random vector. A normalized version of entropy is given by negentropy $J$, which is defined as follows:

$$J(\mathbf{y}) = H(\mathbf{y}_{\mathrm{gauss}}) - H(\mathbf{y})$$

where $\mathbf{y}_{\mathrm{gauss}}$ is a gaussian random vector with the same covariance (or correlation) matrix as $\mathbf{y}$.
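To make these definitions concrete, here is a minimal numerical sketch (not from the book) that estimates the differential entropy of a one-dimensional sample with a simple histogram estimator and uses it to approximate negentropy. The function names (`entropy_hist`, `negentropy`) and the histogram-based estimator are illustrative choices, not the authors' method:

```python
import numpy as np

def entropy_hist(x, bins=100):
    """Histogram estimate of the differential entropy
    H(x) = -integral p(x) log p(x) dx  for a 1-D sample."""
    p, edges = np.histogram(x, bins=bins, density=True)
    w = np.diff(edges)
    nz = p > 0  # skip empty bins, where p log p -> 0
    return -np.sum(p[nz] * np.log(p[nz]) * w[nz])

def negentropy(x, bins=100):
    """Negentropy J(x) = H(x_gauss) - H(x), where x_gauss is gaussian
    with the same variance; for a gaussian, H = 0.5*log(2*pi*e*var)."""
    h_gauss = 0.5 * np.log(2 * np.pi * np.e * np.var(x))
    return h_gauss - entropy_hist(x, bins)

rng = np.random.default_rng(0)
n = 100_000
print(negentropy(rng.normal(size=n)))          # close to 0: gaussian
print(negentropy(rng.uniform(-1, 1, size=n)))  # positive: subgaussian
print(negentropy(rng.laplace(size=n)))         # positive: supergaussian
```

Because negentropy is zero exactly for gaussian variables and positive otherwise, it serves as a natural measure of nongaussianity; the histogram estimator used here is crude, and practical ICA algorithms rely on analytic approximations of negentropy instead.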
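Returning to the opening idea of defining ICA as the linear decomposition that minimizes a dependence measure, the following sketch illustrates it in the simplest setting. It assumes two independent uniform sources, an arbitrary mixing matrix, and a crude binned mutual information estimator (`mutual_info_binned`); none of these choices come from the chapter. For whitened two-dimensional data, any remaining unmixing is a rotation, so one can scan the rotation angle and pick the one minimizing the estimated mutual information between the two outputs:

```python
import numpy as np

def mutual_info_binned(x, y, bins=40):
    """Crude binned estimate of I(x;y) = sum p(x,y) log[p(x,y)/(p(x)p(y))]."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x, column vector
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y, row vector
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px * py)[nz]))

rng = np.random.default_rng(0)
s = rng.uniform(-1, 1, size=(2, 50_000))          # independent sources
x = np.array([[2.0, 1.0], [1.0, 1.0]]) @ s        # linear mixtures

# Whiten: after whitening, any remaining unmixing is a pure rotation.
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = E @ np.diag(1.0 / np.sqrt(d)) @ E.T @ x

# Scan rotation angles; the estimated mutual information between the
# rotated components dips near the angle that separates the sources.
angles = np.linspace(0.0, np.pi / 2, 91)
mis = [mutual_info_binned(*(np.array([[np.cos(a), -np.sin(a)],
                                      [np.sin(a),  np.cos(a)]]) @ z))
       for a in angles]
print(f"MI-minimizing rotation: {np.degrees(angles[np.argmin(mis)]):.1f} deg")
```

An exhaustive angle scan does not generalize to higher dimensions; the point is only that statistical independence of the outputs, as measured by mutual information, singles out the correct unmixing transformation.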
