Information Theory, Inference, and Learning Algorithms, Part 8
However, the probability density at the origin is e^{k/2} ≈ 10^{217} times greater than the density at this shell, where most of the probability mass lies.

34 Independent Component Analysis and Latent Variable Modelling

34.1 Latent variable models

Many statistical models are generative models (that is, models that specify a full probability density over all variables in the situation) that make use of latent variables to describe a probability distribution over observables. Examples of latent variable models include Chapter 22's mixture models, which model the observables as coming from a superposed mixture of simple probability distributions (the latent variables are the unknown class labels of the examples); hidden Markov models (Rabiner and Juang, 1986; Durbin et al., 1998); and factor analysis.

The decoding problem for error-correcting codes can also be viewed in terms of a latent variable model (figure 34.1). In that case the encoding matrix G is normally known in advance. In latent variable modelling, the parameters equivalent to G are usually not known and must be inferred from the data along with the latent variables s. Usually the latent variables have a simple distribution, often a separable distribution. Thus when we fit a latent variable model, we are finding a description of the data in terms of 'independent components'. The independent component analysis algorithm corresponds to perhaps the simplest possible latent variable model with continuous latent variables.

34.2 The generative model for independent component analysis

A set of N observations D = {x^{(n)}}_{n=1}^{N} is assumed to be generated as follows. Each J-dimensional vector x is a linear mixture of I underlying source signals s:

    x = Gs,    (34.1)

where the matrix of mixing coefficients G is not known. The simplest algorithm results if we assume that the number of sources is equal to the number of observations, i.e., I = J. Our aim is to recover the source variables s.
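The generative process of equation (34.1) is straightforward to simulate. The sketch below draws N source vectors from a separable, heavy-tailed prior and mixes them with a fixed matrix G; the particular prior (Laplacian), the mixing matrix, and the sample size are illustrative assumptions, not values given in the text.

# Minimal sketch of the ICA generative model, x = Gs (equation 34.1),
# assuming I = J = 2 and a Laplacian source prior for illustration.
import numpy as np

rng = np.random.default_rng(0)

I = J = 2      # number of sources equals number of observations
N = 1000       # number of observations x^(n)

# Separable source distribution: each s_i drawn independently.
S = rng.laplace(loc=0.0, scale=1.0, size=(I, N))

# Mixing matrix G (unknown to the learner; fixed here for the simulation).
G = np.array([[1.0, 0.5],
              [0.3, 1.0]])

# Each column of X is one observed vector x^(n) = G s^(n).
X = G @ S      # shape (J, N)

Recovering the sources S from the observations X alone, with G unknown, is precisely the problem that the independent component analysis algorithm addresses in the remainder of the chapter.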