Recurrent Neural Networks for Prediction, Chapter 12

Recurrent Neural Networks for Prediction
Authored by Danilo P. Mandic, Jonathon A. Chambers
Copyright © 2001 John Wiley & Sons Ltd
ISBNs: 0-471-49517-4 (Hardback); 0-470-84535-X (Electronic)

12 Exploiting Inherent Relationships Between Parameters in Recurrent Neural Networks

Perspective

Optimisation of complex neural network parameters is a rather involved task. It becomes particularly difficult for large-scale networks, such as modular networks, and for networks with complex interconnections, such as feedback networks. Therefore, if an inherent relationship between some of the free parameters of a neural network can be found, one which holds at every time instant for a dynamical network, it would help to reduce the number of degrees of freedom in the optimisation task of learning in a particular network. We derive such relationships between the gain β in the nonlinear activation function Φ of a neuron and the learning rate η of the underlying learning algorithm, for both gradient descent and extended Kalman filter trained recurrent neural networks. The analysis is then extended in the same spirit to modular neural networks. Both networks with parallel modules and networks with nested (serial) modules are analysed. A detailed analysis is provided for the latter, since the former can be considered a linear combination of modules that consist of feedforward or recurrent neural networks. For all these cases, the static and dynamic equivalence between an arbitrary neural network described by β, η and W(k) and a referent network described by βR = 1, ηR and WR(k) is derived. A deterministic relationship between these parameters is provided, which allows one degree of freedom less in the nonlinear optimisation task of learning in this framework. This is particularly significant for large-scale networks of any type.

Introduction

When using neural networks, many of their parameters are chosen empirically. Apart from the choice of topology, architecture and interconnection, the parameters ...
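The gradient descent case admits a compact numerical illustration of the kind of equivalence described above. For a logistic activation with gain β, Φ_β(v) = 1/(1 + exp(−βv)) = Φ_1(βv), so a network with parameters (β, W) is statically equivalent to a gain-one network with weights βW; matching the weight-update trajectories of instantaneous gradient descent then requires the referent learning rate ηR = β²η. The sketch below checks both the static and the dynamic equivalence for a single logistic neuron. It is a minimal illustration under these assumptions, not code from the book, and names such as train_neuron and the training data are invented for the example.

```python
# Minimal sketch: static and dynamic (β, η) <-> (βR = 1, ηR, WR) equivalence
# for one logistic neuron trained by instantaneous gradient descent.
import numpy as np

def sigmoid(v, beta=1.0):
    """Logistic activation with slope (gain) beta."""
    return 1.0 / (1.0 + np.exp(-beta * v))

def train_neuron(X, d, w0, beta, eta, epochs=50):
    """Sample-by-sample gradient descent; returns final weights and outputs."""
    w = w0.copy()
    outputs = []
    for _ in range(epochs):
        for x, target in zip(X, d):
            y = sigmoid(w @ x, beta)
            e = target - y
            # d/dw sigmoid(beta * w.x) = beta * y * (1 - y) * x
            w += eta * e * beta * y * (1.0 - y) * x
            outputs.append(y)
    return w, np.array(outputs)

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))          # hypothetical input vectors
d = rng.uniform(0.2, 0.8, size=20)        # hypothetical targets
w0 = rng.standard_normal(3)

beta, eta = 2.5, 0.1
# Arbitrary network: gain beta, learning rate eta, initial weights w0.
w_a, y_a = train_neuron(X, d, w0, beta=beta, eta=eta)
# Referent network: gain 1, weights scaled by beta, learning rate beta**2 * eta.
w_b, y_b = train_neuron(X, d, beta * w0, beta=1.0, eta=beta**2 * eta)

print(np.allclose(y_a, y_b))         # True: identical output trajectories
print(np.allclose(beta * w_a, w_b))  # True: WR(k) = beta * W(k) at every step
```

Both checks print True: the referent network reproduces the output of the original at every training step while one free parameter (the gain) has been fixed to unity, which is the degree-of-freedom reduction the chapter exploits.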
