Recurrent Neural Networks for Prediction
Authored by Danilo P. Mandic, Jonathon A. Chambers
Copyright 2001 John Wiley & Sons Ltd
ISBNs: 0-471-49517-4 (Hardback); 0-470-84535-X (Electronic)

8 Data-Reusing Adaptive Learning Algorithms

Perspective

In this chapter, a class of data-reusing learning algorithms for recurrent neural networks is analysed. This is achieved starting from the case of feedforward neurons, through to the case of networks with feedback, trained with gradient descent learning algorithms. It is shown that the class of data-reusing algorithms outperforms the standard (a priori) algorithms for nonlinear adaptive filtering in terms of the instantaneous prediction error. The relationships between the a priori and a posteriori errors, the learning rate and the norm of the input vector are derived in this context.

Introduction

The so-called a posteriori error estimates provide us with, roughly speaking, some information after computation. From a practical point of view, they are valuable and useful, since real-life problems are often nonlinear, large, ill-conditioned, unstable or have multiple solutions and singularities (Hlavacek and Krizek, 1998). The a posteriori error estimators are local in a computational sense, and their computational complexity should be far less than that of computing an exact numerical solution of the problem. An account of the essence of a posteriori techniques is given in Appendix F. In the area of linear adaptive filters, the most comprehensive overviews of a posteriori techniques can be found in Treichler (1987) and Ljung and Soderstrom (1983). These techniques are also known as data-reusing techniques (Douglas and Rupp, 1997; Roy and Shynk, 1989; Schnaufer and Jenkins, 1993; Sheu et al., 1992). The quality of an a posteriori error estimator is often measured by its efficiency index, i.e. the ratio of the estimated error to the true error. It has been shown that the a posteriori approach in the neural network …
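To make the relationship between the a priori and a posteriori errors concrete, the following minimal sketch applies a single gradient-descent (LMS) update to a linear filter; the variable names and numbers are illustrative assumptions, not notation from the chapter. For one update, the a posteriori error satisfies e_post(k) = (1 - eta*||x(k)||^2) * e(k), so its magnitude is smaller than that of the a priori error whenever 0 < eta*||x(k)||^2 < 2.

    # Minimal sketch (assumed names, not the book's notation): a single
    # LMS update viewed through its a priori and a posteriori errors.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(5)       # current input vector x(k)
    d = 1.0                          # current desired response d(k)
    w = np.zeros(5)                  # current weight vector w(k)
    eta = 0.1                        # learning rate

    e = d - w @ x                    # a priori error, computed with w(k)
    w_next = w + eta * e * x         # standard LMS weight update
    e_post = d - w_next @ x          # a posteriori error, computed with w(k+1)

    # The two printed values coincide: e_post = (1 - eta*||x||^2) * e,
    # hence |e_post| < |e| whenever 0 < eta*||x||^2 < 2.
    print(e_post, (1 - eta * (x @ x)) * e)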
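Repeated reuse of the current input-desired pair, the sense in which the references above use the term data-reusing, can be illustrated in the same way. After L reuses the ratio of the a posteriori to the a priori error is (1 - eta*||x||^2)^L, so the instantaneous prediction error shrinks geometrically with the number of reuses; again, this is a hedged illustration with assumed names rather than the book's derivation.

    # Assumed illustration of repeated data reuse: the same pair (x, d)
    # is used for L consecutive LMS updates before the next sample arrives.
    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.standard_normal(8)
    d = 0.5
    eta = 0.05

    for L in (1, 2, 5, 20):
        w = np.zeros(8)
        e = d - w @ x                        # a priori error at this sample
        for _ in range(L):
            w = w + eta * (d - w @ x) * x    # reuse the same (x, d) pair
        e_post = d - w @ x                   # a posteriori error after L reuses
        print(L, e_post / e, (1 - eta * (x @ x)) ** L)   # the two ratios agree

In the limit of many reuses the a posteriori error on the current sample vanishes, which is consistent with the chapter's claim that the data-reusing class outperforms the standard a priori algorithms in terms of the instantaneous prediction error.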