Recurrent Neural Networks for Prediction
Authored by Danilo P. Mandic, Jonathon A. Chambers
Copyright © 2001 John Wiley & Sons Ltd
ISBNs: 0-471-49517-4 (Hardback); 0-470-84535-X (Electronic)

10 Convergence of Online Learning Algorithms in Neural Networks

Perspective

An analysis of convergence of real-time algorithms for online learning in recurrent neural networks is presented. For convenience, the analysis is focused on the real-time recurrent learning (RTRL) algorithm for a recurrent perceptron. Using the assumption of contractivity of the activation function of a neuron and relaxing the rigid assumptions of the fixed optimal weights of the system, the analysis presented is general and is applicable to a wide range of existing algorithms. It is shown that some of the results obtained for stochastic gradient algorithms for linear systems can be considered as a bound for stability of RNN-based algorithms, as long as the contractivity condition holds.

Introduction

The following criteria (Bershad et al. 1990) are most commonly used to assess the performance of adaptive algorithms:

1. Convergence (consistency of the statistics).
2. Transient behaviour (how quickly the algorithm reacts to changes in the statistics of the input).
3. Convergence rate (how quickly the algorithm approaches the optimal solution), which can be linear, quadratic or superlinear.

The standard approach for the analysis of convergence of learning algorithms for linear adaptive filters is to look at convergence of the mean weight error vector, convergence in the mean square and the steady-state misadjustment (Gholkar 1990; Haykin 1996a; Kuan and Hornik 1991; Widrow and Stearns 1985). The analysis of convergence of steepest-descent-based algorithms has been ongoing ever since their introduction (Guo and Ljung 1995; Ljung 1984; Slock 1993; Tarrab and Feuer 1988). Some of the recent results consider the exact expectation analysis of the LMS algorithm for linear adaptive filters (Douglas and Pan 1995) and
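To make these three measures concrete, the standard LMS forms from the linear adaptive filtering literature cited above can be written as follows. This is a sketch under the usual independence assumptions, with weight-error vector v(k) = w(k) - w_o, step size mu and input correlation matrix R = E[x(k)x^T(k)]; the exact constants vary between the cited analyses.

\begin{align}
  \text{convergence in the mean:} &\quad \mathrm{E}[\mathbf{v}(k)] \to \mathbf{0},
      \quad 0 < \mu < \frac{2}{\lambda_{\max}(\mathbf{R})}, \\
  \text{mean-square convergence:} &\quad \lim_{k \to \infty} \mathrm{E}\big[\|\mathbf{v}(k)\|^{2}\big] < \infty,
      \quad 0 < \mu < \frac{2}{\operatorname{tr}(\mathbf{R})}, \\
  \text{steady-state misadjustment:} &\quad
      \mathcal{M} = \frac{J(\infty) - J_{\min}}{J_{\min}} \approx \frac{\mu \operatorname{tr}(\mathbf{R})}{2},
\end{align}

where J(k) denotes the mean-square error at iteration k and J_min its Wiener minimum.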
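The rate terminology in criterion 3 follows the standard numerical-analysis definitions; for an iterate w(k) approaching an optimum w_o they read:

\begin{align}
  \text{linear:} &\quad \|\mathbf{w}(k+1) - \mathbf{w}_o\| \le c\,\|\mathbf{w}(k) - \mathbf{w}_o\|, \quad 0 < c < 1, \\
  \text{quadratic:} &\quad \|\mathbf{w}(k+1) - \mathbf{w}_o\| \le c\,\|\mathbf{w}(k) - \mathbf{w}_o\|^{2}, \\
  \text{superlinear:} &\quad \lim_{k \to \infty} \frac{\|\mathbf{w}(k+1) - \mathbf{w}_o\|}{\|\mathbf{w}(k) - \mathbf{w}_o\|} = 0.
\end{align}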
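Since the analysis above centres on the RTRL algorithm for a recurrent perceptron, the following minimal sketch shows the update being analysed, for an assumed single-neuron NARMA-style structure with external input taps, fed-back outputs and a bias. All function names and parameter values are illustrative, not taken from the book; tanh stands in for a generic squashing activation whose slope governs the contractivity condition.

import numpy as np

def rtrl_recurrent_perceptron(x, d, n_in=3, n_fb=1, eta=0.01, seed=0):
    """Minimal RTRL sketch for a single recurrent perceptron (illustrative).

    Input to the neuron at time k: n_in taps of the external signal x,
    n_fb past outputs fed back, and a bias term.
    """
    rng = np.random.default_rng(seed)
    n_w = n_in + n_fb + 1                    # total number of weights
    w = rng.normal(scale=0.1, size=n_w)      # weight vector w(k)
    y_hist = np.zeros(n_fb)                  # past outputs y(k-1), ..., y(k-n_fb)
    pi = np.zeros((n_fb, n_w))               # stored sensitivities dy(k-j)/dw
    phi, dphi = np.tanh, lambda v: 1.0 - np.tanh(v) ** 2
    y_out = np.zeros(len(d))

    for k in range(n_in - 1, len(d)):
        # u(k) = [x(k), ..., x(k-n_in+1), y(k-1), ..., y(k-n_fb), 1]
        u = np.concatenate([x[k - n_in + 1 : k + 1][::-1], y_hist, [1.0]])
        v = u @ w
        y = phi(v)
        # RTRL sensitivity recursion:
        # dy(k)/dw = phi'(v(k)) * (u(k) + sum_j w_fb[j] * dy(k-j)/dw)
        w_fb = w[n_in : n_in + n_fb]         # weights on the fed-back outputs
        pi_now = dphi(v) * (u + w_fb @ pi)
        e = d[k] - y                         # instantaneous output error
        w = w + eta * e * pi_now             # gradient step on e(k)^2 / 2
        y_hist = np.roll(y_hist, 1); y_hist[0] = y
        pi = np.roll(pi, 1, axis=0); pi[0] = pi_now
        y_out[k] = y
    return w, y_out

If the activation slope is bounded by some gamma < 1 (for example a tanh with slope parameter below unity), the fed-back part of the sensitivity recursion is contracted at every step, which is the property the convergence analysis exploits.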