Recurrent Neural Networks for Prediction
Authored by Danilo P. Mandic, Jonathon A. Chambers
Copyright 2001 John Wiley & Sons Ltd
ISBNs: 0-471-49517-4 (Hardback); 0-470-84535-X (Electronic)

7

Stability Issues in RNN Architectures

Perspective

The focus of this chapter is on the stability and convergence of relaxation realised through NARMA recurrent neural networks. Unlike other commonly used approaches, which mostly exploit Lyapunov stability theory, the main mathematical tool employed in this analysis is the contraction mapping theorem (CMT), together with the fixed point iteration (FPI) technique. This enables derivation of asymptotic stability (AS) and global asymptotic stability (GAS) criteria for neural relaxive systems. For rigour, existence, uniqueness, convergence and convergence rate are considered, and the analysis is provided for a range of activation functions and recurrent neural network architectures.

Introduction

Stability and convergence are key issues in the analysis of dynamical adaptive systems, since the analysis of the dynamics of an adaptive system can boil down to the discovery of an attractor (a stable equilibrium) or some other kind of fixed point.
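The CMT/FPI approach can be illustrated numerically. The sketch below uses a hypothetical scalar relaxive system x_{k+1} = a·tanh(x_k) + b (this particular map and its parameters are illustrative assumptions, not taken from the chapter): since |tanh'(x)| ≤ 1, choosing |a| < 1 makes the map a contraction, so by the CMT a unique fixed point exists and fixed point iteration reaches it from any initial state, which is exactly the GAS property discussed above.

```python
import math

def fpi(f, x0, tol=1e-12, max_iter=1000):
    """Fixed point iteration x_{k+1} = f(x_k); returns (x*, iterations used)."""
    x = x0
    for k in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next, k + 1
        x = x_next
    return x, max_iter

# Hypothetical scalar relaxive system: f(x) = a*tanh(x) + b.
# With |a| < 1 we have |f'(x)| = |a|*(1 - tanh(x)**2) <= |a| < 1,
# i.e. f is a contraction on the real line, so FPI converges to the
# same unique fixed point regardless of the starting state (GAS).
a, b = 0.5, 0.3
f = lambda x: a * math.tanh(x) + b

x1, _ = fpi(f, x0=10.0)    # start far to the right
x2, _ = fpi(f, x0=-10.0)   # start far to the left
assert abs(x1 - x2) < 1e-9        # both runs reach the same equilibrium
assert abs(f(x1) - x1) < 1e-9     # and it is indeed a fixed point of f
```

Choosing |a| ≥ 1 instead would break the contraction condition, and the iteration may then fail to converge or admit multiple equilibria.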
In neural associative memories, for instance, the locally stable equilibrium states (attractors) store information and form neural memory. Neural dynamics in that case can be considered from two aspects: convergence of state variables (memory recall), and the number, position, local stability and domains of attraction of equilibrium states (memory capacity). Conveniently, LaSalle's invariance principle (LaSalle 1986) is used to analyse the state convergence, whereas stability of equilibria is analysed using some sort of linearisation (Jin and Gupta 1996). In addition, the dynamics and convergence of learning algorithms for most types of neural networks may be explained and analysed using fixed point theory. Let us first briefly introduce some basic definitions. The full definitions and further details are given in Appendix I.
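As a convenience, the standard statement of the contraction mapping theorem underlying the analysis can be recalled here (this is the textbook formulation; the chapter's own definitions are given in Appendix I):

```latex
\textbf{Definition.} A mapping $F : X \to X$ on a complete metric space
$(X, d)$ is a \emph{contraction} if there exists $\gamma \in [0, 1)$
such that
\[
  d\bigl(F(x), F(y)\bigr) \le \gamma\, d(x, y)
  \quad \forall\, x, y \in X.
\]

\textbf{Contraction mapping theorem.} A contraction $F$ has a unique
fixed point $x^{*} = F(x^{*})$, and the fixed point iteration
$x_{k+1} = F(x_k)$ converges to $x^{*}$ from any $x_0 \in X$, at the
geometric rate
\[
  d(x_k, x^{*}) \le \gamma^{k}\, d(x_0, x^{*}).
\]
```

The constant $\gamma$ thus controls both existence/uniqueness of the equilibrium and the convergence rate, which is why all four properties (existence, uniqueness, convergence, rate) can be treated within the single CMT/FPI framework.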