4 Activation Functions Used in Neural Networks

4.1 Perspective

The choice of nonlinear activation function has a key influence on the complexity and performance of artificial neural networks; note that the term neural network will be used interchangeably with the term artificial neural network. The brief introduction to activation functions given in Chapter 3 is therefore extended. Although sigmoidal nonlinear activation functions are the most common choice, there is no strong a priori justification why models based upon such functions should be preferred to others. We therefore introduce neural networks as universal approximators of functions and trajectories, based upon the Kolmogorov universal approximation theorem, which is valid for both feedforward and recurrent neural networks. From these universal approximation properties, we then demonstrate the need for a sigmoidal activation function within a neuron. To reduce computational complexity, approximations to sigmoid functions are further discussed. The use of nonlinear activation functions suitable for hardware realisation of neural networks is also considered. For rigour, we extend the analysis to complex activation functions and recognise that a suitable complex activation function is a Möbius transformation. In that context, a framework for rigorous analysis of some inherent properties of neural networks, such as fixed points, nesting and invertibility, based upon the theory of modular groups of Möbius transformations, is provided. All the relevant definitions, theorems and other mathematical terms are given in Appendix B and Appendix C.

4.2 Introduction

A century ago, a set of 23 then-unsolved problems in mathematics was proposed by David Hilbert (Hilbert 1901-1902). In his lecture 'Mathematische Probleme' at the second International Congress of
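To make two of the Perspective's points concrete, the following minimal Python sketch shows (i) the logistic sigmoid with slope parameter beta alongside a hard-limited piecewise-linear approximation of the kind used to reduce computational complexity in hardware, and (ii) that the hyperbolic tangent, a standard sigmoidal activation, is itself a Möbius transformation of w = e^{2z}. The slope value 0.25 and the clamping form are illustrative assumptions only, not the specific approximations derived later in the chapter.

    import numpy as np

    # Logistic sigmoid with slope parameter beta.
    def logistic(x, beta=1.0):
        return 1.0 / (1.0 + np.exp(-beta * x))

    # A hardware-friendly approximation (assumed form for illustration):
    # a hard-limited straight line through (0, 0.5) whose slope 0.25
    # matches the derivative of the logistic function at the origin.
    def piecewise_linear(x, slope=0.25):
        return np.clip(0.5 + slope * x, 0.0, 1.0)

    # A Mobius transformation (a*z + b) / (c*z + d), requiring ad - bc != 0.
    def mobius(z, a=1.0, b=-1.0, c=1.0, d=1.0):
        assert a * d - b * c != 0, "degenerate (constant) transformation"
        return (a * z + b) / (c * z + d)

    if __name__ == "__main__":
        x = np.linspace(-6.0, 6.0, 121)
        # Worst-case gap between the sigmoid and its cheap approximation.
        print(np.max(np.abs(logistic(x) - piecewise_linear(x))))

        # tanh(z) = (e^{2z} - 1) / (e^{2z} + 1), i.e. the Mobius map
        # (w - 1) / (w + 1) evaluated at w = e^{2z}; check at one point.
        z = 0.3 + 0.2j
        print(np.tanh(z), mobius(np.exp(2.0 * z)))

The clamp keeps the approximation's output in [0, 1], like the logistic function, at the cost of a zero gradient outside the linear region; the numerical check in the last two lines illustrates why sigmoidal activations fall naturally within the Möbius transformation framework developed later in the chapter.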