On Training Feed Forward Neural Networks For Approximation Problems


Many modified and new algorithms have been proposed for training feed-forward neural networks, many of them offering a very fast convergence rate for networks of reasonable size. In this paper we examine the similarities and differences between these training methods and compare their performance when applied to the approximation problem. All of the algorithms use the gradient of the performance function (also called the energy or error function) to determine how to adjust the weights so that the performance function is minimized; the back-propagation algorithm is used to compute this gradient and thereby increase the speed of training. The algorithms differ in the computations they perform, and hence in the form of their search directions and in their storage requirements; however, none of them has global properties that make it suited to all problems.
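To make the shared structure of these methods concrete, the following is a minimal sketch (not taken from the paper) of gradient-based training for an approximation problem: a one-hidden-layer feed-forward network fitted to f(x) = sin(x) by steepest descent, with the gradient of the mean-squared error computed by back-propagation. The function names, network size, learning rate, and epoch count are all illustrative assumptions; the more sophisticated algorithms the paper compares differ mainly in how they turn this same gradient into a search direction.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(W1, b1, W2, b2, x):
    """Forward pass: tanh hidden layer, linear output."""
    h = np.tanh(x @ W1 + b1)   # hidden-layer activations
    y = h @ W2 + b2            # network output
    return h, y

def train(x, t, hidden=20, lr=0.05, epochs=5000):
    """Steepest-descent training on E = mean((y - t)^2) / 2."""
    W1 = rng.normal(0.0, 1.0, (1, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.1, (hidden, 1))
    b2 = np.zeros(1)
    n = len(x)
    for _ in range(epochs):
        h, y = forward(W1, b1, W2, b2, x)
        e = y - t                         # output error
        # Back-propagation: gradients of E w.r.t. each weight matrix
        gW2 = h.T @ e / n
        gb2 = e.mean(axis=0)
        dh = (e @ W2.T) * (1.0 - h**2)    # chain rule through tanh
        gW1 = x.T @ dh / n
        gb1 = dh.mean(axis=0)
        # Weight update along the negative gradient (steepest descent)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return W1, b1, W2, b2

# Approximation problem: fit sin(x) on [-pi, pi] from 100 samples
x = np.linspace(-np.pi, np.pi, 100).reshape(-1, 1)
t = np.sin(x)
params = train(x, t)
_, y = forward(*params, x)
mse = float(np.mean((y - t) ** 2))
```

The search direction here is simply the negative gradient; the conjugate-gradient and quasi-Newton variants surveyed in the paper replace that choice with directions built from gradient history or curvature estimates, trading extra computation and storage for faster convergence.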