Idiap Research Institute
The Interchangeability of Learning Rate and Gain in Backpropagation Neural Networks
Type of publication: Journal paper
Citation: Thimm-96.1
Journal: Neural Computation
Volume: 8
Number: 2
Year: 1996
ISSN: 0899-7667
Abstract: The backpropagation algorithm is widely used for training multilayer neural networks. In this publication the gain of its activation function(s) is investigated. Specifically, it is proven that changing the gain of the activation function is equivalent to changing the learning rate and the weights. This simplifies the backpropagation learning rule by eliminating one of its parameters. The theorem can be extended to hold for some well-known variations on the backpropagation algorithm, such as using a momentum term, flat spot elimination, or adaptive gain. Furthermore, it is successfully applied to compensate for the non-standard gain of optical sigmoids in optical neural networks.
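The equivalence stated in the abstract can be checked numerically. The following sketch is an illustration, not code from the paper: the network shape, data, and names (sigmoid, train, X, T) are assumptions, biases are omitted for brevity, and the mapping used is the standard form of the equivalence for the logistic sigmoid, under which a network with gain gamma and learning rate eta matches a gain-1 network whose weights are scaled by gamma and whose learning rate is gamma**2 * eta.

```python
import numpy as np

def sigmoid(x, gain=1.0):
    """Logistic activation with a gain (steepness) parameter."""
    return 1.0 / (1.0 + np.exp(-gain * x))

def train(W1, W2, gain, lr, X, T, epochs=50):
    """Plain batch backpropagation (squared-error loss) for a
    one-hidden-layer network; returns the outputs after training."""
    for _ in range(epochs):
        # Forward pass.
        H = sigmoid(X @ W1, gain)          # hidden activations
        Y = sigmoid(H @ W2, gain)          # network outputs
        # Backward pass; the derivative of the gained sigmoid
        # is gain * y * (1 - y).
        d2 = (Y - T) * gain * Y * (1 - Y)
        d1 = (d2 @ W2.T) * gain * H * (1 - H)
        W2 -= lr * H.T @ d2
        W1 -= lr * X.T @ d1
    return Y

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))                # toy inputs (assumed data)
T = rng.uniform(size=(8, 2))               # toy targets
W1 = rng.normal(size=(3, 5))
W2 = rng.normal(size=(5, 2))

gamma, eta = 2.5, 0.1
# Network A: gain gamma, learning rate eta.
Ya = train(W1.copy(), W2.copy(), gamma, eta, X, T)
# Network B: gain 1, weights scaled by gamma, learning rate gamma**2 * eta.
Yb = train(gamma * W1, gamma * W2, 1.0, gamma**2 * eta, X, T)

print(np.max(np.abs(Ya - Yb)))             # agrees to machine precision
```

Both parameterizations produce identical outputs at every training step (up to floating-point rounding), since the scaled weights preserve all pre-activations and the gamma**2 factor on the learning rate absorbs the gain terms that appear in both the forward and backward passes.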
Keywords: (adaptive) learning rate, activation function, adaptive steepness, backpropagation, bias, connectionism, gain, initial weight, multilayer neural network, neural computation, neural computing, neural network, neurocomputing, optical implementation, sigmoid steepness, slope
Projects: Idiap
Authors: Thimm, Georg; Moerland, Perry; Fiesler, Emile
Attachments
  • gain96.pdf