CONF
Saxena-97.3/IDIAP
Handwritten Digit Recognition with Binary Optical Perceptron
Saxena, Indu
Moerland, Perry
Fiesler, Emile
Pourzand, A. R.
Gerstner, W. (Ed.)
Germond, A. (Ed.)
Hasler, M. (Ed.)
Nicoud, J.-D. (Ed.)
EXTERNAL
https://publications.idiap.ch/attachments/reports/1997/rr97-15.pdf
PUBLIC
Related documents: https://publications.idiap.ch/index.php/publications/showcite/saxena-97.4
Proceedings of the International Conference on Artificial Neural Networks (ICANN'97)
Lecture Notes in Computer Science
1327
1253-1258
1997
Springer-Verlag
Berlin
IDIAP-RR 97-15
Binary weights are favored in electronic and optical hardware implementations of neural networks, as they lead to improved system speeds. Optical neural networks based on fast ferroelectric liquid crystal binary-level devices can benefit from liquid crystal response times that are improved by many orders of magnitude. An optimized learning algorithm for all-positive perceptrons is simulated on a limited data set of handwritten digits, and the resulting network is implemented optically. First gray-scale and then binary inputs and weights are used in recall mode. Comparing the results on the example data set, the network with binarized inputs and weights shows almost no loss in performance.
REPORT
Saxena-97.4/IDIAP
Handwritten Digit Recognition with Binary Optical Perceptron
Saxena, Indu
Moerland, Perry
Fiesler, Emile
Pourzand, A. R.
EXTERNAL
https://publications.idiap.ch/attachments/reports/1997/rr97-15.pdf
PUBLIC
Idiap-RR-15-1997
1997
IDIAP
Published in "Proceedings of the International Conference on Artificial Neural Networks (ICANN'97)"
Binary weights are favored in electronic and optical hardware implementations of neural networks, as they lead to improved system speeds. Optical neural networks based on fast ferroelectric liquid crystal binary-level devices can benefit from liquid crystal response times that are improved by many orders of magnitude. An optimized learning algorithm for all-positive perceptrons is simulated on a limited data set of handwritten digits, and the resulting network is implemented optically. First gray-scale and then binary inputs and weights are used in recall mode. Comparing the results on the example data set, the network with binarized inputs and weights shows almost no loss in performance.
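Illustrative sketch (not the paper's actual algorithm, data set, or optical setup): the abstract describes running an all-positive perceptron in recall mode, first with gray-scale inputs and weights and then with binarized ones, and comparing the outputs. The snippet below assumes synthetic random data, winner-take-all recall, and simple mid-range thresholds for binarization; all of these are assumptions made for the sketch, since the record does not specify the binarization scheme or the optimized learning algorithm.

# Sketch only: compare perceptron recall with gray-scale vs. binarized
# inputs and weights. Data, thresholds, and weight initialization are
# illustrative assumptions, not the paper's actual method.
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_classes, n_samples = 64, 10, 200           # e.g. 8x8 digit-sized inputs
X = rng.random((n_samples, n_inputs))                   # gray-scale inputs in [0, 1]

# All-positive (non-negative) weights, as used in an all-positive perceptron.
W = rng.random((n_classes, n_inputs))

def recall(inputs, weights):
    # Perceptron recall: winner-take-all over the weighted sums.
    return np.argmax(inputs @ weights.T, axis=1)

# Binarize inputs and weights with simple thresholds (an assumption; the
# report would specify the actual binarization used for the optical devices).
Xb = (X >= 0.5).astype(float)
Wb = (W >= np.median(W)).astype(float)

gray_out = recall(X, W)
bin_out = recall(Xb, Wb)
agreement = np.mean(gray_out == bin_out)
print(f"Recall agreement between gray-scale and binarized network: {agreement:.2%}")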