Connectionist Quantization Functions
Type of publication: | Conference paper |
Citation: | Lundin-96 |
Booktitle: | Proceedings of the '96 SIPAR-Workshop on Parallel and Distributed Computing |
Year: | 1996 |
Location: | Geneva, Switzerland |
Organization: | Scientific and Parallel Computing Group, University of Geneva |
Abstract: | One of the main strengths of connectionist systems, also known as neural networks, is their massive parallelism. However, most neural networks are simulated on serial computers, where the advantage of massive parallelism is lost. For large and real-world applications, parallel hardware implementations are therefore essential. Since a discretization or quantization of the neural network parameters is of great benefit for both analog and digital hardware implementations, such quantizations are the focus of study in this paper. In 1987, a successful weight discretization method was developed that is flexible and produces networks with few discretization levels without significant loss of performance. However, recent studies have shown that the chosen quantization function is not optimal. In this paper, new quantization functions are introduced and evaluated to improve the performance of this flexible weight discretization method. |
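The abstract centers on quantization functions that map continuous network weights onto a small set of discrete levels. As a minimal sketch of the general idea, not the specific functions studied in the paper, a uniform weight quantizer could look like the following; the name `uniform_quantize` and the parameters `num_levels` and `w_max` are illustrative assumptions:

```python
import numpy as np

def uniform_quantize(weights, num_levels=7, w_max=None):
    """Snap continuous weights to num_levels evenly spaced values in
    [-w_max, w_max]. Illustrative sketch only; the paper evaluates
    alternative quantization functions, not this particular one."""
    w = np.asarray(weights, dtype=float)
    if w_max is None:
        w_max = float(np.max(np.abs(w))) or 1.0  # avoid a zero range
    step = 2.0 * w_max / (num_levels - 1)        # spacing between levels
    q = np.round((w + w_max) / step) * step - w_max  # snap to nearest level
    return np.clip(q, -w_max, w_max)

# Example: three levels {-0.9, 0.0, 0.9} for this weight vector
print(uniform_quantize([-0.9, -0.1, 0.3, 0.75], num_levels=3))
# -> [-0.9  0.   0.   0.9]
```

Fewer discretization levels reduce the storage and arithmetic precision a hardware implementation needs, which is why the abstract ties parameter quantization to both analog and digital hardware.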
Userfields: | dates={October 4, 1996}, ipdmembership={learning} |
Keywords: | |
Projects: | Idiap |
Authors: | |