CONF Lundin-96/IDIAP
Title:     Connectionist Quantization Functions
Authors:   Tomas Lundin, Emile Fiesler, Perry Moerland
URL:       https://publications.idiap.ch/attachments/reports/1996/96sipar.pdf (EXTERNAL, PUBLIC)
Venue:     Proceedings of the '96 SIPAR-Workshop on Parallel and Distributed Computing, Scientific and Parallel Computing Group, University of Geneva
Location:  Geneva, Switzerland
Year:      1996
Pages:     33-36

Abstract:
One of the main strengths of connectionist systems, also known as neural networks, is their massive parallelism. However, most neural networks are simulated on serial computers, where the advantage of massive parallelism is lost. For large, real-world applications, parallel hardware implementations are therefore essential. Since a discretization or quantization of the neural network parameters is of great benefit for both analog and digital hardware implementations, such quantizations are the focus of this paper. In 1987, a successful weight discretization method was developed that is flexible and produces networks with few discretization levels without significant loss of performance. However, recent studies have shown that the quantization function it uses is not optimal. In this paper, new quantization functions are introduced and evaluated to improve the performance of this flexible weight discretization method.
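
To make the notion of mapping continuous weights onto "few discretization levels" concrete, the sketch below shows a generic uniform quantizer. This is an illustrative assumption only: the function name `uniform_quantize` and its parameters are hypothetical, and this is not the quantization function studied in the paper, which evaluates alternative (and possibly non-uniform) functions within the flexible weight discretization method mentioned in the abstract.

```python
import numpy as np

def uniform_quantize(weights, num_levels, w_max=None):
    """Map continuous weights onto `num_levels` evenly spaced
    levels in the range [-w_max, w_max].

    Generic illustration of weight quantization; not the
    specific quantization function proposed in the paper.
    """
    if w_max is None:
        # Use the largest weight magnitude as the range bound.
        w_max = np.max(np.abs(weights))
    # Spacing between adjacent discretization levels
    # (assumes num_levels >= 2).
    step = 2.0 * w_max / (num_levels - 1)
    # Round each weight to the nearest level, then clip to range.
    q = np.round((weights + w_max) / step) * step - w_max
    return np.clip(q, -w_max, w_max)

# Example: quantize random weights to 5 discrete levels.
w = np.random.uniform(-1.0, 1.0, size=8)
print(uniform_quantize(w, num_levels=5))
```

In weight discretization schemes of this general kind, the quantizer is typically applied during forward propagation while higher-precision weights are retained for the updates, though the paper's exact training procedure is not described in this abstract.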