Entropy based comparison of neural networks for classification [electronic resource].
- Washington, D.C. : United States. Dept. of Energy, 1997.
Oak Ridge, Tenn. : Distributed by the Office of Scientific and Technical Information, U.S. Dept. of Energy.
- Physical Description:
- 8 pages : digital, PDF file
- Additional Creators:
- Los Alamos National Laboratory
United States. Department of Energy
United States. Department of Energy. Office of Scientific and Technical Information
- In recent years, multilayer feedforward neural networks (NNs) have been shown to be very effective tools in many different applications. A natural and essential step in continuing the diffusion of these tools into day-to-day use is their hardware implementation, which is by far the most cost-effective solution for large-scale use. When a hardware implementation is contemplated, the size of the NN becomes a crucial issue because the size is directly proportional to the cost of the implementation. In this light, any theoretical result which establishes bounds on the size of a NN for a given problem is extremely important. In the same context, a particularly interesting case is that of neural networks using limited integer weights. These networks are particularly suitable for hardware implementation because they need less space for storing the weights, and fixed-point, limited-precision arithmetic has much cheaper implementations than its floating-point counterpart. This paper presents an entropy-based analysis which completes, unifies and correlates results partially presented in [Beiu, 1996, 1997a] and [Draghici, 1997]. Tight bounds for real and integer weight neural networks are calculated.
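The storage argument for limited integer weights in the abstract can be sketched as follows; this is a minimal illustration, not the paper's actual quantization scheme, and the helper `quantize_weights` and range parameter `p` are assumptions introduced here for illustration.

```python
import numpy as np

def quantize_weights(w, p=3):
    """Map real weights into the limited integer range [-p, p].
    Hypothetical helper for illustration; the paper's scheme may differ."""
    scale = np.max(np.abs(w))
    if scale == 0:
        return np.zeros_like(w, dtype=int)
    return np.round(w / scale * p).astype(int)

rng = np.random.default_rng(0)
w = rng.standard_normal(8)          # real-valued weights
q = quantize_weights(w, p=3)        # limited integer weights in [-3, 3]

# Each integer weight in [-p, p] needs ceil(log2(2p + 1)) bits of storage,
# versus 32 bits for a single-precision floating-point weight.
bits_int = int(np.ceil(np.log2(2 * 3 + 1)))
print(bits_int)
```

With `p = 3` each weight fits in 3 bits, roughly a tenfold storage saving over 32-bit floats, which is the kind of hardware cost reduction the abstract refers to.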
- Published through SciTech Connect.
WIRN VIETRI '97: 9th Italian Workshop on Neural Nets, Salerno (Italy), 22-24 May 1997.
Beiu, V.; Draghici, S.
- Funding Information:
- catkey: 14349148