Multiple-valued logic neural network architectures and algorithms

The binary model of artificial neurons does not fully capture the complexity of biological neurons, which process continuous data. Analog neurons implemented on an integrated chip, however, require high-precision resistors and are easily affected by electrical noise. Because of the problems associated with both binary and analog neurons, research on multilevel neural networks for modeling biological neurons has attracted considerable attention. A technique is developed for training multiple-valued neural networks using a back-propagation learning algorithm that employs a multilevel threshold function. The optimum threshold width of the multilevel function and the range of the learning parameter required for convergence are derived. Trials on a benchmark problem demonstrate convergence of the network within the specified parameter range.
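The exact multilevel threshold function is not reproduced here; a common choice in multiple-valued neural networks is a smooth staircase built from shifted sigmoids, which keeps the activation differentiable for back-propagation. The sketch below illustrates that idea in Python with NumPy. The function form, the `gain`, `width`, and `eta` values, and the toy single-layer update are illustrative assumptions, not the exact scheme of the paper.

```python
import numpy as np

def multilevel_sigmoid(x, levels=4, width=1.0, gain=5.0):
    """Smooth multilevel threshold: a sum of shifted sigmoids giving
    `levels` output plateaus spaced `width` apart (assumed form)."""
    steps = np.arange(1, levels) * width          # centres of the transitions
    y = np.zeros_like(x, dtype=float)
    for c in steps:
        y += 1.0 / (1.0 + np.exp(-gain * (x - c)))
    return y / (levels - 1)                       # normalise output to [0, 1]

def multilevel_sigmoid_deriv(x, levels=4, width=1.0, gain=5.0):
    """Derivative of the multilevel activation, needed for back-propagation."""
    steps = np.arange(1, levels) * width
    d = np.zeros_like(x, dtype=float)
    for c in steps:
        s = 1.0 / (1.0 + np.exp(-gain * (x - c)))
        d += gain * s * (1.0 - s)
    return d / (levels - 1)

# One illustrative gradient-descent step for a single-layer network.
# eta is a placeholder; the paper derives the range it must lie in for convergence.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(3, 2))            # 3 inputs -> 2 outputs
x = np.array([0.2, 1.4, 2.7])                     # example multilevel input
t = np.array([0.0, 2.0 / 3.0])                    # example target levels
eta = 0.1                                         # learning parameter (assumed)

net = x @ W
y = multilevel_sigmoid(net)
delta = (t - y) * multilevel_sigmoid_deriv(net)   # output-layer error term
W += eta * np.outer(x, delta)                     # weight update
```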
VLSI Systems Laboratory