
New architectures and learning algorithms for recurrent neural networks 

A new delta-rule learning algorithm for training recurrent neural networks suited to gray-level pattern association is being developed in the VLSI Systems Laboratory. The learning technique minimizes, over all training patterns, the maximum distance between the statistical properties of the relative magnitudes of the data at two neurons; the mean distance represents the synaptic weight between the two neurons. It is shown mathematically that the new learning algorithm is stable and converges within three to five iterations. The algorithm has been tested on various gray-level patterns, and the recurrent network learns and associates all trained patterns well.
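The idea of iteratively adjusting recurrent weights until every training pattern is stably associated can be sketched as follows. The laboratory's actual update rule is not specified above, so this sketch uses the classic delta (perceptron-style) correction to make each stored bipolar pattern a fixed point of a Hopfield-type recurrent network; the function names, learning rate, and toy patterns are all illustrative assumptions, not the published method.

```python
import numpy as np

# Illustrative sketch only: the published update rule is not given in
# the text, so this trains a Hopfield-type recurrent associative memory
# with the classic delta rule on small bipolar patterns.

def sgn(x):
    """Sign function mapping 0 to +1, as is conventional for bipolar units."""
    return np.where(x >= 0, 1, -1)

def train_delta(patterns, lr=0.1, max_epochs=100):
    """Adjust the weights until every stored pattern is a fixed point."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for _ in range(max_epochs):
        stable = True
        for p in patterns:
            out = sgn(W @ p)
            if not np.array_equal(out, p):
                stable = False
                W += lr * np.outer(p - out, p)  # delta-rule correction
                np.fill_diagonal(W, 0.0)        # no self-connections
        if stable:
            break
    return W

def recall(W, probe, max_steps=20):
    """Iterate the recurrent dynamics until the state stops changing."""
    s = probe.copy()
    for _ in range(max_steps):
        nxt = sgn(W @ s)
        if np.array_equal(nxt, s):
            break
        s = nxt
    return s

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1]])
W = train_delta(patterns)
for p in patterns:
    print("fixed point:", np.array_equal(recall(W, p), p))
```

Because the per-neuron update is exactly the perceptron rule on the remaining inputs, training converges in a finite number of corrections whenever the stored patterns are separable row by row, which is why the loop typically stabilizes after only a few epochs on small pattern sets.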

References

[1].    J. A. Anderson, J. W. Silverstein, S. A. Ritz, and R. S. Jones, “Distinctive features, categorical perception, and probability learning: some applications of a neural model,” Psychological Review, vol. 84, pp. 413-451, 1977.

[2].    J. A. Anderson and M. Mozer, “Categorization and selective neurons,” in G. E. Hinton and J. A. Anderson (Eds.), Parallel Models of Associative Memory, Hillsdale, NJ: Erlbaum, pp. 213-236, 1981.

[3].    J. A. Anderson and G. L. Murphy, “Psychological concepts in a parallel system,” in Evolution, Games, and Learning, North Holland, New York, 1986.

[4].    J. J. Hopfield, “Neural networks and physical systems with emergent collective computational abilities,” Proc. Natl. Acad. Sci. USA, vol. 79, pp. 2554-2558, 1982.

[5].    J. J. Hopfield, “Neurons with graded response have collective computational properties like those of two-state neurons,” Proc. Natl. Acad. Sci. USA, vol. 81, pp. 3088-3092, 1984.

[6].    J. H. Byrne, “Cellular analysis of associative learning,” Physiological Reviews, vol. 67, pp. 329-439, 1987.

[7].    Y. Bengio, S. Bengio, and J. Cloutier, "Learning synaptic learning rules," in Neural Networks for Computing, Snowbird, Utah, 1991.

[8].    S. Becker and Y. Le Cun, “Improving the convergence of back-propagation learning with second-order methods,” in Proceedings of the 1988 Connectionist Models Summer School, pp. 29-37, Morgan Kaufmann, San Mateo, 1989.

[9].    C. L. Scofield and L. N. Cooper, “Development and properties of neural networks,” Contemporary Physics, vol. 26, pp. 125-145, 1985.

[10].    L. N. Cooper, “A possible organization of animal memory and learning,” in Proceedings of Nobel Symposium on Collective Properties of Physical System (B. Lundquist and S. Lundquist, ed.), Academic Press, New York, pp. 252-264, 1973.

[11].    S. Haykin, “Neural Networks: A Comprehensive Foundation,” Macmillan, New York, 1994.

[12].    D. O. Hebb, “The Organization of Behavior,” Wiley, New York, 1949.

VLSI Systems Laboratory
Department of Electrical and Computer Engineering
College of Engineering and Technology
Old Dominion University
Norfolk, VA 23529, USA