A neural network architecture for fast on-line supervised learning and pattern recognition

Author(s): Carpenter, G.A. | Grossberg, S. | Reynolds, J.H.

Year: 1992

Citation: In H. Wechsler (Ed.), Neural Networks for Perception. Volume 1: Human and Machine Perception, New York: Academic Press, 248-264.

Abstract: This chapter describes a new neural network architecture, called ARTMAP (Carpenter, Grossberg, and Reynolds, 1991), that autonomously learns to classify arbitrarily many, arbitrarily ordered vectors into recognition categories based on predictive success. This supervised learning system is built up from a pair of Adaptive Resonance Theory (Carpenter and Grossberg, 1987a, 1987b, 1988, 1990) modules that are capable of self-organizing stable recognition categories in response to arbitrary sequences of input patterns. During training, the ART-A module receives a stream {Ap} of input patterns, and ART-B receives a stream {Bp} of input patterns, where Bp is the correct prediction given Ap. These ART modules are linked by an associative learning network and an internal controller that ensure autonomous system operation in real time. During test trials, the remaining patterns Ap are presented without Bp, and the predictions made at ART-B are compared with the correct Bp.
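The supervised protocol sketched in the abstract can be illustrated in code. The sketch below is a heavily simplified, hypothetical fuzzy-ARTMAP-style learner, not the authors' full architecture: ART-B is collapsed to discrete class labels, the map field to a category-to-label table, and the internal controller to an explicit match-tracking loop. All names (`SimpleARTMAP`, `rho`, `alpha`, `beta`) are illustrative assumptions.

```python
import numpy as np

def complement_code(a):
    # Complement coding: represent a as (a, 1 - a) to preserve amplitude.
    return np.concatenate([a, 1.0 - a])

class SimpleARTMAP:
    """Toy sketch of a fuzzy-ARTMAP-style supervised learner (assumed
    simplification, not the published system)."""

    def __init__(self, rho=0.5, alpha=0.001, beta=1.0):
        self.rho_base = rho   # baseline vigilance
        self.alpha = alpha    # choice parameter
        self.beta = beta      # learning rate (1.0 = fast learning)
        self.w = []           # ART-A category prototypes
        self.labels = []      # map field: category -> predicted label

    def _scores(self, I):
        out = []
        for w in self.w:
            m = np.minimum(I, w)                     # fuzzy AND
            out.append((m.sum() / (self.alpha + w.sum()),  # choice
                        m.sum() / I.sum()))                # match
        return out

    def train_one(self, a, label):
        I = complement_code(a)
        rho = self.rho_base
        scores = self._scores(I)
        # Search categories in order of choice value.
        for j in sorted(range(len(scores)), key=lambda j: -scores[j][0]):
            choice, match = scores[j]
            if match < rho:
                continue                  # fails the vigilance test
            if self.labels[j] == label:
                # Resonance: move the prototype toward the fuzzy AND.
                self.w[j] = ((1 - self.beta) * self.w[j]
                             + self.beta * np.minimum(I, self.w[j]))
                return
            # Predictive mismatch at ART-B: match tracking raises vigilance.
            rho = match + 1e-6
        # No committed category fits: commit a new one.
        self.w.append(I.copy())
        self.labels.append(label)

    def predict(self, a):
        I = complement_code(a)
        scores = self._scores(I)
        j = max(range(len(scores)), key=lambda j: scores[j][0])
        return self.labels[j]
```

During training, each `train_one(a, label)` call plays the role of presenting Ap to ART-A and the correct Bp to ART-B; at test time, `predict(a)` presents Ap alone and returns the label the map field associates with the chosen category.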

Topics: Machine Learning; Models: ARTMAP

