A neural network architecture for autonomous learning, recognition, and prediction in a nonstationary world

Author(s): Carpenter, G.A. | Grossberg, S.

Year: 1995

Citation: In S.F. Zornetzer, J.L. Davis, C. Lau, & T. McKenna (Eds.), An Introduction to Neural and Electronic Networks, Second Edition, San Diego, CA: Academic Press, 465-482.

Abstract: In a constantly changing world, humans are adapted to alternate routinely between attending to familiar objects and testing hypotheses about novel ones. We can rapidly learn to recognize and name novel objects without unselectively disrupting our memories of familiar ones. We can notice fine details that differentiate nearly identical objects and generalize across broad classes of dissimilar objects. This chapter describes a class of self-organizing neural network architectures, called ARTMAP, that are capable of fast, yet stable, on-line recognition learning, hypothesis testing, and naming in response to an arbitrary stream of input patterns (Carpenter, Grossberg, Markuzon, Reynolds, and Rosen, 1992; Carpenter, Grossberg, and Reynolds, 1991). The intrinsic stability of ARTMAP allows the system to learn incrementally for an unlimited period of time. System stability can be traced to the structure of its learned memories, which encode clusters of attended features into recognition categories, rather than slow averages of category inputs. Category generalization is calibrated by the level of predictive success: an error due to over-generalization automatically focuses attention on additional input details, enough of which are learned in a new recognition category so that the predictive error will not be repeated.
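The match-tracking cycle described at the end of the abstract can be illustrated with a minimal sketch: a predictive error transiently raises the vigilance criterion just above the current match, forcing the search to either find a finer-grained category or commit a new one that encodes the additional input details. This is a simplified fuzzy-ARTMAP-style illustration, not the full architecture of the chapter; the class name, parameter defaults, and complement-coding helper are assumptions chosen for brevity.

```python
import numpy as np

class SimpleARTMAP:
    """Minimal fuzzy-ARTMAP-style sketch with match tracking.

    Illustrative only: one weight vector per committed category,
    fast learning (beta = 1), and a scalar label per category.
    """

    def __init__(self, alpha=0.001, rho_baseline=0.0, beta=1.0):
        self.alpha = alpha                # choice parameter
        self.rho_baseline = rho_baseline  # baseline vigilance
        self.beta = beta                  # learning rate (1.0 = fast learning)
        self.w = []                       # category weight vectors
        self.labels = []                  # predicted label per category

    def _complement_code(self, x):
        # Complement coding keeps the total input norm constant.
        x = np.asarray(x, dtype=float)
        return np.concatenate([x, 1.0 - x])

    def train(self, x, label):
        I = self._complement_code(x)
        rho = self.rho_baseline           # vigilance resets per input
        while True:
            # Choice function for each committed category.
            scores = [np.minimum(I, w).sum() / (self.alpha + w.sum())
                      for w in self.w]
            for j in np.argsort(scores)[::-1]:
                match = np.minimum(I, self.w[j]).sum() / I.sum()
                if match < rho:
                    continue              # fails vigilance; try next category
                if self.labels[j] == label:
                    # Resonance: learn the attended feature cluster I AND w_j.
                    self.w[j] = (self.beta * np.minimum(I, self.w[j])
                                 + (1 - self.beta) * self.w[j])
                    return j
                # Predictive error: match tracking raises vigilance just
                # above the current match, restarting the search.
                rho = match + 1e-6
                break
            else:
                # No category passes vigilance: commit a new category
                # that encodes the current input's details.
                self.w.append(I.copy())
                self.labels.append(label)
                return len(self.w) - 1

    def predict(self, x):
        I = self._complement_code(x)
        if not self.w:
            return None
        scores = [np.minimum(I, w).sum() / (self.alpha + w.sum())
                  for w in self.w]
        return self.labels[int(np.argmax(scores))]
```

In this sketch, training two nearly identical inputs with different labels triggers match tracking: the first category's prediction fails, vigilance rises above the current match, and a second, more specific category is committed, so the error is not repeated on later presentations.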

Topics: Machine Learning, Models: ARTMAP
