Competitive learning: From interactive activation to adaptive resonance

Author(s): Grossberg, S.

Year: 1987

Citation: Cognitive Science, 11, 23-63

Abstract: Functional and mechanistic comparisons are made between several network models of cognitive processing: competitive learning, interactive activation, adaptive resonance, and back propagation. The starting point of this comparison is the article by Rumelhart and Zipser (1985) on feature discovery through competitive learning. All the models which Rumelhart and Zipser (1985) have described were shown in Grossberg (1976b) to exhibit a type of learning which is temporally unstable. Competitive learning mechanisms can be stabilized in response to an arbitrary input environment by being supplemented with mechanisms for learning top-down expectancies, or templates; for matching bottom-up input patterns with the top-down expectancies; and for releasing orienting reactions in a mismatch situation, thereby updating short-term memory and searching for another internal representation. Network architectures which embody all of these mechanisms were called adaptive resonance models by Grossberg (1976c). Self-stabilizing learning models are candidates for use in real-world applications where unpredictable changes can occur in complex input environments. Competitive learning postulates are inconsistent with the postulates of the interactive activation model of McClelland and Rumelhart (1981), and suggest different levels of processing and interaction rules for the analysis of word recognition. Adaptive resonance models use these alternative levels and interaction rules. The self-organizing learning of an adaptive resonance model is compared and contrasted with the teacher-directed learning of a back propagation model. A number of criteria for evaluating real-time network models of cognitive processing are described and applied.
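The abstract's stabilization recipe (learned top-down templates, bottom-up/top-down matching, and an orienting reset that triggers search on mismatch) can be illustrated with a minimal sketch in the style of ART 1. This is not the paper's own code; parameter names such as the vigilance rho and the choice parameter alpha follow the standard ART 1 literature and are assumptions here.

```python
import numpy as np

class ART1Sketch:
    """Minimal ART 1-style categorizer for binary input vectors.

    Illustrates the mechanisms named in the abstract: top-down
    expectancies (templates), matching against a vigilance criterion,
    and a reset-and-search step when the match fails. A sketch only,
    not Grossberg's published equations.
    """

    def __init__(self, max_categories=20, rho=0.7, alpha=0.5):
        self.rho = rho                # vigilance: required degree of match
        self.alpha = alpha            # choice parameter (tie-breaking)
        self.max_categories = max_categories
        self.templates = []           # learned top-down templates

    def _choice(self, I):
        # Bottom-up choice signal for each committed category node.
        return [np.sum(np.minimum(I, t)) / (self.alpha + np.sum(t))
                for t in self.templates]

    def learn(self, I):
        I = np.asarray(I, dtype=float)
        order = list(np.argsort(self._choice(I)))[::-1]   # best match first
        for j in order:
            match = np.minimum(I, self.templates[j])
            # Vigilance test: does the top-down template match the input?
            if np.sum(match) / max(np.sum(I), 1e-12) >= self.rho:
                # Resonance: refine the template toward the matched pattern.
                self.templates[j] = match
                return j
            # Mismatch: the "orienting reaction" resets node j; search continues.
        # No committed node passes vigilance: recruit a new category.
        if len(self.templates) < self.max_categories:
            self.templates.append(I.copy())
            return len(self.templates) - 1
        return None   # capacity exhausted

# Usage: distinct binary patterns settle into stable, separate categories.
net = ART1Sketch(rho=0.8)
print(net.learn([1, 1, 0, 0]))   # -> 0 (new category)
print(net.learn([0, 0, 1, 1]))   # -> 1 (mismatch triggers reset and a new category)
print(net.learn([1, 1, 0, 0]))   # -> 0 (resonates with the first template)
```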

Topics: Machine Learning, Applications: Character Recognition, Models: ART 1, Other
