Absolutely stable learning of recognition codes by a self-organizing neural network

Author(s): Carpenter, G.A. | Grossberg, S.

Year: 1986

Citation: In J.S. Denker (Ed.), Neural Networks for Computing, American Institute of Physics, 151, 77-85.

Abstract: A neural network which self-organizes and self-stabilizes its recognition codes in response to arbitrary orderings of arbitrarily many and arbitrarily complex binary input patterns is outlined here. Top-down attentional and matching mechanisms are critical in self-stabilizing the code learning process. The architecture embodies a parallel search scheme which updates itself adaptively as the learning process unfolds. After learning self-stabilizes, the search process is automatically disengaged. Thereafter input patterns directly access their recognition codes, or categories, without any search. Thus recognition time does not grow as a function of code complexity. A novel input pattern can directly access a category if it shares invariant properties with the set of familiar exemplars of that category. These invariant properties emerge in the form of learned critical feature patterns, or prototypes. The architecture possesses a context-sensitive self-scaling property which enables its emergent critical feature patterns to form. They detect and remember statistically predictive configurations of featural elements which are derived from the set of all input patterns that are ever experienced. Four types of attentional process (priming, gain control, vigilance, and intermodal competition) are mechanistically characterized. Top-down priming and gain control are needed for code matching and self-stabilization. Attentional vigilance determines how fine the learned categories will be. If vigilance increases due to an environmental disconfirmation, then the system automatically searches for and learns finer recognition categories. A new nonlinear matching law (the 2/3 Rule) and new nonlinear associative laws (the Weber Law Rule, the Associative Decay Rule, and the Template Learning Rule) are needed to achieve these properties. All the rules describe emergent properties of parallel network interactions. The architecture circumvents the saturation, capacity, orthogonality, and linear predictability constraints that limit the codes which can be stably learned by alternative recognition models.
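The search-then-learn behavior described in the abstract (Weber-law category choice, a vigilance-gated match test, and intersection-style template learning) can be illustrated with a minimal sketch. This is not the paper's full ART dynamics; the function name, the parameters `rho` (vigilance) and `alpha` (choice parameter), and the fast-learning simplification are assumptions made for illustration.

```python
# Illustrative sketch of ART-style category search with vigilance.
# Assumed simplifications: binary patterns as 0/1 sequences, fast learning
# by intersection, and a Weber-law-style choice function.

def present(pattern, templates, rho=0.7, alpha=0.5):
    """Present a binary pattern; return the index of the category it
    settles into, updating the template list in place."""
    norm_i = sum(pattern)
    # Rank candidate categories by a Weber-law choice function:
    # |pattern AND template| / (alpha + |template|).
    order = sorted(
        range(len(templates)),
        key=lambda j: -sum(a & b for a, b in zip(pattern, templates[j]))
        / (alpha + sum(templates[j])),
    )
    for j in order:
        overlap = [a & b for a, b in zip(pattern, templates[j])]
        # Vigilance test: the match must cover enough of the input.
        if norm_i and sum(overlap) / norm_i >= rho:
            # Template learning: intersect template with the input.
            templates[j] = overlap
            return j
    # No category passed vigilance: commit a new one.
    templates.append(list(pattern))
    return len(templates) - 1

templates = []
print(present((1, 1, 0, 0), templates))  # new category 0
print(present((1, 1, 0, 0), templates))  # direct access to category 0
print(present((0, 0, 1, 1), templates))  # fails vigilance, new category 1
```

Raising `rho` toward 1 forces finer categories (more templates); lowering it lets dissimilar patterns share a coarse category, matching the abstract's description of vigilance.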

Topics: Machine Learning, Mathematical Foundations of Neural Networks, Models: Other
