Author(s): Grossberg, S.
Year: 1988
Citation: Neural Networks, 1, 17-61.
Abstract: An historical discussion is provided of the intellectual trends that caused nineteenth-century interdisciplinary studies of physics and psychobiology by leading scientists such as Helmholtz, Maxwell, and Mach to splinter into separate twentieth-century scientific movements. The nonlinear, nonstationary, and nonlocal nature of behavioral and brain data is emphasized. Three sources of contemporary neural network research -- the binary, linear, and continuous-nonlinear models -- are noted. The remainder of the article describes results about continuous-nonlinear models: Many models of content-addressable memory are shown to be special cases of the Cohen-Grossberg model and global Liapunov function, including the additive, brain-state-in-a-box, McCulloch-Pitts, Boltzmann machine, Hartline-Ratliff-Miller, shunting, masking field, bidirectional associative memory, Volterra-Lotka, Gilpin-Ayala, and Eigen-Schuster models. A Liapunov functional method is described for proving global limit or oscillation theorems for nonlinear competitive systems when their decision schemes are globally consistent or inconsistent, respectively. The former case is illustrated by a model of a globally stable economic market, and the latter case is illustrated by a model of the voting paradox. Key properties of shunting competitive feedback networks are summarized, including the role of sigmoid signaling, automatic gain control, competitive choice and quantization, tunable filtering, total activity normalization, and noise suppression in pattern transformation and memory storage applications. Connections to models of competitive learning, vector quantization, and categorical perception are noted. Adaptive resonance theory (ART) models for self-stabilizing adaptive pattern recognition in response to complex real-time nonstationary input environments are compared with off-line models such as autoassociators, the Boltzmann machine, and back propagation. Special attention is paid to the stability and capacity of these models, and to the role of top-down expectations and attentional processing in the active regulation of both learning and fast information processing. Models whose performance and learning are regulated by internal gating and matching signals, or by external environmentally generated error signals, are contrasted with models whose learning is regulated by external teacher signals that have no analog in natural real-time environments. Examples from sensory-motor control of adaptive vector encoders, adaptive coordinate transformations, adaptive gain control by visual error signals, and automatic generation of synchronous multijoint movement trajectories illustrate the former model types. Internal matching processes are shown to be capable of discovering several different types of invariant environmental properties. These include ART mechanisms which discover recognition invariants, adaptive vector encoder mechanisms which discover movement invariants, and autoreceptive associative mechanisms which discover invariants of self-regulating target position maps.
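For orientation, the Cohen-Grossberg model referred to in the abstract is commonly stated, in generic notation (not the article's own), as the competitive system

\frac{dx_i}{dt} = a_i(x_i)\Big[\, b_i(x_i) - \sum_{j=1}^{n} c_{ij}\, d_j(x_j) \,\Big], \qquad c_{ij} = c_{ji}, \quad i = 1, \ldots, n,

with a_i \ge 0 and each d_j nondecreasing. Under these hypotheses the function

V(x) = -\sum_{i=1}^{n} \int_{0}^{x_i} b_i(s)\, d_i'(s)\, ds \;+\; \tfrac{1}{2} \sum_{j,k=1}^{n} c_{jk}\, d_j(x_j)\, d_k(x_k)

satisfies \frac{dV}{dt} = -\sum_i a_i(x_i)\, d_i'(x_i) \big[ b_i(x_i) - \sum_k c_{ik} d_k(x_k) \big]^2 \le 0 along trajectories, which is the global Liapunov property cited above. The additive, shunting, and related models listed in the abstract arise from particular choices of a_i, b_i, c_{ij}, and d_j.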
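As a concrete illustration of the ART category learning summarized above, the following is a minimal sketch of the algorithmic, fast-learning form of ART 1 for binary inputs. The function name art1_learn and the parameters rho (vigilance) and alpha (choice parameter) are illustrative conventions, not code from the article, and the search/reset cycle is compressed into a single ordered scan of the committed categories.

import numpy as np

def art1_learn(inputs, rho=0.7, alpha=0.001):
    """Minimal fast-learning ART 1 sketch for binary input vectors.

    inputs : iterable of equal-length 0/1 arrays
    rho    : vigilance parameter in (0, 1]
    alpha  : small choice parameter favoring larger templates on ties
    Returns the list of learned binary category templates.
    """
    templates = []                      # one binary template per committed category
    for I in inputs:
        I = np.asarray(I, dtype=int)
        # Search order given by the choice function |I AND w| / (alpha + |w|)
        order = sorted(
            range(len(templates)),
            key=lambda j: np.sum(I & templates[j]) / (alpha + np.sum(templates[j])),
            reverse=True,
        )
        for j in order:
            match = np.sum(I & templates[j])
            # Vigilance test: the matched portion must cover enough of the input
            if match / max(np.sum(I), 1) >= rho:
                templates[j] = I & templates[j]    # fast learning: template shrinks toward I
                break
        else:
            templates.append(I.copy())             # no category passes vigilance: recruit a new one
    return templates

# Example: three binary patterns grouped under a vigilance of 0.6
patterns = [np.array([1, 1, 0, 0]), np.array([1, 1, 1, 0]), np.array([0, 0, 1, 1])]
print(art1_learn(patterns, rho=0.6))

Raising rho forces finer categories (more templates); lowering it lets broader templates absorb more inputs, which corresponds to the stability-plasticity trade-off regulated by the top-down matching described in the abstract.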
Topics: Biological Learning, Machine Learning
Models: ART 1, Other