Distributed outstar learning and the rules of synaptic transmission

Author(s): Carpenter, G.A.

Year: 1993

Citation: Proceedings of the World Congress on Neural Networks (WCNN 93), II, 397-404.

Abstract: The distributed outstar, a generalization of the outstar neural network for spatial pattern learning, is introduced. In the outstar, signals from a source node cause weights to learn and recall arbitrary patterns across a target field of nodes. The distributed outstar replaces the outstar source node with a source field of arbitrarily many nodes, whose activity pattern may be arbitrarily distributed or compressed. Learning proceeds according to a principle of atrophy due to disuse, whereby a path weight decreases in joint proportion to the transmitted path signal and the degree of disuse of the target node. During learning, the total signal to a target node converges toward that node's activity level. Weight changes at a node are apportioned according to the distributed pattern of converging signals. Three synaptic transmission functions, a product rule, a capacity rule, and a threshold rule, are examined for this system. The three rules are computationally equivalent when source field activity is maximally compressed, or winner-take-all. When source field activity is distributed, catastrophic forgetting may occur. Only the threshold rule solves this problem. Analysis of spatial pattern learning by distributed codes thereby leads to the conjecture that the unit of long-term memory in such a system is an adaptive threshold, rather than the multiplicative path weight widely used in neural models.
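A minimal NumPy sketch of the learning principle summarized above, assuming the rectified form max(S_i - tau_ij, 0) for the threshold rule and a simple discrete-time proportional update; the function name, the learning rate lr, and the exact update expression are illustrative assumptions, not the paper's equations.

```python
import numpy as np

def distributed_outstar_step(S, x, tau, lr=0.1):
    """One illustrative learning step for a distributed outstar
    under the threshold transmission rule.

    S   : source field activity pattern, shape (n_source,)
    x   : target field activity pattern, shape (n_target,)
    tau : adaptive thresholds (the conjectured LTM unit),
          shape (n_source, n_target)
    """
    # Threshold rule: a path transmits the rectified part of its
    # source signal above the path's adaptive threshold.
    T = np.maximum(S[:, None] - tau, 0.0)

    # Total signal converging on each target node.
    sigma = T.sum(axis=0)

    # Degree of disuse of a target node: the excess of its total
    # incoming signal over its own activity level.
    disuse = np.maximum(sigma - x, 0.0)

    # Apportion the change according to the distributed pattern of
    # converging signals (each path's share of the total).
    share = np.divide(T, sigma, out=np.zeros_like(T), where=sigma > 0.0)

    # Atrophy due to disuse: a path weakens (its threshold rises) in
    # joint proportion to its transmitted signal and the target's
    # disuse, so sigma_j decays toward x_j during learning.
    return tau + lr * share * disuse[None, :]
```

Iterating this step from tau = 0 drives the total converging signal sigma_j down toward x_j wherever it exceeds x_j, which is the sketch's counterpart of the abstract's statement that the total signal to a target node converges toward that node's activity level.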

Topics: Machine Learning, Models: ART 1
