The instar learning law (Grossberg, 1976) governs the dynamics of feedforward connection weights in a standard competitive neural network in an unsupervised manner. This learning law models how a neuron can become selectively responsive, or tuned, to a particular input pattern; i.e., how it becomes a feature detector.
Below are links to the source article, a tutorial, and a zipped file containing a MATLAB-based graphical user interface, which also provides access to the instar learning law equation, its description, and source code.
The microcircuit for the instar learning law shows how the dynamics of feedforward connection weights are governed in a standard competitive neural network in an unsupervised manner. An example simulation lets users see how the afferent weights to a node in the coding field gradually become similar to the input activation pattern; i.e., the weights track the input features over time. The law combines Hebbian growth with post-synaptically gated decay: learning typically occurs only for weights that converge on active nodes in the coding field. Learning can be further confined to the weights projecting to the single most active node, assuming winner-take-all coding in the network, in order to promote stable memories; this is called competitive learning.
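The dynamics above can be sketched in a few lines of NumPy. This is a minimal illustration, not the distributed MATLAB source code: the dimensions, learning rate, and input pattern are hypothetical, winner-take-all coding is approximated by a hard argmax over the coding field, and the weight update implements the instar form dW_ij = lr * y_j * (x_i - W_ij), i.e., Hebbian growth toward the input with decay gated by postsynaptic activity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 5 input features, 3 coding-field nodes.
n_inputs, n_nodes = 5, 3
W = 0.1 * rng.random((n_nodes, n_inputs))   # small random feedforward weights
x = np.array([0.9, 0.1, 0.8, 0.2, 0.7])     # a fixed input activation pattern
lr = 0.1                                    # learning rate (assumed)

for _ in range(200):
    # Winner-take-all coding: only the most active node is allowed to learn.
    activation = W @ x
    y = np.zeros(n_nodes)
    y[np.argmax(activation)] = 1.0
    # Instar rule: Hebbian growth toward x, post-synaptically gated decay.
    # Rows with y_j = 0 are untouched; the winner's row moves toward x.
    W += lr * y[:, None] * (x[None, :] - W)

# The winner's afferent weight vector has tracked the input pattern.
winner = np.argmax(W @ x)
print(np.allclose(W[winner], x, atol=1e-3))  # → True
```

Because losing nodes receive no update, their weights (and hence their activations for this fixed input) stay frozen, which is what makes the learned tuning stable under winner-take-all coding.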
[ http://techlab.bu.edu/MODE/instar_tutorial.ppt ] The tutorial is a self-contained PowerPoint presentation that introduces the instar learning law.
To use the instar learning law software, download the package (Instar_GUI_070109.zip) from the Download(s) below and unzip its contents into a local folder. Open MATLAB and change the current directory to that folder. At the command prompt, type instargui to launch the graphical user interface.
Any operating system that supports MATLAB
Praveen K. Pilly