A self-organizing neural network architecture for navigation using optic flow

Author(s): Cameron, S. | Grossberg, S. | Guenther, F.H.

Year: 1998

Citation: Neural Computation, 10, 313-352

Abstract: This article describes a self-organizing neural network architecture that transforms optic flow and eye position information into representations of heading, scene depth, and moving object locations. These representations are used to navigate reactively in simulations involving obstacle avoidance and pursuit of a moving target. The network's weights are trained during an action-perception cycle in which self-generated eye and body movements produce optic flow information, thus allowing the network to tune itself without requiring explicit knowledge of sensor geometry. The confounding effect of eye movement during translation is suppressed by learning the relationship between eye movement outflow commands and the optic flow signals that they induce. The remaining optic flow field is due only to observer translation and independent motion of objects in the scene. A self-organizing feature map categorizes normalized translational flow patterns, thereby creating a map of cells that code heading directions. Heading information is then recombined with translational flow patterns in two different ways to form maps of scene depth and moving object locations. Most of the learning processes take place concurrently and evolve through unsupervised learning. Mapping the learned heading representations onto heading labels or motor commands requires additional structure. Simulations of the network verify its performance using both noise-free and noisy optic flow information.
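
The heading stage described in the abstract, a self-organizing feature map that categorizes normalized translational flow patterns into heading-coding cells, can be illustrated with a small Kohonen-style simulation. The Python sketch below is an illustration of that general technique under assumed details (a pinhole-camera flow model, a 5x5 sample grid, a 1-D map of 20 cells, and an exponentially decaying learning schedule), not the paper's actual architecture or parameters.

    import numpy as np

    rng = np.random.default_rng(0)

    # Image sample grid (focal length 1); each flow field becomes one feature vector.
    xs, ys = np.meshgrid(np.linspace(-0.5, 0.5, 5), np.linspace(-0.5, 0.5, 5))
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1)          # (25, 2)

    def translational_flow(heading, depths):
        """Pinhole-camera motion field at `pts` for pure translation `heading`."""
        Tx, Ty, Tz = heading
        u = (-Tx + pts[:, 0] * Tz) / depths
        v = (-Ty + pts[:, 1] * Tz) / depths
        f = np.concatenate([u, v])
        return f / (np.linalg.norm(f) + 1e-9)                 # remove overall speed/depth scale

    def random_heading(max_offset=0.3):
        """Unit translation vector near the optical (z) axis."""
        a = rng.uniform(-max_offset, max_offset, size=2)
        h = np.array([a[0], a[1], 1.0])
        return h / np.linalg.norm(h)

    # 1-D Kohonen map: each cell holds a template over the flow features.
    n_cells, dim = 20, 2 * len(pts)
    W = rng.normal(scale=0.1, size=(n_cells, dim))

    for t in range(5000):
        depths = rng.uniform(1.0, 5.0, size=len(pts))         # random scene depths
        f = translational_flow(random_heading(), depths)
        winner = np.argmin(np.linalg.norm(W - f, axis=1))     # best-matching cell
        lr = 0.5 * np.exp(-t / 2000)                          # decaying learning rate
        sigma = 3.0 * np.exp(-t / 2000)                       # shrinking neighborhood
        g = np.exp(-((np.arange(n_cells) - winner) ** 2) / (2 * sigma ** 2))
        W += lr * g[:, None] * (f - W)                        # Kohonen update

    # Distinct headings should now win at distinct map locations.
    d = np.full(len(pts), 2.0)
    left = np.array([-0.3, 0.0, 1.0]); left /= np.linalg.norm(left)
    right = np.array([0.3, 0.0, 1.0]); right /= np.linalg.norm(right)
    for h in (left, right):
        print(np.argmin(np.linalg.norm(W - translational_flow(h, d), axis=1)))

Because each flow field is normalized to unit length before training, overall speed and depth scale are factored out, so the cells come to differ mainly in the location of the focus of expansion, which is what makes the resulting map a map of heading directions.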

Topics: Biological Vision, Machine Learning, Models: Self Organizing Maps, Other
