In an attempt to create a super-navigation system, Queensland University of Technology researcher Dr. Michael Milford is combining computer models of human vision and rat spatial recognition to solve the problem of place recognition better than the solutions evolved by Mother Nature. Advances in computational technology, coupled with the proliferation of autonomous vehicles and self-driving car research projects, have created a demand – and funding opportunities – for “holy grail” plug-and-play navigation systems. Just as humans optimized flight by first attempting to mimic the flapping wings of birds, then moving to fixed and eventually supersonic wing structures, hybrid (or Frankenstein) neural models can potentially be combined to create systems more efficient than those originally devised by nature.
“This project will revolutionise our understanding of how humans and animals use vision to determine their location in the world,” Dr. Milford said. “This understanding will lead to new computer algorithms that enable robots to navigate in any environmental conditions using cheap visual sensors and breakthroughs in our knowledge of the brain.”
Time will tell, but the recent ARC Future Fellowship award to Dr. Milford’s team, worth $676,174, indicates strong interest in this approach to creating an optimized plug-and-play navigation unit.
Abstract from the paper, “Principles of goal-directed spatial robot navigation in biomimetic models”
Mobile robots and animals alike must effectively navigate their environments in order to achieve their goals. For animals goal-directed navigation facilitates finding food, seeking shelter or migration; similarly robots perform goal-directed navigation to find a charging station, get out of the rain or guide a person to a destination. This similarity in tasks extends to the environment as well; increasingly, mobile robots are operating in the same underwater, ground and aerial environments that animals do. Yet despite these similarities, goal-directed navigation research in robotics and biology has proceeded largely in parallel, linked only by a small amount of interdisciplinary research spanning both areas. Most state-of-the-art robotic navigation systems employ a range of sensors, world representations and navigation algorithms that seem far removed from what we know of how animals navigate; their navigation systems are shaped by key principles of navigation in ‘real-world’ environments including dealing with uncertainty in sensing, landmark observation and world modelling. By contrast, biomimetic animal navigation models produce plausible animal navigation behaviour in a range of laboratory experimental navigation paradigms, typically without addressing many of these robotic navigation principles. In this paper, we attempt to link robotics and biology by reviewing the current state of the art in conventional and biomimetic goal-directed navigation models, focusing on the key principles of goal-oriented robotic navigation and the extent to which these principles have been adapted by biomimetic navigation models and why.
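To make the place-recognition problem discussed above concrete, the sketch below shows one of the simplest visual place-recognition strategies: downsample camera frames to tiny grayscale thumbnails and match a query view against a database of previously seen views by sum-of-absolute-differences. This is an illustrative toy, not the method from the paper; the function names and thumbnail size are assumptions chosen for clarity.

```python
import numpy as np

def downsample(img, size=(8, 8)):
    """Block-average a 2-D grayscale image down to a tiny thumbnail.

    Heavy downsampling discards fine detail, which makes the match
    tolerant to small viewpoint and lighting changes.
    """
    h, w = img.shape
    bh, bw = h // size[0], w // size[1]
    img = img[:bh * size[0], :bw * size[1]]  # trim so blocks tile evenly
    return img.reshape(size[0], bh, size[1], bw).mean(axis=(1, 3))

def best_match(query, database):
    """Return the index of the stored view most similar to the query,
    using sum-of-absolute-differences (SAD) between thumbnails."""
    q = downsample(query)
    sads = [np.abs(downsample(view) - q).sum() for view in database]
    return int(np.argmin(sads))

# Toy usage: the query is a noisy re-observation of stored view 3.
rng = np.random.default_rng(0)
database = [rng.random((64, 64)) for _ in range(5)]
query = database[3] + rng.normal(0.0, 0.01, size=(64, 64))
print(best_match(query, database))
```

Real systems add sequence matching, uncertainty handling, and loop-closure verification on top of a matcher like this, which is where the robotic navigation principles the abstract describes come into play.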

