TechEnablement

Education, Planning, Analysis, Code


Moving Brain Models Beyond Mother Nature For Robotic Navigation

October 6, 2014 by Rob Farber

In an attempt to create a super-navigation system, Queensland University of Technology researcher Dr. Michael Milford is combining human vision and rat spatial recognition computer models to solve the problem of place recognition far better than the solutions evolved by Mother Nature. Advances in computational technology, coupled with the proliferation of autonomous vehicles and self-driving car research projects, have created a demand – and funding opportunities – for “holy grail” plug-n-play navigation systems. Just as humans optimized flight by first attempting to mimic the flapping wings of birds, then moving to fixed and eventually supersonic wing structures, hybrid (or Frankenstein) neural models can potentially be combined to create systems that are more efficient than those originally devised by nature.
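To make the place-recognition problem concrete, the sketch below follows the general recipe popularized by Dr. Milford's earlier SeqSLAM work: drastically downsample and patch-normalize camera frames, then match a short *sequence* of frames against a database by sum of absolute differences rather than matching single images. This is a minimal illustration only; the image sizes, patch size, and sequence length are illustrative guesses, and it is not the code used in the project described here.

```python
import numpy as np

def preprocess(image, size=(32, 16), patch=4):
    """Downsample a 2-D grayscale image to a tiny thumbnail and
    patch-normalize it (zero mean, unit variance per local patch).
    All dimensions here are illustrative, not the project's values."""
    h, w = image.shape
    ph, pw = h // size[1], w // size[0]
    # Block-average downsample to size[1] x size[0]
    small = image[:ph * size[1], :pw * size[0]] \
        .reshape(size[1], ph, size[0], pw).mean(axis=(1, 3))
    out = np.empty_like(small)
    for y in range(0, size[1], patch):
        for x in range(0, size[0], patch):
            block = small[y:y + patch, x:x + patch]
            out[y:y + patch, x:x + patch] = (block - block.mean()) / (block.std() + 1e-6)
    return out

def match_place(query_seq, database, seq_len=5):
    """Slide the query sequence along the database of preprocessed
    frames; return the start index with the lowest summed absolute
    difference (lower cost = better match)."""
    best_idx, best_cost = -1, np.inf
    for i in range(len(database) - seq_len + 1):
        cost = sum(np.abs(q - d).sum()
                   for q, d in zip(query_seq, database[i:i + seq_len]))
        if cost < best_cost:
            best_idx, best_cost = i, cost
    return best_idx
```

Matching whole sequences instead of single frames is what makes this family of methods tolerant of extreme appearance change (day/night, weather): any one frame may match poorly, but the right stretch of route still wins in aggregate.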


Picture from “The Australian” of Dr. Michael Milford

According to Dr. Milford: “This project will revolutionise our understanding of how humans and animals use vision to determine their location in the world. This understanding will lead to new computer algorithms that enable robots to navigate in any environmental conditions using cheap visual sensors, and to breakthroughs in our knowledge of the brain.”

Time will tell, but the recent ARC Future Fellowship award to Dr. Milford’s team, worth $676,174, indicates strong interest in this approach to creating a plug-n-play optimized navigation unit.

Abstract from the paper, “Principles of goal-directed spatial robot navigation in biomimetic models”

Philosophical Transactions of the Royal Society B: Biological Sciences

Mobile robots and animals alike must effectively navigate their environments in order to achieve their goals. For animals goal-directed navigation facilitates finding food, seeking shelter or migration; similarly robots perform goal-directed navigation to find a charging station, get out of the rain or guide a person to a destination. This similarity in tasks extends to the environment as well; increasingly, mobile robots are operating in the same underwater, ground and aerial environments that animals do. Yet despite these similarities, goal-directed navigation research in robotics and biology has proceeded largely in parallel, linked only by a small amount of interdisciplinary research spanning both areas. Most state-of-the-art robotic navigation systems employ a range of sensors, world representations and navigation algorithms that seem far removed from what we know of how animals navigate; their navigation systems are shaped by key principles of navigation in ‘real-world’ environments including dealing with uncertainty in sensing, landmark observation and world modelling. By contrast, biomimetic animal navigation models produce plausible animal navigation behaviour in a range of laboratory experimental navigation paradigms, typically without addressing many of these robotic navigation principles. In this paper, we attempt to link robotics and biology by reviewing the current state of the art in conventional and biomimetic goal-directed navigation models, focusing on the key principles of goal-oriented robotic navigation and the extent to which these principles have been adapted by biomimetic navigation models and why.

Filed Under: Analysis, Featured news
