
GaussianFace: Computers Claimed to Beat Humans in Recognizing Faces

April 29, 2014 by Rob Farber

In a human vs. computer test on 13k photos of 6k public figures, the GaussianFace project claims to identify faces better than humans do (97% accuracy for humans vs. 98% for the computer). The authors claim their model adapts automatically to complex data distributions and can therefore capture the complex face variations inherent in images drawn from multiple sources. The reporters at The Register summarize the GaussianFace process as follows:

“GaussianFace normalises each pic into a 150×120 pixel image, and uses five landmarks – two eyes, the nose, and the corners of the mouth – as the basis for the image transform. It then creates 25 x 25 overlapping tiles in the image, and captures a vector of each patch, which can be used for recognition.”
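
As a rough illustration of that tiling step (my sketch, not the authors’ code), the following NumPy snippet splits a normalized 150×120 face into overlapping 25×25 patches and flattens each patch into a feature vector. The stride is an assumption on my part, since the description says only that the tiles overlap, not by how much:

```python
import numpy as np

def extract_patch_vectors(aligned_face, patch=25, stride=12):
    """Split a normalized face image into overlapping patch x patch
    tiles and flatten each tile into a feature vector.

    The stride (amount of overlap) is a guess -- the description
    only says the tiles overlap, not by how much."""
    h, w = aligned_face.shape  # expected (150, 120) after normalization
    vectors = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            tile = aligned_face[y:y + patch, x:x + patch]
            vectors.append(tile.astype(np.float32).ravel())
    return np.stack(vectors)  # one 625-element row per 25x25 tile

# Toy usage with a synthetic grayscale "face"
face = np.random.rand(150, 120)
features = extract_patch_vectors(face)
print(features.shape)  # (number_of_tiles, 625)
```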


For those interested in trying facial recognition, read the paper and then use my PF/s-capable machine-learning code, “Deep-learning Teaching Code Achieves 13 PF/s on the ORNL Titan Supercomputer”. Note that the authors collected a large set of faces from the Internet, so you too can create a large training set. Of course, it will be difficult to compete with Google, Facebook, Twitter, and the many other companies that actively encourage people to submit labeled images of themselves, but you can very quickly access the data needed for experimentation.

Example set of images from the GaussianFace paper: sample faces used by GaussianFace.
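
If you want a public benchmark to experiment with right away, scikit-learn ships a loader for Labeled Faces in the Wild (LFW), a public-figure dataset that appears to match the roughly 13k-photo test described above. A minimal sketch (the first call downloads the data):

```python
# A minimal sketch using scikit-learn's built-in LFW loader.
# The first call downloads the dataset (a few hundred MB).
from sklearn.datasets import fetch_lfw_people

# min_faces_per_person keeps only people with enough examples
# to split into training and test sets.
lfw = fetch_lfw_people(min_faces_per_person=20, resize=0.5)
print(lfw.images.shape)       # (n_samples, height, width) grayscale faces
print(len(lfw.target_names))  # number of distinct public figures kept
```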

A note of caution about machine-learning in general

The following commentary does not reflect, in any way, an opinion about GaussianFace or the results published by the GaussianFace authors. If you try to develop your own facial recognition software, be aware that it is very easy for people to deceive themselves into believing that the computer correctly performed some complex recognition task.

A classic example of people misinterpreting the output of a “deep-learning” classifier occurred in the late 1980s, when researchers at a major US National Laboratory trained a neural network on a set of pictures of tanks and cars. They were gratified to see that the machine-learning algorithm correctly identified most of the pictures, and that the neural network would correctly identify images it had never seen before. Further testing, however, showed that the algorithm performed abysmally in the field. Later analysis determined that the pictures of tanks had largely been taken on cloudy days, while the pictures of cars were generally taken on sunny days. The complex “deep-learning” algorithm was actually deciding how bright the light in the picture was; it was the people who supplied the interpretation, “Look, we found a tank!”

Validation is key, and will hopefully prevent some future automated drone from deciding that your car driving past the White House is actually a tank in close proximity to the President, merely because the sun went behind a cloud. My Scientific Computing article, “Validation: Assessing the Legitimacy of Computational Results”, notes in particular:

“With a myriad of computer vision research projects and companies making assertions and claims, the noise level in computer vision research appears to be increasing.”

Thus it really pays to be skeptical of what a machine-learning algorithm is telling you! It is worth saying multiple times: “Validation is essential to good science.”
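
To make the lesson concrete, here is a toy demonstration with synthetic data (purely illustrative; it has nothing to do with the Laboratory’s actual study) of how a brightness confound can produce perfect training accuracy and useless field accuracy:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_photos(n, bright):
    """Synthetic 'photos': random noise whose mean brightness
    encodes the weather, not the vehicle."""
    base = 0.7 if bright else 0.3
    return rng.normal(base, 0.05, size=(n, 32, 32))

# Training set with the hidden confound: tanks photographed on
# cloudy (dark) days, cars on sunny (bright) days.
tanks_train = make_photos(100, bright=False)
cars_train = make_photos(100, bright=True)

# A stand-in "classifier" that latches onto mean brightness.
threshold = np.concatenate([tanks_train, cars_train]).mean()
is_tank = lambda photos: photos.mean(axis=(1, 2)) < threshold  # dark => tank

train_acc = (is_tank(tanks_train).mean() + (~is_tank(cars_train)).mean()) / 2
print("training accuracy:", train_acc)  # ~1.0 -- looks like it learned tanks

# Field test with the weather reversed: the confound evaporates.
tanks_field = make_photos(100, bright=True)
cars_field = make_photos(100, bright=False)
field_acc = (is_tank(tanks_field).mean() + (~is_tank(cars_field)).mean()) / 2
print("field accuracy:", field_acc)  # ~0.0 -- it only learned the weather
```

Held-out testing that deliberately varies conditions like lighting is exactly the kind of validation that exposes this failure before deployment.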


Click here for more TechEnablement machine-learning articles and tutorials!

