Three Princeton students (Ethan Gordon ’17, David Liu ’17 and Jeffrey Han ’17) used an NVIDIA Jetson plus OpenCV – an open-source real-time computer vision library – to build a system that interprets sign language letters from a video feed in 250 lines of code. The system’s response time was reported to be “snappy”: a GPU-accelerated edge-detection and least-squares matching algorithm found the best fit between each video frame and the images in a reference library. Each image in the library – corresponding to a sign language hand position – was tagged with a letter that could be output to the screen.
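As a rough illustration of that pipeline, here is a minimal sketch in Python using OpenCV: build an edge map of each frame with Canny edge detection, then pick the library image whose edge map minimizes the sum of squared differences. The library directory, file naming, image size, and Canny thresholds are all assumptions for illustration, not details taken from the ASLTegra code – and on the Jetson the students’ version used OpenCV’s GPU-accelerated routines rather than the CPU calls shown here.

```python
# Sketch of edge-detection + least-squares matching against a tagged library.
# Assumes a directory "library/" of reference hand-shape images named by
# letter (A.png, B.png, ...) -- these paths/names are illustrative only.
import glob
import os

import cv2
import numpy as np

SIZE = (128, 128)  # common size so edge maps are directly comparable

def edge_map(gray):
    """Resize and compute a Canny edge map (thresholds are assumptions)."""
    return cv2.Canny(cv2.resize(gray, SIZE), 100, 200)

# Build the library: one edge map per letter, tagged by filename.
library = {}
for path in glob.glob("library/*.png"):
    letter = os.path.splitext(os.path.basename(path))[0]
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    library[letter] = edge_map(img).astype(np.float32)

def classify(frame):
    """Return the letter whose library edge map best fits the frame,
    in the least-squares sense (minimum sum of squared differences)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = edge_map(gray).astype(np.float32)
    return min(library, key=lambda k: float(np.sum((edges - library[k]) ** 2)))

cap = cv2.VideoCapture(0)  # video feed, e.g. a USB camera on the Jetson
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Overlay the matched letter on the live feed.
    cv2.putText(frame, classify(frame), (10, 40),
                cv2.FONT_HERSHEY_SIMPLEX, 1.5, (0, 255, 0), 2)
    cv2.imshow("ASL", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

Plain least squares over edge maps keeps the matcher simple and fast enough for real-time use, at the cost of sensitivity to hand position and scale; a hand-segmentation step before matching would make it more robust.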
To build the Princeton Hackathon project yourself, see Ethan’s “ASLTegra” GitHub page.
You can read more about Ethan’s ASLTegra in the Daily Princetonian or the NVIDIA Blog.
To learn more about OpenCV on the Jetson TK1 board, see the following video.