IBM unveiled the first production-scale neurosynaptic computer chip on August 7th, implementing one million programmable neurons, 256 million programmable synapses, and 46 billion synaptic operations per second per watt. The announcement, published in Science in collaboration with Cornell Tech, is a step toward bringing cognitive computers to society. At 5.4 billion transistors, this fully functional chip is currently one of the largest CMOS chips ever built, yet while running at biological real time it consumes a minuscule 70 mW, orders of magnitude less power than a modern microprocessor. A neurosynaptic supercomputer the size of a postage stamp runs on the energy equivalent of a hearing-aid battery.

The new cognitive chip architecture is an on-chip two-dimensional mesh network of 4096 digital, distributed neurosynaptic cores, where each core module integrates memory, computation, and communication and operates in an event-driven, parallel, and fault-tolerant fashion. To enable system scaling beyond single-chip boundaries, adjacent chips can be tiled and connected seamlessly, building a foundation for future neurosynaptic supercomputers. To demonstrate scalability, IBM also revealed a 16-chip system with sixteen million programmable neurons and four billion programmable synapses. The SyNAPSE chip is one component of a complete, end-to-end, vertically integrated ecosystem spanning a chip simulator, neuroscience data, supercomputing, neuron specification, programming paradigm, algorithms and applications, and prototype design models. The ecosystem supports all aspects of the programming cycle, from design through development, debugging, and deployment. Interested in getting involved? Click Here.
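The headline figures decompose neatly if each core pairs 256 input axons with 256 neurons through a 256-by-256 crossbar of configurable synapses, which is how this chip family has been described elsewhere (an assumption, not stated in the announcement above). The short Python sketch below works through that arithmetic and the quoted power figures; treat it as back-of-the-envelope only, since the operations-per-watt and milliwatt numbers were measured on particular benchmark networks.

```python
# Back-of-the-envelope breakdown of the announced figures.
# Assumption (not stated above): each core pairs 256 input axons with
# 256 neurons through a 256 x 256 crossbar of configurable synapses.

cores = 4096
neurons_per_core = 256
axons_per_core = 256

total_neurons = cores * neurons_per_core               # 1,048,576  (~1 million)
synapses_per_core = axons_per_core * neurons_per_core  # 65,536 crossbar points
total_synapses = cores * synapses_per_core             # 268,435,456 (~256 million, binary)

# Rough throughput implied by the quoted efficiency and power figures.
sops_per_watt = 46e9      # synaptic operations per second per watt
chip_power_watts = 0.070  # ~70 mW
approx_sops = sops_per_watt * chip_power_watts         # ~3.2e9 synaptic ops/s

print(f"{total_neurons:,} neurons, {total_synapses:,} synapses")
print(f"~{approx_sops:.2e} synaptic operations per second at 70 mW")
```

The same arithmetic scales to the 16-chip system mentioned above: 16 x 1,048,576 neurons is roughly sixteen million, and 16 x 268 million synapses is roughly four billion.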
Summary from the Science Magazine article “The Brain Chip”
All computer chips made today rely on the same general architecture that was outlined nearly 70 years ago. This architecture separates the two primary tasks a chip needs to carry out, processing and memory, into different regions and continuously shuttles data back and forth between them. Though this strategy works well for crunching numbers and running spreadsheets, it is much less efficient for handling tasks that involve vast amounts of data, such as vision and language processing. In recent years, however, researchers around the globe have been pursuing a new approach called neuromorphic computing. On page 668 of this issue, researchers at IBM and Cornell University report creating the world’s first production-scale neuromorphic computing chip. The chip is made up of 5.4 billion transistors wired to emulate a brain with 1 million “neurons” that talk to one another via 256 million “synapses.” These chips could revolutionize efforts in everything from helping computers and robots sense their environment to offering new tools to help blind people navigate their surroundings.
Abstract from the Science report “A million spiking-neuron integrated circuit with a scalable communication network and interface”
Inspired by the brain’s structure, we have developed an efficient, scalable, and flexible non–von Neumann architecture that leverages contemporary silicon technology. To demonstrate, we built a 5.4-billion-transistor chip with 4096 neurosynaptic cores interconnected via an intrachip network that integrates 1 million programmable spiking neurons and 256 million configurable synapses. Chips can be tiled in two dimensions via an interchip communication interface, seamlessly scaling the architecture to a cortexlike sheet of arbitrary size. The architecture is well suited to many applications that use complex neural networks in real time, for example, multiobject detection and classification. With 400-pixel-by-240-pixel video input at 30 frames per second, the chip consumes 63 milliwatts.
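The abstract's event-driven neurosynaptic cores can be pictured as spikes arriving on a core's input axons, fanning out through a crossbar of configurable synapses onto spiking neurons, with any output spikes forwarded over the intrachip mesh (or the interchip interface) to other cores. The Python sketch below is a deliberately simplified illustration of that flow, using a plain leaky integrate-and-fire neuron, a random binary crossbar, and hypothetical class and parameter names; it is not the SyNAPSE chip's actual neuron model or programming interface.

```python
import numpy as np


class ToyNeurosynapticCore:
    """A toy event-driven core: 256 input axons feeding 256 leaky
    integrate-and-fire neurons through a binary crossbar.

    Illustrative only; not the SyNAPSE chip's neuron model or API.
    """

    def __init__(self, n_axons=256, n_neurons=256, threshold=4.0,
                 leak=0.1, connectivity=0.05, seed=0):
        rng = np.random.default_rng(seed)
        # crossbar[a, n] == 1 means a spike on axon a drives neuron n.
        self.crossbar = (rng.random((n_axons, n_neurons)) < connectivity
                         ).astype(np.float32)
        # One configurable weight per neuron (a simplification).
        self.weights = rng.uniform(0.5, 1.5, n_neurons).astype(np.float32)
        self.potential = np.zeros(n_neurons, dtype=np.float32)
        self.threshold = threshold
        self.leak = leak

    def tick(self, active_axons):
        """Advance one time step given the indices of axons that spiked.

        Returns the indices of neurons that fired this step; in a tiled,
        multi-core system these would be forwarded as address events over
        the two-dimensional mesh to their target cores.
        """
        if active_axons:
            # Event-driven update: only rows touched by incoming spikes
            # contribute to the membrane potentials.
            drive = self.crossbar[active_axons].sum(axis=0) * self.weights
            self.potential += drive
        self.potential = np.maximum(self.potential - self.leak, 0.0)
        fired = np.flatnonzero(self.potential >= self.threshold)
        self.potential[fired] = 0.0  # reset fired neurons
        return fired.tolist()


if __name__ == "__main__":
    core = ToyNeurosynapticCore()
    rng = np.random.default_rng(1)
    for t in range(10):
        incoming = rng.choice(256, size=8, replace=False).tolist()
        print(f"t={t}: {len(core.tick(incoming))} neurons fired")
```

The point of the structure is that the synaptic work per tick scales with the number of incoming spike events rather than with all 65,536 crossbar points, which is the intuition behind the event-driven, low-power claims above.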
Videos:
- TED talk on Cognitive Computing http://youtu.be/np1sJ08Q7lw
- The future of cognitive computing http://youtu.be/Q3e4q2wTOOQ
- Engadget video of visual applications http://youtu.be/8a3Bv66O9Eo
- A collection of five IBM SyNAPSE videos http://youtu.be/BnTUOEwOKYA?list=PL87B295BB0978CC97

Comparative and alternative projects:
- Carver Mead’s original artificial retina (Scientific American article)
- Deep-learning Teaching Code Achieves 13 PF/s on the ORNL Titan Supercomputer
- Click here for more TechEnablement machine-learning articles and tutorials!
- Click here for more TechEnablement synapse articles!