Scientists at IBM Research have created by far the most advanced neuromorphic (brain-like) computer chip to date. The chip, called TrueNorth, consists of 1 million programmable neurons and 256 million programmable synapses across 4096 individual neurosynaptic cores. Built on Samsung’s 28nm process and with a monstrous transistor count of 5.4 billion, this is one of the largest and most advanced computer chips ever made. Perhaps most importantly, though, TrueNorth is incredibly efficient: The chip consumes just 72 milliwatts at max load, which equates to around 400 billion synaptic operations per second per watt — or about 176,000 times more efficient than a modern CPU running the same brain-like workload, or 769 times more efficient than other state-of-the-art neuromorphic approaches. Yes, IBM is now a big step closer to building a brain on a chip.
The animal brain (which includes the human brain, of course), as you may have heard before, is by far the most efficient computer in the known universe. As you can see in the graph below, the human brain has a “clock speed” (neuron firing rate) measured in tens of hertz, and a total power consumption of around 20 watts. A modern silicon chip, despite having features that are almost on the same tiny scale as biological neurons and synapses, can consume thousands or even millions of times more energy to perform the same task as a human brain. As we move towards more advanced areas of computing, such as artificial general intelligence and big data analysis (areas that IBM just happens to be deeply involved with), it would really help if we had a silicon chip capable of brain-like efficiency.
Enter TrueNorth, the culmination of the six-year-old SyNAPSE project at IBM Research. The work, which has been partly funded by DARPA since 2008, resulted in a prototype chip with just 256 neurons in 2011, and the Corelet programming language in 2013. This new chip is a second-generation version of the 2011 prototype, based on a new process (Samsung 28nm vs. IBM 45nm) and is orders of magnitude more complex, functional, and efficient. TrueNorth is implemented in standard CMOS transistors, just like the CPU in your PC — but that’s where the similarities end.
Each TrueNorth chip consists of 4096 neurosynaptic cores arranged in a 64×64 grid. Each core is self-contained, with 256 inputs (axons), 256 outputs (neurons), a big bank of SRAM (which stores the data for each neuron), and a router that allows any neuron to transmit to any axon up to 255 cores away. Information flows across TrueNorth by way of neural spikes, from axons to neurons, modulated by the programmable synapses between them. This architecture is fundamentally based on Cornell Tech’s original work on asynchronous circuit design, which IBM has been refining since 2008. You would definitely call this a non-von Neumann chip design.
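To make the core layout concrete, here's a minimal sketch of a single neurosynaptic core modeled as a 256×256 binary crossbar with a simple integrate-and-fire update. The threshold and random connectivity are illustrative assumptions for this sketch, not TrueNorth's actual parameters (real TrueNorth neurons implement a richer, configurable leaky integrate-and-fire model):

```python
import numpy as np

rng = np.random.default_rng(0)

AXONS, NEURONS = 256, 256      # inputs and outputs per core
THRESHOLD = 4                  # illustrative firing threshold (assumption)

# Binary crossbar: synapses[i, j] = 1 if axon i connects to neuron j.
synapses = rng.integers(0, 2, size=(AXONS, NEURONS))
# Membrane potential per neuron; on the chip this state lives in the core's SRAM.
potentials = np.zeros(NEURONS)

def tick(spikes_in):
    """One discrete time step: integrate incoming spikes, fire, reset."""
    global potentials
    potentials += spikes_in @ synapses   # each spiking axon adds to its targets
    fired = potentials >= THRESHOLD      # neurons at/above threshold spike
    potentials[fired] = 0                # fired neurons reset
    return fired.astype(int)             # output spikes, routed to other cores

spikes = rng.integers(0, 2, size=AXONS)  # a random input spike pattern
out = tick(spikes)
print(out.sum(), "of", NEURONS, "neurons fired")
```

The event-driven nature of the real chip (cores only do work when spikes arrive) is a big part of where the power savings come from; this dense-matrix sketch only captures the logical behavior, not the efficiency.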
With 256×256 (65,536) configurable synapses per core arranged in a crossbar array, and a 2D mesh network providing interconnectivity between the 4096 cores, we’re probably talking about the most massively parallel chip ever made — which is fitting, considering parallelism is one of the reasons animal brains are so effective. Oh, and did I mention that the TrueNorth chip itself can also be used in a symmetric multiprocessor (SMP) setup? IBM has already built a 16-chip system with 16 million neurons and 4 billion synapses.
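The "up to 255 cores away" routing works by relative addressing on the 2D mesh: a spike packet carries x/y offsets to the destination core plus a target axon index. The exact packet format below is an assumption for illustration:

```python
GRID = 64  # TrueNorth's cores sit on a 64x64 grid

def route(src, packet):
    """Deliver a spike packet via relative (dx, dy) hops on the 2D mesh.

    packet = (dx, dy, axon): core offsets (up to +/-255 in hardware) plus
    the target axon index (0-255) on the destination core. Hypothetical
    format for illustration.
    """
    dx, dy, axon = packet
    x, y = src
    dest = (x + dx, y + dy)
    # A destination off the grid would leave the chip via edge I/O links
    # (which is how multi-chip systems are stitched together).
    assert 0 <= dest[0] < GRID and 0 <= dest[1] < GRID, "off-chip"
    return dest, axon

dest, axon = route((10, 20), (5, -3, 200))
print(dest, axon)  # → (15, 17) 200
```

Relative addressing is also what makes the 16-chip scale-out natural: a packet that walks off one chip's edge simply continues onto the neighboring chip's mesh.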
If you’re looking for more details on TrueNorth, an IBM/Cornell Tech research paper is being published by Science today — but at the time of publishing we don’t have the link. We’ll update this story when we have it. If you’re looking for a less technical breakdown of TrueNorth and neuromorphic computing, IBM provided us with a rather nice infographic.
One of the key problems with developing a new chip based on a novel architecture is that you also have to create developer tools and software that actually make efficient use of those thousands of cores and billions of synapses. Fortunately, IBM’s already got that covered: Last year it released a specialized programming language (Corelet) and simulator (Compass) that let you program and test your neuromorphic programs before running them on actual hardware.
So, why should I care about TrueNorth?
Ultimately, the main purpose of the SyNAPSE project is to take existing systems that simulate the functionality of the brain in software, such as deep neural networks, and run them on hardware that was specifically designed for the task. As you may already know, dedicated hardware tends to be orders of magnitude more efficient than simulating/emulating the same functionality in software on a general-purpose CPU. This is why IBM is touting some utterly incredible efficiency figures for TrueNorth. For neural networks with high spike rates and a large number of active synapses, TrueNorth can deliver 400 billion synaptic operations per second (SOPS) per watt. When running the exact same neural network, a general-purpose CPU is 176,000 times less energy efficient, while a state-of-the-art multiprocessor neuromorphic system (48 chips, each with 18 cores) is 769 times less efficient. While it isn't directly comparable, the world's most efficient supercomputer only manages around 4.5 billion FLOPS per watt.
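As a back-of-the-envelope check, the quoted figures hang together: 400 billion SOPS per watt at 72 milliwatts implies a total throughput of roughly 29 billion synaptic operations per second per chip. All inputs here are the numbers quoted above; only the arithmetic is ours:

```python
# Sanity-check the headline numbers (inputs are IBM's quoted figures).
power_w = 0.072           # 72 mW at max load
sops_per_watt = 400e9     # 400 billion synaptic ops/sec per watt

total_sops = power_w * sops_per_watt
print(f"{total_sops:.2e} synaptic ops/sec")  # ~2.88e10, i.e. ~29 billion SOPS

# Implied efficiency of the comparison systems on the same workload:
cpu_sops_per_watt = sops_per_watt / 176_000      # general-purpose CPU
neuro_sops_per_watt = sops_per_watt / 769        # 48-chip neuromorphic system
print(f"CPU: {cpu_sops_per_watt:.2e} SOPS/W")
print(f"Other neuromorphic: {neuro_sops_per_watt:.2e} SOPS/W")
```

That puts the comparison CPU at roughly 2.3 million SOPS per watt, which is why the gap reads as five orders of magnitude.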
At this point, it’s worth noting that TrueNorth is pretty much ready to go for commercial applications. On the data center/supercomputer side of things, IBM already has dozens of big data solutions, such as Watson, that could be dramatically enhanced by TrueNorth. For consumers, the fact that TrueNorth consumes much less power than conventional von Neumann chips could be significant. While TrueNorth isn’t going to run your operating system any time soon, it would make a fantastically efficient coprocessor to handle sensor input, computer vision, AI (self-driving cars), and other emerging spheres in personal/wearable computing.
Neural networks are fantastic things, but historically they have run on hugely inefficient clusters of conventional computers. With TrueNorth’s truly novel architecture, that changes: IBM is now a big step closer to building a brain on a chip, and that could be big news for the future of computing.