Advances in the field of AI are primarily based on the use of neural networks. Computing that models the structure of the biological brain is known as neuromorphic computing. Dr. Markus Eppel, a group manager at Fraunhofer IIS, describes neural networks in microchips as “AI molded into hardware.”
Why should neuromorphic hardware be built into microchips?
Brains are neural networks. Biological brains, such as those of bees or humans, are wonders of nature that differ in complexity but are highly specialized in pattern recognition. Using classical computer architectures to perform pattern recognition – for instance, to recognize a person in a photo – consumes inordinate amounts of time and energy. By comparison, a bee orients itself and moves through space by reacting to light signals within just a few milliseconds, using just a few microwatts of “brain power.” To achieve the same feat, an artificial flying system such as a drone needs many millions of times as much power and operates far more slowly.
Dr. Eppel explains: “By mimicking very simple neural networks, we can achieve much more efficient and rapid pattern recognition than with conventional computer architecture. We’re nowhere near the complexity of a bee’s brain, and yet if we follow the right examples, we can achieve huge advances. These circuits are known as neuromorphic hardware, or inference accelerators, because they mimic the morphology of neural networks and can ‘infer’ answers from input data much more quickly and efficiently than conventional hardware can.”
Mixed-signal circuits – which combine analog and digital computing – can deliver even faster and more energy-efficient predictions. They pair the high performance and efficiency of analog circuit technology with the flexibility and accuracy of digital computing.
What is the advantage of mixed-signal computing?
“Mixed signal” means that analog and digital chip design are combined. One frequently used operation in digital computing – and especially in the modeling of neural networks – is matrix-vector multiplication. But repeated multiplication operations are energy-intensive. An analog circuitry approach allows the necessary addition and multiplication operations to be performed by directly applying Ohm’s law and Kirchhoff’s circuit laws, which is much more efficient. To make this work reliably for pattern recognition, Fraunhofer IIS has developed special hardware-aware training.
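To make the mapping concrete, here is a minimal numerical sketch – in Python with NumPy, not Fraunhofer IIS’s actual design flow – of how a layer’s matrix-vector multiplication corresponds to an analog crossbar: weights become conductances, inputs become voltages, Ohm’s law yields per-cell currents, and Kirchhoff’s current law sums them into the output.

```python
import numpy as np

# A single neural-network layer reduces to a matrix-vector multiplication:
# outputs = W @ x. On a digital chip this costs one multiply-accumulate per
# weight; in an analog crossbar the same result falls out of circuit laws.

rng = np.random.default_rng(0)
W = rng.uniform(0.0, 1.0, size=(4, 8))   # weights, viewed as conductances G (siemens)
x = rng.uniform(0.0, 1.0, size=8)        # input activations, viewed as voltages V (volts)

# Digital reference: explicit multiply-accumulate.
y_digital = W @ x

# Analog view of the same operation:
#   Ohm's law:        each crossbar cell passes a current I_ij = G_ij * V_j
#   Kirchhoff's law:  currents flowing into a shared output line add up,
#                     so the line current is I_i = sum_j G_ij * V_j
I_cells = W * x                 # per-cell currents from Ohm's law
y_analog = I_cells.sum(axis=1)  # per-line currents from Kirchhoff's current law

print(np.allclose(y_digital, y_analog))  # True: both views compute the same product
```

In real analog hardware, negative weights, noise, and limited conductance precision complicate this idealized picture – one reason why hardware-aware training is needed.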
This expertise in mixed-signal circuit design, combined with hardware/software co-design, gives Fraunhofer IIS a broad spectrum of know-how and capabilities – and is why its research work and results are closely attuned to the market.
What might future applications featuring this chip design look like?
The demands placed on neuromorphic hardware have become much more complex in recent years, and the focus is shifting – driven also by the boom in the internet of things – toward self-powered, AI-assisted sensor systems. Up to now, sensor data has mostly been sent to the cloud and processed there. Hyper-efficient neuromorphic hardware such as ADELIA opens up new options: by making it possible to process data locally on the device, it drastically reduces data traffic and radically cuts latency.