Driving the efficiency of artificial intelligence

February 20, 2025 | Neural architecture search – a search engine for energy-efficient AI

A problem with many AI networks is that they’re good — really good — but sometimes a little big and overly precise. That means a lot of energy, memory, and computing time are spent on complex calculations, even though simpler models could solve the task at hand more efficiently. But how do you turn an overly complicated neural network into a compact artificial intelligence (AI) that also runs reliably on small devices, for example, in a headset for sleep monitoring? This is where neural architecture search comes into play.

Designing neural networks is not unlike drawing up complicated architectural plans: it requires patience. A saving in one area increases complexity in another. An improvement there leads to ineffective use of space elsewhere. Coming up with the optimum design means making a lot of manual adjustments, and the simplest solution is easily overlooked. However, there are ways to automate this process using neural architecture search, or NAS for short. This greatly simplifies and accelerates the design process, which lets companies save money and resources in development and bring products to market faster.

NAS puts the search for the ideal neural network in the hands of computers. Within a defined search space, they apply their computing power and a well-configured search strategy to identify far more quickly which variant of a neural network offers the best performance, is the most energy-efficient, or requires particularly little memory — in each case for a specific purpose, such as object, speech, or pattern recognition in a particular healthcare or industrial application.

Tailored to the hardware: AI optimization for the edge


Certain AI applications deliver their full benefits only when neural networks are processed directly within the product. That way, the data doesn’t have to migrate to the cloud, but instead is processed in real time in the devices at the edge. Because the AI has to operate on these devices’ chips or microcontrollers, it faces certain limitations, so the aim is to design a network that achieves the best results given the characteristics of the hardware. To do this, Michael Rothe’s team in the Communication Systems division is developing new procedures and tools to make NAS particularly hardware-aware.

This development work is part of MANOLO, a project funded by the European Union to improve the efficiency and performance of AI systems. One of the NAS use cases that Fraunhofer IIS is currently testing with project partner Bitbrain under the auspices of MANOLO is sleep monitoring using a headset that measures brain waves. The headset can be used both in the sleep lab and by end users at home. Since this calls for small, energy-efficient hardware, the team decided to use the analog AI accelerator chip ADELIA. The neural network must above all be adapted to the chip’s limited data width and memory size. Because ADELIA processes signals in analog form, the network must also learn during training to cope with the calculation inaccuracies caused by analog noise.
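The two adaptations described above — fitting the network's weights to a limited data width, and making it robust to analog noise by exposing it to similar noise during training — can be sketched in a few lines. The bit width, noise level, and helper names below are invented for illustration and do not describe the actual ADELIA toolchain:

```python
import random

random.seed(0)  # fixed seed so the noisy example is reproducible

def quantize(weights, bits=4):
    """Round each weight to a signed fixed-point grid of the given bit width.
    A 4-bit width leaves only 2**(4-1) - 1 = 7 levels on each side of zero."""
    scale = 2 ** (bits - 1) - 1
    m = max(abs(w) for w in weights) or 1.0  # avoid division by zero
    return [round(w / m * scale) / scale * m for w in weights]

def noisy_output(inputs, weights, noise_std=0.05):
    """Dot product with additive Gaussian noise, mimicking analog calculation
    error. Training against this forward pass teaches the network to tolerate
    the same kind of inaccuracy it will meet on the chip."""
    clean = sum(x * w for x, w in zip(inputs, quantize(weights)))
    return clean + random.gauss(0.0, noise_std)
```

During training, every forward pass would go through a function like `noisy_output`, so the learned weights end up insensitive to small analog perturbations.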

© Photo courtesy of Bitbrain

The search for the perfect neural network for perfect sleep


This is how sleep monitoring with Bitbrain’s EEG headband has worked to date: the sensors integrated into the headband record the brain waves of sleeping people and use Bluetooth to transmit the signals to a nearby computer, where software uses a neural network to evaluate the signals and determine the duration and type of the different sleep phases. Because the data has to be transmitted, there is a delay in analyzing it. But recent developments in neurotechnology aim to stimulate the brain during sleep to improve certain cognitive functions such as memory. This involves playing certain sounds to sleepers at the right time. For the headset to recognize the sleep phases in real time and then play sounds, the AI must be integrated into the headset. This allows the device to expand its range of functions from simply monitoring sleep to achieving a targeted improvement in sleep quality. With this in mind, NAS is being used to make the current neural network more compact and energy-efficient. That way, the AI calculations can be processed directly in the device. 


AI optimization with NAS is based not only on the specific parameters of the hardware but also on the target variables that are decisive for the planned application. Optimization becomes particularly complex when there are many targets that must be met simultaneously. In the case of the EEG headset, the key considerations are its memory requirements, energy consumption, latency, and accuracy: Having run on a computer until now, the neural network needs to be adapted to the limited computing resources of the edge hardware. To this end, it is highly compressed in order to keep the memory requirement low. At the same time, it needs to have low energy consumption because the headset’s battery has to last all night. Low latency is also important to make sure the auditory stimuli are played at the right time. And the calculations must be reliable so that the headset recognizes the sleep phases correctly.

A search engine for neural networks


The trick is to adapt the neural network to make it as simple as possible and yet still capable of delivering powerful results. The NAS toolset developed is geared toward this multi-objective optimization. First, it generates various modifications of the network. Next, it calculates their performance in advance in terms of memory requirements, energy consumption, latency, and accuracy. On this basis, it then selects the network that best meets the requirements. In effect, the researchers are developing a kind of search engine for neural networks. Ultimately, this is how it should work: the user enters the type of neural network required, defines the hardware conditions under which it is to run, and determines the criteria according to which it is to be optimized. The result is an appropriately adapted network that is ideally tailored to the hardware.
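The generate–predict–select loop described above can be illustrated with a toy search over a tiny space of network shapes. The search space, the proxy cost formulas, and the memory budget below are all invented stand-ins; real NAS tools estimate these quantities for the specific target hardware:

```python
import itertools

# Hypothetical search space: how many layers, how wide each layer is.
SEARCH_SPACE = {
    "layers": [2, 4, 8],
    "width": [16, 32, 64],
}

def predict(cfg):
    """Stand-in cost model: crude proxies for memory and accuracy.
    A real NAS toolset would use hardware-aware estimators here."""
    layers, width = cfg["layers"], cfg["width"]
    memory = layers * width * width      # rough parameter count
    accuracy = 1 - 1 / (layers * width)  # bigger nets score higher here
    return memory, accuracy

def search(max_memory):
    """Enumerate every variant, keep those within budget, return the best."""
    best_cfg, best_acc = None, -1.0
    for layers, width in itertools.product(*SEARCH_SPACE.values()):
        cfg = {"layers": layers, "width": width}
        memory, acc = predict(cfg)
        if memory <= max_memory and acc > best_acc:
            best_cfg, best_acc = cfg, acc
    return best_cfg

print(search(max_memory=5000))  # -> {'layers': 4, 'width': 32}
```

A tighter memory budget pushes the search toward smaller networks — `search(max_memory=600)` returns the 2-layer, 16-wide variant — which is exactly the compression effect the project relies on for the headset.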

Once it has been set up, the toolset can be adjusted to work for other healthcare and industrial applications and their specific requirements. This means edge devices with integrated AI will be ready to go sooner. At the same time, NAS can specifically reduce their energy consumption, which helps make the development of AI more sustainable.
