
Energy-Conserving Artificial Intelligence Network

Running neural networks for tasks such as visual recognition or autonomous vehicle navigation may soon demand far less energy and computing hardware.


In a groundbreaking development, researchers at the University of California San Diego have created a new artificial neuron device. Led by Professor Shadi Dayeh, the team is working on hardware implementations of energy-efficient artificial neural networks.

The device's unique feature is that heating a nanowire laid over a nanometers-thin layer of vanadium dioxide induces a very gradual change in resistance. Because the material undergoes a Mott transition, a small amount of heat shifts it gradually from an insulating to a conducting state.

The device implements the rectified linear unit, one of the most commonly used activation functions in neural network training. Realising this function in hardware requires a component whose resistance can change gradually, which is exactly what the new device provides.
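In software, the rectified linear unit the hardware reproduces is a one-line function, a minimal Python sketch:

```python
def relu(x):
    """Rectified linear unit: pass positive inputs through, clamp negatives to zero."""
    return max(0.0, x)
```

The gradual insulator-to-conductor switch lets the device approximate this piecewise-linear response directly in analog hardware rather than in digital logic.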

The device's architecture pairs an array of activation (or neuron) devices with a synaptic device array. In a neural network, each layer's output is produced by applying a mathematical calculation called a non-linear activation function to the weighted sums computed by the synapses, and the neuron devices carry out this step efficiently.
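The division of labour between the two arrays can be sketched in software. This is an illustrative model only, not the researchers' code: the `weights` matrix stands in for the synaptic array and the ReLU step for the neuron array.

```python
def layer(inputs, weights):
    """One neural-network layer: synaptic weighted sums, then ReLU activation."""
    # Synaptic array: each row of weights produces one weighted sum of the inputs
    sums = [sum(w * x for w, x in zip(row, inputs)) for row in weights]
    # Activation (neuron) array: apply the non-linear ReLU to each sum
    return [max(0.0, s) for s in sums]
```

For example, `layer([1.0, 2.0], [[1, 0], [0, -1]])` yields one positive output and one clamped to zero, mirroring how the neuron devices gate the synaptic currents.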

By stacking more of these layers together, a more complex system could be created for various applications. This proof of concept currently only includes one synapse layer with one activation layer, but the potential for further development is promising.

The researchers integrated the two arrays on a custom printed circuit board to create a hardware version of a neural network. The integrated hardware system was used to process an image, specifically a picture of Geisel Library at UC San Diego. The experiment demonstrated that the integrated hardware system can perform convolution operations essential for many types of deep neural networks.

The network performed a type of image processing called edge detection, which identifies the outlines or edges of objects in an image. This is a significant step towards the technology's potential use in complex tasks such as facial and object recognition, potentially in self-driving cars.
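Edge detection of this kind amounts to convolving the image with a small kernel that responds at brightness discontinuities and stays near zero on flat regions. A minimal Python sketch, using a standard Laplacian kernel as an assumed example (the article does not specify which kernel the hardware applied):

```python
def convolve2d(img, kernel):
    """Valid-mode 2D convolution of a grayscale image with a small kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            # Sum of elementwise products of the kernel and the image patch
            row.append(sum(kernel[a][b] * img[i + a][j + b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

# Laplacian kernel: zero response on uniform regions, strong response at edges
laplacian = [[0,  1, 0],
             [1, -4, 1],
             [0,  1, 0]]
```

On a uniform patch the output is zero, while a patch containing a vertical edge produces a non-zero response, which is how the outlines of objects like Geisel Library emerge.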

The researchers express interest in further collaboration with the industry to advance this technology. With its ability to run neural network computations using 100 to 1000 times less energy and area than existing CMOS-based hardware, this new device could revolutionise the field of artificial intelligence.
