Where analog systems excel is in speed and efficiency. Because they don't need to break calculations down into long strings of binary code — instead representing them as physical operations on the chip's circuitry — analog chips can handle large volumes of information simultaneously while using far less energy.
This is particularly significant in data- and energy-intensive applications like AI, where digital processors are limited by how much information they can process sequentially, and in future 6G communications, where networks will have to process huge volumes of overlapping wireless signals in real time.
The researchers said that recent advances in memory hardware could make analog computing viable once again. The team configured the chip's RRAM cells into two circuits: one that produced a fast but approximate calculation, and a second that refined the result over successive iterations until it converged on a more precise answer.
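The article does not specify the exact calculation the chip performs, but the two-circuit design maps onto the classic idea of iterative refinement. The sketch below is a hypothetical software illustration, not the team's actual method: it assumes the task is solving a linear system A x = b, and a deliberately noisy `approximate_solve` function stands in for the fast but imprecise analog pass, while a correction loop plays the role of the second, refining circuit.

```python
import numpy as np

def approximate_solve(A, b, rel_noise=0.05, rng=None):
    """Stand-in for the fast analog circuit: returns a quick but
    imprecise solution to A x = b by perturbing an exact solve."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.linalg.solve(A, b)
    # Analog hardware is fast but noisy; model that with a small relative error.
    return x * (1.0 + rel_noise * rng.standard_normal(x.shape))

def refined_solve(A, b, tol=1e-10, max_iters=50):
    """Iterative refinement: start from the rough 'analog' answer, then
    repeatedly correct it using the residual until it converges."""
    rng = np.random.default_rng(0)
    x = approximate_solve(A, b, rng=rng)            # fast, approximate first pass
    for _ in range(max_iters):
        r = b - A @ x                               # how far off the current answer is
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break                                   # precise enough, stop refining
        x = x + approximate_solve(A, r, rng=rng)    # correct using another rough pass
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((8, 8)) + 8 * np.eye(8)  # well-conditioned test matrix
    b = rng.standard_normal(8)
    x = refined_solve(A, b)
    print("residual norm:", np.linalg.norm(b - A @ x))
```

Because each rough pass only needs to fix the remaining error rather than nail the answer outright, a few cheap iterations can recover digital-grade precision from imprecise hardware, which is the trade-off the two-circuit layout exploits.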
Configuring the chip in this way allowed the team to combine the speed of analog computation with the accuracy normally associated with digital processing. Crucially, the chip was manufactured using a commercial production process, meaning it could potentially be mass-produced.
Future improvements to the chip's circuitry could boost its performance even more, the researchers said. Their next goal is to build larger, fully integrated chips capable of handling more complex problems at faster speeds.