The promise of analog AI fuels a neural network hardware pipeline


Some of the best circuits for driving AI in the future may be analog rather than digital, and research teams around the world are increasingly developing new devices to support such analog AI.

The most basic computation in the deep neural networks driving the current explosion of AI is the multiply-accumulate (MAC) operation. Deep neural networks are made up of layers of artificial neurons, and in MAC operations, the output of each of these layers is multiplied by the strengths, or “weights,” of its connections to the next layer, which then sums those contributions.
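A minimal sketch of what a MAC operation looks like in software may help. The layer sizes and values below are illustrative, not taken from any particular chip or paper:

```python
import numpy as np

# One neural-network layer as explicit multiply-accumulate (MAC) chains:
# each output neuron's value is a weighted sum of the layer's inputs.

rng = np.random.default_rng(0)
inputs = rng.random(4)          # activations arriving from the previous layer
weights = rng.random((3, 4))    # connection strengths to 3 output neurons

outputs = np.zeros(3)
for j in range(3):              # one MAC chain per output neuron
    acc = 0.0
    for i in range(4):
        acc += weights[j, i] * inputs[i]   # multiply, then accumulate
    outputs[j] = acc

# The same computation as a single matrix-vector product, which is how
# digital hardware typically batches MAC operations:
assert np.allclose(outputs, weights @ inputs)
```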

Modern computers have digital components dedicated to MAC operations, but analog circuits can theoretically perform these calculations for orders of magnitude less power. This strategy, known as analog AI, compute-in-memory, or processing-in-memory, often performs these multiply-accumulate operations using non-volatile memory devices such as flash memory, magnetoresistive RAM (MRAM), resistive RAM (RRAM), phase-change memory (PCM), and even more esoteric technologies.
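The physics behind this is standard: store each weight as a cell's conductance, apply inputs as voltages, and Ohm's law does the multiplications while Kirchhoff's current law does the accumulation on each shared column wire. The sketch below simulates that picture; the array size and the conductance and voltage values are illustrative:

```python
import numpy as np

# Simulated analog crossbar MAC: weights live in memory as conductances G
# (siemens), inputs arrive as row voltages V. Each cell passes current
# I = G * V (Ohm's law), and currents on a shared column wire sum
# automatically (Kirchhoff's current law), so each column current *is*
# the accumulated dot product, with no explicit multipliers or adders.

rng = np.random.default_rng(1)
G = rng.uniform(1e-6, 1e-5, size=(4, 3))  # conductance of each memory cell
V = rng.uniform(0.0, 0.2, size=4)         # input voltages on the 4 rows

I_columns = V @ G   # shape (3,): one accumulated current per column

print(I_columns)    # each entry equals sum_i V[i] * G[i, j]
```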

A team in Korea, however, is exploring neural networks based on praseodymium calcium manganese oxide electrochemical RAM (ECRAM) devices, which act like miniature batteries, storing data in the form of changes in their conductance. Lead author of the study Chuljun Lee, of Pohang University of Science and Technology in Korea, notes that neural network hardware often has different requirements during training and during applications. For example, low energy barriers help neural networks learn quickly, but high energy barriers help them retain what they have learned for use in applications.

“Heating their devices to almost 100 °C during training brought out the characteristics that are good for training,” explains electrical engineer John Paul Strachan, head of the Peter Grünberg Institute for Neuromorphic Compute Nodes at the Jülich Research Center in Germany, who was not involved in this study. “When it cooled down, they got the benefits of longer retention and lower-current operation. By simply adjusting a knob, the heat, they could see improvements across several dimensions of computing.” The researchers detailed their findings at the annual IEEE International Electron Devices Meeting (IEDM) in San Francisco on December 14.
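A rough way to see why temperature acts as such a knob is the standard Arrhenius estimate for barrier-limited retention. This is a back-of-the-envelope illustration, not a calculation from the paper, and the 0.6 eV barrier is an assumed value:

```python
import math

# Arrhenius estimate: retention time tau ~ tau0 * exp(E_b / (k_B * T)).
# Raising T makes states easier to switch (good for training); lowering T
# makes them persist longer (good for inference). E_b is assumed here.

K_B = 8.617e-5   # Boltzmann constant, eV/K
E_B = 0.6        # assumed energy barrier, eV (illustrative only)

def relative_retention(temp_kelvin: float) -> float:
    """exp(E_b / kT), proportional to retention time for a fixed tau0."""
    return math.exp(E_B / (K_B * temp_kelvin))

hot = relative_retention(373.0)    # ~100 degrees C, the training condition
cold = relative_retention(300.0)   # room temperature, the inference condition

# Cooling from ~100 C to room temperature lengthens retention ~94x here,
# nearly two orders of magnitude, for this assumed barrier.
print(f"retention ratio (cold/hot): {cold / hot:.0f}")
```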

A key question facing this work is what kind of degradation these ECRAM devices may experience after many heating and cooling cycles, Strachan notes. Still, “it was a very creative idea, and their work is a proof of concept that there might be some potential with this approach.”

Another group investigated ferroelectric field-effect transistors (FeFETs). Lead author of the study Khandker Akif Aabrar, of the University of Notre Dame, explained that FeFETs store data in the form of electric polarization within each transistor.

One challenge FeFETs face is whether they can still display the analog behavior valuable to AI applications when scaled down, or whether they will abruptly switch to a binary mode in which they store only a single bit of information, with the polarization entirely in one state or the other.

“The strength of this team’s work lies in its understanding of the materials involved,” explains Strachan, who was not involved in this research. “A ferroelectric material can be thought of as a block made up of many small domains, just as a ferromagnet can be thought of as made up of up and down domains. For the analog behavior they want, they want all of those domains to slowly align up or down in response to an applied electric field, and not get a runaway process where they all go up or down at the same time. So they physically broke up their ferroelectric layer into a superlattice structure with multiple dielectric layers to suppress this runaway process.”
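A toy simulation can show why many small, independently switching domains give analog behavior while a single domain gives binary behavior. This is my own illustration of the general principle, not the Notre Dame team's device physics, and the flip probability is an assumed parameter:

```python
import numpy as np

# Toy model: each identical voltage pulse flips each remaining "down"
# domain with some probability. The stored state is the fraction of
# domains pointing up, i.e., the net polarization in [0, 1].

rng = np.random.default_rng(2)

def apply_pulses(n_domains: int, n_pulses: int, p_flip: float = 0.3):
    up = np.zeros(n_domains, dtype=bool)      # all domains start "down"
    states = []
    for _ in range(n_pulses):
        up |= rng.random(n_domains) < p_flip  # each pulse flips some domains
        states.append(up.mean())              # record net polarization
    return states

print(apply_pulses(n_domains=1, n_pulses=8))     # binary: jumps from 0 to 1
print(apply_pulses(n_domains=1000, n_pulses=8))  # analog: smooth ramp up
```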

The system achieved an online learning accuracy of 94.1 percent, which compares well with other FeFET and RRAM technologies, results the scientists detailed on December 14 at the IEDM conference. Strachan notes that future research may seek to optimize properties such as current levels.

A new electronic chip from scientists in Japan and Taiwan was made using c-axis-aligned crystalline indium gallium zinc oxide. Study co-author Satoru Ohshita, of Semiconductor Energy Laboratory Co. in Japan, notes that their oxide-semiconductor field-effect transistors (OSFETs) displayed very low-current operation of less than 1 nanoampere per cell and an operating efficiency of 143.9 trillion operations per second per watt, the best reported to date for analog AI chips, results detailed on December 14 at the IEDM conference.

“These are extremely low-current devices,” says Strachan. “Since the currents needed are so low, you can make the circuit blocks bigger: they get arrays of 512 by 512 memory cells, whereas typical numbers for RRAM are more like 100 by 100. That is a big advantage in the number of weights they can store.”

When OSFETs are combined with capacitors, they can retain information with over 90 percent accuracy for 30 hours. “That could be long enough to transfer this information to a less volatile technology; tens of hours of retention is not a deal-breaker,” Strachan says.

Overall, “these new technologies that researchers are exploring are all proof-of-concept cases that raise new questions about the challenges they might face in the future,” Strachan says. “They also show a path to the foundry, which they need for high-volume, low-cost commercial products.”
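As a footnote to the efficiency figure quoted above, inverting 143.9 trillion operations per second per watt gives the energy budget per operation, a quick arithmetic check rather than a number from the paper:

```python
# TOPS/W is operations per joule, so its inverse is joules per operation.
tops_per_watt = 143.9e12
energy_per_op_joules = 1.0 / tops_per_watt
print(f"{energy_per_op_joules * 1e15:.1f} fJ per operation")  # ~7 fJ
```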
