Imagine a more sustainable future, where cell phones, smartwatches and other wearables don’t have to be shelved or tossed out for a newer model. Instead, they could be upgraded with the latest sensors and processors that would snap onto a device’s internal chip, like LEGO bricks embedded into an existing build. These reconfigurable chips could keep devices up to date while reducing our electronic waste.
MIT engineers have now taken a step towards this modular vision with a LEGO-like design for a stackable and reconfigurable artificial intelligence chip.
The design includes alternating layers of sensing and processing elements, as well as light-emitting diodes (LEDs) that allow the layers of the chip to communicate optically. Other modular chip designs use conventional wiring to relay signals between layers. Such complex connections are difficult, if not impossible, to cut and rewire, making these stackable designs non-reconfigurable.
MIT’s design uses light rather than physical wires to transmit information through the chip. The chip can therefore be reconfigured, with layers that can be swapped or stacked, for example to add new sensors or updated processors.
“You can add as many computational layers and sensors as you want, for example for light, pressure and even smell,” says Jihoon Kang, a postdoctoral fellow at MIT. “We call it a reconfigurable LEGO-like AI chip because it has unlimited extensibility depending on the combination of layers.”
The researchers are eager to apply the design to edge computing devices – autonomous sensors and other electronic devices that operate independently of any central or distributed resources like supercomputers or cloud computing.
“As we enter the era of the sensor network-based Internet of Things, the demand for advanced multi-function computing devices will dramatically increase,” says Jeehwan Kim, associate professor of mechanical engineering at MIT. “Our proposed hardware architecture will provide great versatility for edge computing in the future.”
The team’s results are published today in Nature Electronics. In addition to Kim and Kang, MIT authors include co-first authors Chanyeol Choi, Hyunseok Kim, and Min-Kyu Song, and contributing authors Hanwool Yeon, Celesta Chang, Jun Min Suh, Jiho Shin, Kuangye Lu, Bo-In Park, Yeongin Kim, Han Eol Lee, Doyoon Lee, Subeen Pang, Sang-Hoon Bae, Hyun S. Kum, and Peng Lin, along with collaborators from Harvard University, Tsinghua University, Zhejiang University, and elsewhere.
Lighting the way
The team’s design is currently set up to perform basic image recognition tasks. It does so with a stack of image sensors, LEDs, and processors made from artificial synapses – arrays of memory resistors, or “memristors,” that the team previously developed, which together function like a physical neural network, or “brain-on-a-chip.” Each array can be trained to process and classify signals directly on the chip, without the need for external software or an Internet connection.
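To give a sense of how a memristor array can classify signals directly in hardware, here is a minimal sketch (not the authors’ code, and with made-up conductance values): a crossbar of memristors acts as a conductance matrix G, and applying input voltages V to its rows yields output currents I = G·V by Ohm’s and Kirchhoff’s laws – precisely the multiply-accumulate at the heart of a neural network layer.

```python
import numpy as np

# Illustrative model of a memristor crossbar classifier.
# G holds hypothetical "trained" conductances; in hardware these would be
# programmed resistance states, not software weights.
rng = np.random.default_rng(0)

n_inputs, n_classes = 16, 3                       # e.g. 4x4 pixels, 3 letters
G = rng.uniform(0.0, 1.0, (n_classes, n_inputs))  # conductance matrix (assumed values)

def crossbar_classify(pixels):
    """Return the class whose output line carries the largest current."""
    currents = G @ pixels        # analog multiply-accumulate, I = G.V
    return int(np.argmax(currents))

image = rng.uniform(0.0, 1.0, n_inputs)  # input voltages encode pixel brightness
print(crossbar_classify(image))
```

The point of the sketch is that classification happens as a single physical measurement of currents, which is why no external software is needed.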
In their new chip design, the researchers paired image sensors with arrays of artificial synapses, each of which was trained to recognize a certain letter – in this case, M, I, and T. While a conventional approach would relay a sensor’s signals to a processor via physical wires, the team instead fabricated an optical system between each sensor and artificial synapse array to enable communication between the layers, without requiring a physical connection.
“Other chips are physically wired through metal, which makes them difficult to rewire and redesign, so you’ll have to create a new chip if you want to add a new function,” says MIT postdoc Hyunseok Kim. “We replaced that physical wired connection with an optical communication system, which gives us the freedom to stack and add chips as we wish.”
The team’s optical communication system consists of paired photodetectors and LEDs, each patterned with tiny pixels. The photodetectors make up an image sensor for receiving data, and the LEDs transmit data to the next layer. When a signal (for instance, the image of a letter) reaches the image sensor, the image’s light pattern encodes a certain configuration of LED pixels, which in turn stimulates another layer of photodetectors, along with an artificial synapse array, which classifies the signal based on the pattern and strength of the incoming LED light.
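The layer-to-layer optical link described above can be sketched as follows. This is a hedged toy model, not the paper’s physics: the function names, responsivity, and noise figures are all assumptions chosen for illustration. The key idea it captures is that data crosses the stack as light intensities rather than over wires.

```python
import numpy as np

rng = np.random.default_rng(1)

def led_transmit(data, max_drive=1.0):
    """Map normalized data values onto LED drive intensities."""
    return np.clip(data, 0.0, max_drive)

def photodetector_receive(light, responsivity=0.9, noise_sigma=0.01):
    """Convert incident light to photocurrent, with small detector noise."""
    return responsivity * light + rng.normal(0.0, noise_sigma, light.shape)

pattern = np.array([0.0, 1.0, 1.0, 0.0, 1.0, 0.0])   # e.g. one row of a letter image
received = photodetector_receive(led_transmit(pattern))
print(np.round(received, 2))
```

Because the "wiring" is just this emit-and-detect handoff, any layer that speaks the same optical protocol can be dropped into the stack.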
The team fabricated a single chip with a computing core measuring about 4 square millimeters, or about the size of a piece of confetti. The chip is stacked with three image recognition “blocks,” each comprising an image sensor, an optical communication layer, and an artificial synapse array trained to classify one of three letters: M, I, or T. They then projected a pixelated image of random letters onto the chip and measured the electrical current that each neural network array produced in response. (The larger the current, the greater the chance that the image is the letter that the particular array was trained to recognize.)
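The readout scheme in that experiment – each letter block produces a current, and the largest current wins – can be summarized in a few lines. This is an illustrative sketch, not the authors’ measurement code; the weights and image values are invented placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
letters = ["M", "I", "T"]

def measure_currents(image, blocks):
    """Return one output current per recognition block for a projected image."""
    return [float(weights @ image) for weights in blocks]

# Hypothetical trained responses, one weight vector per letter block.
blocks = [rng.uniform(0, 1, 25) for _ in letters]
image = rng.uniform(0, 1, 25)            # a 5x5 pixelated letter, flattened

currents = measure_currents(image, blocks)
print(letters[int(np.argmax(currents))])
```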
The team found that the chip correctly classified clear images of each letter but was less able to distinguish between blurry images, for instance between I and T. However, the researchers were able to quickly swap out the chip’s processing layer for a better “denoising” processor, and found that the chip then accurately identified the images.
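That swap is the payoff of wire-free layers: because stages communicate only through light, a processing stage behaves like a pluggable component. The sketch below models this in software, with a simple mean filter standing in for the upgraded denoiser (the filter choice is an assumption for demonstration, not the paper’s processor).

```python
import numpy as np

def identity_layer(image):
    """Original processing stage: passes the signal through unchanged."""
    return image

def denoise_layer(image, k=3):
    """Upgraded stage: a sliding mean filter over a flattened image."""
    kernel = np.ones(k) / k
    return np.convolve(image, kernel, mode="same")

stack = [identity_layer]   # the chip's original processing stack
stack[0] = denoise_layer   # "swap in" the better denoising layer

noisy = np.array([0, 1, 0, 1, 1, 1, 0, 1, 0], dtype=float)
cleaned = stack[0](noisy)
print(np.round(cleaned, 2))
```

Nothing else in the stack changes when the layer is replaced, which mirrors the reconfigurability the researchers demonstrated in hardware.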
“We showed stackability, replaceability, and the ability to insert new function into the chip,” notes Min-Kyu Song, a postdoc at MIT.
The researchers plan to add more sensing and processing capabilities to the chip, and they envision limitless applications.
“We can add layers to a cellphone’s camera so it can recognize more complex images, or make them into health care monitors that can be embedded in wearable electronic skin,” suggests Choi, who along with Kim previously developed a “smart” skin for monitoring vital signs.
Another idea, he adds, involves modular chips built into electronics that consumers can choose to build up with the latest sensor and processor “bricks.”
“We can make a general chip platform, and each layer could be sold separately like a video game,” says Jeehwan Kim. “We could make different types of neural networks, like for image or voice recognition, and let the customer choose what they want, and add to an existing chip like a LEGO.”
This research was funded, in part, by the Ministry of Trade, Industry and Energy (MOTIE) of South Korea; Korea Institute of Science and Technology (KIST); and Samsung’s global research outreach program.