Every computer system you own, whether it be a laptop, cell phone, video game console, or even a smart fridge, contains a basic hardware design that dates to the 1940s – a time when computers were built to perform very simple, predefined tasks. Back then, programming new functions on your computer, let alone teaching your computer to program itself, was a distant dream. The computers of today have experienced major advancements, but they are still computationally inefficient, energy-hungry machines. This relic of computer design, called von Neumann architecture, is being challenged by a more energy-efficient paradigm known as neuromorphic computing, which designs computers to work more like the human brain. The primary difference between the two architectures is how they spatially store and process data.
To better understand the differences in computer architecture, we can consider the routines of two hypothetical bakers named von Neumann and Brain Cell. von Neumann is a baker whose kitchen is connected to their pantry by a long corridor. They keep their stand mixer chained to the kitchen counter while the pantry contains all the ingredients they need to bake a cake. In order to make the cake batter, von Neumann must go into the pantry to retrieve an ingredient, but they can only carry one ingredient at a time. After adding the ingredient to the mixing bowl, they must walk back to return the item and select the next one. von Neumann must work quickly and burn many more calories to keep up with their baker friend Brain Cell, an alien with 10,000 arms that can grab ingredients from the many cabinets within reach in the kitchen.
The human brain remains the world’s most energy-efficient computer despite decades of technological advances. Each brain cell, or neuron, is its own computational sub-unit in the brain, and each neuron connects to roughly 10,000 other neurons through chemical channels called synapses. Memory is related to how often two neurons pass chemical signals to each other through their synapses; the more often they communicate, the stronger the memory linkage. The efficiency of the brain lies in the roughly 100 trillion synaptic connections it can use for rapid communication and storage of information throughout different parts of the brain and body.
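The "communicate more, connect more strongly" rule above can be sketched in a few lines of code. This is a toy illustration of the idea (sometimes called Hebbian learning), not a model of real neurons; the update rate and the activity pattern are made-up values.

```python
# Toy sketch: a synaptic weight grows a little every time the two
# neurons it connects are active together. Rate and pattern are
# illustrative assumptions, not biological measurements.

def strengthen(weight, fired_together, rate=0.1):
    """Nudge the synaptic weight up when both neurons co-fire."""
    return weight + rate if fired_together else weight

weight = 0.0
activity = [True, True, False, True]  # did the two neurons co-fire?
for co_fired in activity:
    weight = strengthen(weight, co_fired)

print(round(weight, 1))  # three co-firings at rate 0.1 -> 0.3
```

The point of the sketch is that the "memory" here is nothing but the accumulated weight – the connection itself stores the history of use.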
In contrast, von Neumann computer architecture spatially separates data processing and memory storage in two different computer chips. The critical sub-units of these chips are transistors, which perform mathematical operations in the processor or enable the reading and writing of data in memory. While technological advancements continue to improve data processing speeds and memory storage capacity by shrinking transistors, one limitation that shrinking cannot overcome is the speed at which the processor chip and memory chip communicate with each other – the so-called von Neumann bottleneck.
Now, von Neumann architecture isn’t entirely terrible – it’s been used for decades, from landing Apollo 11 on the moon to streaming funny cat videos at 2 AM. Computational problems that can be solved through step-by-step programming (like following a well-loved recipe to bake a cake) are well suited to conventional computers. However, our expectations for computer performance continue to grow; these days, we want artificially intelligent computers that can design a brand-new recipe for amazing chocolate cake. That’s where neuromorphic computing really finds its niche. Yet, efforts to develop neuromorphic computer circuits have continued to rely on the same transistors used in conventional computers, leading to complicated circuits that still use a lot of energy. For example, MIT researchers used 400 transistors to simulate a single synapse. What if there were a single device that could match the behavior of a neuron?
Scientists and engineers believe this ideal device exists in the memristor – a contraction of “memory” and “resistor” – a basic circuit element. While scientists theorized about such a device in the 1970s, it wasn’t until 2008 that researchers proved that a memristor could physically exist. This breakthrough required a team of materials physicists and electrical engineers to puzzle out how to harness the motion of atoms in crystalline materials to create an electron-powered, neuron-like memristor.
Each memristor acts as a memory storage device by saving information through the arrangement of its atoms. The prototypical memristive material relies on two types of atomic structures: ordered and disordered. By applying voltage to the memristor, and thereby creating a difference in electric potential across the device, the engineer can coax atoms to move between ordered and disordered states. By switching between these states, engineers can “write” and “erase” memory in a memristive material. Because the atoms don’t move unless a voltage is applied, the material “remembers” its most recent atomic arrangement. Therefore, unlike many memory storage components in modern von Neumann computers, memristors don’t require power to maintain memory. This enables a new energy-saving, electron-based form of data storage.
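The write/erase/retain behavior described above can be captured in a short toy simulation. Everything here – the threshold voltages, the two-state model, the class name – is an illustrative assumption, not a physical device model; real memristors have continuous, material-dependent dynamics.

```python
# Minimal sketch of memristive memory: a strong positive voltage
# pulse switches the material to the ordered ("1") state, a strong
# negative pulse switches it back to disordered ("0"), and at zero
# voltage the atoms stay put, so the state persists without power.
# Thresholds and state labels are illustrative assumptions.

class ToyMemristor:
    WRITE_V = 1.0    # voltage needed to switch to the ordered state
    ERASE_V = -1.0   # voltage needed to switch back to disordered

    def __init__(self):
        self.state = 0  # start disordered: no stored bit

    def apply_voltage(self, v):
        if v >= self.WRITE_V:
            self.state = 1   # atoms rearrange into the ordered state
        elif v <= self.ERASE_V:
            self.state = 0   # atoms relax into the disordered state
        # small or zero voltage: atoms don't move, state is retained

m = ToyMemristor()
m.apply_voltage(1.2)   # "write" a 1
m.apply_voltage(0.0)   # power off: nothing moves
print(m.state)         # prints 1 -- memory without standby power
```

The last two lines are the key contrast with conventional volatile memory: removing the voltage does not erase the stored state.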
Memristor-based devices can also perform data processing functions. Through careful materials design, engineers can construct memristors with transistor-like properties, forming a so-called memtransistor. This hybrid device can retain memory as well as perform calculations on input electrical signals, mimicking a neuron with many synaptic connections. In this way, the memtransistor eliminates the energy-inefficient spatial separation between memory and logic processes in computing systems.
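To make the "compute where you store" idea concrete, here is a sketch of the arithmetic a grid (crossbar) of such devices performs. Each device's conductance G holds a stored weight; by Ohm's law an input voltage V drives a current I = V × G, and currents along a shared wire sum automatically, so the array computes a multiply-accumulate without shuttling data to a separate processor. The numbers below are made up for illustration.

```python
# Sketch of crossbar compute-in-memory: column current j is
# I_j = sum_i V_i * G[i][j], i.e. a matrix-vector product carried
# out by the physics of the array itself. Values are illustrative.

def crossbar_output(voltages, conductances):
    """Column currents of a crossbar: I_j = sum_i V_i * G[i][j]."""
    n_cols = len(conductances[0])
    return [sum(v * row[j] for v, row in zip(voltages, conductances))
            for j in range(n_cols)]

G = [[0.5, 1.0],   # conductances stored in the devices (the "memory")
     [2.0, 0.0]]
V = [1.0, 0.5]     # input voltages (the "data")

print(crossbar_output(V, G))  # prints [1.5, 1.0]
```

Multiply-accumulate is exactly the operation neural-network software performs billions of times per inference, which is why hardware that does it in place is so attractive.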
While a number of promising memristor-based devices have been developed in just the past decade, integrating them into commercial neuromorphic computing chips will take several more years of research and development. One big challenge is increasing the switching speed between the ordered and disordered atomic states, which limits how quickly memory can be written and erased. Ongoing research aims to solve this problem by designing and testing a variety of materials that can shift between ordered and disordered atomic arrangements. Combining memristor-based neuromorphic hardware with the software innovation of neural networks may provide the one-two punch needed to propel AI to the next level. Every opportunity to make neuromorphic computers more computationally and energy efficient will shorten the path to commercialization. So, who knows? Perhaps 15 years from now, the world’s top pastry chefs will be asking Siri for advice in designing new cake recipes.