AI’s Brainy Future: Mimicking Biology for Smarter, More Efficient Tech
The brain of the tiny worm Caenorhabditis elegans, about the width of a human hair, coordinates complex foraging movements with remarkable efficiency. This has captivated Daniela Rus, a computer scientist at MIT, who co-founded Liquid AI to build a new type of artificial intelligence inspired by the worm’s brain. She’s part of a growing movement of researchers who believe that making AI more brainlike could lead to leaner, more agile, and potentially more intelligent technology.
“To improve AI truly, we need to… incorporate insights from neuroscience,” says Kanaka Rajan, a computational neuroscientist at Harvard University. This “neuromorphic” technology won’t necessarily replace traditional computers or AI models, says Mike Davies, director of the Neuromorphic Computing Lab at Intel; instead, the future will likely see a variety of systems coexisting.

Imitating brains isn’t a new concept. In the 1950s, psychologist Frank Rosenblatt developed the perceptron, a simplified model of how brain cells communicate, featuring a single layer of interconnected artificial neurons. Decades later, the perceptron’s design inspired deep learning, which uses many layers of interconnected artificial neurons to recognize complex patterns in data. However, this approach struggles to match a brain’s ability to adapt to new situations or learn from a single experience. Instead, current AI models, such as those that power self-driving cars, often require massive amounts of data and energy.
“It’s just bigger, bigger, bigger,” says Subutai Ahmad, chief technology officer of Numenta, a company exploring how human brain networks achieve their efficiency. He calls traditional AI models “so brute force and inefficient.” In January, the Trump administration announced Stargate, a plan to funnel $500 billion into new data centers to support energy-intensive AI models. A model from the Chinese company DeepSeek, however, is defying this trend, replicating chatbot capabilities with less data and energy. Whether brute force or efficiency will prevail remains uncertain. Meanwhile, neuromorphic computing experts are actively developing brainlike hardware, architectures, and algorithms.
“People are bringing out new concepts and new hardware implementations all the time,” notes computer scientist Catherine Schuman of the University of Tennessee, Knoxville. These advances, while primarily benefiting biological brain research and sensor development, have yet to become mainstream in AI, but this may soon change. Here are some neuromorphic systems that have the potential to upgrade AI.
Making Artificial Neurons More Lifelike
Real neurons are complex living cells. They constantly receive signals from their environment, and their electrical charge fluctuates until it reaches a firing threshold. Crossing that threshold triggers an electrical impulse that travels across the cell to neighboring neurons. Neuromorphic engineers have replicated this behavior in artificial neurons that make up spiking neural networks. These networks transmit discrete spikes, mimicking a real brain’s signals, to carry information through the network. Such a network may be modeled in software or built directly into hardware.

Traditional AI’s deep learning networks do not model spikes. Instead, each artificial neuron in those models acts as “a little ball with one type of information processing,” as described by Mihai Petrovici, a neuromorphic computing researcher at the University of Bern in Switzerland. Connections called parameters link these “little balls.” Typically, every input to the network activates every parameter at once, which is inefficient. DeepSeek divides its deep learning network into smaller sections that can activate separately, which helps. But real brains and artificial spiking networks achieve efficiency in a slightly different way: Each neuron is not connected to every other one, and a neuron passes information along its connections only when its electrical signal reaches a specific threshold. The network activates sparsely rather than all at once.
Comparing Networks
Typical deep learning networks are dense, with all their identical “neurons” interconnected. Brain networks are sparse, and their neurons can take on different roles. Neuroscientists are still working out how complex brain networks are actually organized.
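To make the spiking idea concrete, here is a minimal sketch of a leaky integrate-and-fire layer, the simplest common model of a spiking neuron. It is written in plain Python with NumPy and is not tied to BrainScaleS-2, Loihi, or any other chip; the leak and threshold values are illustrative. Each neuron integrates weighted input spikes, its voltage decays over time, and it emits a spike only when that voltage crosses the threshold, so on any given step most of the network stays quiet.

```python
import numpy as np

def lif_step(voltage, spikes_in, weights, leak=0.9, threshold=1.0):
    """One time step of a leaky integrate-and-fire layer (illustrative only).

    Each neuron integrates weighted input spikes, its voltage leaks toward
    zero, and it fires only if the voltage crosses the threshold. The same
    weight matrix stores the network's "memory" and does the computation.
    """
    voltage = leak * voltage + weights @ spikes_in      # integrate input
    spikes_out = (voltage >= threshold).astype(float)   # fire past threshold
    voltage = np.where(spikes_out > 0, 0.0, voltage)    # reset fired neurons
    return voltage, spikes_out

# Toy demo: 100 neurons receiving sparse random input spikes
rng = np.random.default_rng(1)
n = 100
weights = rng.normal(scale=0.3, size=(n, n))
voltage = np.zeros(n)
for _ in range(20):
    spikes_in = (rng.random(n) < 0.05).astype(float)    # ~5% of inputs active
    voltage, spikes_out = lif_step(voltage, spikes_in, weights)
print("fraction of neurons spiking on the last step:", spikes_out.mean())
```

Run repeatedly, only a small fraction of the neurons fire on any step; that sparse, event-driven activity is what spiking hardware is designed to exploit.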

Importantly, brains and spiking networks combine memory and processing. Petrovici explains that the connections “that represent the memory are also the elements that do the computation.” Mainstream computer hardware, which runs most AI, separates the two: Processing usually happens on a graphics processing unit (GPU), while storage is handled by a separate component such as random access memory (RAM). That makes for a simpler computer architecture, but energy is wasted and computation slows as data shuttles between the components. The BrainScaleS-2 neuromorphic chip combines these efficient features. It contains sparsely connected spiking neurons built physically into the hardware, and its neural connections store memories while performing computations. BrainScaleS-2 was developed as part of the Human Brain Project, a decade-long effort to understand the human brain by modeling it in a computer. Some researchers have since explored how technology developed for the project could boost AI efficiency.
For example, Petrovici trained different AIs to play the video game “Pong.” A spiking network running on BrainScaleS-2 hardware used a thousandth of the energy of a simulation of the same network running on a CPU. The real test, though, was comparing the neuromorphic setup with a deep learning network running on a GPU: Training the spiking system to recognize handwriting used a hundredth of the energy of the typical system. For spiking hardware to become a real contender in the AI world, it will need to be scaled up and widely distributed. Then, as Schuman says, it could be “useful to computation more broadly.”
Connecting a Billion Spiking Neurons
The academic teams working on BrainScaleS-2 currently have no plans to scale up the chip, but some of the world’s biggest tech companies, including Intel and IBM, do. In 2023, IBM introduced its NorthPole neuromorphic chip, which combines memory and processing to save energy. And in 2024, Intel launched Hala Point, “the largest neuromorphic system in the world right now,” according to computer scientist Craig Vineyard of Sandia National Laboratories in New Mexico. Despite that impressive title, Vineyard notes, nothing about the system visually stands out: Hala Point fits in a luggage-sized box. Yet it contains 1,152 of Intel’s Loihi 2 neuromorphic chips, each supporting roughly a million electronic neurons, for a record-setting total of 1.15 billion neurons, about as many as in an owl’s brain. Like BrainScaleS-2, each Loihi 2 chip implements a spiking neural network in hardware. The physical spiking network also uses sparsity and combines memory and processing. According to Schuman, this neuromorphic computer has “fundamentally different computational characteristics” than a traditional digital machine, which improves Hala Point’s efficiency compared with typical computer hardware. “The realized efficiency we get is definitely significantly beyond what you can achieve with GPU technology,” Davies says.
In 2024, Davies and a team of researchers demonstrated that Loihi 2 hardware saves energy even while running typical deep learning algorithms. They took several audio- and video-processing tasks and modified the deep learning algorithms so they could run on the spiking hardware. This process “introduces sparsity in the activity of the network,” Davies says. A deep learning network running on a regular digital computer processes every single frame of audio or video as though it were new. Spiking hardware, by contrast, maintains “some knowledge of what it saw before,” Davies explains. When parts of the audio or video stream stay the same from one frame to the next, the system can “keep the network idle as much as possible when nothing interesting is changing.” On one video task the team tested, a Loihi 2 chip running a “sparsified” version of a deep learning algorithm used 1/150th the energy of a GPU running the regular version.

The audio and video tests showed that one type of architecture can do a good job running a deep learning algorithm. But developers can reconfigure the spiking neural networks within Loihi 2 and BrainScaleS-2 in a variety of ways, devising new architectures that use the hardware differently. Researchers are making headway, though it’s not yet clear which algorithms and architectures will best exploit the hardware or provide the greatest energy savings. A January 2025 paper introduced a new way to model neurons in a spiking network that captures a spike’s shape and timing. The approach lets an energy-efficient spiking system use one of the learning techniques that has underpinned mainstream AI’s success.
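The frame-to-frame idleness Davies describes can be sketched with a simple delta rule: compare each frame with the previous one and run the network only where the change exceeds a threshold. This is a conceptual illustration in Python, not Intel’s actual Loihi tooling; process_region is a hypothetical stand-in for whatever network would run on the changed pixels.

```python
import numpy as np

def process_region(frame, mask):
    # Stand-in for running a spiking network on the changed pixels only.
    print(f"processing {mask.mean():.1%} of the frame")

def sparsify_stream(frames, change_threshold=0.1):
    """Process a video stream frame by frame, but only where pixels changed.

    Regions whose change since the last frame falls below the threshold are
    skipped, so a mostly static stream triggers very little computation.
    """
    previous = None
    for frame in frames:
        if previous is None:
            changed = np.ones(frame.shape, dtype=bool)   # first frame: all new
        else:
            changed = np.abs(frame - previous) > change_threshold
        if changed.any():
            process_region(frame, changed)               # hypothetical network call
        previous = frame

# Toy demo: a nearly static stream with one small moving patch
rng = np.random.default_rng(2)
base = rng.random((64, 64))
stream = [base + 0.01 * rng.random((64, 64)) for _ in range(5)]
stream[3][10:20, 10:20] += 0.5                           # something changes in frame 3
sparsify_stream(stream)
```

In this toy run, only the first frame and the frames around the moving patch trigger any work; the rest are skipped entirely, which is the intuition behind the 1/150th energy figure, even though the real hardware achieves it very differently.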
Neuromorphic hardware could even prove ideal for algorithms that haven’t been developed yet. That fact, says James Aimone, a neuroscientist at Sandia, is “actually the most exciting thing.” The technology carries a lot of potential, he adds, and could make the future of computing “energy efficient and more capable.”

Designing an Adaptable ‘Brain’
Neuroscientists agree that a key feature of a living brain is its ability to learn ‘on the go.’ And it doesn’t take a huge brain to do this. C. elegans, one of the first animals to have its brain completely mapped, has just 302 neurons and approximately 7,000 synapses, yet the worm learns continuously and efficiently. As part of his graduate work in 2017, Ramin Hasani studied how C. elegans learns, modeling the current scientific knowledge about the worm’s brain in computer software. Rus learned about this research while on a run with Hasani’s advisor at an academic conference. At the time, she was training AI models with hundreds of thousands of artificial neurons and half a million parameters to operate self-driving cars.
Rus realized that if a worm doesn’t need a huge network to learn, AI could make do with smaller models, too. She invited Hasani and one of his colleagues to join her at MIT, and the researchers worked on a series of projects to give self-driving cars and drones more wormlike “brains”: small and adaptable. The end result was an AI algorithm the team named a liquid neural network. Rajan, the Harvard neuroscientist, describes it as “a new flavor of AI.” Standard deep learning networks, despite their impressive size, learn only while they are being trained. Once training is complete, the network’s parameters cannot change.
“The model stays frozen,” Rus explains. Liquid neural networks, as their name suggests, are more fluid. They can shift and change their parameters over time, even though they incorporate many of the same techniques as standard deep learning. They “learn and adapt…based on the inputs they see, much like biological systems,” Rus says.
To design the new algorithm, Hasani and the team wrote mathematical equations that mimic how a worm’s neurons activate in response to information that changes over time. These equations govern the liquid neural network’s behavior. Such equations are notoriously difficult to solve, but the team found a way to approximate a solution, making it possible to run the network in real time. That solution, according to Rajan, is “remarkable.”

In 2023, Rus, Hasani, and their colleagues showed that liquid neural networks adapted to new scenarios better than much larger standard AI models. They trained two types of liquid neural networks and four types of deep learning networks to pilot a drone toward various objects in the woods. After training was finished, they placed one of the training objects, a red chair, into different environments, including a patio and a lawn beside a building. The smallest liquid network, with just 34 artificial neurons and about 12,000 parameters, outperformed the largest standard AI network they tested, which contained around 250,000 parameters.

Around the same time, the team founded the company Liquid AI, which partnered with the U.S. military’s Defense Advanced Research Projects Agency to test the model on an actual aircraft. The company has also scaled up its models to compete directly with standard deep learning networks. In January, it announced LFM-7B, a 7-billion-parameter liquid neural network that generates answers to prompts. The team reports that the network outperforms typical language models of the same size.
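For a rough sense of how such a network keeps adapting, here is a sketch of a single liquid time-constant update, based on the ordinary differential equation published by Hasani and colleagues, in which an input-dependent gate changes each neuron’s effective time constant. It uses a simple Euler step rather than the fused solver the researchers describe, and all parameter names and sizes are illustrative, not Liquid AI’s actual code.

```python
import numpy as np

def ltc_step(x, inputs, W_in, W_rec, b, tau, A, dt=0.05):
    """One Euler-integration step of a liquid time-constant (LTC) layer.

    Each neuron's state x follows an ODE whose effective time constant
    depends on the current input, so the network's dynamics keep shifting
    as new observations arrive. Illustrative sketch only.
    """
    # Input-dependent gate f(x, I): a small nonlinearity of inputs and state
    f = np.tanh(W_in @ inputs + W_rec @ x + b)
    # dx/dt = -(1/tau + f) * x + f * A   (liquid time-constant formulation)
    dxdt = -(1.0 / tau + f) * x + f * A
    return x + dt * dxdt

# Tiny example: 34 neurons, matching the smallest drone-piloting network
rng = np.random.default_rng(0)
n, d = 34, 8                                   # neurons, input features
x = np.zeros(n)
W_in = rng.normal(size=(n, d)) * 0.1
W_rec = rng.normal(size=(n, n)) * 0.1
b, tau, A = np.zeros(n), np.ones(n), np.ones(n)

for t in range(100):                           # unroll over a stream of observations
    x = ltc_step(x, rng.normal(size=d), x @ np.eye(n)[:d].T if False else rng.normal(size=d), W_in, W_rec, b, tau, A) if False else ltc_step(x, rng.normal(size=d), W_in, W_rec, b, tau, A)
```

Because the gate f is recomputed from every new observation, the same trained weights produce different dynamics in different environments, which is the “liquid” behavior that helped the small drone network generalize to the relocated red chair.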
“I’m excited about Liquid AI because I believe it could transform the future of AI and computing,” Rus states. This approach may not necessarily use less energy than mainstream AI. Its constant adaptation makes it “computationally intensive,” Rajan says. However, this approach “represents a significant step towards more realistic AI” that can more closely mimic the brain.

Building on Human Brain Structure
While Rus is working off the blueprint of the worm brain, others are taking inspiration from a very specific region of the human brain: the neocortex, a wrinkly sheet of tissue that covers the brain’s surface. “The neocortex is the brain’s powerhouse for higher-order thinking,” Rajan says. “It’s where sensory information, abstract reasoning, and decision-making converge.” This part of the brain contains six horizontal layers of cells and is organized into tens of thousands of vertical structures called cortical columns. Each column contains about 50,000 to 100,000 neurons, arranged in several hundred vertical minicolumns.

In the view of neuroscientist and computer scientist Jeff Hawkins, these columns are the primary drivers of intelligence. In other parts of the brain, grid cells and place cells help an animal track its position in space. Hawkins theorizes that similar cells exist in the neocortex’s columns, where they track and model all of our sensations and ideas. As a fingertip moves, for example, these columns build a model of what it is touching, Hawkins says. In his 2021 book, A Thousand Brains, he argues that the same holds for what we see with our eyes.

“It’s a bold idea,” Rajan says. Current neuroscience maintains that intelligence involves the interaction of several different brain systems, not just these mapping cells, she notes. Though Hawkins’ theory hasn’t reached widespread acceptance in the neuroscience community, “it’s generating a lot of interest,” she says. That includes excitement about its potential uses for neuromorphic computing.

Hawkins developed his theory at Numenta, a company he co-founded in 2005. Announced in 2024, the company’s Thousand Brains Project pairs a new computing architecture with new algorithms. In early testing, the team described an architecture with seven cortical columns and hundreds of minicolumns, though it spanned just three layers rather than six. The team also developed a new AI algorithm that analyzes input data and makes use of the column structure. Simulations showed that each column could learn to recognize complex objects. The system’s practical effectiveness has yet to be tested, but the idea is that, like Liquid AI’s algorithms, it will be able to learn about the world in real time.

For now, the team at Numenta, based in Redwood City, Calif., is testing these ideas on regular digital computer hardware. In the future, though, custom hardware could implement physical versions of spiking neurons organized into cortical columns, Ahmad says. Hardware designed for this architecture could make the whole system more efficient and effective, he adds.
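As a loose illustration of the many-columns idea (and not Numenta’s actual software), the toy sketch below gives each of seven “columns” its own slice of a sensory input; every column learns its own simple model of each object, and the columns then vote on what they are sensing. The nearest-centroid learners and the feature split are placeholder choices.

```python
import numpy as np
from collections import Counter

class ToyColumn:
    """A stand-in for one cortical column: a tiny nearest-centroid model
    that learns objects from the patch of sensory input it receives."""
    def __init__(self):
        self.centroids = {}                    # object label -> mean feature vector

    def learn(self, label, patch):
        seen = self.centroids.setdefault(label, np.zeros_like(patch))
        self.centroids[label] = 0.9 * seen + 0.1 * patch

    def guess(self, patch):
        return min(self.centroids,
                   key=lambda lbl: np.linalg.norm(self.centroids[lbl] - patch))

def column_vote(columns, patches):
    """Each column votes based on its own patch; the majority wins."""
    votes = [col.guess(patch) for col, patch in zip(columns, patches)]
    return Counter(votes).most_common(1)[0][0]

# Toy demo: 7 columns, each seeing a different 4-feature slice of a 28-feature input
rng = np.random.default_rng(3)
columns = [ToyColumn() for _ in range(7)]
prototypes = {"chair": rng.random(28), "mug": rng.random(28)}
for _ in range(50):                            # repeated noisy exposures
    for label, proto in prototypes.items():
        sample = proto + 0.05 * rng.normal(size=28)
        for i, col in enumerate(columns):
            col.learn(label, sample[i * 4:(i + 1) * 4])

test = prototypes["mug"] + 0.05 * rng.normal(size=28)
print(column_vote(columns, [test[i * 4:(i + 1) * 4] for i in range(7)]))
```

The point of the sketch is only the structure: many small, partly redundant models reaching a consensus, rather than one monolithic network seeing everything at once.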
“How the hardware works is going to influence how your algorithm works,” Schuman says. “It requires this codesign process.” A new idea in computing succeeds only when the right combination of algorithm, architecture, and hardware comes together. DeepSeek’s engineers, for example, noted that they achieved their efficiency gains by codesigning “algorithms, frameworks and hardware.” When one of these pieces is missing or unavailable, a good idea can languish, notes Sara Hooker, a computer scientist at the research lab Cohere in San Francisco and author of an influential 2021 paper, “The Hardware Lottery.” That is what happened with deep learning: Computer scientists developed the underlying algorithms back in the 1980s, but the technology didn’t take off until researchers began running AI on GPU hardware in the early 2010s. In a 2021 video for the Association for Computing Machinery, Hooker said that all too often “success depends on luck.” If researchers spend more time considering new combinations of neuromorphic hardware, architectures, and algorithms, they could open up new and intriguing possibilities for computing and AI.