Neuromorphic Computing
The first time I witnessed a child stack blocks, I felt like I was watching a miniature computer learn. She didn't plan the tower; she felt for balance, nudging a cube this way and that, and when a cube wobbled she adjusted. All that sensing, deciding and acting folded so seamlessly together it felt like magic.
I believe engineers are trying to bottle that same instinctive interplay of sensing, memory and action in neuromorphic computing: chips and systems that don't just run algorithms but behave somewhat like a brain.
"This is not science fiction."
In quiet labs and buzzing clean rooms, teams are moving away from the venerable von Neumann architecture - separate memory and processor units with bits flying back and forth - and designing hardware that routes information like neurons and synapses.
There are systems, for example, that are event-driven, sparse in activity, and very power-efficient for certain applications. Companies and research groups have created silicon simulating millions, sometimes billions, of "neurons" and synapses, and are beginning to demonstrate where brain-inspired designs could outperform classic chips.
To understand what that means, compare a kitchen light to a motion-sensor light. Classic processors are kitchen lights: they burn energy constantly while waiting. Neuromorphic chips are motion-sensor lights: they only wake up when something relevant occurs. This mimics how the brain works - most neurons stay quiet until activated - and allows the remarkable efficiencies needed for the messy real-world problems of perception, reasoning and quick adaptation.
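The analogy can be made concrete in a few lines of plain Python. This is a sketch, not tied to any chip or SDK: a polling loop pays for every tick, while an event-driven handler pays only for the samples that matter.

```python
# Toy comparison of "kitchen light" polling vs. "motion-sensor" events.
# The signal and threshold are illustrative assumptions.

signal = [0.0, 0.1, 0.0, 0.9, 0.2, 0.0, 0.0, 1.1, 0.0]
THRESHOLD = 0.5

# Kitchen light: examine every sample, relevant or not.
polled_work = sum(1 for _ in signal)

# Motion-sensor light: wake only on salient events.
events = [(t, x) for t, x in enumerate(signal) if x > THRESHOLD]
event_work = len(events)

print(polled_work, event_work)  # 9 units of work vs. 2
```

With a realistically sparse sensor stream, the gap between the two numbers is what an event-driven chip converts into energy savings.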
Early Neuromorphic Computing Explorers: Chips That Think Differently
The story began more than a decade ago with bold prototypes. IBM's TrueNorth, developed under DARPA's SyNAPSE project, was a radically different architecture: over a million digital neurons and hundreds of millions of synapses on a chip with a power budget in milliwatts. It was not built to race on standard benchmark tasks; it was built to demonstrate that a non-von Neumann, massively parallel, event-based design could execute sensory tasks with dramatically less energy than a comparable conventional architecture.
Across the ocean, researchers at the University of Manchester constructed SpiNNaker - a machine with more than a million ARM cores connected to model spiking neural networks at biologically relevant scales. SpiNNaker was more than a hardware platform for benchmarks; it was a scientific microscope that let neuroscientists and computer scientists run biological experiments at brain scale, in real time, and ask questions about how brains compute.
Then came the commercial momentum. Intel's Loihi family revisions came swiftly; Loihi 2 introduced a more flexible neuron model, accelerated on-chip learning, and programming frameworks to support the developer journey.
In April 2024 Intel unveiled a neuromorphic system combining more than a thousand Loihi 2 chips, a machine capable of expanding research well beyond single-chip demos; the company has argued that such systems can perform some inference and optimization tasks with orders of magnitude less energy than conventional CPUs and GPUs.
Why Spikes Matter in Neuromorphic Computing
Spiking neural networks (SNNs) are at the heart of many neuromorphic systems.
In contrast to the deep learning world, with its dense floating-point matrices that process every sample in full before producing an output, an SNN only sends discrete events - spikes - when a neuron crosses a threshold. That sparsity changes how to think about energy: processing occurs only when there is input, and the hardware stays idle when there is none.
There is also a conceptual convenience. Spiking dynamics preserve time well: information can be encoded in the exact timing between spikes. This makes SNNs and neuromorphic hardware excellent at temporally oriented tasks like hearing, touch, and the streams of events that real-world sensors produce. Engineers are not just porting existing networks onto unfamiliar silicon: they are reframing algorithms to embrace timing, sparsity and locality - all aspects that biology has been optimizing for millions of years.
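To make the idea concrete, here is a minimal leaky integrate-and-fire (LIF) neuron, the standard textbook building block of spiking networks. The leak and threshold values are illustrative assumptions, not parameters of any particular chip:

```python
# Minimal leaky integrate-and-fire neuron (textbook model, toy parameters).

def lif_run(inputs, leak=0.9, threshold=1.0):
    """Integrate input with a leak each step; emit a spike (1) and
    reset the membrane potential when it crosses the threshold."""
    v = 0.0
    spikes = []
    for x in inputs:
        v = leak * v + x          # leaky integration
        if v >= threshold:        # threshold crossing -> spike
            spikes.append(1)
            v = 0.0               # reset after spiking
        else:
            spikes.append(0)
    return spikes

# Sparse input produces sparse output: most steps emit no spike,
# and the *timing* of the spikes carries the information.
print(lif_run([0.0, 0.6, 0.6, 0.0, 0.0, 0.3, 0.9, 0.0]))
# [0, 0, 1, 0, 0, 0, 1, 0]
```

Note that downstream work happens only on the two spike events; the six silent steps cost (almost) nothing on event-driven hardware.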
The materials revolution: memristors and analog hope
The architectural revolution is being matched by a materials-level revolution. Researchers are experimenting with memristors - resistive devices whose conductance changes according to their voltage history - to replicate synaptic plasticity in hardware. Memristors can combine memory and computation in the same physical entity, so the distance between storing information and processing it is effectively eliminated, while offering potentially huge gains in density and energy consumption.
Recent experimental work shows memristors that switch fast and reliably enough to operate as artificial synapses and even artificial neurons. They are not CMOS drop-in substitutes just yet, but they suggest an analogue future where stateful devices hold most of the computation - physical state doing much of the ordinary thinking, at a fraction of the volume and energy.
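As a caricature of the idea - a deliberately simplified linear-drift model, not a description of any real device - a memristive synapse can be modeled as a bounded conductance that moves with each voltage pulse, so the device itself *is* the stored weight:

```python
# Toy memristor-as-synapse: conductance g drifts with the applied
# voltage history, bounded between g_min and g_max. Parameters are
# illustrative assumptions, not measurements of a real device.

def apply_pulse(g, v, eta=0.05, g_min=0.0, g_max=1.0):
    """Potentiating pulses (v > 0) raise conductance; depressing
    pulses (v < 0) lower it; the state is clipped to device limits."""
    g = g + eta * v
    return max(g_min, min(g_max, g))

g = 0.5
for v in [+1, +1, +1, -1]:   # three potentiating pulses, one depressing
    g = apply_pulse(g, v)
print(round(g, 2))  # 0.6
```

Because the updated conductance is read out in place during the next computation, no weight ever travels across a memory bus - which is exactly the von Neumann bottleneck the text describes being curtailed.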
While neuromorphic chips might seem like a panacea, some realism needs to re-enter the conversation. For large matrix-heavy workloads - for example, training standard deep learning models - GPUs and TPUs still dominate. The neuromorphic advantage lies in a narrower set of cases: always-on sensing, ultra-low-power edge inference, quick adaptation to changing inputs, and workloads where latency and energy budgets are tight.
For example, Intel has talked about sensor-fusion and optimization problems where Loihi-based systems can perform tasks tens or hundreds of times more energy-efficiently than standard architectures. But even with these advantages, adopting neuromorphic hardware is not a flip of a switch. The programming models are different, and engineers must think differently about algorithms, using spikes, local learning rules, and event-based sensors.
Furthermore, GPUs still have superior throughput and a vastly more mature software ecosystem for many standard benchmarks and high-performance number-crunching tasks. The narrative is not "neuromorphic beats everything"; it is "neuromorphic opens up new trade-offs in previously intractable niches."
Real-World Applications of Neuromorphic Computing: tiny devices, big impact
Imagine a wearable health monitor that listens for the first whisper of an irregular heartbeat and runs a lightweight detection model on-device, alerting a phone only when the event is real. Or imagine a swarm of agricultural robots that process event-camera streams locally and only act and communicate when an anomaly is detected, greatly reducing bandwidth use and keeping data private. Neuromorphic chips are well suited to exactly these always-on, low-power inference tasks.
Startups and spinouts are all working towards just these situations. Some focus on tiny, battery-sipping chips for always-on sensors; others on hybrid systems where neuromorphic co-processors sit alongside traditional CPUs, each doing what it does best.
Each approach promises not only reduced energy consumption but a better allocation of attention: sensors, processors and networks that turn on only when something worth processing occurs. One of the more poetic inspirations comes from the brain's graceful degradation.
A human lesson: resilience and adaptation
Biological neural networks are noisy and lossy, yet they tolerate input loss while continuing to function at some level. Neuromorphic designs aim for this kind of robustness: local learning rules, sparse connectivity and distributed state could yield systems that learn online in real time, recalibrating themselves to novel contexts instead of relearning from scratch.
This is essential for robotics and real-world agents that must deal with changing environmental conditions. A robot using neuromorphic perception could keep operating as dust, glare, and sensor deterioration degrade its inputs - reweighting, rerouting and relearning at the edge. The value is not brittle performance on a pristine laboratory benchmark, but robust performance under messy real-world conditions.
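As a toy illustration of that reweighting (the fusion rule and degradation scenario are invented for this sketch, not taken from any robot stack), consider three sensors whose weights each adapt using only their own disagreement with the fused estimate - a purely local rule, with no global retraining:

```python
# Online local adaptation: down-weight a sensor whose readings stop
# agreeing with the fused estimate. Scenario and rule are illustrative.

def fuse(readings, weights):
    """Weighted average of the sensor readings."""
    return sum(w * r for w, r in zip(weights, readings)) / sum(weights)

def local_update(readings, weights, lr=0.2, floor=0.05):
    est = fuse(readings, weights)
    # Each weight adapts from its own sensor's error only (local rule).
    return [max(floor, w - lr * abs(r - est))
            for w, r in zip(weights, readings)]

weights = [1.0, 1.0, 1.0]
# Sensor 2 degrades (say, dust on a lens): its readings drift to 3.0
# while the healthy sensors keep reporting 1.0.
for _ in range(10):
    weights = local_update([1.0, 1.0, 3.0], weights)

print(weights[2] < weights[0])  # True: the degraded channel is discounted
```

After a few steps the faulty channel's weight hits the floor and the fused estimate drifts back toward the healthy sensors - adaptation without a relearn-from-scratch cycle, which is the behaviour the paragraph above describes.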
The software and community puzzle
Hardware without software is an exquisitely-carved engine with no driver. The neuromorphic community understands this well: companies and labs are building frameworks and SDKs to make spiking networks approachable, and open projects are standardising primitives so models become transferable across platforms.
Whether neuromorphic technology is widely adopted is largely a function of tooling. If tooling improves, neuromorphic technology will move from specialised labs to application developers.
This is also a story about community. Translating concepts across neuroscience, materials science, electrical engineering and computer science is essential. The conversation between brain scientists and silicon engineers is already ramping up - that is the cross-pollination that lifts clever prototypes into useful tools.
A future that is hybrid, humane and local
If we visualize how things might look a decade from now, the picture is a hybrid: gigantic clouds with enormous GPUs still training trillion-parameter models, while our laptops, watches, and doorbells carry tiny neuromorphic cores doing the inconspicuous, unsung, energy-sensitive work of sensing, filtering and acting.
These local actors will augment (not supplant) large models; they will protect our privacy by processing raw signals on-device, they will almost eliminate latency for tasks requiring real-time responses, and they will enable new applications that today's energy-hungry processors cannot support.
But here's a deeper point: by modelling aspects of how biological brains compute - sparsity, locality, event-driven operation - we might create systems that are not only more efficient but also better attuned to our messy human context. Neuromorphic computing pushes engineers to imagine machines that attend like us, adapt like us, and conserve resources like an ecosystem.
Bringing it back to the child with the blocks
The child's tower was never a benchmark — it demonstrated qualities neuromorphic engineers aspire to: sensing, local correction, and economies of action.
When the blocks teetered the child didn’t re-run the complete universe; they nudged one block and the tower found a new equilibrium.
The brain - and the neuromorphic chips inspired by it - suggests no less: don't compute everything every time. Compute what matters when it matters.
Neuromorphic computing is not the end of a story, but rather a new type of plot twist in the continuing story of AI. It asks us to focus less on raw speed, and more on where intelligence should reside, on how it should act and how it can coexist gracefully within the constraints of power, latency and privacy. That is an engineering ethic that has the patience and poise of a child stacking blocks — small steady advancements into something solid.
(The selected readings and key projects mentioned in this article: Intel’s Loihi research and Loihi 2 system announcement and technology brief; IBM’s TrueNorth neurosynaptic processor; the University of Manchester’s SpiNNaker platform; and experimental memristor research showing device-level synaptic behaviour.)