Neuromorphic Computing: Human Brain-like Function Meets AI

June 13, 2025
TDK showcased its new Spin-memristor chip at CES in January, along with its potential to change the AI semiconductor game

Building technology for the network edge represents a unique challenge, as doing so involves juggling three distinct engineering disciplines simultaneously: the Internet of Things (IoT), artificial intelligence (AI), and security.

The three-part challenge is to create IoT devices for the edge that are highly performant, reasonably inexpensive to build, and consume the bare minimum of resources – all while adding AI capabilities without raising costs or resource consumption, and building in security that keeps the devices and everything connected to them safe. Easy-peasy.

Accomplishing this monumental task likely requires the use of neuromorphic computing at the edge. Neuromorphic computing can satisfy the requirements of edge AI to be both extremely powerful from a computational point of view and ultra-low-power from an energy perspective. That will make it easier to add AI functionality at the edge at a fairly low cost.

In late 2024, Japanese company TDK unveiled its Spin-memristor chip with the aim of making neuromorphic computing practical. Let’s take a deeper dive:

AI, Processing Power, and Energy

The release of ChatGPT in 2022 prompted the introduction of a parade of generative AI tools, and the demand for AI subsequently soared and has yet to tail off. That demand can be satisfied only by adding more processing power in more places – from data centers all the way to the network edge.

Adding processing power is usually accompanied by an increase in energy consumption – a problem compounded by the cooling systems needed to deal with dissipated heat, which often consume even more energy. In data centers, this surge in energy consumption is a burden that needs to be minimized. At the edge, in IoT sensors, some increase in energy consumption might be acceptable; a huge increase is intolerable, however, and a source of additional energy may not even be available. Nor is it viable to simply gather data at the edge, ship it off to a data center, perform AI workloads there, and transmit the results back, because that adds the power cost of data transmission on top of the AI processing itself.
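To see why offloading is unattractive, a back-of-envelope comparison helps. All the energy figures below are hypothetical, illustrative assumptions – not measurements of any real device – chosen only to show the shape of the trade-off:

```python
# Illustrative comparison: running inference locally vs. shipping raw
# sensor data to a data center. Every figure here is an assumed,
# hypothetical value, not a measurement.

SAMPLE_BYTES = 1024        # one batch of raw sensor data (assumed)
RADIO_NJ_PER_BYTE = 200    # assumed wireless transmit/receive cost, nJ/byte
LOCAL_INFERENCE_UJ = 50.0  # assumed on-device inference cost, microjoules
RESULT_BYTES = 16          # a compact classification result (assumed)

def cloud_path_uj() -> float:
    """Energy the edge device spends to offload: send raw data, get result."""
    return (SAMPLE_BYTES + RESULT_BYTES) * RADIO_NJ_PER_BYTE / 1000.0

def edge_path_uj() -> float:
    """Energy to infer locally and transmit only the small result."""
    return LOCAL_INFERENCE_UJ + RESULT_BYTES * RADIO_NJ_PER_BYTE / 1000.0

if __name__ == "__main__":
    print(f"offload to cloud: {cloud_path_uj():.1f} uJ per sample")
    print(f"infer at edge:    {edge_path_uj():.1f} uJ per sample")
```

Under these assumed numbers, local inference wins whenever the radio cost of shipping raw data exceeds the compute cost of the model; the crossover point shifts with the actual radio and processor, but the structure of the comparison holds.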

In an environment where every microwatt counts, there is tremendous pressure to devise AI systems that consume the absolute minimum of energy possible.

Through the history of semiconductor processors, each new generation was faster and more powerful than the previous one. That ordinarily would have required much more energy, but each new generation of processors also took a successive step in transistor integration, which mitigated the increase in power consumption.

Silicon has finally been pushed very close to its physical limits, and the electronics industry can’t expect to offset the energy costs associated with advances in processing power with integration for much longer. For that reason and others, achieving lower power consumption in AI processing calls for groundbreaking technologies, the kind that fundamentally overhaul the architecture of computers.

The workhorse processor has always been the central processing unit (CPU); AI, however, runs better on graphics processing units (GPUs) – even though both are examples of the von Neumann architecture. The characteristic of von Neumann machines pertinent here is that the system stores both data and programs in memory, separate from the processing unit, prior to execution. That separation is an inherent drawback: moving data between memory and processor consumes a significant amount of power.
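The cost of that data movement can be made concrete with rough, widely cited per-operation energy estimates for an older (~45 nm) process node – treat them as order-of-magnitude illustrations, not current figures:

```python
# Order-of-magnitude sketch of why data movement dominates: energy per
# arithmetic operation vs. energy per off-chip memory access. The
# picojoule figures are rough, commonly cited estimates for an older
# ~45 nm process node, used here purely for illustration.

PJ_PER_FP32_MAC = 4.6      # ~one 32-bit multiply-accumulate (assumed)
PJ_PER_DRAM_WORD = 640.0   # ~one 32-bit off-chip DRAM access (assumed)

def energy_ratio(macs_per_word_fetched: float) -> float:
    """How many times more energy goes to memory than to arithmetic
    when each fetched word is reused for the given number of MACs."""
    compute_pj = macs_per_word_fetched * PJ_PER_FP32_MAC
    return PJ_PER_DRAM_WORD / compute_pj

if __name__ == "__main__":
    for reuse in (1, 10, 100):
        print(f"reuse={reuse:3d}: memory/compute energy ratio = "
              f"{energy_ratio(reuse):.1f}x")
```

Even with each fetched word reused ten times, memory traffic still dwarfs the arithmetic in this sketch – which is exactly the gap neuromorphic, in-memory computing aims to close.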

The Advantages of Neuromorphic Computing

Researchers in neuromorphic computing have devised electronic analogs of the neurons, synapses, and message spikes that are found in living brain tissue. This has become a promising alternative to von Neumann machines.

Neuromorphic computing integrates analog memory elements directly into processing units – similar to a human brain – eliminating the need for data transfer. This allows for significantly faster data processing and significantly reduced power consumption compared to the von Neumann model.

The human brain can function on roughly 20 watts of power, making profoundly complex decisions using only about 1/10,000th of the energy consumed by today’s digital AI processing. Neuromorphic computing represents sophisticated processing at exceptionally low power across a variety of devices and systems, including edge IoT devices.

Memristors: Bringing Reliable Memory to Neuromorphic

Neuromorphic devices have been experimented with for decades, with very limited success. One of the main challenges has been engineering reliable memory.

In conventional von Neumann computing, memory and computing units are separated: data stored in memory is fetched and processed in binary form. A large amount of data travels back and forth between memory and processing units, creating a bottleneck in latency, bandwidth, and power consumption – the von Neumann bottleneck. This digital approach also requires a large number of transistors, which consume silicon area and operating power.

Neuromorphic computing, by contrast, operates inside the memory itself and in an analog manner, representing values as a continuous range rather than as binary digits. No fetch/store data movement is involved, which eliminates the von Neumann bottleneck.

Computation is executed by electrical circuit laws (e.g., Kirchhoff’s laws) within the memory array, and the results emerge as analog outputs, in a process that mimics how the human brain functions. For memory, neuromorphic devices have depended on a type of resistor, called a memristor, whose resistance value changes in response to the applied voltage and/or current.
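That in-memory computation can be sketched numerically. In a memristor crossbar, voltages applied to the row wires produce, by Ohm’s law at each device and Kirchhoff’s current law on each column wire, column currents equal to a matrix-vector product of the stored conductances and the input voltages. A minimal, idealized model (ignoring wire resistance, device noise, and nonlinearity):

```python
# Idealized memristor crossbar: each cross-point stores a conductance
# g[row][col] -- the analog "weight". Driving the row wires with voltages
# v[row] produces column currents i[col] = sum over rows of
# g[row][col] * v[row]: Ohm's law at each device, summed by Kirchhoff's
# current law on the column wire. The matrix-vector multiply happens
# inside the memory array itself, with no fetch/store step.

def crossbar_mvm(g: list[list[float]], v: list[float]) -> list[float]:
    """Column currents of an ideal crossbar (no wire resistance or noise)."""
    cols = len(g[0])
    return [sum(g[row][col] * v[row] for row in range(len(g)))
            for col in range(cols)]

if __name__ == "__main__":
    g = [[1.0, 0.5],
         [0.2, 2.0],
         [0.0, 1.0]]           # conductances in siemens, 3 rows x 2 columns
    v = [0.1, 0.2, 0.3]        # volts applied to the three row lines
    print(crossbar_mvm(g, v))  # column currents in amperes
```

One read operation thus performs an entire weighted sum – the core operation of a neural network layer – which is why memristor arrays are such a natural substrate for AI workloads.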

Conventional memristors have been notoriously difficult to work with, due to their complex response behavior and other issues – notably that stored resistance values drift over time. The ideal would be a memristor whose stored resistance remains stable. Enter the Spin-memristor.
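A toy simulation shows why drift matters for analog memory. The exponential decay model and its time constant below are arbitrary illustrative assumptions, not a characterization of any real device:

```python
import math

# Toy illustration of resistance drift in analog memory. A conventional
# memristor is modeled (hypothetically) as a stored weight that decays
# exponentially toward zero; a drift-free device simply holds its value.
# The 100-hour time constant is an arbitrary illustrative assumption.

def drifting_weight(w0: float, t_hours: float,
                    tau_hours: float = 100.0) -> float:
    """Stored weight after t hours under exponential drift (toy model)."""
    return w0 * math.exp(-t_hours / tau_hours)

def stable_weight(w0: float, t_hours: float) -> float:
    """Drift-free storage: the weight is unchanged by time."""
    return w0

if __name__ == "__main__":
    w0 = 0.80
    for t in (0.0, 100.0, 500.0):
        print(f"t={t:5.0f}h  drifting={drifting_weight(w0, t):.3f}  "
              f"stable={stable_weight(w0, t):.3f}")
```

In a network whose weights are stored this way, drifting values silently corrupt every inference made after the weights were written – which is why a stable-resistance device matters so much.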

Bringing Neuromorphic Computing to the Edge

TDK has a long history of providing technology for magnetic heads for hard disk drives (HDD) and tunneling magnetoresistance (TMR) sensors. For these applications, the company devised a nanotechnology to detect the rotation – or spin – of electrons. This branch of electromagnetic science is called spintronics.

TDK realized the memristor would be a perfect application for spintronics, and the company collaborated with the French research organization CEA (Alternative Energies and Atomic Energy Commission) to demonstrate its utility in AI applications, creating the Spin-memristor.

As an analog device, the Spin-memristor can hold a range of values, maintains stable resistance values over time, and is expected to offer high immunity to environmental influences. This should make it possible for neuromorphic computers to reliably execute complex AI workloads with ultra-low power consumption.

Edge devices must consume less power, but we want more AI and sensing in edge products. The Spin-memristor will make it possible to put intelligence at the edge without consuming so much power.

AI is already being used to detect possible hacking attempts in compute centers and networks, alerting operators when odd usage patterns are detected and when data might be suspect. The edge currently lacks the power to support AI-enabled security, but if we can realize edge AI, we can actively watch for threats such as eavesdropping on data transmissions. AI will also be better able to detect tampering with, and hacking of, the edge devices themselves.

About the Author

Tomoyuki Sasaki, Ph.D.

Tomoyuki Sasaki, Ph.D., is Section Head and Senior Manager of TDK's Advanced Products Development Center. Learn more at https://product.tdk.com/en/index.html.