The global market for memory and processing semiconductors used in artificial intelligence (AI) applications will soar to $128.9 billion in 2025, three times the $42.8 billion total in 2019, according to IHS Markit | Technology, now a part of Informa Tech.
Within the AI segment, worldwide revenue from memory devices in AI applications will increase to $60.4 billion in 2025, up from $20.6 billion in 2019. The processor segment will expand slightly faster, growing to $68.5 billion in 2025, up from $22.2 billion in 2019.
This total tracks sales of memory and processing semiconductors within systems that run AI applications.
AI chips are used widely across markets, including automotive, communication, computers, consumer electronics, industrial and healthcare. The largest single market for memory devices in AI applications is the computer segment, with sales rising to $65.9 billion in 2025, up from $27.5 billion in 2019, a 15.7 percent compound annual growth rate (CAGR). However, other segments will grow faster, including the communication, consumer electronics, industrial and healthcare sectors.
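The growth rates quoted above can be checked with the standard CAGR formula. The sketch below uses only the dollar figures stated in this article; the six-year span (2019 to 2025) is the one implied by the forecast.

```python
# Illustrative check of the article's growth figures using the standard
# compound annual growth rate formula: CAGR = (end / start) ** (1 / years) - 1.

def cagr(start, end, years):
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

# Computer-segment memory sales: $27.5B (2019) -> $65.9B (2025)
print(f"Computer segment: {cagr(27.5, 65.9, 6):.1%}")  # ~15.7%, as quoted

# Overall memory vs. processor segments from the article's totals:
memory = cagr(20.6, 60.4, 6)     # $20.6B -> $60.4B
processor = cagr(22.2, 68.5, 6)  # $22.2B -> $68.5B
print(f"Memory: {memory:.1%}, Processor: {processor:.1%}")
```

The computed rates also confirm the article's claim that the processor segment expands slightly faster than the memory segment over the forecast period.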
“Semiconductors represent the foundation of the AI supply chain, providing the essential processing and memory capabilities required for every artificial intelligence application on earth,” said Luca De Ambroggi, senior research director for AI at IHS Markit | Technology. “AI is already propelling massive demand growth for microchips. However, the technology also is changing the shape of the chip market, redefining traditional processor architectures and memory interfaces to suit new performance demands.”
AI-driven processor architectures emerge
Several startups are now aiming to offer completely new architectures that will challenge the market supremacy of traditional devices used for AI processing, such as graphics processing units (GPUs), field programmable gate arrays (FPGAs), microprocessors (MPUs), microcontrollers (MCUs) and digital signal processors (DSPs). These new architectures include capabilities such as integrated vector processing, which can accelerate deep-learning tasks.
Moreover, the introduction of AI-related capabilities into various devices means that these traditional classes of processors are evolving to the point where they are no longer recognizable as distinct categories.
“The old definitions of what makes an MPU, DSP or MCU are beginning to blur in the AI era, as each type of device adds cores with different functions,” De Ambroggi said. “Increasingly, designers of AI-enabled systems are using highly integrated heterogeneous processing solutions, such as application-specific integrated circuits (ASICs) and system-on-chip (SoC) solutions. With processor makers offering turnkey, heterogeneous processing solutions using these ASICs and SoCs, it makes less difference to system designers whether their AI algorithm is executed on a GPU, CPU or DSP.”
Overcoming the AI memory bottleneck
Advanced AI technologies, specifically deep-learning algorithms, require huge amounts of high-bandwidth volatile memory to perform properly. However, increasing the memory bandwidth to the levels needed for AI algorithms also can drive up power consumption to unsustainable levels.
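The bandwidth-power tradeoff follows from simple arithmetic: for a given memory technology, the energy cost of moving each byte is roughly fixed, so power scales linearly with bandwidth. The sketch below illustrates this; the per-byte energy figure is an assumed, illustrative value (not from the article), and real numbers vary widely across DRAM generations.

```python
# Back-of-the-envelope sketch (assumed figures, not from the article) of why
# raising memory bandwidth drives up power: power = bytes/s * joules/byte.

def memory_power_watts(bandwidth_gb_s, energy_pj_per_byte):
    """Power consumed by data movement alone, in watts."""
    bytes_per_s = bandwidth_gb_s * 1e9
    joules_per_byte = energy_pj_per_byte * 1e-12
    return bytes_per_s * joules_per_byte

# Assuming ~20 pJ per byte for off-chip memory access (illustrative only):
for bw in (100, 500, 1000):  # GB/s
    print(f"{bw:5d} GB/s -> {memory_power_watts(bw, 20):.0f} W")
```

Doubling bandwidth doubles data-movement power under this model, which is why the near-memory and in-memory approaches below target the movement itself rather than the interface speed.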
To address this challenge, the semiconductor industry is studying some innovative approaches, including:
- A new processor architecture wherein the memory is closer to the computational core, reducing the burden of data movement and enabling high processing parallelism with dedicated memory cells for each processing core.
- Moving the early stages of data computation into the memory itself, a technique called processing-in-memory (PIM). PIM delivers benefits similar to those of the near-memory approach above.
- Identifying new memory technologies that enable new approaches, offering easy back-end silicon integration, performance comparable to volatile memory, non-volatile capability, low picojoule-per-byte energy consumption, or a new, fast input/output (I/O) interface.
— Luca De Ambroggi, senior research director for AI, IHS Markit | Technology, USA.