AI-silicon: On the nose or a nose job?

The new AI wars in the technology industry are happening on wafer-thin ice, oops, silicon. Here’s why, and how savagely!



Everything except the kitchen sink. That seems to be the mood in the Big Tech chateau as the players pack up and move from their penthouses towards the new and emerging AI estate. Almost every big name now has a pony in the custom-silicon ring, playing a bigger and deeper AI game. If you thought these were show ponies, brace yourself for the workhorse race that is brewing a fierce dust storm.

AI: With An Extra Side of Chips

At AWS re:Invent, AWS announced the next generation of its two in-house chip families, AWS Graviton4 and AWS Trainium2, promising better price-performance and energy efficiency for customer workloads in machine learning (ML) training and generative AI applications. Trainium2 was described as designed to deliver up to 4x faster training than first-generation Trainium chips, strong enough to train foundation models (FMs) and large language models (LLMs) in a fraction of the time, while improving energy efficiency up to 2x.



AMD ROCm 6.0 is a full production software stack that supports popular frameworks such as TensorFlow and PyTorch. These frameworks allow developers to leverage the power of AMD GPUs for their AI and machine learning workloads. They can develop, collaborate, test, and deploy applications in a free, open-source, and integrated software ecosystem. - Mark Papermaster, CTO, AMD

Then there is Google, with its fifth-generation tensor processing unit (TPU), the TPU v5e, which it said is designed to train large models. Not to forget the big patent lawsuit over TPU designs happening on the sidelines. And there is Microsoft, which unboxed two custom-designed chips and integrated systems. It began this path with the Microsoft Azure Maia AI Accelerator, optimised for AI tasks and generative AI. It also has the Microsoft Azure Cobalt CPU, an Arm-based processor tailored to run general-purpose compute workloads on the Microsoft Cloud. Well, if these three are here, no wonder more will follow suit.



But even as software biggies are packing their trucks towards AI with so much force, imagine what’s happening in the township of true-blue hardware players.

As recently as January, chipmaker Intel talked about establishing an independent company dedicated to generative AI: Articul8 AI. Intel said the platform is now ready for use by enterprise customers in financial services, aerospace, semiconductors, and telecommunications.


Of course, Nvidia Corp. (which anyway holds the top seat in the GPU game, the one with real AI chops) keeps pulling out new cards too. The latest: three new desktop graphics chips for AI PCs, with extra components to help personal machines make better use of AI. You get the drift? AI is no longer only about AI; it has become a processor game. As much as it was about algorithms, data, models, and engines, it is now about who can flex better silicon. The effect is already showing its wrinkles in the semiconductor market.

Chips Off The Old Block—No More

If we look at worldwide semiconductor revenue in 2023, as reckoned by Gartner’s preliminary results, it stands at US$533 billion, marking a dip of 11.1 per cent from 2022. Sharing this with the media, Alan Priestley, VP Analyst at Gartner, pointed out that while cyclicality in the semiconductor industry was present again in 2023, the market suffered a difficult year, with memory revenue recording one of its worst declines in history. Yes, revenue for memory products dropped 37 per cent in 2023, the biggest drop of all the segments in the semiconductor market. Even as that was happening, most non-memory vendors enjoyed a relatively benign pricing environment in 2023. The demand for non-memory semiconductors for AI applications turned out to be the strongest growth driver. Nvidia’s 2023 semiconductor revenue propelled it into the top five for the first time ever, thanks to its leading position in the AI silicon market.



Assuming that Nvidia is making a 50 per cent margin, and that these GPUs sell for around US$10,000 each, the cloud makers would need to produce and use around one hundred thousand proprietary processors simply to break even. - Jim Handy, General Director, Objective Analysis

So it makes sense for Big Tech to turn its racehorses into zebras, doing both this and that? If so, is it as easy as picking up a can of paint?


Veteran semiconductor and SSD analyst Jim Handy of Objective Analysis, a semiconductor market research firm, dissects the recent strides being made in custom silicon (for AI and cloud) by AWS, Microsoft, Google and others.

The biggest point that Handy spots here is that these companies spend so much money on computing that it makes financial sense for them to develop custom processors. “I would guess that it costs as much as a half-a-billion dollars to develop something equivalent to Nvidia’s high-end GPUs. Assuming that Nvidia is making a 50 per cent margin, and that these GPUs sell for around US$10,000 each, the cloud makers would need to produce and use around one hundred thousand proprietary processors simply to break even.”
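Handy's back-of-the-envelope logic can be checked in a few lines. The sketch below uses only the figures quoted above (a roughly half-billion-dollar development cost, a US$10,000 selling price, a 50 per cent margin); all three are his assumptions, not measured data.

```python
# Back-of-the-envelope check of Jim Handy's break-even estimate.
# All inputs are the assumptions quoted in the article, not measured data.
dev_cost = 500_000_000   # ~US$0.5B to develop an Nvidia-class high-end GPU
gpu_price = 10_000       # assumed Nvidia selling price per GPU, in US$
gross_margin = 0.50      # assumed Nvidia gross margin

# The margin share of each GPU's price is what a cloud maker could,
# in principle, avoid paying by building its own chip.
saving_per_chip = gpu_price * gross_margin   # US$5,000 per unit

# Units of the in-house processor needed before development pays for itself:
break_even_units = dev_cost / saving_per_chip
print(int(break_even_units))  # 100000
```

The output matches Handy's "around one hundred thousand proprietary processors simply to break even", which is why this maths only works for hyperscalers that deploy chips at that scale.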

Krishna Rangasayee, CEO and Founder of SiMa.ai, opines that in the realm of edge AI and ML, purpose-built Systems-on-Chip (SoCs) stand as a transformative force, distinct and potent compared to the conventional array of processors or silicon repurposed for AI. “At SiMa.ai, our journey began with a profound understanding that cloud-centric solutions were ill-suited for the edge. Hence, we undertook the ambitious task of crafting a holistic system, both hardware and software, tailored from the ground up to outperform cloud alternatives while respecting the power constraints of the edge. Our software, embodied in the Palette framework and powered by the SiMa MLSoC, redefines the edge ML experience.”


Cereals Over Milk or Milk Over Cereals?

Great! Except that making hardware, somehow, swings the pendulum, again, back to software.

As sassy and shiny as it looks to jump into the hardware pool, it will not be easy to swim here without the right gear. Which, for silicon, means both the right programming stack and a strong manufacturing and supply-chain footprint. Are these two strengths crucial? Are they tricky to get? Especially when Nvidia has a huge head start here with its GPUs, foundry capacity and CUDA stack?

Let’s talk about hardware-supporting software first.

Our purpose-built technology offers machines and devices the trifecta of speed, efficiency, and real-world reasoning, all packaged in a compact physical form factor tailored for edge environments. - Krishna Rangasayee, CEO and Founder, SiMa.ai

Will hardware like Maia, Tensor, Graviton and the TPUs be different from the general-purpose silicon and GPUs that have been handling AI and cloud workloads so far? Mark Papermaster, CTO, AMD weighs in on this question by highlighting the diversity of AI requirements. “One would need the right computing solution that addresses the specific need. For stable algorithms, custom silicon could be apt to run some workloads more economically. When the need for more programmability kicks in, CPUs, GPUs, and FPGAs can come in.”

When we compare Nvidia’s CUDA with AMD’s ROCm in this context, Papermaster explains, “AMD has been working for years on a competitive AI software stack named ROCm, and it reached full production level and ramped up industry adoption in 2023. ROCm and CUDA are platforms used for parallel programming and accelerating GPU computing performance. While both platforms provide similar functionalities, there is one significant difference. CUDA is exclusive to Nvidia GPUs, while ROCm is open source and industry standard. It enables the community to drive advancements and does not lock in customers to a specific platform.”

Getting the software right for custom-silicon can be tough but not impossible. Handy argues that Hyperscale cloud computing companies have been using internally-developed AI software with standard servers for much longer than they have been using GPUs. “Their software is very highly developed, and they shouldn’t have much trouble converting it to a proprietary processor, especially if that processor has been designed to complement their software.”

In a press statement, Rani Borkar, corporate vice president for Azure Hardware Systems and Infrastructure (AHSI) had hinted at the new rules of the AI game—”Software is our core strength, but frankly, we are a systems company. At Microsoft we are co-designing and optimizing hardware and software together so that one plus one is greater than two,” Borkar said. “We have visibility into the entire stack, and silicon is just one of the ingredients.”

Incidentally, Moore Threads, a player in GPUs, has come up with its MUSIFY tool for easy migration of CUDA code (Nvidia’s flagship development environment for GPU-centric apps) to its MTT S4000.

Recognising that software is the true differentiator in the edge ML race, Rangasayee contends, his company has championed a pushbutton approach, making ML accessible to all. “Silicon Valley’s chip-centric mindset often overlooks the significance of software, but we’ve embraced the challenge.”


Customers no longer want to buy proprietary solutions that give companies like Nvidia endless amounts of gross margin when the customer gets a solution that doesn’t ideally suit their needs. - Keith Witek, Chief Operating Officer, Tenstorrent Inc.


Ok, all that explains the ‘software’ question to some extent. What about supporting manufacturing strengths?

At the latest re:Invent, we asked David Brown, vice president of compute and networking at AWS, how much of an advantage factors like fabs and supply chains give traditionally strong semicon players vis-à-vis Big Tech’s forays into silicon. He exuded confidence and certainty about the path taken by AWS. “I don’t think we are behind anyone. It is just a question of how we innovate in a different way to bring value to customers. We don’t see supply chain as a weak area. We shipped millions of Nitro chips and a lot of Graviton chips in the last two years. So we have enormous strength in supply chains. The adoption of Graviton has been significant.”

Of course, with Nvidia, Microsoft’s MAIA and Google’s Tensor, AWS has enough company to keep it busy in this new race. “We see competition in the market as a good thing. We want to offer all processors for all our customer needs. It’s about the customer’s goals, always.”

As AI Mixes Up Heels and Soles …

The ‘he’ of hardware and the ‘she’ of software are no longer pronouns in rigid boxes. It’s a fluid world now, and a lot will change, flip and be reset in this new world order. Those who make software are now making hardware for themselves, and then selling it outside too. And those who make hardware could do the same to fill AI gaps. The software maker is able, and eager, to serve you all the underlying hardware too. And vice versa. It opens up a new array of choices, as well as doubts, for the enterprise customer. Whom to trust? Whom to continue with? Whom to try? Whom to leave? Whom to mix?

As to whether Moore’s Law will continue to affect the growth and square-footage of processors as we float towards AI, Brown said, “One of the reasons we did Graviton is the new scenario. Everyone had accepted that Moore’s law is going to be dead. I don’t know whether it split the industry or slowed it down. But it was an amazing prediction, it drove the industry. The laws of Physics were slowing the industry down. That’s why we started thinking of custom silicon. With Graviton the performance and efficiency numbers are very impressive. The law may exist. The industry is turning towards silicon innovations.”

Deepa Param Singhal, Vice President, Cloud Applications, Oracle India observes, “While 2023 witnessed exponential growth of modern technologies such as AI & ML with a strong foundation of cloud, we expect the adoption of AI, including Conversational AI and Generative AI, cloud-based applications, and ML to continue to dominate the applications landscape in 2024. They are playing a major role in revolutionizing industries such as healthcare, public sector, and BFSI, in India.”

Who will define the real winners in the processor industry ahead? Keith Witek, Chief Operating Officer, Tenstorrent Inc. underlines, without mincing words, that the winner will be the one that engineers its business to meet the needs of its customers with solid performance, flexibility, cost-effectiveness, differentiation when needed, security of investment, and reliability. “The companies that win will need to have technology that is powerful enough and efficient enough to win, but customers are becoming more sophisticated and no longer want to buy proprietary solutions that give companies like Nvidia endless amounts of gross margin, when the customer gets a solution that doesn’t ideally suit their needs and can be manipulated or cut off from their supply chain on a moment’s notice.”

Take a few minutes to chew this big shift with these reflections from Handy’s lens, “I moved to Silicon Valley in the 1970s because of the fast-changing technology that was being developed here. Technology is still changing fast, but it’s now on a global level. I am constantly amazed at the brilliant new ideas that are brought to today’s problems. Once today’s problems are resolved, new problems are discovered and solutions are found for those. It’s simply exciting to be involved with all of this, and I won’t try to guess where it’s headed but will continue to enjoy the entire phenomenon. (I wish world peace could be solved as quickly and as well as technology problems seem to be!)”

Well said. But while we try to do that, there’s no denying that AI silicon can truly mark a huge shift for Tech palaces. And a whole new space for bringing in the furniture of breakthroughs that empower customers and assure humanity of AI that brings no carpet stains (at least not the ones red in color). We need that. And more of that.

For now, with silicon being pulled out and fitted into these new AI houses, it looks like – It’s everything ‘and’ the kitchen sink. Preferably, with a dishwasher.

 By Pratima H