Nvidia has dominated the AI landscape since ChatGPT’s launch broke the internet, kicking off a tidal wave of research into AI apps, including chatbots and AI agents.
Its graphics processing units, or GPUs, are ideally suited to the heavy workloads associated with training and running AI apps. Their ability to process data faster than the traditional CPUs found in data centers has unleashed a massive data center upgrade cycle, generating hundreds of billions of dollars in sales for Nvidia.
Nvidia’s moat, however, may be shrinking as other chipmakers refine their own semiconductors to suit AI requirements better. Among the semiconductor stocks furthest along in that journey is Marvell Technology (MRVL). The company, founded in 1995, made a major pivot toward data center infrastructure in 2016 under the leadership of CEO Matt Murphy, a move that positioned it perfectly to capture the growth in AI spending.
Marvell’s expertise in developing application-specific integrated circuits, or ASICs, that efficiently perform specific, routine workloads provided it with the tools needed to partner with companies like Amazon to build custom AI chips, called XPUs, as hyperscalers searched for Nvidia alternatives to diversify their supply chains and reduce costs.
These partnerships have provided a nice jolt to Marvell’s sales and profit growth, and following presentations at Amazon Web Services’ recent re:Invent conference, Marvell appears to be on the cusp of a major step up as demand for XPUs and the interconnect products used to tie networks together climbs.
Marvell carves lucrative niche against Nvidia
Nvidia’s chips remain the go-to choice for hyperscalers and data centers because they’re faster, general-purpose solutions highly optimized for AI tasks, thanks to Nvidia’s CUDA software. The company controls over 80% of the AI chip market, and players like Marvell are unlikely to displace its dominance.
Marvell Technology is experiencing growth as Amazon deploys more custom silicon and data centers’ need for interconnects increases.
That said, Marvell is likely to carve away billions of dollars in revenue that would otherwise have gone to Nvidia as Amazon continues to invest heavily in developing its Trainium chip lineup.
“Effective Tuesday, Amazon launched its Trainium3 chip, which is part of the revenue ramp called out by Marvell back in August, and that program, along with others, should help drive Marvell’s custom AI silicon business higher over the coming quarters,” said long-time portfolio manager Chris Versace in a post on TheStreet Pro.
Amazon Trainium chips offer advantages over GPUs:
- Cheaper and more efficient: Trainium chips are custom-built to train machine learning models, such as AI’s large language models. Amazon claims that using them can reduce training costs by 50% compared with GPU-based systems, and because the chips are designed specifically for Amazon’s own data center infrastructure, they can be optimized more tightly than off-the-shelf GPU solutions.
- Diversification: Concentrating data centers around GPUs exposes Amazon to supply chain risks and limits its negotiating power, pushing up costs at a time when they’re already surging.
- Creates additional revenue streams/stickiness: Trainium chips are proprietary to AWS, so enterprises that build on them deepen their relationship with Amazon and face higher switching costs, all while providing AWS with a new revenue stream tied to usage.
Growing deployment of Trainium chips within Amazon’s AWS is already supporting revenue growth at Marvell’s data center business. In the third quarter, data center sales, primarily driven by AI products including XPUs and interconnects, totaled $1.52 billion, representing a 38% increase from the same period last year and accounting for the lion’s share of Marvell’s total revenue of $2.07 billion.
Custom XPU sales were $418 million in the quarter, up 83% year over year.
Amazon’s newest Trainium3 chip is even more powerful and efficient: Trainium3 UltraServers are up to four times more energy efficient than Trainium2 UltraServers and offer four times the memory.
“We are guiding for robust growth in the fourth quarter and are on track for a strong finish to the fiscal year, with full-year revenue growth forecasted to exceed 40%. Looking ahead, we see demand for our products continuing to accelerate, and as a result, our data center revenue growth forecast for next year is now higher than prior expectations,” said Matt Murphy, Marvell’s Chairman and CEO.
What’s next for Marvell Technology?
CEO Murphy provided solid guidance for the current quarter, stating that revenue is expected to be around $2.2 billion, up from $1.8 billion in the same period last year.
Next year could be even better, given the significant investment Amazon is making in building additional data center capacity for its customers, including Anthropic, whose AI chatbot, Claude, is among the most popular.
Amazon has invested about $8 billion in Anthropic to fuel its growth, and unsurprisingly, Anthropic has committed to using Trainium chips to train its models.
“Trainium2, it’s really doing well. It’s fully subscribed on Trainium2. We have — it’s a multibillion-dollar business at this point. It grew 150% quarter-over-quarter in revenue,” said Amazon CEO Andy Jassy on the company’s earnings call. “We have a lot of demand for Trainium.”
Amazon’s capital expenditures surged to $125 billion this year due to its significant investments in AI, including Trainium. In Q3, it spent $34 billion, about $10 billion more than it spent in Q1 2025. Last year, its capex was $83 billion.
That spending isn’t expected to slow. Amazon CFO Brian Olsavsky said on the company’s third-quarter earnings call, “We expect that amount will increase in 2026.”
The spending will help AWS deliver on its latest plan to build “AI Factories” that can be deployed onsite for non-cloud enterprise and government use. Those factories will include Trainium chips and Nvidia GPUs. They’ll also need plenty of interconnect products, like switches, active electrical cables, transceivers, and amplifiers, further supporting Marvell’s sales growth, given that interconnect products represent about half of Marvell’s data center sales.
Further out, Marvell Technology expects to begin producing XPUs for a second hyperscaler client, with meaningful revenue expected to be generated in the next couple of years.
In a research note shared with TheStreet, Morgan Stanley analysts said that Marvell’s guidance includes custom silicon growth of 20% in 2026 and 100% in 2027.
“We expect custom growth next fiscal year to be higher in the second half and do not expect any air pockets in custom revenue,” said Murphy. “We expect accelerated growth over the next several years, fueled by our growing portfolio of design wins.”
Morgan Stanley upped its Marvell stock price target to $112 from $86. Meanwhile, Chris Versace’s price target increased to $140 from $125.
