Meta Platforms Just Unveiled Its New AI Chips. Should Nvidia Investors Be Worried?
Nvidia (NVDA 1.56%) is known as the king of artificial intelligence (AI), but as the industry migrates from training large language models to inference, will its competitive moat hold up?
Over the past week, even more competitive pressures emerged, with a big custom chip announcement from Meta Platforms (META 3.83%) and its chipmaking partner Broadcom (AVGO 4.11%).
As more large customers migrate to custom XPU solutions, should Nvidia investors be worried about the competition?
Meta Platforms (NASDAQ: META)
Current price: $613.71; today's change: -$24.47 (-3.83%)
Key Data Points
Market cap: $1.6T
Day's range: $609.55 - $629.17
52-week range: $479.80 - $796.25
Volume: 19M
Avg. volume: 15M
Gross margin: 82.00%
Dividend yield: 0.34%
Meta launches four new chips
On Wednesday, Meta unveiled four new artificial intelligence chips: the MTIA 300, MTIA 400, MTIA 450, and MTIA 500.
The 300 is optimized for Meta’s core ranking and recommendation (R&R) workloads, which were Meta’s dominant workload before generative AI. The 400, 450, and 500 each target different types of inference workloads: the 400 can run larger generative AI models for traditional R&R applications; the 450 augments the 400’s capabilities by doubling its high-bandwidth memory (HBM) capacity; and the 500 raises the 450’s HBM by another 50%, for triple the 400’s capacity.
Meta disclosed that while the 300 is in use now, the 400, 450, and 500 will be rolled out beginning in early 2027 for generative AI inference. Meta also elaborated on its chip design strategy, noting it uses a “modular” approach that enables it to iterate on new chip designs every six months, rather than the typical two-year cadence. Meta believes this is a necessity, given the rapid pace of AI evolution today.
Like other major cloud companies, Meta is using Broadcom to manufacture and package parts of its chips.
Broadcom says XPUs are on the rise versus GPUs
Broadcom counts Meta as one of its five major XPU customers, to which it supplies SerDes components that connect the chip logic to the networking fabric. Broadcom also handles packaging and other elements, ensuring these self-designed chips are manufacturable.
Broadcom held its quarterly earnings call last week, during which CEO Hock Tan elaborated on the current trend toward XPUs over graphics processing units (GPUs), noting that as AI workloads evolve, chips require greater specialization for each step in the AI training and inference processes.
Should Nvidia investors be worried?
As the AI computing industry evolves toward pre-training, post-training, reinforcement learning, and inference for diverse applications, is Nvidia in danger of losing market share? After all, Nvidia did shell out $20 billion for the intellectual property and engineering talent of inference chip start-up Groq late last year. That may indicate that Nvidia sees emerging demand for non-GPU chips as the industry pivots, as Hock Tan described.
That being said, Nvidia investors shouldn’t necessarily panic. Even as the inference market is becoming more competitive, Nvidia still has a strong lead in training, and investment in training infrastructure will continue to grow.
Look no further than Meta itself for evidence. Despite unveiling new chips last week, Meta inked a massive, multiyear deal with Nvidia last month to deploy millions of Nvidia Blackwell and Rubin chips in its data centers, along with Nvidia central processing units (CPUs), all connected via Nvidia’s Spectrum-X Ethernet switches.
So, even Meta’s new chip designs haven’t enabled it to stop buying Nvidia infrastructure.
Meta has legacy businesses across Facebook, Instagram, WhatsApp, and its Reality Labs segments, but it is also building its own Llama family of large language models (LLMs). So, Meta may be deploying Nvidia for its LLM efforts and frontier AI research, while the homegrown chips can more efficiently serve its legacy business footprint with optimized solutions.
But the big picture is that AI computing demand is still growing exponentially. That means the emergence of new inference chipmakers won’t cause demand for traditional training-focused GPUs to decline; rather, these new chips should be incremental to, not a replacement for, Nvidia’s GPUs.
In this case, it appears the rising tide of AI compute truly lifts all boats.