Meta plans to deploy its fourth-generation self-developed AI chips by the end of 2027 to meet computing-power demands and reduce its dependence on external suppliers.


Tech Home, March 11 — According to Bloomberg, Meta plans to deploy its internally developed fourth-generation AI chips by the end of 2027. The company is customizing chips to supply the computing power its rapidly expanding AI business demands.

On Wednesday local time, Meta officially announced plans for the new chips — MTIA 300, MTIA 400, MTIA 450, and MTIA 500. The move aims to diversify its hardware supply, reduce reliance on external chip makers, and control costs in the intensely competitive and expensive AI race. Meta will continue purchasing chips from other companies as well, having recently announced agreements with NVIDIA and AMD to spend billions of dollars on AI hardware.

Meta stated that the MTIA 300 has already entered mass production for content ranking and recommendation model training; MTIA 400 (codenamed Iris) has completed lab testing and will be deployed soon. The MTIA 450 and MTIA 500 chips (codenamed Arke and Astrid respectively) are planned for large-scale deployment in 2027.

Yee Jiun Song, Vice President of Engineering at Meta, said these products are being developed simultaneously. The MTIA 450 is expected to launch in early 2027, and the MTIA 500 will be released six months later.

“Looking back at the development of AI as a whole, even in the past two or three months, the industry has advanced faster than anyone could have imagined,” said Song. “Chip development must keep pace with the iteration of computing power demands, so we continuously review our technology roadmap to ensure we create the most practical products.”

Meta is investing heavily in developing competitive AI models and products, which has created unprecedented demand for computing power. The company relies on NVIDIA and AMD to support some AI projects while expanding its chip-design team to develop in-house products.

Last year, with CEO Mark Zuckerberg growing impatient with the pace of the company's in-house chip efforts, Meta attempted to acquire South Korean startup FuriosaAI. After its $800 million offer was rejected, Meta instead acquired Rivos Inc., a startup based in Santa Clara, California, bringing over its more than 400 employees.

The new hires bolster Meta's in-house chip team — the Meta Training and Inference Accelerator (MTIA) team — which is working on multiple projects. MTIA focuses on building more efficient computing architectures for internal workloads, from ranking and recommending content in Instagram's feed to large-scale generative AI inference (generating text or images with trained models).

Although Meta executives repeatedly emphasize the advantages of self-developed chips, the company is also one of the world's largest purchasers of graphics processing units (GPUs), which it uses mainly to train and run AI models. Meta recently signed agreements with NVIDIA and AMD worth hundreds of billions of dollars to secure massive AI computing power for the coming years.

This strategy reflects Meta’s dual approach: on one hand, sourcing traditional hardware from industry partners; on the other, continuously investing in custom chips tailored for specific tasks on the Meta platform.

“We are not developing for the general-purpose market, so our chips don’t need to be versatile across all scenarios,” said Song. “We can eliminate unnecessary features to effectively reduce costs.”

However, the cost and difficulty of chip manufacturing remain enormous. Taking a chip from design to third-party production (usually at TSMC) often costs billions of dollars and takes years. Song said his team typically needs about two years to go from design to mass production, and custom chips usually pay off only when used at large scale and high utilization.

Last month, The Information reported that Meta had canceled its top-tier AI training chip project, codenamed Olympus, due to design difficulties and shifted to a less complex version. A Meta spokesperson declined to comment on the report but said the company regularly evaluates and updates its chip roadmap and learns from its deployment experience.

Meta CFO Susan Li said earlier this month at a Morgan Stanley conference that the company remains committed to developing processors for training AI models.
