AI Computing Power Demand Continues to Surge! South Korea's "AI Squid Game" in Full Swing Sparks Demand for Tens of Thousands of AMD (AMD.US) AI Chips
According to media reports cited by Bloomberg, South Korean AI startup Upstage is in talks with U.S. high-performance chip leader AMD (AMD.US) to purchase 10,000 of its latest AI accelerators. The move is a key part of Upstage's effort to bring larger-scale AI computing infrastructure to the Korean market.
The Korean AI startup is in talks to purchase 10,000 AMD AI chips. In addition, AMD recently reached major collaborations with tech leaders Celestica (CLS.US) and Hewlett Packard Enterprise (HPE.US) to jointly accelerate the capacity ramp-up of AMD's Helios rack-scale AI computing infrastructure. This suggests that AMD's AI computing solutions are winning broader market recognition and are positioned to keep taking share from Nvidia (NVDA.US), which dominates roughly 90% of the multi-trillion-dollar AI computing cluster market.
Upstage CEO Sung Kim told the media that he discussed purchasing AMD's MI355 AI accelerators during a meeting with AMD CEO Lisa Su in Seoul last week. "We have many Nvidia chips in the Korean market, but we want to diversify our supply and shift toward other AI chip providers, including AMD," Kim said in a Monday interview.
Upstage is one of four teams in a South Korean government-backed competition to select the country's top national-level large AI models. The event has been dubbed the "AI Squid Game," after the hit Korean survival drama produced by Netflix, and is a key part of South Korea's ambition to become a top global AI power.
Under the supervision of South Korea's Ministry of Science and ICT, a professional judging panel will evaluate and eliminate teams every six months. The country plans to narrow the field to two finalists by early next year; the winners will gain access to more Nvidia AI GPU computing infrastructure.
Kim said Upstage is preparing a very large language model with about 200 billion parameters for a key round of the competition this summer. He added that the startup's advantage lies in combining scale with highly efficient processing to build high-performance large AI models at relatively low cost, an approach intended to compete with cost-effective large AI models from China and the U.S.
Upstage is a leading Korean AI startup focused on large AI models and enterprise AI software. It holds two notable distinctions: it is one of the four teams in the government-backed "Sovereign AI Foundation Model" competition, and, by its own disclosure, its total funding exceeded $100 million as of 2024, the most ever raised by a large-AI-model company in Korea. Beyond general-purpose large models, the startup is also investing in enterprise Document AI and in taking its LLM and sovereign-AI offerings overseas. Official materials show the company emphasizes "AI+" scenarios in finance, insurance, healthcare, and high-end manufacturing.
Kim also mentioned that he is considering targeting Asian countries like Vietnam and the UAE as major potential markets, offering sovereign-level AI training/inference systems deployable within their borders.
AMD Enters the Rack-Scale AI Infrastructure Era! Striving to Expand Helios Cluster Capacity
Upstage is negotiating to purchase 10,000 MI355 chips from AMD, and its CEO explicitly stated that while Korea already runs a large number of Nvidia chips, the company hopes to diversify its deployment strategy toward AMD. This suggests AMD has begun shifting from an "optional AI GPU alternative" to a serious candidate for customers' large-scale AI infrastructure deployments.
Last week, media reported that AMD announced a deep partnership with Celestica to bring its new Helios rack-scale AI infrastructure platform to the global AI data center market, targeting Nvidia's NVL72 rack-scale AI platform. Together, these developments suggest AMD's AI cluster solutions are gaining market recognition. More importantly, Helios is not a single-card story but a rack-level one: AMD has elevated the competition from individual GPUs to 72-GPU rack systems, network interconnects, and integrated CPU+GPU+NIC platforms, and has partnered with Celestica to accelerate mass production and deployment.
As AMD and Celestica bring Helios rack-scale AI platforms to market, they are part of a broader industry effort to counter Nvidia's dominance in integrated AI infrastructure. AMD previously announced collaborations with Hewlett Packard Enterprise and Broadcom to provide open, rack-scale AI infrastructure for high-performance computing clusters and large AI data centers, aiming to accelerate global "sovereign AI" research.
Helios is expected to be available to customers later in 2026. Meta (Facebook's parent company) has signed a multi-year, multi-generation agreement with AMD to deploy up to 6 GW of AMD Instinct GPU clusters, with the first gigawatt-scale deployment expected to begin in late 2026, and OpenAI is involved in the design optimization of the AMD MI450. Coupled with Upstage's strong demand in Korea for sovereign AI and local large-scale computing, this points to more than individual hardware orders: a growing number of clients are reluctant to rely on a single supplier for AI infrastructure, and AMD is well positioned to meet this demand for a second core supplier, open standards, and reduced vendor lock-in.
“King of the Hill” AMD Stock Price Set to Enter a New Bull Market Cycle?
Undoubtedly, these latest catalysts are strong short- to medium-term drivers of AMD's stock price. AMD has transformed from a follower in the AI chip market into a competitor in AI training and inference system-level infrastructure. In 2025, the company set aggressive targets, including reaching $100 billion in annual data center chip revenue and an annual CAGR of over 80% in data center AI computing revenue.
CEO Lisa Su also projected that the total addressable market (TAM) for AI data centers will surpass $1 trillion by 2030, far exceeding the roughly $200 billion forecast for 2025, implying a CAGR of over 40%. On profitability, Su expects the company's EPS to reach $20 within three to five years.
Analysts at Wall Street firms such as Citigroup see AMD as the "King of the Hill," with 12-month price targets as high as $260. According to TipRanks, the average analyst price target is an eye-catching $285, implying potential upside of about 42% over the next year. AMD closed at $201.33 last Friday.
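The growth and upside figures quoted above follow from standard compounding arithmetic. As a quick sanity check (a minimal sketch; the endpoints, roughly $200 billion in 2025 growing to $1 trillion by 2030, and the $285 average target against the $201.33 close, are taken from the article, and the exact CAGR depends on which base year is assumed):

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over `years` periods."""
    return (end / start) ** (1 / years) - 1

# TAM growth: ~$200B (2025) to ~$1T (2030), five compounding periods
tam_growth = cagr(200e9, 1e12, 5)
print(f"Implied TAM CAGR: {tam_growth:.1%}")  # roughly 38%, near the ~40% cited

# Analyst upside: average target $285 vs. last close of $201.33
upside = 285 / 201.33 - 1
print(f"Implied upside: {upside:.1%}")  # roughly 42%, matching the article
```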
Nvidia CEO Jensen Huang showcased Nvidia's "unprecedented AI computing revenue super blueprint" at the GTC conference in the early morning of March 17, Beijing time. He told global investors that, driven by strong demand for Blackwell-architecture GPUs and the upcoming mass production of Vera Rubin-architecture AI systems, Nvidia's future AI chip revenue could reach at least $1 trillion by 2027, far surpassing the $500 billion AI infrastructure blueprint announced at the previous GTC.
As model sizes, reasoning chains, and multimodal and agentic AI workloads drive exponential growth in computing demand, tech giants are concentrating their capital expenditure on AI infrastructure. Global investors continue to treat Nvidia, Google's TPU clusters, and AMD's new product iterations and AI cluster deliveries as core to the "AI bull market narrative," one of the most reliable growth stories in the stock market. It also means that investments in power supply, liquid cooling, optical interconnects, and related supply chains tied to AI training and inference will remain among the hottest sectors, alongside Nvidia, AMD, Broadcom, TSMC, and Micron, even amid geopolitical uncertainty in the Middle East.
Major Wall Street firms such as Morgan Stanley, Citigroup, Loop Capital, and Wedbush believe the global AI infrastructure investment wave centered on AI hardware is far from over; it is only beginning. Driven by an unprecedented "AI inference demand storm," this global AI infrastructure investment cycle could reach $3 trillion to $4 trillion through 2030.