CITIC Securities recommends focusing on leading domestic AI PCB / copper-clad laminate (CCL) manufacturers, memory makers, and others.
A CITIC Securities research report states that NVIDIA announced at GTC 2026 that demand for AI computing power will continue to grow strongly in 2027. CITIC Securities believes that the addition of the LPU and the midplane in the Rubin/Rubin Ultra architecture, with higher specifications and greater usage, will further expand demand, benefiting AI PCBs. CPO is expected to be deployed first in Rubin’s scale-out architecture, with scale-up applications anticipated on the Feynman platform from 2028. The firm is optimistic that GTC 2026 will further strengthen market confidence in the sustained growth of the AI industry and the realization of its incremental-demand logic, and recommends focusing on leading domestic AI PCB / copper-clad laminate (CCL) manufacturers, memory makers, and others.
Full Text Below
Electronics | NVIDIA GTC 2026 Review: Computing and Photonics Advance Together
NVIDIA stated at GTC 2026 that AI computing power demand will remain strong in 2027. We believe that the addition of the LPU and the midplane in the Rubin/Rubin Ultra architecture, with higher specifications and greater usage, will further expand demand, and AI PCBs will benefit significantly; CPO is expected to be deployed first in Rubin’s scale-out architecture, with scale-up applications starting on the Feynman platform in 2028. We are optimistic that GTC 2026 will further reinforce market confidence in the sustained growth of the AI industry and the realization of its incremental-demand logic. We suggest paying attention to leading domestic AI PCB / copper-clad laminate (CCL) manufacturers, memory makers, and others.
▍ NVIDIA expects order demand to grow to $1 trillion by 2027.
On March 16, in the GTC 2026 keynote, CEO Jensen Huang forecast that Blackwell and Rubin orders would reach $500 billion in 2026, and the company expects total order demand to reach $1 trillion by 2027. Currently, 60% of NVIDIA’s business comes from the world’s top five hyperscale cloud service providers, with the remaining 40% spread across regional clouds, sovereign clouds, enterprise, industrial, robotics, and edge computing. According to recent financial reports from North American CSPs, in Q4 2025 North American tech giants continued to beat market expectations overall, with cloud revenue growth accelerating further. Tight supply-demand dynamics and rising memory chip prices have pushed 2026 capital expenditure guidance well above expectations. We estimate that in 2026, North American CSP CAPEX will rise 58% year-over-year and AI CAPEX 117%, supporting earnings realization for AI computing chips.
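The growth estimates above are simple year-over-year projections. A minimal Python sketch, where the 2025 base values are placeholder assumptions (not CITIC Securities data) and only the +58% and +117% growth rates come from the report:

```python
# Illustration of the year-over-year growth figures cited above.
# The 2025 base values are hypothetical placeholders; only the growth
# rates (+58% total CAPEX, +117% AI CAPEX) come from the report text.

def project_capex(base: float, yoy_growth: float) -> float:
    """Project next year's spend from a base value and a YoY growth rate."""
    return base * (1 + yoy_growth)

capex_2025 = 100.0   # placeholder base, arbitrary $B units (assumption)
ai_2025 = 40.0       # placeholder AI share of the base (assumption)

total_2026 = project_capex(capex_2025, 0.58)   # +58% YoY
ai_2026 = project_capex(ai_2025, 1.17)         # +117% YoY

print(f"Total CAPEX 2026: {total_2026:.1f}")   # Total CAPEX 2026: 158.0
print(f"AI CAPEX 2026: {ai_2026:.1f}")         # AI CAPEX 2026: 86.8
```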
▍ NVIDIA launches five rack-scale Vera Rubin computing platforms.
NVIDIA introduced its new Vera Rubin platform/POD, which combines five rack-scale computing systems into an AI supercomputer to support efficient inference for Agentic AI. Specifically: 1) the Vera Rubin NVL72 rack delivers 3.6 exaflops of FP4 inference compute, five times that of Blackwell; 2) the Vera CPU rack mainly handles scheduling and Agentic workflows; 3) the Groq 3 LPX rack (equipped with 256 Groq 3 LPU units) acts as a token-inference accelerator working alongside the Vera Rubin NVL72, leveraging large-scale on-chip SRAM for efficient inference; 4) the BlueField-4 STX rack supports the storage needed for long-context reasoning in Agentic AI; 5) the Spectrum-X CPO switch rack is used for scale-out and has entered full mass production.
▍ PCB applications confirmed, including orthogonal backplanes, LPX motherboards, and CPU chassis motherboards.
NVIDIA officially announced the Rubin Ultra NVL144 Kyber rack, which places 144 GPUs in a single NVLink domain, with compute nodes and NVLink switches inserted from opposite sides; the Kyber rack (NVLink 144) extends to NVLink 576 via Oberon copper/optical cables. 1) Orthogonal backplanes: according to SemiAnalysis, PCB orthogonal backplanes enable high-density, high-speed signal transmission, reduce signal loss, and simplify cable routing; they will adopt ultra-high layer counts, with materials potentially using cutting-edge M9. We estimate that PCB orthogonal backplanes could add over $200 to per-GPU ASP. 2) LPX motherboards: NVIDIA confirmed that the Groq LPU chip LP30 will be manufactured by Samsung and has entered mass production, with shipments expected in Q3 2026 as part of LPX cabinets. NVIDIA suggests that when code-generation or fast-token demand is high, 25% of computing power can be allocated to Groq, with the remaining 75% going to Vera Rubin. SemiAnalysis indicates that LPX cabinet motherboards may use 50L+ high layer counts and M9 CCL, potentially adding hundreds of dollars to per-GPU ASP. 3) CPU chassis: NVIDIA confirmed the launch of CPU chassis using PCB motherboards, further expanding future AI PCB growth potential.
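The 25/75 allocation guidance above amounts to a simple compute-budget split. A minimal sketch in Python; the budget value and the dictionary key names are illustrative assumptions, not an NVIDIA API:

```python
# Sketch of the suggested compute split: 25% of a rack's compute budget
# to Groq LPUs (fast-token / code-generation workloads), 75% to Vera
# Rubin GPUs. The shares come from the keynote guidance cited above;
# the budget figure and key names are hypothetical.

GROQ_SHARE = 0.25    # per the suggested allocation
RUBIN_SHARE = 0.75

def split_compute(total_exaflops: float) -> dict[str, float]:
    """Split a total compute budget by the suggested 25/75 allocation."""
    return {
        "groq_lpu": total_exaflops * GROQ_SHARE,
        "vera_rubin": total_exaflops * RUBIN_SHARE,
    }

alloc = split_compute(3.6)       # e.g. the NVL72 rack's 3.6 exaflops FP4
print(alloc["groq_lpu"])         # 0.9
```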
▍ Feynman platform adopts a new chip with deep heterogeneous integration; scale-up solutions support copper cables and CPO.
NVIDIA’s roadmap shows the Feynman architecture launching in 2028, integrating the CPU (Rosa), GPU (Feynman), and LPU (LP40) through deep heterogeneous hardware integration. The platform will support both copper-cable and CPO interconnects. Specifically: 1) the GPUs will use TSMC’s A16 (1.6nm) process; 2) the Rosa CPUs will manage token flow between GPUs, storage, and networks more efficiently, optimizing complex logical decision tasks; 3) the LP40 (LPU), integrating NVIDIA GPU and Groq technology, aims to fundamentally address inference latency and the “memory wall” at the microarchitecture level; 4) the network will support both copper cables and CPO via Kyber racks.
Risk Factors:
Macroeconomic fluctuations and geopolitical risks; overseas leading AI product launches falling short of expectations; AI market demand growing more slowly than anticipated; rising costs of memory and other components; technology-shift and product-iteration risks; policy, regulatory, and data-privacy risks; intensifying competition in the PCB industry.
Investment Strategy:
With global computing demand continuing to exceed expectations, upstream sector prosperity and price increases are likely to persist. Inflation along the computing supply chain remains the most certain “prosperity growth” theme for current tech-sector allocations. We are optimistic that GTC 2026 will further strengthen market confidence in the sustained growth of the AI industry and the realization of its incremental-demand logic.
(Source: Securities Times)