Google DeepMind open-sources the Gemma 4 multimodal model family
ME News Report, April 3 (UTC+8): Google DeepMind has open-sourced the Gemma 4 multimodal model family. The models accept text and image inputs (the smaller models also accept audio), generate text outputs, and ship in pre-trained and instruction-tuned variants. They offer a context window of up to 256K tokens and support more than 140 languages. The family spans both dense and mixture-of-experts (MoE) architectures in four sizes: E2B, E4B, 26B A4B, and 31B. Core capabilities include high-performance inference, scalable multimodal processing, on-device optimization, an expanded context window, improved coding and agent capabilities, and native system prompt support. Technically, the models use a hybrid attention mechanism, with global layers using unified key-value pairs and scaled RoPE (p-RoPE). Notably, the E2B and E4B models use per-layer embedding (PLE) technology, so their effective parameter count is lower than their total parameter count. Meanwhile, the 26B A4B MoE model activates only 3.8B parameters during inference, reaching speeds close to those of a 4B-parameter model. (Source: InfoQ)
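The MoE figures above reflect sparse activation: each token is routed to only a few experts, so the parameters actually touched per token (roughly 3.8B for the 26B A4B model) are far fewer than the total. The sketch below is a generic top-k MoE layer in PyTorch, purely illustrative and not Gemma's actual implementation; the class name `TopKMoE` and all dimensions are invented for the example.

```python
# Illustrative sketch only (not Gemma's implementation): a generic top-k
# mixture-of-experts (MoE) layer. It shows why an MoE model touches far fewer
# parameters per token than it holds in total, the effect the report describes
# (about 3.8B active out of 26B). Class name and sizes are invented.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model: int, d_ff: int, num_experts: int, top_k: int):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts, bias=False)
        # Each expert is a small feed-forward block; only top_k of them run per token.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model)
        scores, idx = self.router(x).topk(self.top_k, dim=-1)   # route each token
        weights = F.softmax(scores, dim=-1)                     # per-token mixing weights
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():                                  # run each expert only on its tokens
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

if __name__ == "__main__":
    moe = TopKMoE(d_model=512, d_ff=2048, num_experts=16, top_k=2)
    total = sum(p.numel() for p in moe.experts.parameters())
    active_per_token = 2 * total // 16        # only 2 of 16 experts fire per token
    print(f"expert params: total {total:,}, active per token ~{active_per_token:,}")
    y = moe(torch.randn(8, 512))              # 8 tokens through the sparse layer
    print(y.shape)                            # torch.Size([8, 512])
```

Routing of this kind keeps per-token compute near that of a small dense model while the full expert pool provides capacity, which is consistent with the report's claim that the 26B A4B model runs at speeds close to a 4B dense model.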