Google develops a real-world "Pied Piper" that speeds up neural network computation eightfold (ForkLog)
Google's research division has introduced TurboQuant, a memory-compression algorithm for artificial intelligence. Users compared the development to the Pied Piper technology from the TV series "Silicon Valley."
TurboQuant significantly reduces resource requirements for large language models and vector search systems.
AI operates on complex multidimensional arrays that store information about words or images. This data takes up significant cache space and slows down response generation. Traditional compression methods require storing auxiliary variables, which often negates the benefit of the optimization.
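To give a sense of scale, here is a rough back-of-the-envelope estimate of how large a transformer's key-value cache gets. The model configuration below (layer count, hidden size, context length) is illustrative and not taken from the article:

```python
# Rough KV-cache size estimate for a transformer LLM. The cache stores
# one key vector and one value vector per layer, per token, typically
# at 16-bit precision.
def kv_cache_bytes(n_layers, d_model, seq_len, bytes_per_elem=2):
    # 2 tensors (key + value) of shape [seq_len, d_model] per layer
    return 2 * n_layers * seq_len * d_model * bytes_per_elem

# Assumed Llama-7B-like config: 32 layers, hidden size 4096, 8K context.
size = kv_cache_bytes(n_layers=32, d_model=4096, seq_len=8192)
print(size / 2**30, "GiB")  # 4.0 GiB for a single 8K-token sequence
```

At this scale, even a single long conversation can consume gigabytes of GPU memory before any model weights are counted, which is why cache compression pays off.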
TurboQuant tackles the memory-overhead problem with two mechanisms. The first algorithm converts vectors into polar coordinates and compresses the bulk of the data. The second acts as a mathematical controller, using just one extra bit of memory to cancel residual errors.
Cloudflare CEO Matthew Prince compared the algorithm to the Chinese model DeepSeek, which previously demonstrated high efficiency at minimal hardware cost.
The developers tested the technology on the open models Llama, Gemma, and Mistral. The algorithm compressed the cache to three bits per value without degrading answer quality. Memory consumption fell by at least a factor of six, and processing speed on H100 GPUs increased eightfold.
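A quick sanity check on those numbers: the article does not state the baseline precision, so 16-bit cache entries are assumed here. Quantization alone then accounts for most, but not all, of the claimed savings:

```python
# Back-of-the-envelope check of the reported compression ratio
# (illustrative; the 16-bit baseline is an assumption, not from the article).
baseline_bits = 16        # fp16/bf16 cache entries, assumed
compressed_bits = 3       # per the article
raw_ratio = baseline_bits / compressed_bits
print(round(raw_ratio, 2))  # ~5.33x from quantization alone
# Reaching the reported "at least six times" would additionally require
# shedding the per-vector metadata that traditional schemes must store.
```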
The innovation requires no additional neural network training. According to the company, the technology will be integrated into search algorithms and Google's own AI products, including Gemini. The project will be publicly presented at the ICLR and AISTATS conferences in 2026.
Recall that on March 25, Google announced its plans to transition to post-quantum cryptography.