Professor Zhang Qi of Fudan University: It is crucial to recognize the capability boundaries of large language models
On September 25, Golden Ten Data reported that the special forum “Decoding the Future: Global Digital Intelligence Trends” was held in Pudong, Shanghai. Zhang Qi, a professor at the School of Computer Science and Technology of Fudan University, delivered a keynote speech titled “Reflections on the Capability Boundaries and Development of Large Language Models”. Zhang argued that combining large models' core capabilities (long-context modeling, multi-task learning, cross-language transfer, and text generation) with concrete application scenarios will be of great value. For future large models, he said, it is essential to have a clear understanding of the boundaries of their capabilities: once a consensus on those boundaries forms, the industry will know where and how heavily to invest. Reaching that consensus, however, may take time, perhaps a year or two.