Zhipu, the Dark Side of the Moon, and Xiaomi's Roundtable Conference: Large Models Truly Start "Working," but Computing Power Remains the Biggest Bottleneck
Yang Zhilin hosted, Luo Fuli and Zhang Peng shared insights, and this “lobster session” thoroughly discussed the future of AI.
Author: Chen Junhua
According to a report by Zhidu, on March 27, during the Zhongguancun Forum, Zhipu CEO Zhang Peng, Moonshot AI CEO Yang Zhilin (serving as host), Xiaomi MiMo large model head Luo Fuli, Wuwen Xinqiong CEO Xia Lixue, and Assistant Professor Huang Chao of the University of Hong Kong shared the stage in a rare joint appearance for an in-depth dialogue on the future direction of open-source large models and intelligent agents.
The dialogue began with the currently popular OpenClaw, and the guests unanimously agreed that intelligent agents allow large models to truly start “working.” OpenClaw expands the capability boundaries of large models, but it also imposes higher requirements on the models. Zhipu is researching long-term planning, self-debugging, and other capabilities, while Luo Fuli’s team focuses more on reducing costs and increasing speed through architectural innovation, even achieving model self-evolution.
Infrastructure must also keep pace with the rhythm of intelligent agents. Xia Lixue believes that the current computing systems and software architectures are still designed for human use, not for intelligent agents, and that the limitations of human operational capabilities restrict the potential of agents. Therefore, we need to build Agentic Infra.
In the eyes of many guests, open-source is one of the core driving forces behind the development of large models and intelligent agents. Assistant Professor Huang Chao from the University of Hong Kong believes that the prosperity of the open-source ecosystem is key to the transition of intelligent agents from “playing around” to becoming true “workers.” Only through community co-construction can software, data, and technology fully shift to the native form of intelligent agents, ultimately forming a sustainable global AI ecosystem.
In addition, several guests discussed topics such as the price increase of large models, the explosive use of tokens, and keywords for AI in the next 12 months. Here are the core viewpoints from this roundtable forum:
Zhang Peng: As models grow larger, reasoning costs increase accordingly. Zhipu's recent price increase is actually a return to normal business value; long-term low-price competition is detrimental to the industry's development.
Zhang Peng: The explosion of new technologies like intelligent agents has led to a tenfold increase in token usage, but actual demand may grow a hundredfold, with still a large amount of unmet demand. Therefore, computing power remains a key issue in the next 12 months.
Luo Fuli: From the perspective of foundational large model vendors, OpenClaw guarantees the lower limit of foundational large models and raises the upper limit. The completion rate of tasks using domestic open-source models combined with OpenClaw is already very close to Claude.
Luo Fuli: DeepSeek has given domestic large model vendors courage and confidence. Some structural innovations that seem to be “compromises for efficiency” have sparked real change, allowing the industry to achieve the highest level of intelligence under certain computing power.
Luo Fuli: The most important thing in the coming year of AGI development is “self-evolution.” Self-evolution allows large models to explore like top scientists and is the only place that can “create new things.” Xiaomi has already improved research efficiency by ten times using Claude Code + top models.
Xia Lixue: When the AGI era arrives, the infrastructure itself should be an intelligent agent, autonomously managing the entire infrastructure and iterating based on AI customer needs to achieve self-evolution and self-iteration.
Xia Lixue: OpenClaw has exploded token usage. The current rate of token consumption feels like the early days of 3G when mobile data was just starting, with a monthly limit of only 100MB.
Huang Chao: In the future, many software applications will not be human-oriented; software, data, and technology will all be programmed in an Agent-Native form. In the future, humans may only need to use those GUIs that “make them happy.”
Here is the complete transcript of the roundtable forum:
01. OpenClaw is the “scaffolding,” large model token consumption is still in the 3G era
Yang Zhilin: It is an honor to invite several heavyweight guests today, who come from the model layer, computing power layer, and agent layer. The main keywords today are open-source and agent.
The first question is about the currently popular OpenClaw. What aspects of using OpenClaw or similar products do you find most imaginative or impressive? From a technical perspective, how do you view the evolution of OpenClaw and related agents today?
Zhang Peng: I started playing with OpenClaw a long time ago when it was still called Clawbot. I tinkered with it myself since I also come from a programming background, so I have some personal experience with these things.
I think the biggest breakthrough or novelty that OpenClaw brings is that it is no longer just the domain of programmers or geeks. Ordinary people can also conveniently use the capabilities of top models, especially in programming and intelligent agents.
So up to now, during my exchanges with everyone, I prefer to refer to OpenClaw as a “scaffold.” It provides a possibility, building a strong, convenient, yet flexible scaffold on top of the models. Everyone can use many novel functions provided by the underlying models according to their own wishes.
Previously, my ideas might have been limited by my inability to write code or lack of other related skills, but with OpenClaw, I can finally complete them through simple exchanges.
OpenClaw has had a tremendous impact on me, or rather, it has made me re-evaluate this matter.
Xia Lixue: Actually, when I first used OpenClaw, I wasn’t very accustomed to it because I was used to chatting with large models in a certain way, and I felt OpenClaw was slow to respond.
But later I realized that it differs greatly from previous chatbots; at its core, it is a “person” that can help me complete large tasks. I began to submit more complex tasks to it and found that it could actually perform well.
This realization had a significant impact on me. The model initially chatted based on tokens, but now it can become an agent, a lobster, helping you complete tasks. This greatly enhances the overall imaginative space for AI.
At the same time, it also raises the requirements for the entire system’s capabilities. This is why I felt OpenClaw was a bit sluggish at first. As an infrastructure vendor, I see that OpenClaw brings more opportunities and challenges to the large systems and ecosystems behind AI.
All the resources we currently have are insufficient to support such a rapidly growing era. For example, in our company, since the end of January, our token usage has roughly doubled every two weeks, and it has now increased by about ten times.
The last time I saw this kind of speed was when I was consuming data on a 3G mobile phone. I feel that the current token usage is reminiscent of the time when there was only 100MB of mobile data per month.
In this situation, all our resources need to be optimized and integrated better. We need to enable everyone, not just in the AI field but every person in society, to utilize the AI capabilities of OpenClaw.
As a player in the infrastructure space, I am very excited and deeply moved by this era. I also believe there are many optimization spaces here that we should still explore and attempt.
02. OpenClaw raises the upper limit of domestic models, and the breakthrough in interaction mode is significant
Luo Fuli: I view OpenClaw as a revolutionary and disruptive event in the evolution of agent frameworks.
In fact, everyone around me who is deeply coding still prefers Claude Code as their first choice. However, I believe that those using OpenClaw will feel that its design in the agent framework is ahead of Claude Code. Many recent updates to Claude Code are actually moving closer to OpenClaw.
My experience using OpenClaw is that this framework brings me more in terms of imaginative expansion anytime and anywhere. Initially, Claude Code could only extend my creativity on my desktop, but OpenClaw allows me to extend my creativity anytime and anywhere.
The core value OpenClaw brings mainly has two points. The first is that it is open-source. This open-source nature is very beneficial for the entire community to participate deeply, value, and promote the evolution of this framework, which is a crucial prerequisite.
For agent frameworks like OpenClaw, I think a significant value is that they raise the ceiling of domestic models, which are close to closed-source models but have not yet fully caught up.
In most scenarios, you will find that the task completion rate of the combination of domestic open-source models and OpenClaw is already very close to Claude’s latest model. At the same time, it effectively guarantees the lower limit—through a Harness system or by leveraging its Skills framework and various designs, it ensures the completeness and accuracy of tasks.
To summarize, from the perspective of developers from foundational large model vendors, OpenClaw guarantees the lower limit of foundational large models and raises the upper limit.
Additionally, I believe another value it brings to the entire community is that it ignites everyone’s awareness, making them realize that beyond large models, there lies a substantial imaginative space within the agent layer.
I have also observed recently that, apart from researchers, more and more people are starting to participate in the AGI transformation, engaging with more powerful agent frameworks like Harness and Scaffold. These individuals are, in a sense, using these tools to replace parts of their work while also freeing up their time to invest in more imaginative endeavors.
Huang Chao: First of all, regarding interaction modes, one reason OpenClaw is so popular this time may be that it provides a more "human-like" experience. We have been working on agents for a year or two, but previous agents like Cursor and Claude Code felt more like "tools." By embedding itself in something akin to instant-messaging software, OpenClaw for the first time feels closer to the "personal Jarvis" that people envision. I think this could be a breakthrough in interaction mode.
Another point it inspires for the entire community is that simple yet efficient frameworks like Agent Loop have been proven viable again. At the same time, it prompts us to rethink: do we need a versatile super-intelligent agent that can do everything, or do we need a better “little housekeeper,” like a lightweight operating system or scaffold?
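The "Agent Loop" mentioned here can be sketched in a few lines: the model either requests a tool call or emits a final answer, and the loop executes tools and feeds results back until the task is done. The sketch below is a minimal illustration, not OpenClaw's actual implementation; `stub_model` is a hypothetical stand-in for a real LLM API.

```python
def stub_model(history):
    """Hypothetical model: requests one tool call, then answers."""
    if not any(msg["role"] == "tool" for msg in history):
        return {"type": "tool_call", "tool": "add", "args": (2, 3)}
    result = [m for m in history if m["role"] == "tool"][-1]["content"]
    return {"type": "final", "content": f"The sum is {result}"}

# Tool registry; real frameworks expose shells, file systems, browsers, etc.
TOOLS = {"add": lambda a, b: a + b}

def agent_loop(task, model, max_steps=10):
    """Run the simple loop: model proposes, loop executes, results feed back."""
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = model(history)
        if action["type"] == "final":
            return action["content"]
        result = TOOLS[action["tool"]](*action["args"])
        history.append({"role": "tool", "content": result})
    return "step budget exhausted"

print(agent_loop("What is 2 + 3?", stub_model))  # → The sum is 5
```

The appeal of the pattern is exactly what the speaker notes: it is simple enough for a community to build on, yet general enough to drive long-running tasks.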
The thought brought by OpenClaw is that through this “little system” or “lobster operating system” and its ecosystem, everyone can truly adopt a mindset of “playing with it,” thereby leveraging all tools within the ecosystem.
With the emergence of capabilities like Skills and Harness, more and more people can design applications aimed at systems like OpenClaw, empowering various industries. I think this is naturally very closely tied to the entire open-source ecosystem. In my view, these two aspects are our greatest inspirations.
03. GLM’s new model is built for “working,” and the price increase reflects a return to normal business value
Yang Zhilin: I want to ask Zhang Peng. Recently, I saw that Zhipu released the new GLM-5 Turbo model, which I understand has significantly enhanced agent capabilities. Can you introduce how this new model differs from others? We've also noticed the pricing strategy; what market signal does it reflect?
Zhang Peng: That’s a great question. A couple of days ago, we did indeed make an urgent update, which is actually a phase in our overall development roadmap, just released earlier.
The main purpose is to shift from “simple dialogue” to “truly working”—this is something everyone has been feeling recently: large models are no longer just capable of chatting; they can actually help people work.
However, the capabilities implied behind “working” are very high. The model needs to plan long-term tasks, continually try and adjust, compress context, debug, and may also need to handle multimodal information. Thus, the requirements for model capabilities differ from traditional general-purpose models focused on dialogue. GLM-5 Turbo has specifically strengthened these aspects, especially regarding your point—making it work, running for seventy-two hours, and how to keep looping; we have done a lot of work here.
Moreover, everyone is also concerned about token consumption. To let a smart model tackle complex tasks, the token consumption will be enormous. Ordinary people may not perceive it deeply, but when looking at the bills, they will notice the money disappears quickly. Therefore, we have optimized this aspect, enabling the model to complete complex tasks with more efficient token usage. Overall, the model architecture remains a multi-task collaborative general architecture but with a focus on enhancing capabilities.
The price increase can be easily explained. As mentioned earlier, it is no longer just a matter of asking a question and getting an answer; the underlying reasoning chain is very long. Many tasks require interaction with code and underlying infrastructure, and must constantly debug and correct errors, resulting in significant consumption. The amount of tokens needed to complete a complex task could be ten times or even a hundred times that of answering a simple question.
Therefore, a price increase is necessary; as the model has grown larger, the reasoning costs have correspondingly increased. We are bringing it back to normal business value because long-term reliance on low-price competition is not conducive to the development of the entire industry. This also allows us to create a positive feedback loop for commercialization, continuously optimizing model capabilities and providing better services.
04. Building a more efficient token factory, infrastructure itself should also be an agent
Yang Zhilin: With the increasing number of open-source models forming an ecosystem, various models can provide more value to users on different computing power platforms. As token usage explodes, large models are transitioning from the training era to the reasoning era. I would like to ask Lixue, from the perspective of infrastructure, what does the reasoning era mean for Wuwen?
Xia Lixue: We are an infrastructure vendor born in the AI era, currently supporting Zhipu, Kimi, MiMo, and others, enabling everyone to utilize the token factory more efficiently. We are also collaborating with many universities and research institutes.
Thus, we have been contemplating a question: What kind of infrastructure is needed in the AGI era? And how do we gradually realize and simulate it? We are well-prepared to address issues at various stages—short, medium, and long-term.
The most immediate issue is the explosive increase in token volume driven by OpenClaw, which demands higher optimization of system efficiency. The price adjustments are, in fact, a response to this demand.
We have always been laying out and solving problems through the integration of hardware and software. For example, we have connected nearly all types of computing chips, unifying dozens of different chips and computing clusters in China. This can solve the issue of computing power resource scarcity in AI systems. When resources are insufficient, the best approach is to utilize all available resources first and ensure that each computing power is maximally effective.
Thus, at this stage, what we need to solve is how to build a more efficient token factory. We have made many optimizations, including making the model and various resources like hardware memory optimally compatible, and we are also exploring whether the latest model structures and hardware structures can produce deeper chemical reactions. However, solving the current efficiency problem is merely the creation of a standardized token factory.
Looking toward the Agent era, we believe this is still not enough. Because agents are more like people; you can assign them a task. I firmly believe that much of the current cloud computing infrastructure is designed to serve a program and human engineers, not AI. This is akin to creating an infrastructure with interfaces for human use and then layering it to accommodate agents; this method actually limits the potential of agents by relying on human operational capabilities.
For example, an agent can think and initiate tasks at the millisecond level, but underlying capabilities like Kubernetes (K8s) are not prepared for this, as human task initiation typically occurs at the minute level. Thus, we need further capabilities, which we call “Agentic Infra,” or “intelligent token factories,” and this is what Wuwen Xinqiong is working on.
Looking further ahead, when the true AGI era arrives, we believe that even the infrastructure itself should be an intelligent agent. The factory we are building should also be capable of self-evolution and self-iteration, forming a self-governing organization. It’s analogous to having a CEO, which in this case is an agent, possibly OpenClaw, managing the entire infrastructure and autonomously iterating based on AI client needs. This way, AI can couple better with AI. We are also exploring ways to enable better communication between agents and capabilities like Cache to Cache.
Therefore, we have always been thinking that the development of infrastructure and AI should not be an isolated state—responding to demands as they come—but should generate rich chemical reactions. This is the true meaning of soft-hard synergy and the collaboration between algorithms and infrastructure, which has always been the mission of Wuwen Xinqiong. Thank you.
05. Innovations that “compromise for efficiency” are also meaningful; DeepSeek gives domestic teams courage and confidence
Yang Zhilin: Next, I want to ask Fuli. Xiaomi recently made significant contributions to the community by releasing new models and open-sourcing the technology behind them. What do you think are Xiaomi's unique advantages in developing large models?
Luo Fuli: I think we can set aside the topic of Xiaomi’s unique advantages and instead discuss the overall advantages of teams in China working on large models. I believe this topic has broader value.
About two years ago, China's foundational model teams began achieving significant breakthroughs: working around limited computing power, especially the bandwidth constraints of NVLink, through structural innovations that seem to be "compromises for efficiency," such as MoE and MLA in the DeepSeek V2 and V3 series.
However, we later saw that these innovations triggered a transformation: under certain computing power, how to achieve the highest level of intelligence. This is the courage and confidence that DeepSeek has instilled in all domestic foundational model teams. While our domestic chips, especially inference and training chips, are no longer subject to these limitations, it was these constraints that spurred our new explorations for higher training efficiency and lower inference costs in model structures.
Recent structures like Hybrid Sparse and Linear Attention, for example, DeepSeek’s NSA and Kimi’s KSA, as well as Xiaomi’s HySparse for next-generation structures, all differentiate from the MoE generation and are aimed at the agent era.
Why do I find structural innovation so important? In fact, if you truly use OpenClaw, you’ll find that it becomes easier and smarter the more you use it. One prerequisite is the length of inference context. Long context has been a topic we’ve discussed for a long time, but now is there really a model that performs well in long contexts with strong performance and low inference costs?
Many models can handle 1M or 10M contexts, but the cost of inferring 1M or 10M is too high and the speed is too slow. Only by reducing costs and increasing speed can genuinely high productivity tasks be assigned to the model, enabling the completion of more complex tasks in such long contexts, or even achieving the model’s self-iteration.
The so-called self-iteration of a model means it can rely on ultra-long contexts to complete its evolution in a complex environment. This evolution may pertain to the agent framework itself or the model parameters—because I believe that the context itself is a form of evolution of the parameters. Thus, how to implement a long context architecture and achieve efficient inference in long contexts is a comprehensive competition.
Besides the long-context-efficient architecture we explored a year ago during the pre-training phase, now achieving stability and high upper limits on long-term tasks is an innovative paradigm we are iterating during the post-training phase.
We are considering how to construct more effective learning algorithms, how to gather real-world data that truly maintains long-term dependencies in 1M, 10M, and 100M contexts, and how to combine it with trajectory data generated in complex environments. This is what we are working on in post-training.
However, in the long term, due to the rapid advancement of large models and the support of agent frameworks, as Lixue mentioned, the demand for inference has nearly increased tenfold in the past period. Will the growth of token usage this year reach a hundredfold?
This enters another dimension of competition—computing power, or inference chips, and even down to energy. Therefore, I believe that if everyone thinks about this question together, I will learn more from everyone. Thank you.
06. Agents have three key modules, and the explosion of multi-agents will bring impact
Yang Zhilin: Very insightful sharing. Next, I want to ask Huang Chao. You have developed influential agent projects like Nanobot and have many community fans. From the perspective of agent Harness or application layers, what important technical directions do you think are worth everyone's attention in the future?
Huang Chao: I believe if we abstract the technology of agents, the key modules are Planning, Memory, and Tool Use.
First, let’s discuss Planning. The current problem mainly lies in long-term tasks or very complex contexts, for example, with 500 steps or even longer; many models may not perform well in planning. I feel that at its core, the model may lack this kind of implicit knowledge, especially in some complex vertical fields. Therefore, in the future, we may need to solidify knowledge of various complex tasks within the model, and this could be one direction.
Of course, Skills and Harness are alleviating the errors brought by Planning to some extent because they provide high-quality Skills, which essentially guide the model to complete some more challenging tasks.
Next, let's talk about Memory. Memory seems to perpetually face problems of lossy compression and inaccurate retrieval. Particularly in long-term tasks and complex scenarios, the pressure on Memory can surge. Currently, projects like OpenClaw use the simplest file-system-style Markdown format for Memory, relying on shared files. In the future, Memory may move toward a layered design and should also become more universal.
To be honest, the current Memory mechanisms are challenging to generalize—because coding scenarios, Deep Research scenarios, and multimodal scenarios have vastly different data modalities. How to effectively retrieve and index those Memories while maintaining efficiency is always a trade-off.
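The "file-system-style Markdown memory" described here can be illustrated with a toy: notes are appended as bullets to a shared `.md` file and recalled by naive keyword match. The file name and format below are illustrative assumptions, not OpenClaw's actual layout, and the keyword scan shows exactly the retrieval/efficiency trade-off the speaker raises.

```python
from pathlib import Path

MEMORY_FILE = Path("memory.md")  # hypothetical shared memory file

def remember(note: str) -> None:
    """Append one bullet line to the shared Markdown memory."""
    with MEMORY_FILE.open("a", encoding="utf-8") as f:
        f.write(f"- {note}\n")

def recall(keyword: str) -> list[str]:
    """Naive retrieval: scan every stored line for the keyword."""
    if not MEMORY_FILE.exists():
        return []
    lines = MEMORY_FILE.read_text(encoding="utf-8").splitlines()
    return [line[2:] for line in lines if keyword.lower() in line.lower()]

remember("User prefers concise answers")
remember("Project uses Python 3.12")
print(recall("python"))  # keyword match is case-insensitive
```

A full scan per query works at this scale but degrades as the file grows and as modalities diverge, which is why layered or indexed designs come up as the likely next step.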
Additionally, now that OpenClaw has significantly lowered the barrier for creating agents, in the future, there may be more than one “lobster.” I see that Kimi has mechanisms like Agent Swarm emerging, and in the future, everyone may have “a swarm of lobsters.”
Compared to a single lobster, the context explosion brought by a swarm of lobsters is imaginable, which will put enormous pressure on Memory. Currently, there isn’t a great mechanism to manage the context brought by this “swarm of lobsters,” especially in complex coding and scientific discovery scenarios, where both the model and the entire agent architecture face significant pressure.
Lastly, regarding Tool Use, which relates to Skills, the current issue with Skills is somewhat similar to the problems faced by MCP—MCP had issues with quality assurance and security risks. The same goes for Skills; while there seem to be many Skills, there are very few high-quality ones, and low-quality Skills can affect the accuracy of agents in completing tasks. There are also issues with malicious injection. Therefore, from the perspective of Tool Use, it may be necessary for the community to enhance the entire Skills ecosystem, even allowing Skills to self-evolve into new Skills during execution.
In summary, from Planning, Memory, to Tool Use, these represent some pain points currently existing in agents and possible future directions.
07. Keywords for the next 12 months: ecosystem, sustainable tokens, self-evolution, and computing power
Yang Zhilin: We can see that the two guests have discussed a common issue from different perspectives— as task complexity increases, the context will surge. From the model layer, we can enhance the native context length, and from the Agent Harness layer, mechanisms like Planning, Memory, and Multi-Agent can also support more complex tasks under specific model capabilities. I think these two directions will generate more chemical reactions, further enhancing task completion capabilities.
Finally, let’s have an open-ended outlook. Please use one word to describe the trends and expectations for the development of large models in the next 12 months. Let’s start with Huang Chao.
Huang Chao: Twelve months in AI seems very far away; I don’t even know what it will develop into then.
Yang Zhilin: The question originally said five years, but I changed it.
Huang Chao: Right, haha. The word that comes to mind for me is “ecosystem.” OpenClaw is making everyone active, but in the future, agents really need to become “workers” rather than just something for everyone to play with for novelty. They should truly settle down and become tools for brick-moving, becoming real coworkers.
This requires the efforts of the entire ecosystem, especially open-source. After opening up technological exploration and model technology, the entire community needs to co-build—whether it’s model iteration, Skill platform iteration, or various tools, all need to create an ecosystem that is better oriented toward lobsters.
A noticeable trend is whether future software will still be designed for human use. I believe many software applications may not necessarily be human-oriented in the future—because what humans need is GUI, but in the future, it may be oriented toward Agent-native usage. Interestingly, humans might only use those GUIs that “make them happy.” Currently, the entire ecosystem is transitioning from GUI and MCP to a CLI model. This requires the ecosystem to transform software systems, data, and various technologies into Agent Native forms, enriching the entire development.
Luo Fuli: Narrowing the question down to one year is very meaningful. If it were five years, I would feel that in my mind, AGI would have already been achieved. So if I had to describe the most critical thing in the AGI journey over the next year in one sentence, I would say “self-evolution.”
This word may sound a bit mystical, and it has been mentioned multiple times over the past year. But I have recently developed a deeper understanding of it or, rather, a more pragmatic and feasible approach on how to achieve “self-evolution.” The reason is that with powerful models, we have not fully utilized the upper limits of pre-trained models in the Chat paradigm, while the Agent framework activates this upper limit. When we allow the model to execute longer tasks, we find that it can learn and evolve on its own.
A simple attempt is to add a verifiable condition constraint within the existing Agent framework, and then set up a loop for the model to continuously iterate and optimize its goals, and we will find that it can consistently come up with better solutions. This kind of self-evolution is already possible for one or two days, of course, depending on the task difficulty.
For instance, in some scientific research, such as exploring better model structures, because model structures have evaluation standards like lower PPL, we find that it can autonomously optimize and execute for two to three days.
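The loop Luo Fuli describes, iterate against a verifiable condition and keep what improves, can be sketched abstractly. In the toy below, a quadratic `score` stands in for a verifiable metric like perplexity (lower is better), and a random-perturbation `propose` stands in for the model generating candidate variants; both are assumptions for illustration.

```python
import random

def score(params):
    """Verifiable objective standing in for PPL: lower is better."""
    return sum((p - 3.0) ** 2 for p in params)

def propose(params, rng):
    """Stub proposer: perturb one parameter (a model would go here)."""
    candidate = list(params)
    i = rng.randrange(len(candidate))
    candidate[i] += rng.uniform(-0.5, 0.5)
    return candidate

def self_evolve(params, steps=2000, seed=0):
    """Keep a candidate only if it improves the verifiable objective."""
    rng = random.Random(seed)
    best, best_score = params, score(params)
    for _ in range(steps):
        cand = propose(best, rng)
        s = score(cand)
        if s < best_score:  # the verifiable condition constraint
            best, best_score = cand, s
    return best, best_score

best, best_score = self_evolve([0.0, 0.0, 0.0])
print(round(best_score, 3))
```

The real setting replaces the stubs with a model acting in a complex environment over ultra-long contexts, but the control flow, propose, verify, keep, is the same.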
Thus, from my perspective, self-evolution is the only place that can “create new things.” It does not replace our existing productivity but explores areas that have yet to be discovered, like top scientists. A year ago, I would have thought this timeline would extend to three to five years, but recently I feel it should indeed be shortened to one to two years. We may soon be able to leverage large models combined with a powerful self-evolving Agent framework to achieve at least exponential acceleration in scientific research.
Recently, I have noticed that the workflow of my colleagues in large model research has become highly uncertain and highly creative, but with the assistance of Claude Code and top models, our research efficiency has improved nearly tenfold. I am very much looking forward to this paradigm spreading to broader disciplines and fields, so I find “self-evolution” extremely important.
Xia Lixue: My keyword is “sustainable tokens.” I see that the development of AI is still in a long-term, continuous process, and we hope it has lasting vitality. From the perspective of infrastructure, a significant issue is that resources are ultimately limited.
Just like the discussions around sustainable development in the past, as a token factory, can we continuously, stably, and on a large scale provide tokens that enable top models to truly serve more downstream applications? This is a very important question we see.
We need to broaden our perspective to encompass the entire ecosystem—from energy to computing power, and finally to tokens, resulting in a sustainable economic iteration. We not only need to utilize various domestic computing powers but also export these capabilities overseas, enabling the global resources to be interconnected and integrated.
I also think that “sustainability” is essentially about establishing a token economy with Chinese characteristics. In the past, we talked about Made in China, turning China’s low-cost manufacturing capabilities into quality goods that are exported globally.
Now what we need to do is “AI Made in China”—transforming China’s advantages in energy and other aspects into quality tokens through a sustainable token factory, exporting them globally to become the world’s token factory. This is what I hope to see this year, the value that China brings to the world through artificial intelligence.
Zhang Peng: I will keep it brief. While everyone is gazing at the stars, I will be more grounded. My keyword is “computing power.”
As mentioned earlier, all these technologies and intelligent agent frameworks have enhanced everyone's creativity and efficiency tenfold, but the premise is that everyone must be able to truly use them. If you pose a question and the model thinks for a long time without producing an answer, that is unacceptable. Without sufficient computing power, many research advancements and many things we want to do will be hindered.
A few years ago, I remember an academician said at the Zhongguancun Forum, “No card, no feelings; talking about cards hurts feelings.” I feel we have reached this point again, but the situation is different now. We have entered the reasoning phase, and demand is genuinely exploding—growing tenfold, a hundredfold. As you mentioned, usage has increased tenfold, but the actual demand may be a hundredfold, and there is still a large amount of unmet demand. What should we do? Perhaps we should all think about solutions together.