【AI + Hardware】"Lobster" OpenClaw Triggers Shift in Hardware Demand, Will Memory Prices Continue to Rise? Morgan Stanley: Execution Requires More DRAM Than Thinking

Recently, OpenClaw has sparked a “lobster farming” craze. Morgan Stanley pointed out that AI agents represented by OpenClaw are driving changes in hardware demand. The AI bottleneck has shifted from computing power to data processing, requiring more DRAM (Dynamic Random Access Memory) to perform tasks rather than just thinking, leading to tighter DRAM supply.

The firm raised SK Hynix’s target price to 1.3 million Korean won and Samsung Electronics’ common stock target price to 251,000 Korean won, both maintaining an “overweight” rating.

The report predicts that year-over-year memory price increases will accelerate, with the market currently in the mid-phase of an upward trend. Specifically, by Q2 2026, the price of high-end DDR5 DRAM used for advanced computing is expected to surge over 50% quarter-over-quarter, while the more widely used DDR4 is projected to rise 30% to 40%. NAND eSSD products for servers could double in price.

AI "Autonomous Execution" Shifts the Hardware Bottleneck and Drives Surging DRAM Demand

Unlike generative AI such as ChatGPT, which answers questions one at a time, OpenClaw functions more like an efficient team of assistants. It autonomously searches the web, calls external software tools, reads and analyzes documents, and even executes code, ultimately producing complex deliverables.

Morgan Stanley believes that multi-step coordination, tool invocation, and process orchestration shift the hardware bottleneck from GPUs (Graphics Processing Units) to CPUs (Central Processing Units) and memory. CPU computation time becomes a drag on overall task execution. Additionally, multiple agents need to continuously share context, offload KV caches (Key-Value Caches), and store and retrieve intermediate results, all of which heavily consume DRAM capacity.
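To make the DRAM pressure concrete, the standard KV-cache size estimate (2 tensors per layer, one key and one value, each of shape heads × head_dim × sequence length) can be sketched in a few lines of Python. The model dimensions below are illustrative assumptions resembling a 7B-class transformer, not figures from the report:

```python
def kv_cache_bytes(layers: int, heads: int, head_dim: int,
                   seq_len: int, dtype_bytes: int = 2) -> int:
    """Bytes needed to hold the KV cache for one sequence.

    The factor of 2 accounts for the separate key and value tensors
    stored per layer; dtype_bytes=2 assumes fp16/bf16 activations.
    """
    return 2 * layers * heads * head_dim * seq_len * dtype_bytes

# Assumed figures for a 7B-class model: 32 layers, 32 attention heads,
# head_dim 128, and a 32k-token shared context per agent.
size = kv_cache_bytes(layers=32, heads=32, head_dim=128, seq_len=32_000)
print(f"{size / 1e9:.1f} GB per agent")  # roughly 16.8 GB
```

Under these assumptions a single agent holding a 32k-token context ties up roughly 16.8 GB; a team of agents each keeping its own context, plus offloaded caches and intermediate results, multiplies that footprint, which is the mechanism behind the tighter DRAM demand the firm describes.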

In traditional large language model (LLM) workloads, GPU computing power was seen as the decisive bottleneck. CPUs only needed to convert tokens (the basic units of text an LLM reads and generates) into output text, and DRAM was mainly used for cache read/write tasks.
