Right now, AI is burning cash faster than a teenage girl at Sephora. Training large models costs hundreds of millions of dollars, and investors are starting to ask: Where’s the return?
We all know the DeepSeek story—the Chinese AI startup that’s undercutting the giants with a low-cost, partially open-source model. It’s putting pressure on OpenAI, Google, and Microsoft, all of whom have spent billions locking AI behind proprietary walls. The question is: Who actually captures value in this new landscape?
📌 The Strategic Lens:
There are two ways to look at this:
1️⃣ Does long-term value lie in infrastructure and foundation models? If AI remains compute-heavy, Nvidia, AWS, and the biggest LLM labs will continue to dominate. But cheaper, distilled models like DeepSeek's could shrink demand for hyper-expensive compute, shifting value elsewhere.
2️⃣ Or do the real winners emerge at the orchestration layer? Right now, the "AI wrappers" (companies building specialized applications on top of existing models such as GPT-4, Claude, and Gemini) may be moving into frothy territory. Historically, though, the platforms that simplify and distribute technology (think: Windows, iOS, AWS) capture more value than raw infrastructure does.
Which raises the trillion-dollar question: Are OpenAI, Google, and Meta overestimating the moat of large-scale AI models? Or will the real economic power belong to the companies that make AI usable, personalized, and indispensable across industries?
Flipping the Narrative
This week, Nvidia CEO Jensen Huang publicly responded to the roughly $600 billion single-day drop in Nvidia's market value triggered by DeepSeek's announcement of its low-cost, partially open-source AI model. Investors saw DeepSeek as a threat to Nvidia's dominance, sparking a sell-off that knocked nearly 20% off Huang's personal net worth.
But Huang isn’t buying the panic. In a pre-recorded interview, he dismissed the idea that DeepSeek’s R1 model diminishes demand for Nvidia’s high-performance computing power. Instead, he flipped the narrative, arguing that AI breakthroughs like DeepSeek’s only increase the need for powerful compute infrastructure.
His key argument? Post-training, the process of refining a model after its initial training run, demands immense computing power. DeepSeek's model is innovative, Huang acknowledged, but he emphasized that much of a model's real intelligence is developed in post-training, a stage he believes will still run on Nvidia's chips.
In other words, even if models get cheaper to train, refining and optimizing them will continue to be computationally expensive—which keeps Nvidia at the center of the AI economy.
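To see why that claim holds up arithmetically, here's a minimal back-of-envelope sketch in Python. It uses the widely cited C ≈ 6·N·D FLOPs approximation for transformer training; every concrete number (model size, token counts, GPU throughput, utilization, price) is an illustrative assumption of ours, not a figure from DeepSeek, Nvidia, or Huang.

```python
# Back-of-envelope sketch of pre-training vs. post-training compute,
# using the common C ≈ 6 * N * D FLOPs approximation for transformer
# training (N = parameter count, D = training tokens).
# Every concrete number below is an illustrative assumption.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute via C ≈ 6 * N * D."""
    return 6.0 * params * tokens

def gpu_hours(flops: float, flops_per_sec: float, utilization: float) -> float:
    """Convert a FLOP budget into GPU-hours at a sustained utilization."""
    return flops / (flops_per_sec * utilization) / 3600.0

# Assumptions (all hypothetical): a 70B-parameter model pre-trained on
# 15T tokens, on accelerators sustaining 1e15 FLOP/s at 40% utilization.
PARAMS = 70e9
PRETRAIN_TOKENS = 15e12
GPU_FLOPS = 1e15          # ~1 PFLOP/s, modern-accelerator class
UTILIZATION = 0.4
PRICE_PER_GPU_HOUR = 2.0  # USD, illustrative cloud rate

pretrain = training_flops(PARAMS, PRETRAIN_TOKENS)

# Post-training: far fewer tokens per pass (say 100B), but labs often
# run many refinement cycles (fine-tuning, RL, distillation experiments).
posttrain_pass = training_flops(PARAMS, 100e9)
refinement_cycles = 50
posttrain_total = posttrain_pass * refinement_cycles

for label, flops in [("pre-training (one run)", pretrain),
                     ("post-training (50 cycles)", posttrain_total)]:
    hours = gpu_hours(flops, GPU_FLOPS, UTILIZATION)
    print(f"{label}: {flops:.2e} FLOPs, {hours:,.0f} GPU-hours, "
          f"~${hours * PRICE_PER_GPU_HOUR:,.0f}")
```

Under these made-up numbers, fifty post-training cycles land within the same order of magnitude as the one-time pre-training bill. That is essentially Huang's point: the "cheap" stage of the pipeline gets run over and over, and the compute meter keeps running.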
Huang also welcomed DeepSeek’s contributions, arguing that open-source innovation fuels AI adoption—and, by extension, Nvidia’s long-term relevance. His comments were a direct play to restore investor confidence ahead of Nvidia’s earnings call on February 26.
🎤 Conversation starter
Are today’s AI giants destined to stay on top, or will the real winners be the orchestration layers that make AI accessible to the masses?