The AI Gold Rush
Where does the real value lie?
Right now, AI is burning cash faster than a teenage girl at Sephora. Training large models costs hundreds of millions of dollars, and investors are starting to ask: Where's the return?
We all know the DeepSeek story: the Chinese AI startup that's undercutting the giants with a low-cost, partially open-source model. It's putting pressure on OpenAI, Google, and Microsoft, all of whom have spent billions locking AI behind proprietary walls. The question is: Who actually captures value in this new landscape?
The Strategic Lens:
There are two ways to look at this:
1️⃣ Does long-term value lie in infrastructure or foundational models? If AI remains compute-heavy, Nvidia, AWS, and the biggest LLM labs will continue to dominate. But cheaper, distilled models like DeepSeek could shrink demand for hyper-expensive compute, shifting value elsewhere.
2️⃣ Or do the real winners emerge at the orchestration layer? Right now, the "AI wrappers" (companies using existing models like GPT-4, Claude, and Gemini to build specialized applications) may be moving into frothy territory. Historically, though, platforms that simplify and distribute technology (think Windows, iOS, AWS) have captured more value than raw infrastructure.
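To make the "orchestration layer" idea concrete, here is a minimal sketch of what a wrapper's value-add looks like in code: routing each request to whichever underlying model fits the task. The model names, routing rules, and stub call are all invented for illustration; a real wrapper would call vendor APIs behind the same interface.

```python
# Hypothetical orchestration-layer sketch: the wrapper owns the routing
# logic and prompt handling; the foundation models are interchangeable.

def call_model(model: str, prompt: str) -> str:
    # Stand-in for a real API call (e.g., to GPT-4, Claude, or a
    # distilled open-source model). Returns a tagged placeholder.
    return f"[{model}] response to: {prompt[:40]}"

def route(task: str) -> str:
    # The wrapper's core value: pick the cheapest model that meets the
    # task's quality bar. These rules are invented for illustration.
    if task in {"legal-review", "medical-summary"}:
        return "large-frontier-model"
    return "small-distilled-model"

def orchestrate(task: str, prompt: str) -> str:
    # Route, then delegate; the caller never sees which model ran.
    return call_model(route(task), prompt)

print(orchestrate("legal-review", "Review this indemnity clause..."))
print(orchestrate("chat", "What's for lunch?"))
```

The point of the sketch is that the models are swappable commodities beneath a stable interface, which is exactly why value could pool at this layer rather than below it.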
Which raises the trillion-dollar question: Are OpenAI, Google, and Meta overestimating the moat of large-scale AI models? Or will the real economic power belong to the companies that make AI usable, personalized, and indispensable across industries?
Flipping the Narrative
This week, Nvidia CEO Jensen Huang publicly responded to the $600 billion market wipeout caused by DeepSeek's announcement of its low-cost, open-source AI model. Investors saw DeepSeek as a threat to Nvidia's dominance, sparking a sell-off that knocked nearly 20% off Huang's personal net worth.
But Huang isn't buying the panic. In a pre-recorded interview, he dismissed the idea that DeepSeek's R1 model diminishes demand for Nvidia's high-performance computing power. Instead, he flipped the narrative, arguing that AI breakthroughs like DeepSeek's only increase the need for powerful compute infrastructure.
His key argument? Post-training, the process of refining an AI model after its initial training, demands immense computing power. While DeepSeek's model is innovative, Huang emphasized that the real intelligence of AI is developed in that refinement stage, one he believes will still rely on Nvidia's chips.
In other words, even if models get cheaper to train, refining and optimizing them will continue to be computationally expensive, which keeps Nvidia at the center of the AI economy.
Huang also welcomed DeepSeek's contributions, arguing that open-source innovation fuels AI adoption, and, by extension, Nvidia's long-term relevance. His comments were a direct play to restore investor confidence ahead of Nvidia's earnings call on February 26.
Conversation starter
Are today's AI giants destined to stay on top, or will the real winners be the orchestration layers that make AI accessible to the masses?


