Andrew Homan is a Managing Partner at Maverick Silicon and Chris Miller is the author of Chip War. We cover the geopolitical implications of the race for superior chip technology, how venture capital is playing the AI-driven boom in the industry, and where innovation is happening beyond just Nvidia.
Principles & Lessons:
1) Shifting Semiconductor Paradigms Necessitate Constant Reinvention. Andrew notes that every decade or so, “there seems to be what I would describe as an architectural change,” from mainframes to PCs to mobile to cloud, and now to AI. This puts enormous pressure on incumbents, as illustrated by Intel’s struggles in transitioning to new computing eras. Chris cites Intel as a cautionary example: “it was too successful in producing chips for PCs,” ended up missing mobile, and now lags in AI. Both see today’s focus on AI as the latest wave forcing existing firms and newcomers alike to pivot rapidly or risk irrelevance.
2) The Chip-Cloud Layer Will Likely Capture Significant Value in the AI Era. Andrew divides the AI ecosystem into a three-layer cake: “the bottom layer—chips and cloud… the middle layer would be the foundational models… and the top layer would be the apps.” He suggests the top and middle layers are “very foggy,” but “almost in any scenario,” the infrastructure side “is going to capture a significant amount of economic rent” because ever more compute is required for both training and inference. Chris adds that historically, hardware has extracted ample profits when a new computing wave requires specialized chips plus adjacent software stacks.
3) Massive Capital Outlays, but Not All Are Sustainable. Andrew contrasts the “tsunami of AI build-outs” with the 1990s telco bubble. Then, huge sums were spent “on fiber or Cisco routers,” but “utilization was often 2%.” Today, hyperscalers’ capex is “50% of their operating cash flow,” so they remain self-funding with high resource utilization. However, frontier foundation model companies may find it hard to sustain “$100 million, $1 billion, or $10 billion training bills.” Chris emphasizes “the next few years we have a runway… but eventually we need real business models emerging” to support that gigantic spend.
4) Competition and Moats Arise from Interplay of Silicon and Software. Chris points out that “the history of the industry is that describing it just as silicon actually misstates the source of moats.” He cites Intel’s PC era and Apple’s mobile era, where proprietary chip design plus software integration lock in ecosystems. Andrew thinks that in AI hardware, “the performance improvements for GPUs are much faster than the improvements in networking or memory,” creating new choke points. Those who solve these bottlenecks—“advanced packaging, memory bandwidth, or interconnect”—can build the next wave of formidable businesses.
5) Political Constraints and Export Controls Are Reshaping Global Supply Chains. Chris sees the U.S. government aiming to “mitigate chip-making concentration around Taiwan,” “keep U.S. firms out in front technologically,” and “prevent adversaries—China above all—from accessing cutting-edge AI chips.” Regulations such as CHIPS Act subsidies for domestic manufacturing or export bans on high-end GPUs to China reflect those aims. Andrew observes that “mergers can stall just because Chinese regulators hold them up” for political reasons rather than genuine antitrust review. Thus, “the state of U.S.-China relations” directly affects major semiconductor deals.
6) Manufacturing in the U.S. Faces High Barriers Due to Taiwan’s Deep Ecosystem. Intel’s attempt to pivot from design to foundry underscores how hard it is to replicate Taiwan’s success. Chris stresses that Taiwan’s advantages are “the chemical suppliers, materials producers… all right there in a very small country,” plus TSMC’s single-minded focus. While the CHIPS Act supports new U.S. fabs, a single year of TSMC’s capex “is almost as large as the entire CHIPS subsidy,” so fully moving chip manufacturing onshore remains challenging, especially given the specialized knowledge concentrated in Asia.
7) Nimble Cloud Startups Complement (and Challenge) Incumbent Hyperscalers. Andrew highlights “the rise of the startup GPU cloud,” such as CoreWeave, which got H100 chips “six months earlier” than major clouds, offering specialized high-performance compute. These smaller clouds move quickly because “the data center for AI is finicky,” requiring custom configurations and immediate availability of the latest chips. That speed and specialization let them peel off business from Amazon or Microsoft, even though the hyperscalers eventually catch up. Both see a future where more edge or niche compute providers thrive.
8) Upfront Risk but Potentially Large Outcomes for Semiconductor Startups. Founders in semis “tend to be much more experienced” with deep engineering backgrounds, says Andrew, since “the complexities are enormous.” They also face heavy early spend—tape-outs, IP licensing, EDA tools—yet if successful, their exit options are many. “One or two might go IPO,” but Andrew forecasts “the majority will end up getting acquired by large established semi companies,” which have substantial cash flows and often need specialized IP or design teams. In a world where the hardware-software boundary is pivotal, well-funded incumbents may pay billions for breakthrough chip tech that helps them keep pace with AI’s relentless demands.
Transcript