AVGO didn't just participate in the AI boom. It locked in the infrastructure layer before most investors knew there was one. The market tells one AI story: Nvidia. But the hyperscalers building AI at scale are writing a different chapter. Google, Meta, ByteDance, Anthropic, and OpenAI have committed $162 billion in orders to a company most investors still call a dividend stock. The common assumption was that AI chips meant GPUs. The reality is that inference workloads, projected to reach 70% of AI compute by 2027, run better on custom silicon. And Broadcom designs that silicon.
Why are hyperscalers building custom AI chips instead of buying Nvidia GPUs?
Training and inference are different workloads with different economics. Training a large language model requires brute-force compute, and GPUs excel there. But once a model is trained, inference (running it billions of times per day) has different requirements. Inference needs efficiency, not raw power. Custom XPU chips designed for specific workloads deliver superior performance per watt, and at hyperscaler scale that efficiency translates to billions in savings. Google's TPU, Meta's custom accelerators, and ByteDance's inference chips all run on Broadcom silicon. The hyperscalers aren't experimenting. They're deploying clusters of one million chips each.
What does a $162 billion backlog actually mean for AVGO stock?
A backlog is not a forecast. It's a purchase order with a delivery schedule. Broadcom's $162 billion consolidated backlog includes $73 billion in AI-related orders scheduled for delivery over the next 18 months. These aren't projections. They're contracts. The AI portion alone covers custom XPU chips, networking switches, DSPs, lasers, and PCIe components. Hyperscalers can't build data centers without these parts. Investors often read backlogs as inventory risk. In infrastructure, a backlog from customers with unlimited capital is the opposite of risk. It's a forward revenue schedule.
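As a quick sanity check on the arithmetic, here is a back-of-envelope sketch using only the figures above ($73 billion in AI orders over 18 months) and the simplifying assumption of an even delivery schedule; actual deliveries will be lumpy, so treat this as an illustration, not a forecast.

```python
# Back-of-envelope sketch: what a $73B AI backlog delivered over
# 18 months implies as a revenue run rate, assuming (simplistically)
# an even delivery schedule.
ai_backlog_usd_bn = 73.0      # AI-related orders, per the article
delivery_window_months = 18   # stated delivery schedule

monthly_run_rate = ai_backlog_usd_bn / delivery_window_months
quarterly_run_rate = monthly_run_rate * 3
annualized_run_rate = monthly_run_rate * 12

print(f"Implied monthly AI revenue:    ${monthly_run_rate:.1f}B")
print(f"Implied quarterly AI revenue:  ${quarterly_run_rate:.1f}B")
print(f"Implied annualized AI revenue: ${annualized_run_rate:.1f}B")
```

Even under this crude assumption, the backlog maps to roughly $12 billion of AI revenue per quarter, which is why a contracted backlog reads as a forward revenue schedule rather than inventory risk.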
Is concentration in five customers a risk for Broadcom?
Concentration with the winners is the opposite of risk. Broadcom's five custom-chip customers (Google, Meta, ByteDance, Anthropic, and OpenAI) control the majority of global AI compute spend. These are not fragile startups. They are the infrastructure layer of the internet. Anthropic alone placed a $10 billion order for Google's TPU Ironwood racks, designed and distributed by Broadcom. A fifth, unnamed customer added $1 billion in Q4 2025. Concentration-risk analysis assumes customers can leave. When switching costs are measured in years of chip redesign, concentration becomes a moat.
How does Broadcom's AI revenue growth compare to Nvidia?
Broadcom's AI business is growing much faster than most investors realize. In fiscal 2024, AI revenue more than tripled, rising from $4 billion to over $12 billion. Goldman Sachs expects AI capital expenditure to surge past $500 billion annually by 2026, with cumulative infrastructure investment on track to exceed $1 trillion by 2027 as the industry pivots from building hardware to scaling the platform and application layers. The market still prices AVGO as a diversified semiconductor company. But AI now represents over 50% of Broadcom's semiconductor revenue, and the trajectory is steeper than headlines suggest. Nvidia dominates the conversation. Broadcom dominates the contracts. Both can win simultaneously because they serve different parts of the AI stack.
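The growth arithmetic above is worth making explicit. Using only the figures stated in this section ($4 billion rising to over $12 billion in fiscal 2024), the year-over-year multiple works out as follows; the $12 billion input is a floor, since the article says "over $12 billion."

```python
# Growth arithmetic from the article's stated figures: fiscal 2024
# AI revenue rose from $4B to over $12B, i.e. at least a 3x
# year-over-year multiple. $12B is used here as a conservative floor.
prior_ai_revenue_bn = 4.0    # prior-year AI revenue, per the article
latest_ai_revenue_bn = 12.0  # fiscal 2024 AI revenue, stated as "over $12B"

growth_multiple = latest_ai_revenue_bn / prior_ai_revenue_bn
yoy_growth_pct = (growth_multiple - 1) * 100

print(f"Growth multiple:       {growth_multiple:.1f}x")  # 3.0x
print(f"Year-over-year growth: {yoy_growth_pct:.0f}%")   # 200%
```

A 200%+ growth rate on a $12 billion base is the context for the claim that the trajectory is steeper than headlines suggest.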
The loudest AI narratives capture attention. The infrastructure contracts capture value. Every generation of technology follows the same pattern: the picks-and-shovels providers often outperform the gold miners. Understanding where durable margins live, in custom silicon designed for scale, reveals how capital actually flows through transformative technology. The future of AI is not louder models. It is infrastructure that makes intelligence deployable at scale.