The Big Tech Capex Cycle:
$780B Spending Surge & Infrastructure Gold Rush

Microsoft, Google, Amazon, and Meta are collectively spending $780B (2024-2027) on AI infrastructure—3x their historical capex averages and the largest tech buildout in history. Microsoft alone: $240B across three years. Google: $225B. Amazon AWS: $195B. Meta: $120B. Every dollar flows to Nvidia GPUs (88% market-share monopoly), Vertiv cooling systems (30% market share), Arista cloud networking (45% hyperscale), and Broadcom custom AI chips—the Bro Billionaire infrastructure stocks positioned as pick-and-shovel plays on the AI gold rush. But sustainability questions are emerging: is this rational investment or a bubble? When does capex peak? Who wins when spending moderates?

$780B
Big Tech Capex (2024-2027)
3x
Historical Capex Averages
$260B
Annual Spending Rate (2026)
📅 Updated Feb 8, 2026

Main points

  • Historic Surge: Big Tech (Microsoft, Google, Amazon, Meta) spending $260B annually (2024-2027) vs $80-100B historically—roughly a 3x increase. Total $780B over 3 years = roughly Switzerland's annual GDP (~$800B).
  • AI Driving Everything: 80%+ of new capex = AI infrastructure (GPUs, data centers, cooling, networking). Training GPT-5-scale models requires $5-10B per facility. Cloud AI services (Azure OpenAI, AWS Bedrock) driving enterprise adoption.
  • Competitive Arms Race: Winner-take-most dynamics—hyperscaler lagging in AI loses cloud customers permanently. Microsoft integrated OpenAI first, capturing enterprise AI market. Google, Amazon, Meta forced to accelerate spending or face irrelevance.
  • Supply Chain Bottlenecks: Nvidia H100/H200 GPUs on 12-18 month allocation. Hyperscalers hoarding supply—ordering 100,000+ GPUs annually vs competitors' 10,000-20,000. Creating structural demand for infrastructure (cooling, networking, power).
  • Infrastructure Beneficiaries: Nvidia (NVDA - GPUs), Broadcom (AVGO - custom chips), Vertiv (VRT - cooling), Arista Networks (ANET - networking), TSMC (TSM - chip fabrication), Digital Realty (DLR - data center REITs).
  • Sustainability Risks: Financial ROI unclear—spending $260B but AI revenue only $50B across industry. Historical precedent: infrastructure booms (fiber 2000, cleantech 2008) ended in busts when monetization disappointed. Capex likely peaks 2026-2027, moderates 2028-2029.

The $780B Surge: Breaking Down Big Tech Capex by Company

In Q3 2024 earnings calls, every Big Tech CFO announced unprecedented capex guidance. Combined total: $260B annually through 2027—3x their historical $80-100B baseline. This is corporate infrastructure spending at nation-state scale.

Microsoft: $240B (2024-2027)

Annual capex: $80B (2024-2026E). Up from $28B (2020), $38B (2022), $50B (2023).

What they're building:

  • Azure AI cloud: Infrastructure for OpenAI ChatGPT integration (Office 365 Copilot, Bing Chat, Azure OpenAI Service). Enterprise customers migrating AI workloads to Azure = Azure growing 35%+ vs AWS 13%
  • OpenAI exclusive compute: Microsoft provides 100% of OpenAI's computing infrastructure (multi-year agreement). GPT-4, GPT-5 training runs exclusively on Azure = strategic moat
  • Geographic expansion: 60+ Azure regions globally (vs 33 AWS regions). Data residency requirements forcing local data center buildout (EU GDPR, China data laws)
  • Gaming (Xbox Cloud): Streaming AAA titles requires massive GPU infrastructure. Microsoft building largest cloud gaming network globally

GPU scale: Microsoft rumored to have 200,000-300,000 Nvidia H100/H200 GPUs deployed (2024-2025)—one of the largest fleets globally, rivaled only by Google. Planning 500,000+ B200 GPUs for 2026-2027.

Why so aggressive: Azure AI winning enterprise—Copilot generating $10B+ annual revenue (2026E) from Fortune 500 adoptions. ROI clear = justifying capex surge.

Google: $225B (2024-2027)

Annual capex: $75B (2024-2026E). Up from $26B (2021), $32B (2022), $42B (2023).

What they're building:

  • Gemini AI infrastructure: Training Gemini Ultra (competing with GPT-4) requires 100,000+ GPU clusters. Gemini integrated into Search, Gmail, Docs, YouTube = 3 billion users accessing AI
  • TPU (Tensor Processing Units): Google's custom AI chips. TPU v5 chips competing with Nvidia H100. Building dedicated TPU data centers = reducing Nvidia dependency (but still buying H100s for flexibility)
  • YouTube AI infrastructure: AI video recommendations, content moderation, ad targeting. Processing 500+ hours of video uploaded per minute = enormous compute
  • Google Cloud AI: Vertex AI platform for enterprises. Competing with Azure OpenAI, AWS Bedrock. Cloud revenue $100B+ annually (2026E)—AI services fastest growing segment

GPU scale: 150,000-250,000 Nvidia GPUs + 200,000+ custom TPUs = largest hybrid GPU/TPU deployment globally.

Why so aggressive: Defending Search monopoly (Microsoft Bing + ChatGPT threatening Google's $200B+ Search advertising revenue). Can't afford to lose AI race = existential capex.

Amazon (AWS): $195B (2024-2027)

Annual capex: $65B (2024-2026E). Up from $40B (2020), $48B (2021), $59B (2022).

What they're building:

  • AWS Bedrock: Managed AI service offering OpenAI, Anthropic Claude, Meta Llama models via API. Enterprise customers building AI apps on AWS = $35B annual AI revenue (2026E)
  • Trainium/Inferentia chips: Amazon's custom AI training/inference chips. Cheaper than Nvidia GPUs for specific workloads. Building Trainium-optimized data centers = differentiation vs Azure/Google Cloud
  • Retail AI: Amazon.com recommendations, Alexa voice AI, warehouse robotics, delivery route optimization. AI spending = operational efficiency (reduces costs long-term)
  • Geographic expansion: AWS expanding into India, Southeast Asia, Middle East (47 availability zones across 30+ countries). Data sovereignty laws requiring local infrastructure

GPU scale: 100,000-150,000 Nvidia GPUs + 150,000+ Trainium/Inferentia custom chips = largest custom chip deployment attempting to reduce Nvidia dependency.

Why so aggressive: AWS market share declining (31% → 29% as Azure gains). Must defend cloud leadership = capex arms race. AI services = margin expansion (higher-margin AI services vs ~50% gross margin on traditional cloud).

Meta: $120B (2024-2027)

Annual capex: $40B (2024-2026E). Up from $19B (2021), $30B (2022), $32B (2023).

What they're building:

  • Llama AI models: Meta's open-source AI (Llama 3, Llama 4). Training requires 50,000-100,000 GPUs. Strategic: open-sourcing creates developer ecosystem dependent on Meta infrastructure
  • AI-powered advertising: Facebook/Instagram ad targeting using AI. Revenue impact: $10-15B additional annual ad revenue from improved targeting (justifying capex)
  • Metaverse compute: Reality Labs (VR/AR). Quest headsets require massive rendering infrastructure for cloud-based metaverse experiences
  • Content moderation AI: Scanning 3 billion users' posts/images/videos for policy violations. AI replacing human moderators = cost reduction

GPU scale: 100,000-150,000 Nvidia H100 GPUs (2024-2025). Planning 200,000+ B200 GPUs 2026-2027 = one of the largest GPU fleets after Microsoft and Google.

Why so aggressive: AI advertising ROI proven ($15B incremental revenue). Llama open-source strategy positioning Meta as AI infrastructure provider (long-term cloud business potential).

Company       | 2024-2027 Capex | Historical 3-Yr Capex | Increase | Primary Use
Microsoft     | $240B           | $80B                  | 3.0x     | Azure AI, OpenAI infrastructure, Copilot
Google        | $225B           | $75B                  | 3.0x     | Gemini AI, TPUs, YouTube AI, Cloud
Amazon (AWS)  | $195B           | $70B                  | 2.8x     | AWS Bedrock, Trainium chips, retail AI
Meta          | $120B           | $40B                  | 3.0x     | Llama AI, advertising AI, metaverse
TOTAL         | $780B           | $265B                 | 2.9x     |

$780B over 3 years = roughly Switzerland's annual GDP (~$800B). This is infrastructure spending at nation-state scale—the digital equivalent of building the interstate highway system, power grid, and telecom network simultaneously. Biggest beneficiaries: companies supplying GPUs (Nvidia), cooling (Vertiv), networking (Arista), chips (Broadcom, TSMC), and real estate (Digital Realty).
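As a quick sanity check, the table's "Increase" column can be reproduced from its own figures. A minimal sketch (all dollar amounts are the article's estimates, in billions):

```python
# Reproduce the capex table's "Increase" column from its own figures ($B).
capex = {
    #              2024-2027 total, historical 3-year total
    "Microsoft":    (240, 80),
    "Google":       (225, 75),
    "Amazon (AWS)": (195, 70),
    "Meta":         (120, 40),
}

total_new = sum(new for new, _ in capex.values())
total_old = sum(old for _, old in capex.values())

for name, (new, old) in capex.items():
    print(f"{name:13s} ${new}B vs ${old}B -> {new / old:.1f}x")
print(f"{'TOTAL':13s} ${total_new}B vs ${total_old}B -> {total_new / total_old:.1f}x")
# -> 3.0x, 3.0x, 2.8x, 3.0x per company; total $780B vs $265B -> 2.9x
```

The company rows and the TOTAL row are internally consistent: $780B against a $265B historical three-year baseline works out to the 2.9x figure quoted throughout the piece.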

Contrarian Take

Most analysts focus on Nvidia's GPU dominance, but they're missing the real story: Nvidia's software moat through CUDA. Competitors can match chip performance, but they can't replicate a decade of developer-ecosystem investment.

The Infrastructure Beneficiaries: Who Captures the $780B?

Every dollar of hyperscaler capex flows to infrastructure suppliers. Here's how the $780B gets distributed across the supply chain:

GPU/Chip Suppliers ($350B = 45% of capex)

Nvidia (NVDA): Dominant beneficiary. 88% GPU market share = 90%+ of AI chip spending.

  • Revenue impact: $120B annually from hyperscalers (2026E). Up from $50B (2024)
  • Products: H100 GPUs ($25K-40K each), H200 (improved memory), B200 (next-gen, 2026 ramp), Grace CPUs
  • Backlog: 12-18 month lead times. Hyperscalers ordering 100,000+ GPUs annually—$3-5B single orders
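The Nvidia revenue figure above hangs together with the section's capex split. A sketch using the article's own estimates (the article attributes nearly all of the GPU/chip share to Nvidia):

```python
# Cross-check: Nvidia's cited hyperscaler revenue vs. the capex split above.
annual_capex_b = 260    # combined Big Tech capex run-rate, $B/yr (article estimate)
gpu_chip_share = 0.45   # share of capex flowing to GPU/chip suppliers

implied_chip_spend = annual_capex_b * gpu_chip_share
print(f"Implied annual GPU/chip spend: ${implied_chip_spend:.0f}B")
# -> $117B, in line with the ~$120B/yr hyperscaler revenue cited for Nvidia
```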

Broadcom (AVGO): Custom AI chips for hyperscalers, most notably Google's TPUs and Meta's custom ASICs (Amazon designs its Trainium chips in-house).

  • Revenue impact: $15B annually AI custom silicon (2026E). Up from $5B (2023)
  • Moat: Only company with design expertise, ASIC manufacturing partnerships (TSMC), and hyperscaler relationships. Competitors (Intel, AMD) years behind

TSMC (TSM): Manufactures Nvidia H100/B200, Broadcom ASICs, Google TPUs, AMD GPUs. 90%+ of AI chips fabricated at TSMC fabs.

  • Revenue impact: $40B annually AI chip fabrication (2026E). Up from $20B (2024)
  • Critical bottleneck: TSMC 4nm/3nm capacity constrained. Building new fabs (Arizona $40B investment) but won't add capacity until 2027-2028

Cooling/Power Infrastructure ($200B = 26% of capex)

Vertiv (VRT): Liquid cooling systems, UPS (backup power), power distribution units (PDUs), monitoring software.

  • Revenue impact: $12B annually from hyperscalers (2026E). Up from $5B (2023)
  • Content per rack: AI rack = $150K-250K Vertiv infrastructure (cooling + power + monitoring) vs $30K traditional

Schneider Electric: Electrical infrastructure, cooling systems, power management software. Competing with Vertiv for data center contracts.

Eaton (ETN): UPS systems, electrical switchgear, backup generators. Diversified industrial—data center = 20% revenue growing 30%+ annually.

Networking Equipment ($150B = 19% of capex)

Arista Networks (ANET): Cloud Ethernet switches. 45% hyperscale market share. AI clusters require 400Gbps/800Gbps switches (vs 100Gbps traditional).

  • Revenue impact: $10B annually from hyperscalers (2026E). Up from $4B (2023)
  • Content per cluster: 100,000-GPU cluster = $500M+ Arista networking equipment (switches, routers, management software)

Broadcom (AVGO): Networking ASICs (chips inside Arista switches), custom AI interconnects. Dual revenue stream (custom AI chips + networking silicon).

Real Estate / Construction ($80B = 10% of capex)

Digital Realty (DLR), Equinix (EQIX): Data center REITs leasing facilities to hyperscalers. Hyperscalers prefer lease vs own (capital efficiency).

  • Revenue impact: $10-15B annual rent from hyperscaler leases (long-term 10-15 year contracts)
  • Construction boom: Building 1GW+ data center campuses. Single facility = $5-10B construction cost
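Putting the four category shares together (a minimal sketch; the dollar figures in the subsection headers above are rounded versions of these products):

```python
# Allocate the $780B across supplier categories using the shares above.
TOTAL_B = 780
shares = {
    "GPU/chip suppliers":         0.45,  # Nvidia, Broadcom, TSMC
    "Cooling/power":              0.26,  # Vertiv, Schneider, Eaton
    "Networking":                 0.19,  # Arista, Broadcom silicon
    "Real estate / construction": 0.10,  # Digital Realty, Equinix
}

assert abs(sum(shares.values()) - 1.0) < 1e-9  # shares cover all capex
for name, share in shares.items():
    print(f"{name:28s} {share:4.0%}  ~${share * TOTAL_B:.0f}B")
```

The exact products come out to roughly $351B, $203B, $148B, and $78B, which the section headers round to $350B, $200B, $150B, and $80B.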

The Best Bro Billionaire Plays on Big Tech Capex Cycle

1

Nvidia

NVDA
Market Cap
$3.3T
Revenue (2026E)
$160B
GPU Market Share
88%
Hyperscaler Revenue
$120B (75% of total)

The Primary Beneficiary. Nvidia captures 45% of Big Tech's $780B capex surge. H100/H200/B200 GPUs powering every AI data center globally. 88% GPU market share = monopoly pricing power. Hyperscalers (Microsoft, Google, Amazon, Meta) = 75% of Nvidia revenue ($120B annually 2026E). Backlog extends through 2027—GPUs on 12-18 month allocation.

Why #1: Most direct Big Tech capex exposure. Every $1B hyperscaler spends on AI infrastructure = $450M flows to Nvidia (GPUs dominate capex mix). Revenue $160B (2026E) up from $60B (2024) = explosive growth continuing. Operating leverage—gross margins 75%+ (monopoly pricing), FCF $80B+ annually funding buybacks/dividends. Risks priced in but upside remains if capex cycle extends through 2028-2029.

Risks: Hyperscaler custom chips (Google TPU, Amazon Trainium) reducing Nvidia dependency long-term, AMD competition (MI300X GPUs gaining share at margin), capex cycle peak 2026-2027 (demand moderates 2028), valuation (45x forward earnings—priced for perfection), geopolitical (China export restrictions limit TAM). But near-term (2026-2027) = clear earnings visibility.

EXTREME CONVICTION — 15-20% PORTFOLIO
2

Broadcom

AVGO
Market Cap
$850B
AI Revenue (2026E)
$20B
Custom Silicon
Google TPU, Meta custom ASICs
Networking ASICs
80% market share

The Custom Chip Monopoly. Broadcom designs custom AI chips for hyperscalers attempting to reduce Nvidia dependency: Google's TPUs and Meta's custom ASICs (Amazon's Trainium/Inferentia are designed in-house). Also supplies networking ASICs (the chips inside Arista/Cisco switches) = dual AI exposure. Only company with design expertise, TSMC partnerships, and hyperscaler relationships at scale.

Why #2: Diversified AI exposure (custom training chips + networking silicon + AI infrastructure software). Revenue $20B AI (2026E) up from $8B (2024). Custom chips = long-term recurring revenue (hyperscalers locked in for 5-10 year roadmaps). VMware acquisition adds $15B software revenue (AI infrastructure management). Cheaper valuation than Nvidia (28x vs 45x forward earnings) despite similar growth.

Risks: Hyperscaler concentration (top 5 customers = 60% revenue), custom chip margins lower than Nvidia GPUs (30-40% vs 75%+), VMware integration execution risk ($69B acquisition completed 2023), competition (Intel Gaudi, AMD custom chips attempting to compete). But custom silicon moat defensible—switching costs enormous once hyperscaler builds around Broadcom architecture.

EXTREME CONVICTION — 12-18% PORTFOLIO
3

Vertiv

VRT
Market Cap
$38B
Revenue (2026E)
$10B
Backlog
$8.5B
Market Share
30% (cooling/power)

The Infrastructure Picks-and-Shovels Play. Vertiv supplies cooling systems, UPS (backup power), power distribution, monitoring software for AI data centers. Captures 26% of Big Tech capex (infrastructure spending). Single AI rack = $150K-250K Vertiv content vs $30K traditional—5-8x monetization per rack deployed.
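The "5-8x monetization" figure follows directly from the per-rack numbers (a sketch using the article's estimates):

```python
# Vertiv content per rack: AI vs. traditional (article's estimates, $).
traditional_rack = 30_000
ai_rack_low, ai_rack_high = 150_000, 250_000

print(f"Uplift: {ai_rack_low / traditional_rack:.0f}x "
      f"to {ai_rack_high / traditional_rack:.1f}x per rack")
# -> Uplift: 5x to 8.3x per rack
```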

Why #3: Pure-play infrastructure exposure. Liquid cooling revenue 60%+ growth (now 40% of sales). Backlog $8.5B = 12+ months revenue visibility. Operating leverage—gross margins 32% → 37%+ as liquid cooling scales (higher margin product). Free cash flow $1.5B+ (2026E) funding buybacks. Stock up 180% (2024) but growth continuing—backlog extending through 2027.

Risks: Customer concentration (top 5 = 75% revenue), cyclicality (capex boom-bust risk), supply chain (cooling components on allocation), valuation (28x earnings—expensive if capex slows). But AI buildout multi-year—infrastructure spending lags GPU purchases by 6-12 months (data centers being built now house 2027 GPUs).

HIGH CONVICTION — 8-12% PORTFOLIO
4

Arista Networks

ANET
Market Cap
$115B
Revenue (2026E)
$9B
Hyperscale Share
45%
Gross Margin
63%

The Cloud Networking Monopoly. Arista supplies Ethernet switches connecting servers in hyperscale data centers. 45% cloud networking market share. AI driving 400Gbps/800Gbps switch upgrades—Arista content per AI rack $50K-80K vs $10K traditional. Meta's 100,000-GPU cluster = $500M+ Arista equipment alone.

Why #4: AI clusters require 10x networking bandwidth. Arista dominates high-speed networking (EOS software ecosystem creates lock-in). Revenue growing 35%+ through 2027. Software-driven margins (63% gross margin vs Cisco 55%). Microsoft + Meta = 50% revenue but diversifying (Azure, AWS expanding Arista deployments).

Risks: Customer concentration (Microsoft + Meta = 50%), competition (Broadcom custom switches, Cisco fighting back, hyperscaler proprietary networking), cyclicality (cloud capex boom-bust), valuation (32x earnings—priced for growth). But AI networking multi-decade upgrade—not 1-2 year refresh.

HIGH CONVICTION — 8-12% PORTFOLIO
5

TSMC

TSM
Market Cap
$850B
AI Revenue (2026E)
$45B
Foundry Market Share
60%
Customers
Nvidia, AMD, Broadcom, Apple

The Chip Fabrication Bottleneck. TSMC manufactures 90%+ of AI chips: Nvidia H100/B200, AMD MI300X, Broadcom ASICs, Apple custom silicon. 60% global foundry market share. Only manufacturer with 4nm/3nm advanced nodes at scale. Critical bottleneck—TSMC capacity constraints limit Nvidia GPU supply.

Why #5: Diversified AI exposure (Nvidia + AMD + Broadcom + Apple = 60% revenue). Revenue $45B AI chips (2026E) up from $25B (2024). Capacity expansion ($40B Arizona fabs 2027-2028) extends growth runway. Cheaper valuation than Nvidia (22x vs 45x) with similar AI exposure but lower risk (diversified customers vs Nvidia's hyperscaler concentration).

Risks: Geopolitical (Taiwan-China tensions = supply chain risk), competition (Intel/Samsung attempting foundry business—years behind but improving), capex intensity ($40B annual capex = FCF limited), cyclicality (semiconductor boom-bust historically). But AI demand structural—Taiwan produces 90% of advanced chips globally (irreplaceable near-term).

MODERATE-HIGH CONVICTION — 8-12% PORTFOLIO

The Bottom Line: Capex Cycle Multi-Year, But Peak Approaching

Big Tech capex cycle ($780B over 2024-2027) is real, unprecedented, and driving explosive growth for infrastructure suppliers: Nvidia, Broadcom, Vertiv, Arista, TSMC. Unlike historical bubbles (fiber 2000, cleantech 2008), AI monetization proving out—Azure AI $10B revenue, enterprise adoption accelerating, ROI visible at Microsoft/Google/Meta.

But sustainability questions emerging: capex likely peaks 2026-2027 as GPU supply normalizes, enterprise cloud migrations mature, CFOs demand profitability proof. Historical precedent: cloud buildout 2015-2018 corrected 30% after peaking. Infrastructure stocks (Nvidia, Vertiv, Arista) will discount capex slowdown 6-12 months early—watch for 2026 Q3-Q4 guidance cuts.

Ride the capex boom through 2026; trim positions in 2027 as peak signals emerge. These stocks could be multi-baggers from here if the capex cycle extends to 2028-2029—but risk/reward is less favorable than at 2023-2024 entry points. Size meaningfully, but prepare for volatility.