Advanced Micro Devices
Rating
Hold
Hold for Long-Term Compounding
Combined average of Moat (AI Resilience), Growth, and Valuation scores.
Moat Score
Strong positioning as the primary x86 alternative and growing software (ROCm) ecosystem.
Growth Score
Q1 2026 EPS $1.37 beat consensus by ~10% (+43% YoY); the OpenAI 6 GW compute deal, multi-year Meta partnership, and the MI450/Helios ramp into 2H 2026 are driving consensus upgrades and a 60%+ YTD stock rally.
Valuation Score
At ~$356, AMD trades just below the revised base-case target ($390), set on the back of the OpenAI 6 GW commitment and the multi-year Meta deal. This is a fair-to-modestly-rich entry point, with risk/reward improving if 2H 2026 MI450/Helios execution lands as guided.
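The headline rating is described above as a simple average of the three component scores. A minimal sketch of that aggregation, assuming equal weights; the report only derives the Growth score (93), so the Moat and Valuation inputs below are illustrative placeholders, not figures from the report:

```python
# Sketch of the composite-rating aggregation described above.
# Only the Growth score (93) is derived in the report; the Moat and
# Valuation values passed in below are hypothetical placeholders.
def composite_score(moat: float, growth: float, valuation: float) -> float:
    """Combined average of Moat (AI Resilience), Growth, and Valuation scores."""
    return (moat + growth + valuation) / 3

print(composite_score(80, 93, 70))  # hypothetical inputs -> 81.0
```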
The Chiplet Moat
AMD's advantage lies in Architectural Efficiency:
- Chiplet Innovation: AMD led the transition to chiplets, allowing for higher yields and more flexible SKU creation compared to monolithic designs.
- x86 Market Share Capture: Continues to erode Intel's dominance in the server (EPYC) and consumer (Ryzen) markets.
- Open Ecosystem: ROCm software suite is becoming a viable open-source alternative to Nvidia's proprietary CUDA, attracting hyperscalers looking for vendor flexibility.
Ten Moats Verdict
AMD's moat is weakening relative to NVIDIA in the AI-resilient categories. Execution quality is world-class, but CUDA's network effect remains the dominant barrier; AMD's upside is as a CUDA challenger, not a CUDA replacer.
- GPU programming interfaces are commoditized; ROCm still significantly lags CUDA in developer tooling.
- Not applicable as a primary competitive moat for semiconductor design.
- Not applicable to AMD's competitive position.
- Chip design engineers remain scarce; Lisa Su's executive team is world-class and hard to replicate.
- AMD sells chips, not full-stack AI infrastructure solutions — limited bundling moat vs. NVIDIA's CUDA ecosystem.
- CDNA architecture IP, ROCm optimization data, and foundry partnership integration data remain proprietary.
- Limited government contract footprint compared to NVIDIA; some defense wins but not entrenched.
- ROCm developer ecosystem is a fraction of CUDA's 4M+ developer community — the critical gap to close.
- Embedded in hyperscaler data centers as a second-source alternative, with growing MI300X adoption at Microsoft.
- Not yet the default standard; AMD operates in NVIDIA's shadow in AI compute despite superior price/performance in some workloads.
Growth Analysis
Growth Drivers
Key Risk
If NVIDIA's Rubin Ultra widens the performance gap on flagship training workloads through 2027 and ROCm fails to close the developer-ecosystem gap to CUDA, AMD's AI GPU share stalls below 15% of TAM and the 28-38% CAGR collapses toward 15-20% — invalidating the consensus upgrade cycle priced in after the OpenAI deal.
Score Derivation
Base 88 (28–38% CAGR mid-band) + 6 (Q2 guide implying +46% YoY) + 5 (Data Center +57% YoY and OpenAI/Meta deals) + 3 (expanding gross margin) − 9 (NVIDIA Rubin Ultra competitive overhang) = 93
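The score derivation above is plain arithmetic and can be sketched directly, using exactly the adjustments stated in the text:

```python
# Growth-score derivation as stated in the text: a base score plus
# positive adjustments, minus the competitive overhang.
adjustments = {
    "base (28-38% CAGR mid-band)": 88,
    "Q2 guide implying +46% YoY": +6,
    "Data Center +57% YoY and OpenAI/Meta deals": +5,
    "expanding gross margin": +3,
    "NVIDIA Rubin Ultra competitive overhang": -9,
}
growth_score = sum(adjustments.values())
print(growth_score)  # 88 + 6 + 5 + 3 - 9 = 93
```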
Price Scenarios (12–24 Months)
Bear Case: AI capex moderates in 2027, NVIDIA reasserts dominance with Rubin Ultra, and the OpenAI/Meta deals scale back materially.
- Hyperscaler AI capex growth decelerates from 40%+ to mid-teens as training-cluster overbuild concerns surface
- Rubin Ultra performance leadership widens vs. MI450, capping AMD's share of new AI GPU spend at <15%
- Intel 18A regains incremental server CPU share, slowing EPYC's compounding
Base Case: MI450/Helios ramps to volume in 2H 2026, the OpenAI 6 GW deal deploys on schedule, and AMD compounds AI GPU share toward 20% of TAM.
- AI GPU revenue reaches $25–30B in FY2026 as OpenAI, Meta, and Microsoft scale MI450 deployments
- EPYC server share crosses 35%, with hyperscaler vendor diversification accelerating
- Operating margin expands to 30%+ as AI mix and software (ROCm) adoption improves pricing
Bull Case: AMD becomes a credible second-source AI standard, capturing 25%+ of AI GPU spend as ROCm reaches functional parity with CUDA for inference and key training workloads.
- MI500 roadmap (2027) achieves performance parity with NVIDIA on flagship training workloads
- OpenAI 6 GW commitment scales to 10 GW with multi-year extension, and a second hyperscaler signs a similar deal
- ROCm crosses 1M+ developers and becomes the default open alternative for inference workloads
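Scenarios like the three above are often collapsed into a single probability-weighted price target. A minimal sketch of that calculation; only the $390 base-case target comes from the valuation section, while the bear and bull targets and all probabilities here are hypothetical placeholders, not figures from this report:

```python
# Probability-weighted price target across the three scenarios above.
# Only the $390 base-case target appears in the report; the bear/bull
# targets and every probability below are hypothetical placeholders.
scenarios = [
    ("bear", 0.25, 250.0),  # hypothetical target
    ("base", 0.50, 390.0),  # base-case target from the valuation section
    ("bull", 0.25, 520.0),  # hypothetical target
]
expected_target = sum(prob * target for _, prob, target in scenarios)
print(round(expected_target, 2))  # 387.5 under these placeholder inputs
```

The weighted result is only as good as the probabilities fed in; the point of the sketch is the mechanics, not the output.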