Micron Technology
Rating
Accumulate
Adding on Dips — Active Accumulation
Combined average of Moat (AI Resilience), Growth, and Valuation scores.
Moat Score
An oligopoly of three (Samsung, SK Hynix, Micron) with high capital barriers to entry, but commodity memory pricing limits true moat durability. HBM4 — already in high-volume production a quarter ahead of plan — has shifted from differentiation in theory to validated execution, and the move from spot pricing to 3–5-year HBM supply agreements meaningfully dampens cyclicality and entrenches Micron with named hyperscalers.
Growth Score
Micron is in the steepest part of a semiconductor supercycle driven by HBM4 demand from AI infrastructure. Q2 FY2026 delivered record revenue of $23.86B (+196% YoY, +75% sequential) and non-GAAP EPS of $12.20 — beating the $20.1B consensus by ~19%. Management guided Q3 FY2026 to ~$33.5B in revenue and ~$19.15 non-GAAP EPS (+260% YoY). The CY2026 HBM book is fully sold and customers are now signing 3–5 year supply contracts. With HBM TAM growing from $35B (2025) to $100B (2028), Micron's revenue quality is structurally improving, though memory cyclicality remains a tail risk into 2027.
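As a sanity check, the headline beat and the implied prior quarter can be reproduced from the figures above (a minimal sketch; all inputs are the report's own approximate numbers, and the variable names are illustrative):

```python
# Back-of-envelope checks on the Q2 FY2026 figures cited above.
# Inputs are the report's approximate numbers, not fresh data.
q2_revenue_bn = 23.86   # Q2 FY2026 actual revenue, $B
consensus_bn = 20.1     # pre-print consensus, $B

beat_pct = (q2_revenue_bn - consensus_bn) / consensus_bn * 100
implied_q1_bn = q2_revenue_bn / 1.75   # from the +75% sequential figure

print(f"Consensus beat: ~{beat_pct:.0f}%")        # ~19%
print(f"Implied Q1 revenue: ~${implied_q1_bn:.1f}B")
```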
Valuation Score
MU has rallied from ~$578 to ~$745 in the weeks since the prior refresh, including a 14% spike on May 8 to a fresh 52-week high of $748. Consensus FY2026 EPS has been re-rated from ~$32 to ~$58 following the Q2 FY2026 blow-out and Q3 guide of ~$33.5B / ~$19.15 EPS. At ~$745, Micron trades at ~13× FY2026 EPS — still optically cheap vs. semiconductor peers, but multi-year HBM contracts (not multiple expansion) are doing most of the work. The price now sits ~7% below the revised base of $800 and ~33% below the new bull of $1,100; meaningful margin of safety has compressed but has not disappeared.
Oligopoly with Thickening Walls
Micron's competitive position rests on Oligopoly Structure, HBM4 Execution, and Multi-Year Supply Lock-In — the last two have strengthened materially this cycle:
- Three-Player Oligopoly: With Samsung, SK Hynix, and Micron controlling ~95% of DRAM supply, the market is structurally oligopolistic. New entrants face $30B+ capex requirements and decade-long learning curves that effectively preclude competition. Micron is the only US-based survivor of what was once a much larger industry.
- HBM4 in Volume — One Quarter Early: Micron's HBM4 — pin speeds above 11 Gb/s with in-house CMOS and advanced metallization on the base logic die — entered high-volume production in Q1 CY2026, a full quarter ahead of management's prior guidance. Q2 FY2026 revenue of $23.9B (+196% YoY) and Q3 guidance of ~$33.5B confirm the ramp is yielding well at scale; yield at volume was the riskiest unknown a quarter ago.
- From Spot Pricing to 3–5 Year Contracts: The 2026 HBM book is fully sold out and customers are now signing three- to five-year supply agreements — a structural shift from the historical quarterly negotiation pattern that permanently improves Micron's revenue visibility. Outside HBM, standard DRAM and NAND remain commodity products vulnerable to oversupply (Micron posted $5.8B in net losses in 2022–23), and the $20B+ FY2026 capex commitment still creates execution risk if AI capex normalises.
Ten Moats Verdict
Micron is a clear net beneficiary of AI — the HBM4 supercycle is directly driven by AI infrastructure build-out, talent scarcity and proprietary data are strengthened by AI's demand for specialised chip design, and the shift to 3–5 year HBM contracts has upgraded transaction embedding from weakened to intact. The moat is materially better than a year ago, but Micron still does not own a software layer, a data flywheel, or a network effect that compounds independently of the hardware cycle, so durability remains tied to the HBM margin premium holding through CY2027–2028.
N/A — Micron is a B2B semiconductor manufacturer with no consumer interface lock-in.
N/A — memory chips have no embedded business-logic moat.
N/A — Micron does not derive competitive advantage from public data access.
Leading-edge DRAM and HBM process engineers (sub-1β node specialists, HBM4 base-die architects, advanced metallization specialists) are among the scarcest technical talent globally. Micron's Boise R&D center is a decade-deep talent cluster that competitors cannot quickly replicate. AI strengthens this moat — designing HBM4 base logic dies in-house requires irreplaceable human expertise.
Micron sells DRAM, NAND, and HBM as distinct products with limited bundling; some system-level memory solutions exist but don't create meaningful lock-in vs. Samsung or SK Hynix.
Proprietary DRAM cell designs (1-gamma node), HBM4 base-die CMOS architecture, advanced metallization processes, and yield-learning data from high-volume HBM production represent genuine IP. In-house logic die design (vs. competitors outsourcing) is a defensible advantage AI cannot easily replicate.
The $6.4B in total CHIPS Act grants for its Idaho and New York fabs makes Micron a designated US national security asset. The US government has an explicit interest in Micron's success as the only US-based DRAM manufacturer — and export controls on Samsung's and SK Hynix's China operations further entrench Micron's strategic position.
N/A — no network effects exist in commodity memory; customers buy on price, availability, and quality specifications, not ecosystem lock-in.
HBM qualification is specific per GPU generation, and the industry has shifted from quarterly DRAM negotiations to 3–5 year HBM supply contracts with named hyperscalers — a structural change that materially deepens Micron's embedding in customer roadmaps. CY2026 HBM is fully booked and CY2027–2028 capacity is increasingly committed. Samsung and SK Hynix remain qualified alternatives, but multi-year contracts now create meaningful cross-generation switching cost.
N/A — memory is a commodity input; Micron is not a system of record for any business function; customers source from all three suppliers simultaneously.
Growth Analysis
Growth Drivers
Key Risk
If AI hyperscaler capex enters a pause cycle in 2H CY2027, Micron's $20B+ FY2026 capex commitment creates overcapacity risk: gross margins compress below 40%, FY2028 EPS reverts toward $25–30, and a 2022-style multiple compression follows even with multi-year HBM contracts cushioning the floor.
Score Derivation
Base 91 (30–40% blended CAGR midpoint from HBM4 ramp + AI data center DRAM) + 3 trajectory (2 of 3 drivers accelerating) + 4 margin expansion + 3 TAM expansion (HBM TAM $35B→$100B by 2028) − 5 moderate cyclical risk (commodity DRAM/NAND exposure; $20B+ capex creates overcapacity risk into 2H CY2027) ≈ 96
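The derivation is a simple additive tally, which can be sketched directly (component labels paraphrase the text; nothing here is an official scoring methodology):

```python
# Additive score tally mirroring the derivation above; labels paraphrase the text.
components = {
    "base: 30-40% blended CAGR midpoint": 91,
    "trajectory: 2 of 3 drivers accelerating": +3,
    "margin expansion": +4,
    "TAM expansion: HBM $35B -> $100B by 2028": +3,
    "cyclical risk: commodity DRAM/NAND, capex overhang": -5,
}
score = sum(components.values())
print(score)  # 96
```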
HBM4 Demand Structural Shift
Price Scenarios (12–24 Months)
Valuation Analysis
At ~$745, Micron trades at ~13× FY2026 consensus EPS of ~$58 — well below software peers at 20–30× and consistent with historical memory peaks at 8–13× peak EPS. The structural shift to 3–5 year HBM supply contracts arguably justifies the upper end of that band; if FY2027 EPS holds above $55–60, base case $800 is reasonable. PEG ~0.4 signals deep growth-adjusted value, but cycle reversion risk into 2H CY2027 caps how aggressively the multiple can re-rate. Fair value estimate: ~$800.
Valuation Multiples
| Multiple | Value |
| --- | --- |
| Trailing P/E (GAAP) | ~31× |
| Forward P/E (NTM) | ~13× |
| PEG Ratio | ~0.4× |
| Price / Sales (NTM) | ~8.5× |
| Price / FCF | ~25× |
Forward P/E of ~13× remains deeply cheap vs. semiconductor peers at 25–35×, and PEG ~0.4 signals growth at a heavy discount to its rate. The gap between trailing (~31×) and forward (~13×) reflects an earnings ramp from ~$24 TTM to ~$58 FY26, validated by Q2 FY26's $12.20 non-GAAP print and Q3 guidance of ~$19.15 EPS in a single quarter. The risk is not the FY26 number — it's FY28: if HBM oversupply emerges, EPS could revert toward $25–30 and a 12–13× multiple implies $300–390, so upside from here is contingent on multi-year HBM contracts holding the margin floor.
Approximate figures as of May 2026.
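The trailing/forward gap and the downside band discussed above reduce to a few multiplications; a minimal sketch using the approximate figures quoted in the table:

```python
# P/E bridge and bear-case band from the approximate figures above.
price = 745.0       # ~current price
ttm_eps = 24.0      # trailing twelve-month EPS (implied by ~31x trailing)
fy26_eps = 58.0     # FY2026 consensus EPS

print(f"Trailing P/E: ~{price / ttm_eps:.0f}x")   # ~31x
print(f"Forward P/E: ~{price / fy26_eps:.0f}x")   # ~13x

# If FY2028 EPS reverts to $25-30 and the multiple holds at 12-13x:
low, high = 25 * 12, 30 * 13
print(f"Bear-case implied price band: ${low}-${high}")  # $300-$390
```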
Where We Are vs Targets
Bear Case
Memory cycle reversion: AI hyperscaler capex pauses in 2H CY2027, Samsung and SK Hynix flood HBM4 capacity, and the multi-year contracts cushion but do not prevent a 2022-style downcycle.
- AI hyperscaler capex pause in 2H CY2027 reduces incremental HBM demand; renegotiated pricing on existing contracts
- Samsung and SK Hynix close the HBM4 yield gap; Micron's CY2027–28 share advantage erodes
- Standard DRAM and NAND prices fall 30%+ as PC/server demand disappoints; FY28 EPS reverts toward $25–30
- Multiple compresses to ~13–15× trough earnings; $30 EPS at 13× implies ~$390, consistent with a ~$400 bear target
Base Case
HBM4 supercycle sustains through CY2027; Micron delivers FY2026 EPS of ~$58 (the Q2 actual plus Q3 guide already imply ~$31 across those two quarters), and 3–5 year contracts hold gross margins above 55% into FY2027.
- FY2026 revenue tracks toward ~$95–100B as Q3 ($33.5B) and Q4 (~$30–35B) guidance materialises
- HBM4 maintains ~50–55% Micron market share through CY2027 with multi-year contracts in place
- FY2027 EPS holds at $55–60 as HBM trade ratios keep standard DRAM tight; gross margin above 55%
- Stock trades at ~13× FY2027 EPS — consistent with historical memory peaks but with structurally better visibility
Bull Case
HBM becomes the dominant AI inference architecture, multi-year contracts entrench Micron with NVIDIA and the hyperscalers, and gross margins permanently re-rate above 60%.
- HBM4E and HBM5 commitments extend supply contracts to 5+ years; Micron captures 30%+ HBM share
- AI inference at scale creates a second demand wave beyond training — HBM TAM exceeds $100B estimate by 2028
- Near-memory compute integration (PIM/CXL) opens adjacent TAM, further differentiating Micron from commodity DRAM
- FY2027 EPS reaches $70–75; stock re-rates to ~15× as HBM is treated less like memory, more like accelerator content
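Each scenario above reduces to an EPS range times an exit multiple. A sketch under stated assumptions: the multiples are illustrative picks within the ranges the text gives, so the outputs bracket the ~$400 / $800 / $1,100 targets rather than reproduce them exactly:

```python
# Scenario price math: EPS range x exit multiple, per the scenario text.
# Multiples are illustrative picks within the ranges the text gives.
scenarios = {
    "bear (FY2028 trough EPS)":   ((25, 30), 13.0),
    "base (FY2027 EPS)":          ((55, 60), 13.5),
    "bull (FY2027 EPS, re-rate)": ((70, 75), 15.0),
}
for name, ((lo, hi), mult) in scenarios.items():
    print(f"{name}: ${lo * mult:.0f}-${hi * mult:.0f}")
```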