Samsung and AMD just signed a letter of intent to work together on next-gen “AI memory,” and they’re also sniffing around a deeper relationship on the manufacturing side. Translation: the companies aren’t announcing a shiny new product you can buy tomorrow. They’re planting a flag in the part of the AI boom that actually decides who ships and who slips: memory supply, packaging, and the factories that crank out the silicon.
This isn’t a full-blown commercial contract with prices, volumes, and delivery dates. A letter of intent is the corporate version of “we should talk,” but in semiconductors, even that can move markets. It signals two things at once: “We’re trying to lock down scarce parts,” and “We want customers and investors to believe our roadmap isn’t fantasy.”
For Samsung, this hits two pressure points: memory (where it’s a longtime heavyweight) and its foundry business (where it’s still chasing Taiwan’s TSMC). For AMD, an elite chip designer that doesn’t own fabs, the goal is simple: keep the AI product cycle moving, because in this business, a quarter can feel like a year.
Details are thin. But the phrasing (next-generation AI memory plus “exploring” foundry cooperation) reads like a two-step: co-engineer memory and how it plugs into AMD’s accelerators, then see whether Samsung can also build some of the pieces (or package them) at scale.
AI chips are starving for memory bandwidth, and that’s the whole problem
The dirty secret of modern AI hardware is that raw compute isn’t the only bottleneck. You can pack an accelerator with math engines, but if data can’t get in and out fast enough, a chunk of that expensive silicon sits around twiddling its thumbs.
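To make that concrete, here’s a minimal roofline-style sketch in Python. Every number in it is an illustrative round figure, not the spec of any real AMD or Samsung part:

```python
# Back-of-envelope roofline check: is a kernel compute-bound or
# memory-bound? All figures below are illustrative round numbers,
# not the specs of any real AMD or Samsung part.

PEAK_TFLOPS = 1000.0  # hypothetical peak compute, TFLOP/s
PEAK_BW_TBS = 4.0     # hypothetical HBM bandwidth, TB/s

# Ridge point: FLOPs a kernel must perform per byte moved before
# compute, rather than memory bandwidth, becomes the ceiling.
ridge = PEAK_TFLOPS / PEAK_BW_TBS
print(f"compute-bound only above ~{ridge:.0f} FLOPs per byte")

def attainable_tflops(flops_per_byte: float) -> float:
    """Roofline model: throughput is capped by whichever is lower,
    peak compute or bandwidth times arithmetic intensity."""
    return min(PEAK_TFLOPS, PEAK_BW_TBS * flops_per_byte)

# A low-intensity kernel (think streaming model weights once per
# token during inference, roughly ~2 FLOPs per byte) barely uses
# the chip: ~8 TFLOP/s out of 1000, under 1% of peak.
print(f"{attainable_tflops(2.0):.0f} TFLOP/s of {PEAK_TFLOPS:.0f} peak")
```

The takeaway: on these made-up numbers, a kernel moving lots of data and doing little math per byte leaves over 99% of the compute idle. More bandwidth, not more math engines, is what rescues it.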
That’s why high-bandwidth memory (HBM) has become a kingmaker. HBM isn’t a nice-to-have add-on anymore; it’s a gating item for AI systems, right up there with the GPU/accelerator itself. And because HBM is typically stacked and placed extremely close to the compute die, memory choices ripple into architecture, packaging, thermals, yields, and the final bill.
So Samsung and AMD teaming up on “AI memory” is about co-design: tighter interfaces, better power behavior, fewer nasty surprises in qualification, and a clearer path to volume. In a market where big data center buyers care less about slide decks and more about what can actually ship, that matters.
Foundry flirtation: Samsung wants credibility, AMD wants options beyond TSMC
The second thread, foundry cooperation, is the spicier one. AMD has leaned heavily on TSMC for its most advanced chips. Nobody flips a switch and “moves” a leading-edge design to another foundry overnight; that’s years of work, re-qualification, toolchain pain, and yield roulette.
But “exploring” is still meaningful. It can mean multi-sourcing certain dies, using Samsung for specific components, tapping Samsung’s packaging capabilities, or lining up a Plan B if capacity gets tight. AI demand has been squeezing the whole supply chain, and chip companies hate being trapped behind someone else’s schedule.
For Samsung Foundry, landing more blue-chip customers is the whole game: fill lines, prove yields, and convince the industry it can compete with TSMC not just on paper specs, but in boring, brutal mass production.
The real battlefield is packaging: 2.5D integration and the “system” mindset
When people say “AI memory,” they’re often really talking about integration, especially 2.5D packaging, where multiple dies sit on an interposer to shorten connections and crank bandwidth. This is how the industry stitches together heterogeneous chiplets built on different processes without blowing the power budget.
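For a feel of what that buys, here’s a rough tally in Python using approximately HBM3-class figures (a 1024-bit interface at around 6.4 Gb/s per pin); the stack count and exact rates are assumptions for illustration, not any shipping product’s configuration:

```python
# Rough bandwidth tally for HBM on a 2.5D interposer. Figures are
# approximately HBM3-class and rounded; the stack count is an
# assumption, not any shipping product's configuration.

BITS_PER_STACK = 1024  # HBM's very wide per-stack interface
GBITS_PER_PIN = 6.4    # per-pin data rate, Gb/s
STACKS = 8             # stacks placed next to the compute die

per_stack_gbs = BITS_PER_STACK * GBITS_PER_PIN / 8  # GB/s per stack
total_tbs = per_stack_gbs * STACKS / 1000           # aggregate TB/s

print(f"~{per_stack_gbs:.0f} GB/s per stack")       # ~819 GB/s
print(f"~{total_tbs:.1f} TB/s across the package")  # ~6.6 TB/s

# A 1024-bit bus is only practical because the interposer keeps each
# trace millimeters long; board-level links could never route this
# many wires at these speeds.
```

That thousand-wire-wide interface is the whole point of the interposer: it trades exotic packaging for bandwidth no off-package link can match.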
In that world, you don’t optimize compute and memory separately. Packaging decisions drive heat, reliability, interconnect density, and cost. And advanced assembly and testing aren’t cheap; they’re some of the most complex steps in the pipeline.
Samsung has reach across memory, packaging, and manufacturing. AMD has deep experience with chiplets and multi-die designs across CPUs and accelerators. If they’re serious, the payoff is fewer integration headaches and cleaner “drop-in” building blocks for future AI platforms.
What this could mean for AMD accelerators, and Samsung’s memory business
Near-term, treat this as strategy, not a product launch. A letter of intent doesn’t promise volumes, pricing, or a ship date. The chip industry is littered with “partnerships” that never graduate past the lab.
Still, the upside is obvious. For AMD, tighter alignment with a major memory supplier could help secure HBM supply (or comparable high-bandwidth solutions) for AI systems where memory availability can decide whether you deliver racks on time, or miss the window and watch a rival take the deal.
For Samsung, being tied publicly to a major AI accelerator player is good business. The HBM fight is vicious, and credibility comes from real design wins and real shipments, not press releases.
The big unknown is how far the foundry talks go. If anything material happens, it may start small (pilot runs, specific components, or packaging) before anyone talks about Samsung building major AMD compute dies. For now, the message is blunt: the AI race is forcing companies to lock arms where it counts, and memory plus manufacturing muscle are becoming weapons as sharp as the chip architecture itself.
