Micron is reportedly working on a major architectural shift in graphics memory that could change how mid-range GPUs and AI accelerators are designed. According to recent reports, the company is developing vertically stacked GDDR, a new memory category intended to sit between the “standard” discrete GDDR we see on consumer graphics cards and the High Bandwidth Memory (HBM) used in data centre applications.
According to ITHome, the core philosophy behind stacked GDDR is to offer HBM performance without its price tag. By stacking memory dies vertically within a single package, Micron can significantly increase VRAM density and bandwidth per pin. This approach aims to bypass the need for silicon interposers, the complex and costly bridges required to connect HBM to a GPU.
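The bandwidth-per-pin claim can be sketched with some back-of-the-envelope arithmetic. Peak memory bandwidth is simply the bus width multiplied by the per-pin data rate; the figures below are illustrative assumptions (a 256-bit GDDR7-class bus at 32 Gbps per pin), not Micron specifications:

```python
# Back-of-the-envelope memory bandwidth model (hypothetical figures,
# not Micron specifications):
# peak bandwidth = bus width (pins) x per-pin data rate.

def peak_bandwidth_gbps(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s for a given bus width and per-pin rate."""
    return bus_width_bits * pin_rate_gbps / 8  # bits/s -> bytes/s

# A 256-bit bus at 32 Gbps per pin (roughly GDDR7-class figures):
flat = peak_bandwidth_gbps(256, 32)      # 1024 GB/s

# If stacking doubled the effective per-pin rate, the same 256-bit bus
# would deliver twice the bandwidth without widening the interface:
stacked = peak_bandwidth_gbps(256, 64)   # 2048 GB/s

print(f"flat: {flat:.0f} GB/s, stacked: {stacked:.0f} GB/s")
```

The point of the model is that raising bandwidth per pin lets a package scale throughput without the wide, interposer-connected buses that make HBM expensive.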
While AI training requires the absolute maximum throughput provided by HBM, AI inference is often hungrier for raw capacity and cost-efficiency. Stacked GDDR could potentially allow a mid-range AI accelerator or a professional workstation card to carry massive amounts of VRAM without the astronomical costs currently associated with HBM-equipped hardware.
Micron is reportedly aiming to deliver the first functional prototypes in 2027. If the roadmap holds, this technology could become a cornerstone for the next generation of “AI PCs” and mid-tier enterprise hardware.
KitGuru says: Do you think stacked GDDR will become a new standard?

