What is Z-Angle Memory and why is Intel developing it?

(hpcwire.com)

72 points | by rbanffy 2 days ago

8 comments

  • plufz 3 hours ago
    Where is the music video for those of us who only want to learn about low-level hardware through that medium?

    Get perpendicular: https://www.dailymotion.com/video/x62mja

    • port11 2 hours ago
      Off-topic, but 574 analytics and advertising partners is a few too many. Like… can you make money with just 200 ad partners?
      • plufz 2 hours ago
        I agree, insane.
    • staticshock 2 hours ago
      That song is etched into my memory and brings back a flood of great feelings. Dailymotion is giving me a playback error, but here's a copy from YouTube: https://www.youtube.com/watch?v=xb_PyKuI7II
  • fishgoesblub 21 minutes ago
    I have a feeling this will go the way of Optane: once it becomes useful, they'll pull it and shelve it, while keeping the patents/licenses, of course, so no one else can improve on it.
  • nxobject 4 hours ago
    But will this go the way of a “non core” product like Optane (or modems, for that matter)?
    • ajb 1 hour ago
      This is not a drastically different technology like Optane - it's almost solely a packaging change. It's applicable to exactly the same markets as normal DRAM, so if it dies, customers will just switch to whatever DRAM variant wins instead.
  • nine_k 6 hours ago
    The article says nothing about the construction or special qualities of ZAM, as compared to HBM :(
    • ThrowawayR2 5 hours ago
      There doesn't seem to be much detail anywhere else either. All I was able to gather was that the memory dies are stacked (not new) but that the vias connecting the stack are angled instead of straight up and down, and this is better because... reasons?
      • rbanffy 25 minutes ago
        I think the reasoning is better thermals and signal stability. The physics of the first seems to be that there is more metal to capture and distribute heat; the signal integrity part beats me. With better thermals, they can increase the memory clock and thus raise bandwidth and reduce latency.
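
        Back-of-the-envelope, with invented numbers (nothing below is from Intel), just to show how a thermally-enabled clock bump turns into bandwidth:

          # Toy bandwidth arithmetic; the bus width and data rate are
          # assumptions for illustration, not ZAM figures.
          bus_width_bits = 1024    # HBM-style wide interface (assumed)
          data_rate_gtps = 6.4     # transfers per pin, GT/s (assumed)

          bandwidth_gbs = bus_width_bits * data_rate_gtps / 8  # GB/s
          print(f"baseline:  {bandwidth_gbs:.0f} GB/s")        # ~819 GB/s

          # If better thermals let the clock run, say, 25% faster:
          print(f"with bump: {bandwidth_gbs * 1.25:.0f} GB/s") # ~1024 GB/s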
    • jacknews 6 hours ago
      Indeed, what is it? The article doesn't say; it only touts the supposed benefits.
      • p_ing 5 hours ago
        • nine_k 4 hours ago
          «[T]he primary standout feature of this memory solution is the integration of a staggered interconnect topology that routes connections diagonally within the die stack rather than drilling straight down. According to Intel, the biggest benefit lies in ZAM's thermal capabilities.»

          The connectors on the side indeed look like the letter Z. Maybe it disperses the stronger currents across the stack of dies instead of concentrating them.

          • cogman10 4 hours ago
            I'd guess that it'd allow for thinner layers, which is ultimately why you can pack in more memory.

            And the reason it's not currently done is likely that it's hard enough to stack layers even when everything is uniform: a small deformity in the first layer will spoil the entire chip.
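
            A toy compound-yield model (the per-layer yield is made up) shows how fast stacking punishes defects:

              # One bad layer kills the whole stack, so stack yield is
              # roughly the per-layer yield raised to the layer count.
              per_layer_yield = 0.98  # assumed, for illustration only
              for layers in (1, 4, 8, 16):
                  print(layers, f"{per_layer_yield ** layers:.1%}")
              # -> 98.0%, 92.2%, 85.1%, 72.4%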

  • rayiner 5 hours ago
    It’s crazy that we have stalled on the structure of the basic DRAM cell for decades now.
    • cogman10 4 hours ago
      Not that crazy. It's about the most basic structure you can make. Hard to make a better wheel.

      The closest thing I can think of that has come close to challenging DRAM is HP's memristors, but those didn't pan out (probably too much power consumption).

      • yvdriess 3 hours ago
        > Hard to make a better wheel.

        Pet peeve: it's a poor analogy, given how wheels have kept being improved throughout the millennia with every new technology. The only thing they have in common is being round.

        Similarly, DRAM, however you look at it, has been improving since the '70s to the point of being barely recognizable.

        That said, DIMMs and the whole bus idea are in dire need of a new type of bearing.

        • rbanffy 20 minutes ago
          > That said, DIMMs and the whole bus idea are in dire need of a new type of bearing.

          IBM has been using their own memory bus technology for both their POWER and Z machines. IIRC, it’s somewhat reminiscent of CXL, trading latency for bandwidth and size.

        • cogman10 2 hours ago
          Seems like a pretty good analogy, since you admit there have been advancements while the basic structure has remained the same. Not sure why you have a pet peeve when it is highly analogous.

          The ultimate shape of DRAM is the same; the main things that have changed are the materials and techniques used to produce it. That makes it very impressive, but nonetheless completely recognizable to someone who was familiar with DRAM in the '70s.

          The wheel is the same. Pluck someone from 1000 years ago and they'll correctly identify a modern wheel even though they've never seen any of the composites that go into it. The wheel functions exactly as it did when they used it.

      • gavmor 3 hours ago
        You're right: although the wheels on a Boeing 737 are made of forged aluminum or magnesium to withstand extreme force and heat, they are pretty much the same shape, and operate in the same way, as the Ljubljana Marshes Wheel of ca. 3150 BCE.

        Then again, flight itself has obviated (or, rather, introduced) many transit workloads that could be performed by wheeled vehicles, and it operates on entirely different principles.

  • jauntywundrkind 4 hours ago
    This wccftech article also doesn't do a great job of describing it, but the third slide it shows is very illustrative: rather than stacking horizontally, it stacks the DRAM on its side. https://wccftech.com/intel-zam-memory-threatens-hbms-ai-thro...

    I thought this was going to mean each stack was able to directly talk to the controller, since all stacks are resting on an interposer thing. But actually there is still a logic controller slice at the bottom of the stack, not at a right angle to the stack.

    Instead of HBM microbumps between layers, there is a more compact/dense TSV ("fusion bonded via-in-one") system. Intel once more showing its strong chiplet-packaging prowess! The claim is that thermals are still much better somehow, in spite of volumetric cell density increasing (from thinner layers). The demo has 8+1 DRAM+controller layers.
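
    A crude 1-D series thermal-resistance sketch (every value below is an assumption, none of this is from Intel's slides) of why thinner layers and more metal in the vertical heat path both matter:

      # Crude series thermal model of a die stack: R = n * t / (k * A).
      # All numbers are illustrative assumptions, not Intel's.
      layer_thickness_m = 50e-6  # 50 um thinned die (assumed)
      die_area_m2 = 1e-4         # 10 mm x 10 mm die (assumed)

      def stack_resistance(n_layers, k_eff):
          # n_layers dies in series, each contributing t / (k * A)
          return n_layers * layer_thickness_m / (k_eff * die_area_m2)

      # ~150 W/(m*K) for plain silicon; a higher effective value if the
      # angled vias really do put more copper in the heat path (assumed).
      for k_eff in (150.0, 200.0):
          print(f"k_eff={k_eff:.0f}: {stack_resistance(9, k_eff):.3f} K/W")
      # 9 layers = the demo's 8 DRAM + 1 logic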

  • lysace 4 hours ago
    Intel makes these "throw spaghetti at the wall" kinds of investments in potentially interesting companies/technologies all the time - and has done so for decades.

    Every time, the recipient hypes the shit out of it, of course.

    • JoshTriplett 3 hours ago
      The main problem is that they often don't stick with it.

      As far as I can tell, Intel more or less pioneered the idea of SSDs being the best storage rather than the cheap storage. The X25-M and X25-E were absurdly good. Then, once the market was established... they pulled out of it.

    • behaviors 3 hours ago
      Most of the big hits in tech had a trendy, index-swinging moment, and Intel has been searching for one ever since AMD64 undercut the Itanium. Hype drives what is currently a multi-billion-dollar bubble. It's not always a bad idea to throw our holy noodles at the wall. You might find they hover in the sky and grow meatballs; could be big.
      • to11mtm 2 hours ago
        Well, peak weirdness was the thing involving will.i.am from the Black Eyed Peas as a 'Futurist'/Spokesperson/IDEK.

        I think what's semi-unfortunate is all the swings and misses, especially the cases where it wasn't necessarily a bad idea but Intel gave up too soon:

        - Massively parallel, simple-ish x86 cores à la Xeon Phi. Okay, maybe not the best idea on the surface, but I feel like nowadays the opportunities to reuse parts of that tech could be more forthcoming (and maybe they do reuse it but are just quiet about it... e.g. GPU acceleration).

        - Optane. I think the tech would have been cheaper if they had made the licensing terms easier, but maybe I'm missing part of the equation...

        - This thing where they keep half-assing the GPU strategy. Imagine if the B70 had launched last year alongside the B60 and B50, before DRAM prices went sideways. Or if they hadn't taken so long to release a >16GB GPU in the first place; that would have built a lot of interest. Instead, they finally release a 32GB GPU alongside more bad news for the overall roadmap. The whole situation becomes a jarring rollercoaster that makes everyone worry that Intel is going to kill the project the way everything but CPUs gets killed lately.
