• Glitchvid@lemmy.world · 3 days ago

    That’s basically what I said, in so many words. AMD is doing its own thing; if you want what Nvidia offers, you’re gonna have to build it yourself. WRT pricing, I’m pretty sure AMD is typically a fraction of the price of Nvidia hardware on the enterprise side, from what I’ve read, but companies that have made that leap have been unhappy because AMD’s enterprise GPU offerings were so unreliable.

    The biggest culprit, from what I can gather, is that AMD’s GPU firmware/software side is basically still ATI camped up in Markham, divorced from the rest of the company in Austin that is doing great work on the CPU side.

    • brucethemoose@lemmy.world · 3 days ago

      WRT pricing, I’m pretty sure AMD is typically a fraction of the price of Nvidia hardware on the enterprise side

      I’m not as sure about this, but it seems like AMD is taking a fat margin on the MI300X (and its successor?) and kinda ignoring the performance penalty. It’s easy to say “build it yourself!”, but the reality is very few can, or will, do this; most will simply try to deploy vLLM or vanilla TRL as best they can (and run into the same issues everyone does).
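      For reference, the “just deploy vLLM” path people attempt usually looks something like this (a rough sketch; the model name and flags are illustrative, and on AMD/ROCm the install step is typically a ROCm-specific wheel or source build rather than a plain pip install):

```shell
# Stand up an OpenAI-compatible inference server with vLLM.
# Model name is illustrative; swap in whatever you actually serve.
pip install vllm                      # on AMD/ROCm: usually a ROCm build instead
vllm serve meta-llama/Llama-3.1-8B-Instruct \
    --dtype float16 \
    --max-model-len 8192
```

      On Nvidia this path mostly just works out of the box; the complaint above is that on AMD this is exactly where the rough edges show up.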

      The ‘enthusiast’ side where all the university students and tinkerer devs reside is totally screwed up though. AMD is mirroring Nvidia’s VRAM cartel pricing when they have absolutely no reason to. It’s completely bonkers. AMD would be in a totally different place right now if they had sold 40GB/48GB 7900s for an extra $200 (instead of price matching an A6000).

      The biggest culprit, from what I can gather, is that AMD’s GPU firmware/software side is basically still ATI camped up in Markham, divorced from the rest of the company in Austin that is doing great work on the CPU side.

      Yeah, it does seem divorced from the CPU division. But a lot of the badness comes from business decisions, even when the silicon is quite good, and some of that must be from Austin.

      • Glitchvid@lemmy.world · 3 days ago

        The ‘enthusiast’ side where all the university students and tinkerer devs reside is totally screwed up though. AMD is mirroring Nvidia’s VRAM cartel pricing when they have absolutely no reason to. It’s completely bonkers. AMD would be in a totally different place right now if they had sold 40GB/48GB 7900s for an extra $200 (instead of price matching an A6000).

        Eh, the biggest issue here is that most (post-secondary) students probably just have a laptop for whatever small GPGPU learning they’re doing, a space overwhelmingly dominated by Nvidia. Grad students will have access to institutional resources, which are also dominated by Nvidia (this has been a concerted effort).

        Only the few who explicitly pursue AMD hardware will end up with it, and even that requires significant foundational work. So the easiest path for research is to throw students at CUDA and Nvidia hardware.

        Basically, Nvidia has entrenched itself in the research/educational space, and that space is slow moving (Java is still the de facto CS standard, with only slow movements to Python happening at some universities), so I don’t see much changing, unless AMD decides it’s very hungry and wants to chase the market.

        Lower VRAM prices could help, but the truth is that people and institutions are (obviously) willing to pay more for plug and play.

        • brucethemoose@lemmy.world · 3 days ago

          I dunno. From my more isolated perspective on GitHub and small LLM testing circles, I see a lot of 3090s, 4090s, sometimes arrays of 3060s/3090s or old P40s or MI50s, which people got basically for the purpose of experimentation and development because they can’t drop (or at least justify) $5K.

          They would 100% drop that money on at least one 7900 48GB instead (as the sheer capacity is worth it over the speed hit and finickiness), and then do a whole bunch of bugfixing/testing on them. I know I would. Hence the Framework Strix Halo thing is sold out even though it’s… rather compute-lite compared to a 3090+ GPU.
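          The capacity-over-speed tradeoff is easy to put rough numbers on. A back-of-the-envelope sketch (the helper function and its 1.2× overhead factor are my own rough assumptions, not anyone’s official sizing formula):

```python
# Rough VRAM estimate for LLM weights: params * bytes_per_param,
# with a fudge factor for KV cache and runtime buffers.
def vram_needed_gb(params_b: float, bits: int, overhead: float = 1.2) -> float:
    """Approximate GiB needed to run a model.

    params_b: parameter count in billions
    bits: weight precision (16 = fp16, 4 = 4-bit quantization)
    overhead: rough multiplier for KV cache / activations (my assumption)
    """
    weight_gib = params_b * 1e9 * (bits / 8) / 2**30
    return weight_gib * overhead

# A 70B model quantized to 4-bit:
print(round(vram_needed_gb(70, 4, overhead=1.0), 1))  # weights only -> 32.6
print(round(vram_needed_gb(70, 4), 1))                # with rough overhead -> 39.1
```

          By that math a 4-bit 70B model fits on a single 48 GB card with room for context, but not on a 24 GB 3090/4090, which is the whole argument for a hypothetical 48 GB 7900.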

          It seems like a tiny market, but a lot of the frameworks/features/models being developed by humble open source devs filter up to the enterprise space. You’d absolutely see more enterprise use once the toolkits were hammered out on desktops… But they aren’t, because AMD gives us no incentive to do so. A 7900 is just not worth the trouble over a 3090/4090 if its VRAM capacity is the same, and this (more or less) extends up and down the price ranges.