The graphic above summarizes the median 512×512 render speed (batch size 1) for various GPUs. The data is filtered to single-GPU systems and to GPUs with more than 5 benchmark submissions. Data is taken from this database (thank you vladmandic!). The graph is color-coded by manufacturer:

  • NVIDIA consumer (lime green)
  • NVIDIA workstation (dark green)
  • AMD (red)
  • Intel (blue); there doesn’t seem to be enough data yet

This is an updated, prettier visualization of my previous post, using today’s data.
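For anyone curious what the filtering described above looks like in practice, here is a minimal sketch. The tuple fields and GPU names are made up for illustration and are not the actual database schema:

```python
# Sketch of the chart's filtering: keep single-GPU systems, require more
# than 5 benchmark samples per GPU, then take the median speed.
# Field layout and values here are hypothetical, not the real schema.
from statistics import median
from collections import defaultdict

benchmarks = [
    # (gpu_name, gpu_count, iterations_per_second)
    ("RTX 3060", 1, 7.1), ("RTX 3060", 1, 7.4), ("RTX 3060", 1, 6.9),
    ("RTX 3060", 1, 7.2), ("RTX 3060", 1, 7.0), ("RTX 3060", 1, 7.3),
    ("RTX 4090", 2, 40.0),                       # multi-GPU system: excluded
    ("RX 6800",  1, 6.0), ("RX 6800", 1, 6.2),   # only 2 samples: excluded
]

samples = defaultdict(list)
for gpu, gpu_count, its in benchmarks:
    if gpu_count == 1:                           # single-GPU systems only
        samples[gpu].append(its)

median_speed = {
    gpu: median(vals) for gpu, vals in samples.items() if len(vals) > 5
}
print(median_speed)  # only the RTX 3060 survives both filters
```

Using the median rather than the mean keeps one badly configured outlier machine from dragging a card's score around.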

  • j4k3@lemmy.world · 1 year ago

    It is very helpful to see this graph with laptop options included. Thank you!

    • Is it correct to call this a graph of the empirically measured, parallel computational power of each unit based on the state of software at the time of testing?

    • Also, will the total available VRAM determine only the practical maximum resolution? Does this apply to both image synthesis and upscaling in practice?

    I imagine thermal throttling is a potential issue that could alter results.

    • Are there any other primary factors involved in choosing a GPU for SD, such as the impact of the host bus configuration (PCIe ×n)?
    • jasparagus@lemmy.world (OP) · 1 year ago

      The link in the header post includes the methodology used to gather the data, but I don’t think it fully answers your first question. I imagine there are a few important variables and a few ways to get variation in results for otherwise identical hardware (e.g. running odd versions of things or having the wrong/no optimizations selected). I tried to mitigate that by using medians and only taking cards with 5+ samples, but it’s certainly not perfect. At least things seem to trend as you’d expect.

      I’m really glad vladmandic made the extension & data available. It was super tough to find a graph like this that was remotely up-to-date. Maybe I’ll try filtering by some other things in the near future, like optimization method, benchmark age (e.g. eliminating stuff prior to 2023), or VRAM amount.
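      The extra slices mentioned above could be prototyped along these lines. Field names here are invented for the sketch; the real database may record these differently:

      ```python
      # Hypothetical sketch of further filtering: drop benchmarks recorded
      # before 2023, then take medians per (GPU, VRAM) group.
      from statistics import median
      from collections import defaultdict

      rows = [
          {"gpu": "RTX 3080", "vram_gb": 10, "date": "2022-11-02", "its": 14.0},
          {"gpu": "RTX 3080", "vram_gb": 10, "date": "2023-03-15", "its": 15.5},
          {"gpu": "RTX 3080", "vram_gb": 12, "date": "2023-04-01", "its": 16.0},
          {"gpu": "RTX 4070", "vram_gb": 12, "date": "2023-05-20", "its": 18.2},
      ]

      # ISO 8601 date strings compare correctly as plain strings
      recent = [r for r in rows if r["date"] >= "2023-01-01"]

      by_vram = defaultdict(list)
      for r in recent:
          by_vram[(r["gpu"], r["vram_gb"])].append(r["its"])

      medians = {key: median(vals) for key, vals in by_vram.items()}
      ```

      Splitting on VRAM would also separate laptop variants of a card from their desktop namesakes, which tends to matter for the laptop comparisons discussed below in the thread.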

      For your last question, I’m not sure the host bus configuration is recorded – you can see the entirety of what’s in a benchmark dataset by scrolling through the database, and I don’t see it. I suspect the PCIe config does matter for a card on a given system, but that its impact is likely smaller than the choice of GPU itself. I’d definitely be curious to see how it breaks down, though, as my MoBo doesn’t support PCIe 4.0, for example.

      • FactorSD@lemmy.dbzer0.com · 1 year ago

        I too would be interested to see how that breaks down. I’m sure the interface does have an impact on raw speed, but seeing what that looks like in terms of real-life impact would be great to know. While I doubt there are many people specifically interested in running SD on an external GPU via Thunderbolt or whatever, for the sake of human knowledge it’d be great to know whether that is actually a viable approach.

        • jasparagus@lemmy.world (OP) · 1 year ago

          I’ve always loved that external GPUs exist, even though I’ve never been in a situation where they were a realistic choice for me.

      • j4k3@lemmy.world · 1 year ago

        Hey, thanks again for sharing this. I spent all afternoon playing with the raw data. I just finished doing a spreadsheet of all ~700 entries for Linux hardware, excluding stuff like LSFW. I spent way too long trying to figure out what the hash and HIPS numbers correlate to, and trying to decipher which GPUs are likely from laptops. Maybe I’ll get that list looking a bit better and post it on Pastebin tomorrow. I didn’t expect to see so many AMD cards on that list, or just how many high-end cards are used. The most reassuring aspect for me was seeing all of the generic kernels being used with Linux. There were several custom kernels, but most were the typical versions shipped with an LTS distro.

        • jasparagus@lemmy.world (OP) · 1 year ago

          I’m a Windows caveman over here, but you should definitely post your Linux findings here when you’re ready! Also, if you have suggestions for how to slice this, I’d be happy to take a stab at it. This was a quick thing I did while ogling a GPU upgrade, so it’s not my best work, haha.

          • j4k3@lemmy.world · 1 year ago

            I made a post on c/Linux about this today (https://lemmy.world/post/1112621) and added a basic summary of the Linux-specific data. I’m still trying to figure everything out myself. I had to brute-force the GitHub data for a few hours in LibreOffice Calc (a FOSS Excel alternative) to distill it down to something I could understand, but I still have a ton of questions.

            Ultimately, I’ve just been trying to figure out what the most powerful laptop graphics cards are and which ones have the most RAM. So far, it looks like the RTX 3080 Ti came in some laptops and has 16GB of VRAM. Distilling out the full spectrum of laptop options is hard. Anyone really into Linux likely also wants to avoid Nvidia as much as possible, so navigating exactly where AMD is at right now is a big deal. (The AMD stack is open source with good support from AMD, while Nvidia is just a terrible company that does not give a rat’s ass about the end user of its products.) I do not care if SD on AMD takes twice as long to do the same task as Nvidia, so long as it can still do the same task. Open source means full ownership to me, and that is more important in the long term.