TL;DR: Does the Arc A310 have any important advantage over recent Intel low-power CPUs with integrated graphics (e.g. N100/N150/N350/N355) specifically for use with Jellyfin, in terms of the number of streams it can transcode simultaneously or something like that?

Even if they do differ, is it something I would notice in a household context (e.g. with probably never more than 4 users at a time), or would the discrete GPU just be overkill?

Context, if you need it:

My Jellyfin is currently running in a VM on a Proxmox server with a Ryzen 5 3600 CPU and a Vega 56 discrete GPU that draws a lot of power unnecessarily and apparently isn’t recommended for Jellyfin transcoding due to poor encoder quality. I’m thinking about either replacing the GPU with an Arc A310 for ~$100 or replacing the whole CPU/mobo/GPU with some kind of low-power Intel ITX board (the kind designed for routers or NASes, with a soldered-on N100 or similar) for ~$200. I’m leaning towards the latter because it would use less power, be simpler to set up (since, as I understand it, integrated GPU functions are always available instead of needing to be passed through and dedicated to a single VM/container), be more versatile in the future (e.g. as a NAS or router), and be a whole additional system, freeing up the AMD hardware for some other use.

But is the N100 option just strictly equal or better for Jellyfin, or is there some other performance trade-off?

(BTW, I know the Arc uses Intel Quick Sync Video version 9 while the N100 uses version 8, with the difference between them being that the newer version supports 8K 10-bit AV1 hardware encoding. I’m not going to be encoding 8K any time in the foreseeable future, so I don’t care about that.)

  • MangoPenguin@lemmy.blahaj.zone

    I think the integrated GPUs since 7th gen can do like 20+ 1080p transcodes and 4+ 4k transcodes.

    integrated GPU functions are always available instead of needing to be passed through and dedicated to a single VM/container

    They work the same, but you can share GPU acceleration with multiple containers and still have it available to the host, it’s only VMs that need a dedicated GPU.
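
    (A quick way to sanity-check that the shared iGPU is actually visible both on the host and inside a container is something like the untested sketch below; it assumes vainfo from libva-utils is installed wherever you run it.)

```python
# Quick sanity check to run on the host and again inside each container that
# gets /dev/dri mapped in: confirm the render node is visible and that VA-API
# (which Quick Sync uses on Linux) reports some decode/encode profiles on it.
import glob
import subprocess

nodes = glob.glob("/dev/dri/renderD*")
print("Render nodes visible:", nodes or "none (GPU not mapped in?)")

for node in nodes:
    # vainfo prints the VA-API driver in use and its supported profiles.
    result = subprocess.run(
        ["vainfo", "--display", "drm", "--device", node],
        capture_output=True,
        text=True,
    )
    print(result.stdout or result.stderr)
```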

    • Appoxo@lemmy.dbzer0.com

      Doing 2 4K-to-1080p 50kbit transcodes on my i5-1145G7, and that is an Intel Iris Xe iGPU.
      Dunno if others perform better, but that’s how this one performs with the lsio version of Jellyfin + the OpenCL docker mod.

    • grueOP

      They work the same, but you can share GPU acceleration with multiple containers and still have it available to the host, it’s only VMs that need a dedicated GPU.

      Aha, thanks for the clarification!

  • Maxy@lemmy.blahaj.zone

    How often do you actually transcode? Most Jellyfin clients are capable of decoding almost all codecs. It might be worth checking whether you need to transcode frequently at all, let alone transcode multiple streams at once, before considering how many streams different hardware might support.
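
    The dashboard already shows this per session, but if you want to log it over time, something like the rough sketch below works. The endpoint and field names are from memory of the Jellyfin API, so double-check them against your server’s Swagger docs; the URL and API key are placeholders.

```python
# Rough sketch: ask Jellyfin which active sessions are direct playing vs.
# transcoding. Endpoint and field names are from memory, so verify them
# against your server's API docs; URL and key below are placeholders.
import requests

JELLYFIN_URL = "http://jellyfin.local:8096"  # placeholder address
API_KEY = "YOUR_API_KEY"                     # Dashboard -> API Keys

resp = requests.get(
    f"{JELLYFIN_URL}/Sessions",
    headers={"X-Emby-Token": API_KEY},
    timeout=10,
)
resp.raise_for_status()

for session in resp.json():
    item = session.get("NowPlayingItem")
    if not item:
        continue  # idle session, nothing playing
    method = (session.get("PlayState") or {}).get("PlayMethod", "Unknown")
    reasons = (session.get("TranscodingInfo") or {}).get("TranscodeReasons", [])
    print(f"{session.get('UserName')}: {item.get('Name')} -> {method} {reasons}")
```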

    To answer your question: the A310 and N100 appear to be pretty evenly matched when it comes to max number of streams. Intel claims that all Arc hw encoders can encode 4 AV1 streams at 4K60, but that the actual performance might be more limited by the amount of VRAM on the card. Since any iGPU would have access to normal system RAM, which is probably a lot more than 4GB, these iGPUs might even be capable of running more parallel streams.
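
    If you’d rather measure than trust spec sheets, a crude test is to fire off several hardware transcodes at once and see whether the whole batch still finishes faster than realtime. Rough sketch below, assuming an ffmpeg build with the Quick Sync (*_qsv) encoders; the file name, bitrate and parallel count are just placeholders.

```python
# Crude parallel-transcode benchmark: spawn N ffmpeg Quick Sync jobs against
# the same clip and time the batch. If the wall time stays under the clip's
# duration, all N transcodes together ran faster than realtime.
import subprocess
import sys
import time

SAMPLE = "sample.mkv"  # placeholder test clip
ENCODER = "h264_qsv"   # or hevc_qsv / av1_qsv where the hardware supports it
PARALLEL = int(sys.argv[1]) if len(sys.argv) > 1 else 4

# Clip duration in seconds, via ffprobe.
duration = float(subprocess.check_output([
    "ffprobe", "-v", "error", "-show_entries", "format=duration",
    "-of", "default=noprint_wrappers=1:nokey=1", SAMPLE,
]).decode())

start = time.monotonic()
procs = [
    subprocess.Popen([
        "ffmpeg", "-hide_banner", "-loglevel", "error",
        "-hwaccel", "qsv",      # hardware decode as well as encode
        "-i", SAMPLE,
        "-c:v", ENCODER, "-b:v", "8M",
        "-an",                  # skip audio; we're testing the video engine
        "-f", "null", "-",      # discard output, only throughput matters
    ])
    for _ in range(PARALLEL)
]
codes = [p.wait() for p in procs]
elapsed = time.monotonic() - start

print(f"{PARALLEL} jobs, exit codes {codes}: {elapsed:.0f}s for a {duration:.0f}s clip")
```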

    One thing you might want to consider: the A310 has significantly more compute power than the iGPUs in these processors. This matters if you ever decide to run a local ML model. For example, I back up all my pictures to Nextcloud on the same machine that runs Jellyfin, and I use the Recognize app to perform facial and object recognition. You can also run this model in CPU mode though, and the performance is “good enough” on my i5 3470, so a dGPU might be overkill for this purpose. You could also run local LLMs, text2speech, speech2text, or similar models, should you care about that.

    If I may add a 3rd option though: consider upgrading to a 5600G or something similar. It has more CPU power than an N350 (3x according to PassMark), and the iGPU probably has more than enough hardware acceleration (though software encoding is also very viable with that much CPU power). You wouldn’t free up the AMD hardware this way, and the 5600G doesn’t support AV1, which could be a dealbreaker I guess.

    • grueOP

      To answer your question: the A310 and N100 appear to be pretty evenly matched when it comes to max number of streams. … Since any iGPU would have access to normal system RAM, which is probably a lot more than 4GB, these iGPUs might even be capable of running more parallel streams.

      Thanks, that indeed answers my question!

      One thing you might want to consider: the A310 has significantly more compute power than the iGPU’s in these processors. This matters if you ever decide to run a local ML model. … You could also run local LLM’s, text2speech, speech2text, or similar models, should you care about that.

      If I were to decide I need compute, I could just put my AMD GPU back in. Even at 8 years old, my Vega 56 should still be way better than an A310 for that.

      Or if I were going to get an Intel discrete GPU and had compute as a use case, I’d be talking about the B580 instead of the A310. Edit: I just found out the Arc Pro B50 is a thing. I’d definitely be going for that instead of a B580.

      (Or if I got really desperate, I’m pretty sure there’d be a way to let a parallel-compute-hungry service running on my Proxmox server remotely access the RX 9070 XT on my gaming PC, with enough fiddling around.)

      If I may add a 3rd option though: consider upgrading to a 5600G or something similar. … the 5600G doesn’t support AV1, which could be a dealbreaker I guess.

      According to this, “AMD also chose to reuse the 7nm Vega graphics engine instead of incorporating newer RDNA variants,” which means it isn’t any better for the purpose of transcoding than the discrete Vega I already have (except for using less power). Also, and more to the point, the Jellyfin Hardware Selection guide straight-up says “AMD is NOT recommended.” Any AMD, whether integrated or discrete, and no matter how new. And then it says it again in bold text! That’s why I’d pretty much ruled out that option before I posted my question.

      (In contrast, the same page says “if you do not need CUDA for other applications, it is highly recommended that you stick with Intel Graphics on Linux” and specifically recommends the Intel N100/12th-gen N-series in a couple of places.)

      I appreciate the thinking outside the box, though!

      • Maxy@lemmy.blahaj.zone

        If I were to decide I need compute, I could just put my AMD GPU back in.

        Yeah… It was pretty late in my timezone when I replied, which I’ll use as an excuse for not considering that. That would be a good solution.

        which means it isn’t any better for the purpose of transcoding than the discrete Vega I already have (except for using less power).

        I thought reducing power usage was the main goal, which is why I suggested this. Though once again, there’s a pretty decent chance this is a misunderstanding on my part.

        AMD is NOT recommended.

        I personally use AMD graphics in both a laptop and a desktop, and have never had any problems with decode or encode; I don’t understand what the docs mean by “poor driver support”.

        What I will confess (and once again, forgot to consider yesterday) is that Intel and Nvidia hardware encoders generally provide better quality at the same bitrate than AMD’s*. I do believe software encoders perform better than all hardware encoders in this respect, which is why I never cared too much about the differences between HW encoders. If I need good quality for the bitrate, I’ll just use the CPU. This is less energy-efficient though, so I guess having a good HW encoder could be pretty relevant to you.

        *I happen to have hardware from AMD, Intel, and Nvidia, so I might do my own tests to see if this still holds true.
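
        The test I have in mind is roughly the sketch below: encode the same clip at a fixed bitrate with each encoder, then score the results against the original with VMAF. It assumes an ffmpeg build with libvmaf plus the listed encoders (the VAAPI/AMD encoders need an extra device/hwupload step I’ve left out), and the file names are placeholders.

```python
# Encode the same reference clip at a fixed bitrate with several encoders,
# then score each result against the original with VMAF. Assumes an ffmpeg
# build that includes libvmaf plus whichever hardware encoders you list.
import subprocess

REFERENCE = "reference.mkv"  # placeholder source clip
BITRATE = "4M"
ENCODERS = ["libx264", "h264_qsv", "h264_nvenc"]  # adjust to your hardware

for enc in ENCODERS:
    out = f"test_{enc}.mkv"
    subprocess.run([
        "ffmpeg", "-y", "-i", REFERENCE,
        "-c:v", enc, "-b:v", BITRATE, "-an",
        out,
    ], check=True)

    # libvmaf takes the distorted stream first and the reference second;
    # the score is printed at the end of ffmpeg's log output.
    subprocess.run([
        "ffmpeg", "-i", out, "-i", REFERENCE,
        "-lavfi", "libvmaf", "-f", "null", "-",
    ], check=True)
```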

    • Derpgon@programming.dev

      I usually transcode because I’m using Jellyfin from outside my network, which has a 30 Mbps uplink, and that’s barely enough for 2K.
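
      Back-of-the-envelope numbers (the “typical” bitrates below are my own rough assumptions, not measurements) show why: a single untouched 4K remux alone can blow past a 30 Mbps uplink.

```python
# Back-of-the-envelope check on what a 30 Mbit/s uplink can carry. The
# "typical" bitrates are rough assumptions, not measurements of any
# particular library.
UPLINK_MBPS = 30
TYPICAL_BITRATES_MBPS = {
    "4K remux": 60,
    "1440p transcode": 20,
    "1080p transcode": 10,
    "720p transcode": 4,
}

for label, mbps in TYPICAL_BITRATES_MBPS.items():
    print(f"{label:>16}: {UPLINK_MBPS // mbps} simultaneous stream(s)")
```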