• Yote.zip@pawb.social

      No one uses hardware decoding for images - it’s just not a good fit for how we actually use images. Images are small and cheap to decode, whereas spinning up a hardware decoder takes a non-trivial amount of time. Additionally, GPU decoders only work single-threaded, so each image would have to be decoded one by one instead of all at once as with CPU decoding. This was already attempted with VP8/WebP, and the effort was abandoned because it never performed well. Videos are good candidates for hardware decoding since they’re large and you only watch one at a time.
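
      As a rough illustration of what I mean (a sketch, not a real benchmark), here is the kind of comparison that matters: decoding a batch of small images one after another versus in parallel on the CPU. The `thumbs/` directory, the file set, and the use of Pillow are just assumptions for the example.

      ```python
      # Sketch: time decoding a batch of small images sequentially vs. with a
      # CPU thread pool. The thumbs/ folder and its contents are hypothetical.
      import glob
      import time
      from concurrent.futures import ThreadPoolExecutor

      from PIL import Image

      paths = glob.glob("thumbs/*.jpg") + glob.glob("thumbs/*.webp")

      def decode(path: str) -> tuple[int, int]:
          with Image.open(path) as img:
              img.load()  # force the actual pixel decode, not just the header read
              return img.size

      # Sequential decode: one image at a time, like a single hardware decoder queue.
      t0 = time.perf_counter()
      sizes_seq = [decode(p) for p in paths]
      t_seq = time.perf_counter() - t0

      # Parallel decode: Pillow releases the GIL for most codecs, so threads overlap.
      t0 = time.perf_counter()
      with ThreadPoolExecutor() as pool:
          sizes_par = list(pool.map(decode, paths))
      t_par = time.perf_counter() - t0

      print(f"{len(paths)} images: sequential {t_seq:.3f}s, threaded {t_par:.3f}s")
      ```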

      If you have benchmarks or other evidence showing otherwise, by all means post them here.