Every single encoder, regardless of hardware or software, will produce different output quality. Video encoding is lossy, not lossless like ZIP or RAR. The same is true of audio, which is why people test different audio encoders. More people seem to know this about audio than about video, at least on HN.
GPU encoding quality has always been lower than software encoding, primarily because hardware encoders trade quality for speed. They are good enough once you hit a certain bitrate, but at low bitrates, where absolute encoding efficiency is required, hardware encoding just doesn't compare to software. Even in software you can have multiple encoders with different results. It is not as if H.265 / HEVC only has x265 as an encoder: Netflix, for example, uses Beamr, and broadcast TV stations use other encoders that better suit their needs.
And it is one reason I dislike all these AV1 discussions: whenever people say it is slow, the answer is "use SVT-AV1". Yes, SVT-AV1 is faster, but it doesn't produce the best-quality AV1 encode, so what is the point? Every time AV1 comes up, AOM supporters just move the goalposts.
Do you flatly refuse to encode with h.264 or h.265? Because they're always worse than the best AV1 encode too.
If you only use h.266 and you're willing to wait extreme amounts of time to encode, then that's valid, but understand that you're an outlier. Most people don't have that much time to spend on encoding.
You don't need to find the best possible encode. SVT-AV1 can encode as fast as x264 while keeping the same visual quality to the human eye and reducing bitrate by 50%.
If you want to retain visual quality, you always have the option of using a higher bitrate.
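As a concrete sketch of that kind of encode, assuming an ffmpeg build with libsvtav1 and libx264 (the preset, CRF values, and input file are illustrative placeholders, not tuned recommendations):

```shell
# SVT-AV1: higher -preset numbers are faster but less efficient;
# -crf sets the quality target (lower = higher quality / more bits).
ffmpeg -i input.mkv -c:v libsvtav1 -preset 8 -crf 32 -c:a copy out_av1.mkv

# An x264 reference point at a broadly comparable wall-clock speed:
ffmpeg -i input.mkv -c:v libx264 -preset medium -crf 20 -c:a copy out_x264.mkv
```

Raising the CRF (or lowering the bitrate) trades file size against quality in both cases.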
If we assume that to be true (VMAF and SSIM aren’t the whole picture), just keep in mind that’s only true at particular speeds, bitrates, and types of content.
What I should say is: please show me an AV1 encoder that can beat x264 on quality for a source-transparent encode of a 1080p or 4K Blu-ray film, given ~3GB per hour of content, even allowed at least 3x the runtime. I'd start using it! It may be there as of recently; it's been a year or two since I looked.
This is a use case where AV1 encoders, and even x265, have had trouble.
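For reference, the average bitrate implied by that ~3GB-per-hour budget is a quick back-of-the-envelope calculation:

```shell
# ~3 GB per hour of content, expressed as an average bitrate:
# 3 GB * 8 bits/byte = 24 gigabits, spread over 3600 seconds.
awk 'BEGIN { printf "%.2f Mb/s\n", 3 * 8 * 1000 / 3600 }'
# prints "6.67 Mb/s"
```

That is a fairly generous budget for 1080p but a tight one for a 4K film, which is part of why transparency is hard there.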
No. Decoding is a job mostly done by specialized hardware; the shader units are sometimes used before a fully fixed-function implementation is ready. Encoding in particular doesn't map well to GPUs. They can do it, using varying degrees of fixed function and shader cores, and it's nice to isolate that load from the CPU, but they implement fewer of the analysis, prediction, and psychovisual optimization tricks that x264 and x265 use, and fewer of the format's optional features. They can often beat software at specific fast speeds with lower power consumption, but the trade-off is being inflexible and not useful for making encodes that are transparent to the source.
I mean, the thing with something like SVT-AV1 is: even if it doesn't give you the most efficient encode, does it produce a more efficient encode than your alternatives in a reasonable timeframe?
GPU video encoding is pretty much always optimised for real-time encoding, meaning that it can't run certain optimisations as it would increase the time to encode.
Compare x264 veryfast and veryslow presets. There is a quality difference at the same bitrate.
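That difference is easy to see for yourself; a minimal sketch, assuming an ffmpeg build with libx264 (the 4M bitrate and input file are placeholders):

```shell
# Same source, same target bitrate, two presets. veryslow spends far more
# CPU time on analysis (motion estimation, rate-distortion optimisation)
# and typically looks visibly better at constrained bitrates.
ffmpeg -i input.mkv -c:v libx264 -preset veryfast -b:v 4M -an fast.mkv
ffmpeg -i input.mkv -c:v libx264 -preset veryslow -b:v 4M -an slow.mkv
```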
Additionally, GPU encoders don't have as many psychovisual options as CPU encoders, since those options would need to be implemented in hardware; adding extra options to a CPU encoder is much faster, easier, and cheaper.
You could build a non-realtime GPU encoder, but there is not much point.