I do a lot of video compression for hobby projects, and I stick with h264 for the most part because h265 encoding requires far too much extra compute relative to the space savings. I can spend an hour compressing a file down to 1gb with h264, or I can spend 12 hours compressing the same file to 850mb with h265. Depending on the use case, I might still need the h264 version anyway since it's far more widely supported by clients. If I had a data center worth of compute to throw at encoding, or I were running a streaming service where the extra 150mb per video started to add up, then I'd definitely be on board with h265, but it's really hard to justify for a lot of practical use cases.
> I stick with h264 for the most part because h265 encoding requires far too much extra compute relative to the space savings. I can spend an hour compressing a file down to 1gb with h264, or I can spend 12 hours compressing the same file to 850mb with h265.
FWIW, as someone who is a complete layman on this topic but has spent much of the past 6 months converting age-old videos to h265...
i have a pi5 where i queue up old videos into a RAM disk; it converts them to h265 and moves them back to the machine which uploaded them. It can take up to 2 days to compress any given video (depending mostly on the input format - old AVIs tend to compress in a few hours), but the space savings are _much_ higher than what you're reporting. The average size of my h265 encodes is anywhere from 1/3rd to 1/2 of the original, and 1/4th of the original is not terribly uncommon. That said: the input formats vary wildly, with possibly only a minority fraction being h264.
i've tried the same thing using a pi4 but it takes roughly 4-5x as long as the pi5. An 8-core intel-based laptop can do it in about 25% of the time the pi5 needs, but draws a lot more power while doing so. So my otherwise idle pi5 gets the assignment.
My Firefly collection (14 TV episodes plus the movie) dropped from 8gb to 2.1gb. My Bob Ross collection (141 videos) went from 38gb to 7.4gb. 128 Futurama episodes/movies were cut from roughly 50gb to 21gb.
i was once asked why i didn't choose AV1 and the answer is simple: when i googled along the lines of "how to get the best video compression with ffmpeg" the most common answer was h265. Whether that's correct or not, i can't say, but it works for me and i've freed up the better part of a terabyte of space with it.
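FWIW, a batch conversion like the one described above can be sketched with ffmpeg's libx265 encoder. The function name, paths, and CRF value here are illustrative assumptions, not the commenter's actual settings:

```shell
# Sketch of a queue-style h265 conversion step; settings are hypothetical.
convert_to_h265() {
    in="$1"
    out="${in%.*}.h265.mkv"
    # -preset slow favors compression efficiency over speed;
    # -crf 26 is a quality target, not a fixed bitrate; audio is copied untouched.
    ffmpeg -y -i "$in" -c:v libx265 -preset slow -crf 26 -c:a copy "$out"
}
# Usage: convert_to_h265 /ramdisk/queue/old_video.avi
```

On a Pi-class CPU this runs far below realtime, which matches the multi-day encode times reported above.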
Interesting, your numbers look to be quite a bit better than the 30% improvement I've heard is the standard rule of thumb for how much improvement to expect with h.265, and on average for me it's been closer to 20%. I think h.265 starts to show much more meaningful improvements as you start using lower bitrates, and the specific type of media probably matters a bit as well.
I do most of my encoding on an i9-13900k. I've also noticed that h.265 seems to be quite a bit more likely to trigger the SMT-related crashes in ffmpeg, which makes it a pain when I've queued up a lot of things to encode overnight.
What about when you’re recording (say your iPhone / android / standalone camera) - are you choosing h264 or h265 (or something else like an intra-frame only codec for easier editing).
Most of what I do is working with pre-recorded video, but for projects where I'm doing recording I tend to use h.264 since I don't have a lot of devices that support h.265 anyway, and picking one format with broad compatibility is generally more important to me than the efficiency gains I'd get using h.265 when I could.
I haven't actually done the comparison myself, but the common refrain is that the quality of the video suffers when you use GPU accelerated encoding. That's not a tradeoff I want to make, so I've just stuck with CPU encoding.
Every single encoder, hardware or software, will produce different output quality. Video encoding is lossy, not lossless like ZIP or RAR. The same is true of audio, which is why people test different audio encoders. It seems more people know this about audio than video, at least on HN.
GPU encoding quality has always been lower than software, primarily because hardware encoders trade off quality for speed. It is good enough once you hit a certain bitrate, but at low bitrates, where absolute encoding efficiency is required, hardware encoding just doesn't compare to software. Even with software you can have multiple encoders with different results: it is not like H.265 / HEVC only has x265 as an encoder. Netflix, for example, uses Beamr, and broadcast TV stations use other encoders that better suit their needs.
And it is one reason why I dislike all these AV1 discussions: whenever people say it is slow, the answer is to use SVT-AV1. Well, yes, SVT-AV1 is faster, but it doesn't produce the best-quality AV1 encode, so what is the point? Every time AV1 comes up in these discussions, AOM supporters just move the goalposts.
Do you blanketly refuse to encode with h.264 or h.265? Because they're always worse than the best AV1 encode too.
If you only use h.266 and you're willing to wait extreme amounts of time to encode, then that's valid, but understand that you're an outlier. Most people don't have that much time to spend on encoding.
You don't need to find the best possible encoding. SVT-AV1 can encode as fast as x264, keeping the same visual quality to human eyes while reducing bit rate by 50%.
If you want to retain visual quality, you always have the option of using a higher bitrate.
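For anyone wanting to try it: ffmpeg's libsvtav1 wrapper exposes the speed/efficiency trade-off as a preset scale from 0 (slowest) to 13 (fastest). The specific preset and CRF below are my own assumptions for illustration, not a benchmark:

```shell
# Hypothetical SVT-AV1 encode; preset 8 is a commonly cited speed/quality balance.
svt_encode() {
    # Higher -preset = faster but less efficient; -crf targets quality, not size.
    ffmpeg -y -i "$1" -c:v libsvtav1 -preset 8 -crf 32 -c:a copy "$2"
}
# Usage: svt_encode input.mkv output_av1.mkv
```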
If we assume that to be true (VMAF and SSIM aren’t the whole picture), just keep in mind that’s only true at particular speeds, bitrates, and types of content.
What I should say is: please show me an AV1 encoder that can beat x264 in quality at a source-transparent encode of a 4K or 1080p Blu-ray film, if given ~3GB per hour of content and required to finish in at most 3x the runtime. I'd start using it! It may be there as of recently; it's been a year or two since I looked.
This is a use case where AV1 encoders and even x265 have had trouble.
No. Decoding is a job mostly done by specialized hardware - the shader units are used sometimes, before a fully fixed-function implementation is ready. Encoding in particular doesn't map well to GPUs. They can do it, using varying degrees of fixed function and shader cores, and it's nice to isolate that load from the CPU, but they implement fewer of the analysis, prediction, and psychovisual optimization tricks that x264 and x265 use, and fewer of the optional features of the format. They can often beat software at specific, fast speeds with lower power consumption, but the trade-off is being inflexible and not useful for making encodes that are transparent to the source.
I mean, the thing with something like SVT-AV1 is: even if it doesn't give you the most efficient encode, does it produce a more efficient encode than your alternatives in a reasonable timeframe?
GPU video encoding is pretty much always optimised for real-time encoding, meaning that it can't run certain optimisations as it would increase the time to encode.
Compare x264 veryfast and veryslow presets. There is a quality difference at the same bitrate.
Additionally, GPU encoders don't have as many psychovisual options as CPU encoders as they would need to be included in the hardware and adding extra options to CPU encoders is much faster, easier and cheaper.
You could build a non-realtime GPU encoder, but there is not much point.
There are two types of GPU acceleration: acceleration by a hardware codec bolted onto a GPU, and actual GPU computational acceleration. The latter is not widely used, but probably wouldn't make any difference in terms of quality, and provides only modest acceleration.
I really like HEVC/h265. It's pretty much on par with VP9, but licensing trouble has made it difficult to get adopted everywhere even now. VVC/h266 seems to be having the same issues; AV1 is pretty much just as good and already seeing much more adoption.
Is it because they come with the codecs “jointly” with the MPEG organization, which is for profit?
If the group is part of a standards organization that’s part of the UN, I feel like they should come up with non-patent-encumbered “recommendations”.
There are multiple patent pools for H.265 wanting to be paid licensing fees (MPEG LA, HEVC Advance, Velos Media). HEVC Advance and Velos Media even want content distribution licensing fees.
And there are patent holders not in any pool who also want to be paid (Technicolor, Motorola, Nokia, etc.).
LOL, you didn't even bother answering the question. I will let others be the judge, but your knowledge of video codecs is rather obvious from your other reply.
>Look, your claim is that selling movies on discs is not content distribution. But it clearly is.
1. Nowhere did I make the claim that selling movies on discs is not content distribution. It is however NOT ALL of content distribution, which is what you implied. And physical media licensing has been its own category since MPEG-2 as used in DVDs.
2. Your claim of content distribution does not include streaming and broadcasting, which was the debate around the HEVC / H.265 licensing programme during the early stage of its IP negotiation terms. If you don't know H.265's history on patent terms, then my suggestion is that you read up on it, as it is often the FUD pointed at H.265, and you are, knowingly or not, repeating it.
3. >You're just wrong. Don't worry about it.
This is HN, not Reddit. If you don't know something, ask. Stop pretending you know something that you don't.
There is no uncertainty or doubt. The licensing of H.265 is well understood to be terrible.
You yourself are the perfect demonstration of this. You plainly don't know the terms of the licensing for all the different patent pools and individual licensors, and you've contradicted yourself multiple times in this thread. Let's review:
1. First you claimed there were no content distribution fees. That's false.
2. Next you claimed that "content distribution" only refers to streaming and broadcasting. That's false.
3. Then you claimed that "content distribution" does indeed include selling movies on discs. That's true and it means we're back at Claim 1.
Watching you flail about trying to save face is pretty funny, but more importantly it is the clear demonstration that you don't understand the terms of H.265 licensing which is, of course, why the licensing is so terrible.
One of the reasons AV1 is more usable is because its licensing is simpler, clearer, and more straightforward. You've proven that.
peutetre was clear from the start, ksec claimed that content distribution fees are not for physical media and to defend his position he asked "Do HEVC Advance and Velos Media currently, or has ever charges for Broadcasting and Streaming?" which makes no sense, since the specific topic was about physical media.
The specific topic was not about physical media. The specific claim was that all content distribution requires fees. Physical media was not even specified in the original context.
Peutetre referred to physical media when talking about content distribution licensing. You then used a non-sequitur to argue that physical media don't fall under this category, so I will be siding with them.
To be honest I don't really care about pointless internet arguments. I think it would be more intellectually interesting if you posted your issues with AOM instead (who I happen to also dislike)
I've always felt like H.264 hit a great sweet spot of complexity vs compression. Newer codecs compress better, but they're increasingly complex in a nonlinear way.
Doesn't every state of the art codec do that, including H.264 versus previous?
Also I haven't seen charts of decode difficulty, but as far as encoding goes, codecs a generation more advanced can crush H.264 at any particular level of effort. The even newer codecs can probably get there too with more encoder work. https://engineering.fb.com/wp-content/uploads/2023/02/AV1-Co...
Generally yes, but I wasn't only talking about compute costs. Mentally grokking H.264's features and inner workings is a lot easier than newer, better codecs.
H.262 has a long future too. New DVDs and OTA broadcasts (at least for ATSC 1.0) will probably end at some point, but the discs won't go away for a long time, even if disc players are less and less popular.
H.262 == MPEG-2 Part 2 Video == ISO/IEC 13818-2. Video codecs are fun because multiple standards bodies publish their own version of the same codec. The ITU-T publishes the H.26x specs, and ISO/IEC publish the other xxxxx-x specs.
Personally I like the ITU-T because you can get the specs for free from them. ISO/IEC demand $242 for a copy of ISO/IEC 23008-2 (also known as MPEG-H Part 2 or HEVC). But ITU-T will give you H.265's spec (which is the same codec as ISO/IEC 23008-2) for free: https://www.itu.int/rec/T-REC-H.265
This is true, but I think I settled on either VP8 or VP9 because it's already widely supported, and it's part of webm, so people will maintain support just for backwards compatibility.
There are an additional two patents on that list that expire in 2028 and 2030, but it's not clear if they apply. So probably November 2030 at the latest.
It also benefits from the extremely optimized encoder x264, which has many easily approachable tunings and speed/compression presets.
I'm not sure I'd trust x265 to preserve film grain better than x264 --tune film unless I gave x265 much, much more time as well as a pretty similar bitrate.
Uhh, not in my experience. It's extremely difficult to find h265 sources for a large majority of content. It's basically a meme: if you tell people you want to try and get h265 by default, they talk down to you like "why would you do that".
That info seems a little out of date. The utility of h.265 at resolutions below 1080p can be questionable, but everything these days is 1080p or 4k.
Hardware support has also been ubiquitous for a while now. iPhones have had it since 2016. I usually "move on" to the next codec for my encodes once I run out of hardware that doesn't support it, and that happened for me with h.265 in 2019. My iPad, laptop, and Shield TV are still holding me back from bothering with AV1. Though with how slow it is, I might stick with h.265 anyways.
The guide isn't saying 95% of source content is h264. It's saying 95% of files you would download when pirating are h264. The scene, by and large, is transcoding h265 4k to h264 720/1080. The 4k h265 is available, but it's considered the 'premium' option.
Most 1080p encodes I see on pirate bay are h.265 these days.
But frankly, most of what "the scene" produces is trash. I gave up on waiting for proper blu-rays for the remaining seasons of Bojack Horseman and pirated them a few weeks ago, and all the options were compromised. The best visual quality came from a set that was h.265 encoded with a reasonable average bitrate, yet the quality still did not reflect the bitrate at all, with obvious artifacting in fades and scenes with lots of motion. I usually get much better results at only slightly larger file sizes with my own blu-ray rips.
I'm pretty sure the key difference is that I go for a variable bitrate with a constant quality, whereas most scene groups want to hit an arbitrary file size for every episode regardless of whether or not that is feasible for the content. Every episode in this set is almost exactly 266 MB, whereas similar shows that I rip will vary anywhere from 200 to 350 MB per episode.
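To illustrate the constraint: a fixed 266 MB target pins the average bitrate regardless of how complex the content is. A back-of-envelope calculation (the ~22-minute episode duration is my assumption):

```shell
# What average bitrate does a fixed file size imply?
size_mb=266        # fixed per-episode target
duration_s=1320    # assumed ~22-minute episode
# megabytes -> megabits -> kilobits, divided by duration in seconds
total_kbps=$(( size_mb * 8 * 1000 / duration_s ))
echo "${total_kbps} kbps"   # ~1612 kbps whether the episode is static or action-heavy
```

A constant-quality (CRF) encode instead lets the bitrate float per episode, which is why the file sizes in my rips vary.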
Yeah, tv shows are tough. I should caveat that I'm talking mostly about movies. I find a _lot_ more 265 content for tv shows. I also am generally on usenet rather than torrenting.
> It relies on decoders exposed by the host OS, negating the need to license a software decoder.
Windows requires you to pay (!) to get access to h265 decoding. Someone needs to license the decoder at some point, and that means you'll have to pay for it one way or the other.
But the GPU vendors already provide this directly (Nvidia, Intel, and AMD). It can absolutely be done at no extra cost to the user. There are also open source software decoders that can be installed and play nice with WMF.
> But the GPU vendors already provide this directly (Nvidia, Intel, and AMD). It can absolutely be done at no extra cost to the user. There are also open source software decoders that can be installed and play nice with WMF.
Somehow I doubt that Nouveau or the in-kernel AMDGPU have paid the license fee for HEVC decoding, or that it works well with WMF...
We're just whittling down to a smaller and smaller subset of users. 99.9% of users shouldn't be made to go without just because 0.1% of users can't have it.
Though even then, ffmpeg is open source and decodes hevc just fine. I get why browser vendors would not want to bundle ffmpeg, but that shouldn't stop them from leveraging it if the user already has it installed.
> Uhh, not in my experience. It's extremely difficult to find h265 sources for a large majority of content.
Sounds like you need some new sources. For new content I generally see x264 and x265 for just about everything. 264 isn't going anywhere because many older-design set-top boxes (including those still running Kodi on Pi3s) don't have hardware decode support, but 265 has become the default for 4K (which older kit doesn't support anyway), with 1080 & 720 being commonly available in both.
Admittedly some of the 265 encodes are recompressions of larger 264 sources, and sometimes bad settings in either encode (including just choosing an already overly crunched source) show through, but that isn't common enough to be problematic (my usual complaint is that encodes like that strip subs, though that is useful: avoiding 265 encodes with no subs mostly avoids the other issues too).
Few are going back and making 265 encodes for older sources, so that may be an issue you've faced, but if that content has a new BR release or other re-release (like when B5 got a slightly remastered version on one of the streamers) then you'll get both formats in the common resolutions.
I mean, I went all 4K/HDR a number of years ago, and all streaming 4K and disc 4K is HEVC. So all the piracy content is WEB-DLs or UHD disc rips of that, and it is often not re-converted, but even if it is, they still use HEVC.
H.264 is a fine codec, but it doesn't support HDR and also really starts to show its weakness in efficiency at 4K resolution. At 1080p SDR it fares much better vs HEVC.
I think the big switch will happen once most people have TVs that can do 10-bit color and true HDR. H264 can definitely do 10-bit, but I'm not sure how well it handles higher dynamic range content (or if it even can); H265 definitely can.
I feel like the switch from AVC (h.264) to HEVC (h.265) has already happened. It's used in 4k blu rays, most premium streaming services, and hardware support has been ubiquitous for at least 6 years.
Again, depends on your source content. Obviously 4k/HDR is H265. Plenty of shows are still 1080p source though even today. Not the 'flagship' shiny ones, but there's a lot more 1080p max source content. And for those, the proper format is H264, and still is what is released today.
MPEG-5 EVC Baseline, arguably the improved / refined version of H.264, actually compresses better while having the same complexity, both in terms of mental model and computational cost.
> Well OK, but what about H.265? It's one louder, isn't it?
I had some old, gigantic, video footage (in a variety of old, inefficient, formats at a super high quality).
So I did some testing and, well, ended up re-encoding/transcoding everything to H.265. It makes for much smaller files than H.264. The standard is also ten years younger than H.264 (2013 for H.265 vs 2003 for H.264).
Years ago I was photographing a party in Silicon Valley (I had just graduated college and was taking any job I could get). I take a nice photo of a couple, and the wife looks at me and says "You know my husband invented H264? Do you know what that is?"
Ended up having a long conversation with the husband (the engineer), and he hinted many times that they were currently working on something even better. Years later, of course, we have H265. Every time I play something in H264 or H265, I think of that moment and how much work that guy and his colleagues must have put into creating such a widely used codec (and how proud his wife was!)
Yes, from 1993 until I retired in 2011. I worked at C-Cube Microsystems and LSI Logic.
In 1993-1994 at C-Cube, I helped develop the first real-time MPEG-1 encoder. The rest of the team was busy developing the 704x480 resolution encoder for the roll-out of DirecTV, so I ended up taking sole responsibility for the MPEG-1 product and brought it to market. It's thought that this encoder authored much of the content for China VCD.
One notable thing I remember was watching the OJ Simpson chase on a test DirecTV receiver in the C-Cube lab (the same day that DirecTV went live).
Here's a picture of the C-Cube chip. It took 12 of these running in parallel to encode 704x480 MPEG-2.
VVC is a generational improvement in compression ratio over AV1, which might make it the best for a lot of applications. Too bad it will probably languish in patent licensing hell.
> VVC is a generational improvement in compression ratio over AV1
That's probably true, but do you know where I could find a comparison that shows a variety of speed settings for both codecs? And ideally discusses how fast VVC is expected to encode once everything is more finalized.
It's easy enough to run everything at default speed but then the results are almost meaningless.
It's always really slow at the start, but once hardware acceleration happens it becomes much less of a problem.
Or if you are just an end user consuming streaming video content, then you only care that your player can decode it which I don't think will be any sort of problem if things play out like they did for HEVC vs VP9.
If all the streaming services choose VVC over AV1, then all the player companies will put hardware decoders in their SoC.
Well people definitely had complaints about HEVC compared to H.264 and VP9, but it seems that HEVC did perfectly fine.
I deal with a lot of video and HEVC is by and far the most used codec that I see.
I am not sure why AV1 vs VVC will go any differently.
Just like HEVC was about a generation better than VP9, VVC is about a generation better than AV1.
I guess it depends on what streaming companies choose. Since they all chose HEVC, all the cheap player chips in TVs and boxes and sticks all have HEVC hardware decode. Even though YouTube chose VP9, it didn't really seem to make a difference in the grand scheme of things.
YouTube chose AV1 of course, but if all the streaming companies go VVC, well then I think it will proliferate similarly to HEVC.
Unless you need hardware acceleration. I've managed to implement live h265 encode and decode with DirectX 12 on Windows on Intel and Nvidia, but the driver crashes on AMD and they seem uninterested in fixing it, so there I have to fall back to h264.
Maybe I need to change some options, but I find that whatever encoder I get when I ask ffmpeg to encode h265 takes a very long time. And decoding 4k h265 is very slow on most of my PCs.
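By default, asking ffmpeg for h265 gets you the software libx265 encoder. If your GPU and ffmpeg build support it, a hardware encoder is usually far faster (at some quality cost, as discussed upthread). The encoder names and settings below depend on your hardware and are illustrative:

```shell
# Software encode: slow, but the most efficient option.
sw_encode() { ffmpeg -y -i "$1" -c:v libx265 -preset medium -crf 24 -c:a copy "$2"; }

# Hardware encode via NVIDIA NVENC, if available;
# Intel exposes hevc_qsv and AMD exposes hevc_amf instead.
hw_encode() { ffmpeg -y -i "$1" -c:v hevc_nvenc -preset p5 -cq 24 -c:a copy "$2"; }

# To see which hevc-capable encoders your build actually has:
# ffmpeg -hide_banner -encoders | grep hevc
```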
https://en.wikipedia.org/wiki/High_Efficiency_Video_Coding