> A technical standard may be developed privately or unilaterally, for example by a corporation, regulatory body, military, etc.
PDF is now an international standard (ISO 32000), but it was invented by Adobe. HTML was invented at CERN and is now controlled by the W3C (a private consortium). OpenGL was created by SGI and is maintained by the Khronos Group.
All had different "ownership" paths and yet I'd say all of them are standards.
My guess is that the researchers don't "own" the research they're doing, since it was funded by the university. It's the property of the university, and if you take it and try to sell it, you're stealing IP.
It's been a while since I looked, but when I did a few years ago they had hundreds of subdomains for their videos (both regular content and ads) and rotated what they were used for all the time. That's why it's basically impossible for Pi-hole to do anything.
Google does some stupid things, but Chromium is "too big to fail" at this point and it's too essential to products like Android which are also at that same point.
But hypothetically if Google stopped contributing to Chromium the project would be forked and it would live on. Frankly, Google removing themselves from Chromium would fix the one major complaint a lot of people have with it.
I wonder how that works if Google wants to kill the forks. There are a bunch of important components like widevine that can be withheld to put pressure on the project.
WebP is obsolete. It's still based on the VP8 codec, which in video was replaced by VP9 a long time ago. AVIF is based on AV1, which is a successor to VP10. So WebP is a few generations behind in the VPx lineage and is no match for modern codecs.
AVIF was originally a quick hack by Netflix: placing an AV1 frame into an HEIF container. I believe it was done in a few weeks of work.
AV1 was largely based on VP9/VP10 and was developed by a team working in the Chrome organization.
The JPEG XL main mode (VarDCT) and its JPEG recompression were largely developed by Google Research.
WebP as a format was based on VP8, a video codec built by On2 Technologies. On2 was bought by Google in 2010 -- a year before Google published WebP. The transparency and lossless encoding, as well as the non-video keyframe-by-keyframe animation, were designed at Google. The On2 VP8 codec initially used for WebP lossy was not that suitable for photographic content (too many artefacts). Jeff Muizelaar wrote a great blog post about this. The WebP codec was redesigned (without format changes) at Google and kept improving significantly until around 2015, when it reached pretty good maturity.
(Personally, I don't like what it does to highly saturated dark colors, such as dark forests or dark red textures, but it is much much better than it was.)
WebP was the classic Google “ship the prototype” move. They were hoping Chrome could muscle it through, but it delivered only modest compression improvements (10-15% real world; the marketing papers promised 30% based on comparisons against unoptimized JPEGs), was missing features, and had very primitive software support, making it harder to produce, deliver, or share (when Facebook switched, a common complaint was downloading a picture and finding it didn’t work in another app or when sent to a friend).
Very few sites pay for enough outgoing image bandwidth to make that compatibility cost lower than a 15% savings.
It's still a version of JPEG that supports transparency, and it's now actually well supported (down to iOS 14) without also having to deliver a fallback format. It's not as good as its successors, but if you are choosing one format and care about image size, it's the best choice.
I’m not saying it was terrible but that it took a long time for it to be worth the trouble unless you really needed transparency. It’s only been the last year or so that you could expect to be able to use it for anything non-trivial and not spend time dealing with tools which didn’t support it.
It did, but mostly because Safari dragged its feet for years. Thank god they didn't take this long for AVIF (though I would have loved it if they had shipped AVIF in iOS 15, since a bunch of devices won't get iOS 16).
Safari was the least of it - most image processing tools didn’t support it (e.g. Photoshop got support last year, Microsoft Paint the year before), or you had to do things like recompile common tools to add support (again, better now, but it takes a while for support to spread through Linux distribution releases), and then you have more security exposure. That was a lot of hassle for very modest compression gains.
AVIF has gone better because it wasn’t based on a never-really-competitive video codec, was developed collaboratively, and didn’t have feature regressions from JPEG. As with tool support, that last point matters a lot at many organizations because the edge cases tend to decide things - even if 95% of your usage is boring 8-bit 4:2:0, the institutional memory tends to be shaped by the times you hit something which couldn’t be used without extra work. If WebP had compressed as well as AVIF, more people might have decided it was worth it, but since it only marginally outperformed JPEG the case was never that strong.
Part of what I meant by “shipping the prototype” was this kind of stuff: someone at Google wanted to find another use for the On2 IP they’d purchased so they tossed it into a 20 year old container format and shipped it. As with WebM, the benchmarks were fast and loose which meant that anyone who replicated them saw substantially lower performance, which is another great way not to build confidence in your format.
I switched to WebP about a year and a half ago. I’d been watching for a long time, and it had finally reached the point where support was universal enough that I no longer had to publish a JPEG.
WebP has the big advantage that the quality setting is meaningful: you can set it at a certain level, encode thousands of images, and know the quality is about the same. This is by no means true of JPEG, where balancing quality and size means manually setting the compression level on each image. Years back I was concerned about the size of a large JPEG collection and recompressed it, which was a big mistake because many of the images ended up compressed too hard.
In 2023 I think you can just use WebP and it will work well, my experience looking at images is that AVIF does better for moderate to low quality images but for high quality images it doesn’t really beat WebP.
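If you do want to batch-convert a collection at one fixed quality setting, here's a minimal sketch. It only builds the `cwebp` (libwebp) command lines; the `webp/` output directory and quality of 82 are arbitrary examples, not recommendations:

```python
from pathlib import PurePosixPath

QUALITY = 82  # one fixed setting for the whole collection

def cwebp_command(src, dst_dir="webp", quality=QUALITY):
    """Build the cwebp invocation for one file without running it."""
    out = PurePosixPath(dst_dir) / PurePosixPath(src).with_suffix(".webp").name
    return ["cwebp", "-q", str(quality), str(src), "-o", str(out)]

# e.g. feed these to subprocess.run() for every file matched by
# Path("photos").rglob("*.jpg")
```

Because the quality scale is consistent, one `QUALITY` constant really can cover the whole set, which is the point being made above.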
> Years back I was concerned about the size of a large JPEG collection and recompressed them which was a big mistake because many of the images were compressed too hard.
Distortion metrics[0] such as MS-SSIM, MS-SSIM*, SSIM, MSE, and PSNR can be used to define a cut-off or threshold for deciding the point at which an image is "compressed enough": pick one or more of those algorithms and predefine the amount of acceptable/tolerable distortion or quality loss. Each algorithm has trade-offs in terms of accuracy and processing time, but the approach can definitely work for a large set if you find the right settings for your use case. It is certainly more productive than manually setting the Q-level per image.
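As a rough stdlib-only sketch of that idea, here is PSNR over flat pixel sequences with an acceptance threshold; the 40 dB cut-off is an arbitrary example, and a real pipeline would decode actual image files and likely use MS-SSIM or similar via a library:

```python
import math

def psnr(original, compressed, max_val=255):
    """Peak signal-to-noise ratio (dB) between two equal-length pixel sequences."""
    mse = sum((o - c) ** 2 for o, c in zip(original, compressed)) / len(original)
    if mse == 0:
        return math.inf  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

def compressed_enough(original, candidate, min_psnr=40.0):
    """Accept a recompressed candidate only if distortion stays tolerable."""
    return psnr(original, candidate) >= min_psnr
```

You would then walk the encoder's quality levels per image until `compressed_enough` flips, instead of eyeballing a Q-level for each one.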
Actually this isn't accurate, Kraken.io doesn't use any SSIM-related algorithms, it just blindly applies some standard compression regardless of the image's content.
You can use jpegli for better JPEG compression heuristics. It uses custom heuristics that were originally built for JPEG XL, then copied over and further optimized with Nelder-Mead to minimize distortion metrics (Butteraugli and SSIMULACRA 2).
There is plenty of wrong or misleading information here:
- What is the cross supposed to mean for PNG compression of photographic images? PNG can compress photographic images just fine and for some applications (where you want lossless) it used to be a good choice.
- PNG has animation support (APNG), even if everyone except Firefox tried to ignore it for years.
- While WEBP and AVIF support lossless compression and animation, those features are not available in all browser versions that support static lossy webp/avif images.
a) This is a property of the encoder, not of the image format.
and
b) Like any other video codec-based format, webp overcompresses dark areas in images so no, you can't rely on consistent quality across collections of arbitrary images.
I think it's going to burst regardless. Plenty of companies have fully embraced the remote model, and many more are only doing hybrid. Even if they don't completely sell off their buildings they're not going to use as much. One company in my city is already closing their HQ and reorganizing their other offices: https://www.cleveland.com/news/2023/07/progressive-to-close-...
It had just started to become profitable right before Elon bought it. Granted that’s like 1 or 2 profitable quarters out of dozens, but still counts I guess
A few weeks, at least. By that point, nearly every user should have gotten a notification that YouTube, GMail or Google Search is 100000% better with Chrome.
Why do people install those apps? Because the first time they visit that site on a new phone it tells them that everything will be better in the app, without mentioning that this is mostly true only for advertisers.
Ehh...I do think the native app often gives a better experience than mWeb.
Don't get me wrong, a really well-written PWA with fully cached assets is often almost at a native app experience level, but that's not usually the case with most mWeb implementations.
For Gmail, perhaps — I think that’s one of the best cases for push notifications on the web — but YouTube basically comes down to whether you need large offline storage and search is a non-issue.
One thing I noticed about both Google and Facebook’s apps before deleting them was that in addition to using more battery, both were slower in their apps than using the web site. That was surprising but consistent & a large part of why I ditched them.
For me, YouTube comes down to the fact that if I use YouTube through Firefox for Android, and install the "Video Background Play Fix" addon, then I can go for a walk with my phone in my pocket and listen to things through YouTube without having it pause as soon as the screen turns off.
Well, I believe you can do that if you pay for YouTube Extortion Edition or whatever it's called. Which I used to do, until too many ads still came through and I got pissed off.
(Though note: if I got to a YouTube link through anything other than going directly to the site and searching, then it would open in the app. Which would then pause when I turned off the screen. Which was irritating. Fixed by disabling the YouTube app entirely. Life is much better now.)
There are content blockers, which are not as comprehensive as Firefox or Chrome extensions but also avoid the performance and privacy concerns common in that space.
Historically there have been issues with inefficient code on pages with lots of elements or larger data sizes than expected. Since web pages are complex and ads continue to evolve, there’s this constant arms race where people run more code trying to block them and that code takes time and memory to run, and the update cycle means things are pushed out quickly.
A lot of the examples I’ve seen were unexpected situations: e.g. if you scan every <img> with a poorly-tuned regex and someone uses data: URLs, it might backtrack pathologically on strings orders of magnitude longer than expected. There used to be an issue with each injected CSS file being stored separately - it was fixed years ago, but for a multi-year period people would complain about Firefox being slow, you’d ask if they had AdBlock Plus installed, and the performance issue cleared up as soon as they disabled it. The problem was an extremely large style sheet multiplied by every open tab and iframe, and it was bad enough that Mozilla officially called it out on their blog.
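To make the backtracking point concrete, here is a toy stand-in; the real filter patterns aren't shown here, so both regexes are purely illustrative:

```python
import re

pathological = re.compile(r"^(A+)+=")  # nested quantifiers: a failed match
                                       # retries every way to split the run
linear = re.compile(r"^A+=")           # accepts the same strings, no nesting

# A base64 data: URL payload can be far longer than a normal URL.
subject = "A" * 20 + "!"               # no "=", so both patterns must fail

assert linear.match(subject) is None        # rejected after one pass
assert pathological.match(subject) is None  # rejected only after ~2^19 backtracks
```

Grow the run of "A"s and the pathological pattern's rejection time roughly doubles per character, which is exactly the kind of thing that only surfaces when someone feeds in an unexpectedly long input.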
The other thing to remember is that a browser developer has to support every user, not just the savvy ones. You might know to stick to certain more-trusted extension but the Safari developer’s threat model has to include someone’s grandfather installing McEaglePatriotGuardElite and blaming Apple when their iPad is slow or that injected code is exploitable. Apple characteristically chose to respond to that by reducing flexibility, which is certainly a valid decision but also one people can reasonably disagree with.
So use an Invidious instance. Sure, YouTube blocks the subtitles on some of them, but if you need subtitles it only takes a couple of minutes to find one of the newer subtitles-available instances, and you're good for at least a few months.
I’m saying that it doesn’t do the web a service to break one monopoly while doing nothing about the more successful second monopoly. I want an open App Store but I also want fair competition in the browser market.