Oh funny, I forgot about that. But at the time it didn't seem unreasonable to withhold a model that could so easily write fake news articles. I'm not so sure it wasn't.
Yeah, Gemini seems to have a sense of humor about the question:
> Here is the breakdown of why:
> The Mobility Problem: Unless you are planning to carry your car 50 meters (which would be an Olympic-level feat), the car needs to be physically present at the car wash to get cleaned. If you walk, you'll be standing at the car wash looking very clean, but your car will still be dirty in your driveway.
I suspect there's an opportunity to learn from history here. It was predicted that radiologists would be put out of a job by AI tools. But this didn't happen, largely because trust and liability matter just as much as the service itself.
This isn't a counterargument to the idea that AI is going to kill a lot of app subscriptions, but it tells us which kinds of apps will get killed and which will have staying power. Ironically, the flood of cheap, low-quality, AI-generated apps might make it harder for cheap apps to overtake bigger players, because overall trust in the ecosystem will go down.
The areas where errors are tolerated with a human / "classical algo" fallback are the best fields to disrupt with AI: call center jobs, recommendations, search, curation. Wherever the current process is already stochastic and has a human or rule-based correction loop, AI just needs to be cheaper and roughly as accurate to win.
It's early days yet. You'd have to be crazy to go into radiology if you were just starting out as a freshly minted MD. And yes, this is going to be a problem, possibly a big one.
People were saying that a decade ago, if not more, with machine learning outperforming radiologists. Today we don't have enough radiologists. What has changed since those days that one can again say that so confidently?
The difference is that the models now work. I don't know if you've noticed, but a lot has happened since "a decade ago."
Some of the recent studies showing otherwise are literal junk science. They used flawed methodologies, such as blindly feeding the data to the models without the medical histories that the human doctors had access to (which favored the human radiologists substantially), and leaving labels on the training data (which favored the models). When studies are conducted under more rigorous standards, the humans don't tend to win, and unlike the models, the humans will not get better.
Video game AI from 20 years ago was already good enough to perform ATC at an algorithmic level. It's not a difficult problem, if approached as a system, because there are no surprises left in the field. Every situation that will ever need to be handled by ATC has either been encountered and understood by now, or can reasonably be anticipated and modeled, making it a good candidate for automation.
The problem -- like self-driving cars that generally outperform humans but aren't allowed on the roads because occasionally they still screw up -- is primarily political, not technical. A rational point of view wouldn't demand perfect performance from machines when humans never achieved it in the first place. Eventually lawmakers and regulators will get that through their heads, and we'll all be better off for it... but the transition will be a rough one.
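To make the "algorithmic level" point above concrete: the core en-route separation check is just a pair of threshold comparisons, the kind of logic game AI has run for decades. Here's a minimal sketch, assuming illustrative minima of 5 NM lateral and 1,000 ft vertical separation; all names are made up, and a real system would also project trajectories forward in time rather than checking instantaneous positions:

```python
from dataclasses import dataclass

@dataclass
class Aircraft:
    callsign: str
    x_nm: float       # horizontal position, nautical miles
    y_nm: float
    altitude_ft: float

# Illustrative separation minima (assumptions for this sketch)
HORIZONTAL_MIN_NM = 5.0
VERTICAL_MIN_FT = 1000.0

def in_conflict(a: Aircraft, b: Aircraft) -> bool:
    """Two aircraft conflict only if they violate BOTH minima at once."""
    lateral = ((a.x_nm - b.x_nm) ** 2 + (a.y_nm - b.y_nm) ** 2) ** 0.5
    vertical = abs(a.altitude_ft - b.altitude_ft)
    return lateral < HORIZONTAL_MIN_NM and vertical < VERTICAL_MIN_FT

traffic = [
    Aircraft("AAL12", 0.0, 0.0, 34000.0),
    Aircraft("UAL77", 3.0, 2.0, 34500.0),
    Aircraft("DAL09", 40.0, 1.0, 34000.0),
]

# Check every pair of aircraft for a separation violation.
for i in range(len(traffic)):
    for j in range(i + 1, len(traffic)):
        if in_conflict(traffic[i], traffic[j]):
            print(f"conflict: {traffic[i].callsign} / {traffic[j].callsign}")
```

Scaling this up means checking projected positions over a time horizon instead of snapshots, but the structure stays rule-based; the check itself is trivial, which is the point that the remaining blockers are political, not algorithmic.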
I was wondering the same thing. After I posted above, I followed the archive.org link to the original article and did a quick search on the last four quotes, which the article claims are from Scott's blog. None appear on the linked blog page. The first quote the article claims is from Scott does appear on the linked Github comments page.
When I wrote my post above, I hadn't yet read the original article on archive.org. Now that I know the article actually links to the claimed original sources on Scott's blog and Github for all the fabricated quotes, how this could have happened is even more puzzling. Now I think this may be much more interesting than just another case of "lazy reporter uses LLM to write article".
Ars appears to use an automated tool that adds text links to articles to drive traffic to related articles already on Ars. If that tool is now LLM-based, generating links from concepts instead of just keywords, perhaps it mistakenly has unconstrained access to change other article text. If so, it's possible the author and even the editors aren't at fault; the blame could lie with the Ars publisher using LLMs to automate monetization processes downstream of editorial. That might also explain the non-standard, vague retraction. If so, it would make for an even more newsworthy article, one directly within Ars' editorial focus.
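To illustrate the hypothesized failure mode (purely speculative; I have no knowledge of Ars' actual tooling, and every name here is made up): a keyword linker can only ever wrap existing text in anchor tags, while an LLM-based rewriter hands back replacement text that can silently alter anything, quotes included. A minimal sketch:

```python
import re

def keyword_linker(paragraph: str, links: dict[str, str]) -> str:
    # Constrained: wraps exact keyword matches in anchor tags; it can never
    # alter any other character of the article text.
    for phrase, url in links.items():
        paragraph = re.sub(
            re.escape(phrase),
            f'<a href="{url}">{phrase}</a>',
            paragraph,
            count=1,
        )
    return paragraph

def fake_model(prompt: str) -> str:
    # Stand-in for a real LLM call; it returns replacement text to show that
    # the caller receives a whole new paragraph, not a list of anchors.
    return prompt.rsplit("\n", 1)[-1] + " (rewritten, quotes not guaranteed)"

def llm_linker(paragraph: str, related_titles: list[str]) -> str:
    # Unconstrained: the model's output *replaces* the paragraph wholesale,
    # so a hallucinated "quote" silently overwrites the real one unless the
    # output is diffed against the input before publishing.
    prompt = (
        "Rewrite this paragraph, linking to these related articles where "
        f"relevant: {related_titles}\n{paragraph}"
    )
    return fake_model(prompt)

original = 'Scott wrote: "the exact words he actually used."'
print(keyword_linker(original, {"Scott": "https://example.com/scott"}))
print(llm_linker(original, ["Earlier coverage"]))
```

If something like the second path were in use, the guardrail would be mechanical: diff the model's output against the input and reject any change beyond added anchor tags.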
It's so strange: when I switched to Android, it felt like iPhones had more polish and fewer glitches like these. But there just aren't any usability bugs that I deal with on Android anymore.
I think the blog explains why people put up with such a buggy OS:
> I randomly tried Android again for a few months last spring. Using a functioning keyboard was revelatory. But I came crawling back to iOS because I'm weak and the orange iPhone was pretty
I'm surprised to see that the valence of comments here is mostly negative. Nima Arkani-Hamed is one of the top living physicists, and he has nice things to say about the work. The fact that researchers can increasingly use these models to (help) find new results is a big deal, even considering the caveats.
The problem with 2 is you then need someone to be the arbiter of truth, and the truth is often a hard thing to find. This would end up letting governments jail people they disagree with. How would you write the law to prevent that?
The whole point of a court is to find truth; they do it all the time. Actually, you would need to prove someone knew something was untrue, because it's innocent until proven guilty. You wouldn't have to prove what you said was true to get let off, just raise enough doubt to ward off your opponent's accusation of untruth.
I don't get what you mean? Proving whether you've done something against someone's interest is already on the books for embezzlement, fraud, etc. Intention and planning are covered under many conspiracy laws. The influence part would need to be proven using internal documents, whistleblowers, etc.
NBC alone had over 20 million opening ceremony viewers this year (TV + Peacock streams)[0]. TV is a huge, huge part of US culture. See also the Super Bowl, which just happened today.