I suspect there's a bit of a chance to learn from history here. It was predicted that radiologists would get put out of the job by AI tools. But this didn't happen, largely because trust and liability matter just as much as the service itself.
This isn't a counterargument to the idea that AI is going to kill a lot of app subscriptions, but it tells us about what kinds of apps will get killed, and what apps will have staying power. Ironically, the flood of cheap, low quality AI generated apps might make it harder for cheap apps to overtake bigger players, because overall trust in the ecosystem will go down.
The areas where errors are tolerated, with a human or "classical algo" fallback, are the best fields to disrupt with AI: call center jobs, recommendations, search, curation. Wherever the current process is already stochastic and has a human or rule-based correction loop, AI just needs to be cheaper and roughly as accurate to win.
It's early days yet. You'd have to be crazy to go into radiology if you were just starting out as a freshly-minted MD. And yes, this is going to be a problem, possibly a big one.
People were saying that a decade ago, if not more, with machine learning outperforming radiologists. Today we don't have enough radiologists. What has changed since those days that one can again say that so confidently?
The difference is that the models now work. I don't know if you've noticed, but a lot has happened since "a decade ago."
Some of the recent studies showing otherwise are literal junk science. They used flawed methodologies, such as blindly feeding the data to the models without access to the medical histories that the human doctors had (which favored the human radiologists substantially) and leaving labels on the training data (which favored the models). When studies are conducted under more rigorous standards, the humans don't tend to win, and unlike the models, the humans will not get better.
Video game AI from 20 years ago was already good enough to perform ATC at an algorithmic level. It's not a difficult problem, if approached as a system, because there are no surprises left in the field. Every situation that will ever need to be handled by ATC has either been encountered and understood by now, or can reasonably be anticipated and modeled, making it a good candidate for automation.
The problem -- like self-driving cars that generally outperform humans but aren't allowed on the roads because occasionally they still screw up -- is primarily political, not technical. A rational point of view wouldn't demand perfect performance from machines when humans never achieved it in the first place. Eventually lawmakers and regulators will get that through their heads, and we'll all be better off for it... but the transition will be a rough one.