It is everywhere. Even on birthday invites for my kids there's nonsense from an LLM. At work I review PRs with code that doesn't even run. Doing research is harder than ever as more and more references are completely made up.
We're too lazy and too obsessed with getting ahead to use this technology responsibly in my opinion.
How do those PR authors react when you point out that the code doesn't run and block the merge? Any signs of them improving their work ethic over time based on your feedback?
Well, they then use more AI to try to fix the PR, which leads to many more rounds of the same. It's like I'm coding with an AI, except through a real person who mangles the prompt. I've had some success talking people out of it as well, but it feels like I'm gonna lose eventually.
Of course they are providing that feedback! No one gives a shit though. Our industry and society at large have basically given approval for people to submit AI slop. Managers and executives consider it working smart and efficiently. So telling someone "this code doesn't run" results in more slop in an attempt to fix it. Eventually it will run and get merged, and the code base gets even shittier. There's only so much gatekeeping and quality control the few people who actually give a damn can be expected to do when swimming against the tide. Mental health is a thing. And to quote Dan Ashcroft, the idiots are winning.
Accepting a little exaggeration (“works” vs “works well”), for a segment, almost certainly.
Particularly when they know that people like the commenter above are making sure it ultimately “works” by covering for the incompetence of their colleagues.
The comment you are replying to is, in my view, a superb observation of the challenge of maintaining quality against systemic pressures to appear to be performing.
Most senior leaders in organisations cannot (or care not to) measure quality. Few (outside big tech, I assume, though I wouldn’t be surprised to see this overlooked there too) are even usefully measuring benefits realisation tied back to activity (such as software releases).
What they can measure, and are systemically incentivised for, is “what does it take to get the approval of the next leader above me”, and most of the time a plausible report that the software has been delivered on or ahead of schedule is what actually achieves this goal.
That doesn’t mean morally motivated managers aren’t out there driving quality. But doing so is at odds with these org systems: it comes to the detriment (or risk of detriment) of their own careers compared to peers who optimise more for what the system rewards, and it costs extra energy, because they effectively have to hide their pursuit of better outcomes for the organisation under a veneer of performing as the organisation expects (that is, serve two goals simultaneously, one covert and one performative).
Savvy researchers/engineers have an opportunity to arbitrage here: working without LLMs on something hard leads to a better outcome than what your "AI-enabled" peers achieve (after all, Karpathy could not resort to any AI to build nano-chat). It's a sad state of affairs, but the opportunity really is there.