It's not going to "trigger" mass layoffs; it'll be used as a convenient scapegoat for mass layoffs that were always going to happen anyway to make room for more stock buybacks. Business as usual. Same shit, different hat.
Let's be honest: Color is not solving cancer when they make money from managing cancer. You just searched "openai cancer" and gave me back the first result.
That's not quite how that works though. It's possible, for example, that fine-tuning a model to avoid the styles described in the article causes the LLM to stop functioning as well as it otherwise could. It might just be an artefact of the architecture itself that, to be effective, it has to follow these rules. If it were as easy as providing data and having the LLM 'encode' that as a rule, we would be advancing much faster than we currently are.
In the case of those big 'foundation models': fine-tune for whom, and how? I doubt it's possible to fine-tune something like this in a way that satisfies every audience and every instance in the training set. Much of this is probably down to the training set itself containing a lot of propaganda (advertising) or just bad writing.
Seems more like the kind of thing you'd use in a prompt.
I can totally see someone taking that page and throwing it into whatever bot and going "Make up a comprehensive style guide that does the opposite of whatever is mentioned here".
By your analogy, human brains are also committing IP theft, because they ingest what's available in the world, mix and match it, and synthesize slightly different IP based on it.
But I'm merely telling the truth. The fact that people don't like it doesn't change the fact that software engineers are largely replaceable with AI now.
We are already seeing the second-order effects: people using AI are no longer buying software products, which is leading to layoffs of software engineers.