Humans are also non-deterministic code generators, though. It's entirely possible that an LLM is more deterministic, or at least more consistent, at building reliable code than a human.
Mathematicians use LLMs. Obviously, they don't trust an LLM to do math. But an LLM can help with formalizing a theorem and then finding a formal proof. That's usually very tedious work, but LLMs are _already_ quite good at it. In the end you get a proof that gets checked by normal proof-checking software (not an LLM!), and you can also inspect it, break it into parts, etc.
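To make the workflow concrete, here's a toy sketch in Lean 4 (the theorem name is illustrative): an LLM might draft the formal statement and a candidate proof, but it's Lean's trusted kernel, not the LLM, that does the actual checking.

```lean
-- Hypothetical output of the "formalize, then prove" workflow.
-- The LLM's role ends once this text is written; acceptance is
-- decided entirely by Lean's proof checker.
theorem add_comm' (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

If the proof were wrong, the checker would reject it, which is exactly why you don't need to trust the LLM here.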
You really need to look into the details rather than dismiss it wholesale ("It made a math error, so it's bad at math" is wrong).
Just wait until you find out how vulnerable the average house is to robbers. The only difference with software is that we have somehow discarded deterrence and law enforcement as reasonable parts of a security strategy, and we keep insisting that technological defenses must be 100% tight no matter the cost.
Perhaps people or machines will finally figure out how to make software that actually works without needing weekly patching.