
There’s a theorem about neural networks (the universal approximation theorem) that states:

For every continuous real-valued function on a compact domain and every epsilon greater than zero, there is a neural network (size unbounded) that approximates the function to within epsilon.
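For reference, the standard form of the statement (Cybenko 1989, for a sigmoidal activation; I’m paraphrasing the usual textbook version) is:

    \text{For every continuous } f : K \to \mathbb{R},\ K \subset \mathbb{R}^n \text{ compact, and every } \varepsilon > 0,
    \text{there exist } N,\ \alpha_i, b_i \in \mathbb{R},\ w_i \in \mathbb{R}^n \text{ such that}
    g(x) = \sum_{i=1}^{N} \alpha_i \, \sigma(w_i^{\top} x + b_i)
    \text{ satisfies } \sup_{x \in K} |f(x) - g(x)| < \varepsilon.

Note that N is purely existential: nothing in the theorem bounds it.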

It sounds impressive and, as I understand it, is the basis for the argument that algorithms based on NNs, such as LLMs, will be able to outperform humans at tasks such as programming.

But the theorem leans on one slippery qualifier, “size unbounded”: it only guarantees that such a network exists, not that it is of practical size or that training will ever find it. Unpack that qualifier and the result is much less impressive.
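Here is a minimal sketch of what the qualifier hides (my own illustration, using plain numpy and a one-hidden-layer net with random tanh features; the widths and target function are arbitrary choices). The achieved sup-norm error, the epsilon in the theorem, only drops as you crank the hidden width up:

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-np.pi, np.pi, 200)[:, None]
    target = np.sin(3 * x).ravel()  # function to approximate

    def sup_error(width):
        # One hidden layer of random tanh features; fit only the
        # output weights, by least squares.
        w = rng.normal(size=(1, width))
        b = rng.normal(size=width)
        h = np.tanh(x @ w + b)                       # (200, width) activations
        alpha, *_ = np.linalg.lstsq(h, target, rcond=None)
        return np.abs(h @ alpha - target).max()      # the epsilon achieved

    for width in (5, 50, 500):
        print(width, sup_error(width))

The theorem promises that some width makes the error small; it says nothing about how large that width has to be, and nothing at all about whether gradient descent will find those weights.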

Which, for me, makes such tools interesting, I guess, for some applications, but not nearly impressive enough to remove the need for programmers, or to replace their labour entirely with automation so fragile that we have to concern ourselves with writing markdown files and wasting tokens asking the algorithm to try again.

So this whole argument that “you’d better learn to use them or be displaced in the labour market” relies on a weak foundation.






