Are the two choices "accept that violence is unconditionally bad" and "throw a molotov cocktail at Sam Altman's house"? Because that dichotomy seems a bit... false?
I was offering a reason why dollars “losing value” is not actually something to worry about: if they didn’t lose value, people would hoard them instead of spending them, and that is objectively a loss to everyone, regardless of how you decide to denominate it.
I know some proponents have AGI as their target, but to me it seems to be unrelated to the steadily increasing effectiveness of using LLMs to write computer code.
I think of it as just another leap in human-computer interface for programming, and a welcome one at that.
If you imagine it just keeps improving, the end point would be some sort of AGI though. Logically, once you have something better at making software than humans, you can ask it to make a better AI than we were able to make.
The other possibility is, as you say, progress slows down before it's better than humans. But then how is it replacing them? How does a worse horse replace horses?
I said I don’t think it follows, and you certainly gave no support for the idea that it must follow. Logically speaking, it’s possible for improvements to continue indefinitely in specific domains, and never come close to AGI.
Progress in LLMs will not slow down before they are better at programming than humans. Not “better than humans.” Better at programming. Just like computers are better than humans at a whole bunch of other things.
Computers have gotten steadily better at adding and multiplying and yet there is no AGI or expectation thereof as a result.
Either the AI can do better than humans at programming, or it can't. If I ask it to make an improved AI, or better tools for making an improved AI, and it can't do it, then at best it's matching human output.
All the current AI success is due to computers getting better at adding and multiplying. That's genuinely the core of how they work. The people who believe AGI is imminent believe the opposite of that last claim.
No one is talking about AGI in this thread except you, though. The post said nothing about it. It's an absolute non sequitur that you brought up yourself.
To err is to be human. If you minimize your life to minimize negative impacts on others, you are hurting yourself (and your friends and family). If you make a mistake, learn from it and try to be better. None of us are born with the skill and knowledge to do the right thing all the time, and sometimes there is no right thing, just different tradeoffs with different costs.
The benefit that others get by you reaching your potential is greater than the risk to others of you making space for yourself to reach your potential.
It's talking about Luau (gradually typed, https://luau.org/), not Lua.
Hopefully https://github.com/astral-sh/ty will make the Python typing situation better, but absent that, Python typing is... not great. Honestly even with that it feels subjectively very finicky.
icontract- or pycontracts-style Design-by-Contract (DbC) type and constraint checking at runtime, either built on astral-sh/ty or running as fast as it, would make type annotations useful at runtime too.
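For illustration, here's a minimal sketch of that idea (a hand-rolled decorator, not icontract's or pycontracts' actual API): it enforces a function's type annotations and an optional precondition at call time.

```python
import functools
import inspect
from typing import get_type_hints


def runtime_checked(precondition=None):
    """Sketch of DbC-style checking: enforce type annotations (and an
    optional precondition) when the function is actually called."""
    def decorate(func):
        hints = get_type_hints(func)
        sig = inspect.signature(func)

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            bound = sig.bind(*args, **kwargs)
            bound.apply_defaults()
            # Check each annotated argument against its declared (simple) type.
            for name, value in bound.arguments.items():
                expected = hints.get(name)
                if expected is not None and not isinstance(value, expected):
                    raise TypeError(f"{name}={value!r} is not a {expected.__name__}")
            # Check the caller-supplied contract, if any.
            if precondition is not None and not precondition(**bound.arguments):
                raise ValueError(f"precondition failed for {bound.arguments}")
            result = func(*args, **kwargs)
            ret = hints.get("return")
            if ret is not None and not isinstance(result, ret):
                raise TypeError(f"return value {result!r} is not a {ret.__name__}")
            return result

        return wrapper
    return decorate


@runtime_checked(precondition=lambda x, y: y != 0)
def divide(x: int, y: int) -> float:
    return x / y


divide(10, 2)        # -> 5.0
# divide(10, 0)      # would raise ValueError: precondition failed
# divide("10", 2)    # would raise TypeError: x='10' is not a int
```

This only handles plain classes (no generics like list[int]), which is exactly the kind of check that's cheap enough to do per call; the real libraries go further.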
> Mypyc ensures type safety both statically and at runtime. [...] `Any` types and erased types in general can compromise type safety, and this is by design. Inserting strict runtime type checks for all possible values would be too expensive and against the goal of high performance.
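To make the quoted caveat concrete, here's a small illustrative sketch (plain Python, not compiled with mypyc; mypyc's exact behavior at `Any` boundaries may differ): a value typed `Any` can carry the wrong runtime type straight past the static checker.

```python
from typing import Any


def get_port(config: dict[str, Any]) -> int:
    # config["port"] is Any, so the static checker accepts returning it
    # as an int even if the dict actually holds a string.
    return config["port"]


port = get_port({"port": "8080"})  # passes type checking
print(type(port))                  # <class 'str'> at runtime, despite the `-> int` annotation
```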
I would not be screaming for blood. It is the world order he wants, and perhaps the only possible lesson in why we shouldn’t give him that world order.
How else would it know whether it has a plan now?