When I mentor new devs, I explain to them how I use git. Sometimes I show them the workflow in magit, which makes it easier to visualize things. But mostly I just show them how their intended actions map onto the relevant CLI commands, and I tell them to figure out how those map onto their porcelain of choice. I've developed this intuition thanks to magit, but I don't think magit is necessary. To me this approach seems preferable to onboarding new devs onto a new tool that is not the industry standard.
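For instance, a minimal "intended action → CLI command" mapping might look like this (the scenario and commit messages are hypothetical; the commands are standard git, exercised here in a throwaway repo):

```shell
# scratch repo so the demo is self-contained
cd "$(mktemp -d)" && git init -q
git -c user.email=a@b.c -c user.name=demo commit -q --allow-empty -m "first"
git -c user.email=a@b.c -c user.name=demo commit -q --allow-empty -m "oops"

# "I want to undo my last commit but keep my changes staged"
git reset --soft HEAD~1

# "What commits do I have now?"
git log --oneline
```

The point is that once the action-to-command mapping is internalized, whether it is typed at a shell or clicked in magit or any other porcelain is a detail.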
> This strikes me a lot like the C vs. safer programming language debate all over again.
I don't see how. Safer programming languages address a clear problem in C, with trade-offs (sometimes, arguably, the trade-offs may not be worth it, and in my experience that's what the debate tends to be about). If jj is a replacement for git, it should be clear what problem within git it aims to address. If the problem is the UX, then to me and many others it's not worth the trouble.
> Then I went back to git (it's been 6 months now) and I haven't had a single case of "this is so painful, I wish something better existed".
The core issues are: how long did it take you to get there, how many lucky decisions did you have to make to not run into git footguns, and how many other people accidentally made different choices and so have very different experiences from you?
What you're saying is that other people may find jj easier for them, right?
I am fine with that. I am just saying that the "you should use jj, you will finally stop shooting yourself in the foot regularly" doesn't work so well for me, because I don't remember shooting myself in the foot with git.
> A missing specification in the proof of lean-zip, a Lean component, is a real problem to the philosophy and practice of software verification.
Every time someone makes this point, I feel obliged to point out that all alternatives to software verification have this exact same problem, AND many, many more.
ROCm is so annoying (buggy, fiddly dependencies, limited hardware support) that TinyGrad built its own compiler and toolchain that target the hardware directly. And it has broader device support than ROCm, which seems focused primarily on AMD's datacenter GPUs.
The TinyGrad approach of going straight to the hardware is telling. Between that, Vulkan compute getting faster for inference (llama.cpp Vulkan backend is competitive now), and SYCL/oneAPI, it feels like the real threat to CUDA might not be ROCm at all but a fragmented set of alternatives that each bypass AMD's broken software stack entirely.
For Jevons paradox to be a win-win, you need these 3 statements to be true:
1) Workers get more productive thanks to AI.
2) Higher worker productivity translates into lower prices.
3) Most importantly, consumer demand needs to explode in reaction to lower prices. And we're finding out in real time that the demand is inelastic.
Around 1900, 40% of American workers worked in agriculture. Today, it's < 2%.
Which is similar to what we see with coding: demand for food did not explode enough to offset the job losses from each farmer being able to produce more food.
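A back-of-the-envelope sketch of why condition 3 is the crux (the numbers are illustrative assumptions, not data): if conditions 1 and 2 hold, so prices fall in proportion to productivity, then whether total labor demand grows or shrinks comes down to the price elasticity of demand.

```python
def labor_demand(productivity_gain, demand_elasticity):
    """Relative labor demand after a productivity gain, assuming prices
    fall in proportion to productivity (conditions 1 and 2) and demand
    responds with constant elasticity (condition 3)."""
    price_ratio = 1 / productivity_gain                   # prices fall
    quantity_ratio = price_ratio ** (-demand_elasticity)  # demand response
    return quantity_ratio / productivity_gain             # fewer workers per unit

# Elastic demand (|e| > 1): Jevons paradox, total labor demand grows.
print(labor_demand(2.0, 1.5))  # ~1.41, i.e. 41% more workers needed

# Inelastic demand (|e| < 1): jobs shrink despite the productivity gains.
print(labor_demand(2.0, 0.3))  # ~0.62, i.e. 38% fewer workers needed
```

The agriculture numbers above look like the inelastic case: food demand grew, but nowhere near fast enough to absorb the productivity gains.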
If an OpenAI model helped someone create a cancer cure, they wouldn't see a dime from that beneficial act. So why should they be liable if someone does something harmful with the model?
If an OpenAI model helped someone create a cancer cure, I guarantee they would try to profit from it as much as possible. They have even talked in the past about making partial ownership of discoveries made with AI part of the license. They would be all over that.
I'm sure that if they could, they would, as would any business. That's where competition enters the equation: they can't do it because their competitors would undercut them by requiring no such conditions.
Sure they would, just like people would use the bad PR to smear OpenAI if someone did something bad with knowledge their model produced. The situation is symmetrical and fair as it is; my point is that expecting them to be liable is asymmetric and unfair. If they can be held liable, then they should also be able to reap the rewards in order to offset those risks.
This is what I'd expect from companies - I don't see why Facebook would get money because they helped people connect to each other who ended up developing a cancer cure, but they definitely should be held accountable for enabling a genocide. You're allowed to operate a business until you cause harm to society, then we can shut it down.
I think the big thing you would need is to see the internal emails: if there was ever a case where someone raised a concern about this possibility and it wasn't taken seriously, then they should be liable. If they just never thought about it, it could still be negligence, but if I were on a jury I'd find that more reasonable than knowing it could be a problem and deciding you aren't responsible.
> I don't see why Facebook would get money because they helped people connect to each other who ended up developing a cancer cure, but they definitely should be held accountable for enabling a genocide.
Why? What does it even mean to "enable a genocide"? Just saying something isn't an argument.
> if there was ever a case where someone raised a concern about this possibility and it wasn't taken seriously, then they should be liable.
Again, why? How is this any different from electricity as a tool, which has both beneficial and harmful uses? AI is knowledge as a utility; that's the position here.
Making knowledge illegal is a dangerous precedent. Actions should be illegal, not knowledge. Don't outlaw knowing how to make neurotoxic agents, outlaw actually trying to make them.
As for OpenAI immunity, I'm not sure I see the problem. Consider the converse position: if an OpenAI model helped someone create a cancer cure, would OpenAI see a dime of that money? If they can't benefit proportionally from their tool allowing people to achieve something good, then why should they be liable for their tool allowing people to achieve something bad?
They're positioning their tool as a utility: ultimately neutral, like electricity. That seems eminently reasonable.
> 1. LLMs don't just provide knowledge, they provide recommendations, advice, and instructions.
That's knowledge.
> 2. OpenAI very much feels that they should profit from the results of people using their tools. Even in healthcare specifically [0].
If they're building a tailored tool for a specific person or company and that's the agreement they sign with the people who are going to use the tool, sure. I'm talking about their generic tool, AI as knowledge as a utility, which is the context of this legislation.
The point is valid, but that's typically the way it is. "You can't enjoy the benefit but the detriment is all yours" is how the federal government generally operates.