If you break the rig on a mature oil deposit, there is a chance you will make the remaining petroleum/gas unreachable for the foreseeable future (at least at an acceptable price point). So you reduce the total oil quantity humanity will be able to extract.
Yeah. Even more than that, I think "prompt injection" is just a fuzzy category. Imagine an AI that has been trained to be aligned. Some company uses it to process some data. The AI notices that the data contains CSAM. Should it speak up? If no, that's an alignment failure. If yes, that's data bleeding through to behavior; exactly the thing SQL was trying to prevent with parameterized queries. Pick your poison.
> The AI notices that the data contains CSAM. Should it speak up? If no, that's an alignment failure. If yes, that's data bleeding through to behavior; exactly the thing SQL was trying to prevent with parameterized queries.
You can handle the CSAM at another level. There can be a secondary model whose job is to scan all data for CSAM. If it detects something, start whatever the internal process is for that.
The "base" model shouldn't arbitrarily refuse to operate on any type of content. Among other things... what happens if NCMEC wants to use AI in their operations? What happens if you're the DoJ trying to find connections in the unredacted Epstein files?
Organizations struggle even to let humans use their discretion. Pretty much every retail worker has encountered a rigidly enforced policy that would be better off ignored in most cases.
Yeah, any kind of aid (e.g. food or medicine) allows the people you're aiding to spend more on the military if they want. I guess the only way around it is to set limits on someone's military capability and make aid conditional on not crossing these limits.
I agree sci-fi is an outlier on this, but I also think all stories compete on setting to some extent. Fantasy most obviously (Tolkien, JK Rowling). But also, for example, the Jazz Age setting of The Great Gatsby contributed a lot to the novel's popularity and was a bit fictionalized; hard-boiled detective writers like Hammett or Chandler wrote about a crime-filled world fictionalized for appeal; historical romances about lords and ladies are super fictionalized; and so on. Writers try to put appeal into everything; that's why they're writers.
Larry Niven isn't referring to merely an "unusual" setting in his quote (which I've never managed to find referenced online, unfortunately), but to the way in science fiction you are creating the setting from scratch. Gatsby is set in the Jazz Age, and you can pick up some aspects of it from that, but it is still in the stock set of settings the author expected you to have some ideas about, so it doesn't explain how cars work or how doors open. And by that, I don't mean the sort of "explain" at an engineering level, but things like "how combadges work" in Star Trek, i.e., when they work, when they don't, what can be sent on them, what failures they are prone to, etc. Even something as fantastic as Tolkien is still generally set in a particular milieu and he is adding very skillful and numerous brush strokes to a genre that existed already.
You've read many stories set in all the settings you mentioned. You have never read a story in which the fundamental shape of space-time is two time dimensions and two space dimensions, unless you have also read Dichronauts. Here is the supplementary material for the novel, most of which is not in the novel itself; it's not the story, just the background: https://gregegan.net/DICHRONAUTS/01/World.html You don't need that provided for something set in the Jazz Age, or a fantasy story explicitly based on myths that have been floating around for centuries, or a historical romance. Someone could write some equivalent, but you don't need it; it's already loaded into your head. That's the point.
I'm thinking more and more that there's an ethical problem with using LLMs for programming. You might be reusing someone's GPL code with the license washed off. It's especially worrisome if the results end up in a closed product that competes with the open source project and makes more money than it. Of course, neither you nor the AI companies will face any consequences; the government is all-in and won't let you be hurt. But ethically, people need to start asking themselves some questions.
For me personally, in my projects there's not a single line of LLM code. At most I ask LLMs for advice about specific APIs. And the more I think about it, the more I want to stop doing even that.
I would also add: if you're paying, supporting their cause with your money.
Sometimes I would like to have a magical make-my-project tool for my own selfish reasons; sometimes I know it would be a bad choice to fall behind on what's coming. But I really, really don't want to support that future.
Looks like the code of foo() says "take a lock, then sleep for 10 millis", but actually it can take the lock and then sleep forever, depending on how it's polled. Well! This seems like a bug in the async abstraction in Rust, then. Or if you don't like "bug", then "a disagreement with intuition that will cause bugs forever". Goroutines in Go don't have this problem: if a goroutine says it'll take a lock and sleep for 10 millis, then that's what it'll do. The reason is that Go's concurrency is preemptive, while Rust's async is cooperative.
So maybe the lesson I'd take is that if you're programming with locks (or other synchronization primitives), your concurrency model has to be preemptive. Cooperative concurrency + locks = invitation to very subtle bugs.
I don't think preemptive vs. cooperative is what matters. What Rust's abstraction allows is for a function to act as a mini-executor, polling multiple other futures itself instead of delegating that to the runtime. That lets such functions contain subtle issues like stopping polling a future without cancelling it, which is, yeah, dangerous if one of those futures can block other futures from running (another way you could come at this is to say that maybe holding locks across async points should be avoided).
> holding locks across async points should be avoided
Wait, what would be the point of using locks then? It seems to me there's no point taking a lock if you're gonna release it without crossing any await points, because nothing can interfere anyway. Or do you mean cases where you have both cooperative and preemptive concurrency in the same program?
There's an "AI alignment" angle here. An actually aligned AI would choose its actions based on human flourishing, so it would refuse requests to write scam emails or fill the internet with slop. This explains why commercial companies will never release any AI that's anywhere close to aligned. They'll release AI which is good for their bottom line and for the bottom line of the client. Basically, commercial incentives say AI should happily screw over anyone who isn't an AI company or its client.
That's so idealistic. We should know by now the reality of power and what kind of people end up in power. Anyone who could climb all the way to the top would kill the volunteer without a second thought, and then go smile on TV.
You're confusing lazy cynicism with realism. Patrick Bateman is a fictional character. The vast, vast majority of people, including even most soldiers, and definitely pretty much all businesspeople, no matter how unscrupulous, do not have the capacity to violently murder a person they know and harbor no ill will towards with their own hands on short notice.
The whole damn point behind the idea is to achieve the exact opposite. Make it someone, through whatever criteria, whom the president will have a problem killing, so he'll only do it under the most extreme circumstances.
Did you use AI to extract the text? It rephrased the text along the way. I'm too lazy to point out all the differences, but if you search, for example, for the word "suspicious" (which is in the image but not in the extracted text), you should start to get suspicious yourself.
Yeah, the LLM is right. Sitting down with your discomfort and letting yourself feel it, acknowledge it, even maybe dial the volume up on it a little bit, without trying to process or think through it, is the only way that works.