I'm surprised to see so little coverage of AI legislation news here tbh. Maybe there's a certain apathy and exhaustion around it. But if you're developing AI stuff, you need to keep on top of this. This is a pretty pivotal moment. NY has been busy with RAISE (frontier AI safety protocols, audits, incident reporting), S8420A (must disclose AI-generated performers in ads), GBL Article 47 (crisis detection & disclaimers for AI chatbots), S7676B (protects performers from unauthorized AI likenesses), NYC LL144 (bias audits for AI hiring tools), and the SAFE for Kids Act [pending] (restricts algorithmic feeds for minors). At least three of those are relevant even if your app only _serves_ people in NY. It doesn't matter where you're based. And that's just one US state's laws on AI.
It's kinda funny, the oft-held animosity towards the EU's heavy-handed regulations, when navigating US state law is a complete minefield of its own.
> Because no one believes these laws or bills or acts or whatever will be enforced.
Time will tell. Texas sat on its biometric data act quite quietly, then hammered Meta with a $1.4B settlement some 15 years after the bill's enactment. Once these laws are enacted, they lie quietly until someone has a big enough bone to pick with someone else. There are already plenty of traumatic events occurring downstream of slapdash AI development.
That's even worse, because then it's not really a law, it's a license for political persecution of anyone disfavored by whoever happens to be in power.
Every law is like this. Only fools and schoolchildren believe that the rule of law means anything other than selective punishment of those who displease the ruling class.
I agree that's how it currently is in the US, but I don't believe it's universally true, or that nothing could be done to change it if enough people resisted.
My statement has nothing to do with contemporary politics and is not unique in the slightest to the US. For an example you are likely sympathetic to, consider the experience of Pavel Durov since late 2024.
"Every law" seems like a huge exaggeration. Assuming for a moment we agree Pavel is a victim of selective prosecution, notice they're not charging him with a clear, straightforward crime like murder, they're charging him with things like[1] failing to prevent illicit activity on Telegram, and "provision of cryptology services [...] without a declaration of conformity". Those laws seem far more prone to abuse as a tool for selective prosecution than most others. (Some of the things he's charged with don't even sound to me like they should be illegal in the first place.)
Every law, in the cumulative sense: the ‘rule of law’ system has the same “show me the man and I’ll show you the crime” property that Beria’s system did.
I see this as roughly equivalent to amortized big-O complexity. If I push to a vector repeatedly, I will sometimes incur a significant O(n) reallocation cost, but most of the time it's still O(1).
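A toy sketch of that amortized-cost idea (the class and the numbers are made up purely for illustration):

```python
# Occasional O(n) reallocations, but total copy work stays linear in the number of pushes.
class DynamicArray:
    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.copies = 0  # total elements copied across all reallocations

    def push(self, _item):
        if self.size == self.capacity:
            self.copies += self.size  # the occasional expensive O(n) step
            self.capacity *= 2
        self.size += 1

arr = DynamicArray()
for i in range(1_000_000):
    arr.push(i)
print(arr.copies / arr.size)  # ~1 copy per push on average, i.e. amortized O(1)
```

The parallel: the rare expensive event gets absorbed into the average, so it barely changes the overall cost of the strategy.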
Similarly, if Meta violates the law, and is infrequently fined a small fraction of their revenue by a small number of governments, in general it will not be a big deal for them.
You also have to ask "how much is the specific thing in the lawsuit worth to Meta?"
I don't know how much automatically opting everyone in to automatic photo tagging made Meta, but I assume it's "less than 100% of their revenue".
Barring the point of contention being integral to the business's revenue model, or the company's management being infected with oppositional defiant disorder, a lawsuit is just an opportunity for some middle manager and their team to get praised for making a revenue-negative change that reduces the risk of future fines.
Work like that is a gold mine; several people will probably get promoted for it.
I think time is different because it's finite. I admit I'll still opt for store brand to save a few bucks even making an engineering salary. But I'll also do something "illegal" (like parking at a metered spot without paying) to save time or otherwise do what I want and just deal with whatever financial cost incurred if I know it won't break me.
A saying I've heard is that if the punishment for a crime is financial, then it is only a deterrent for those who lack the means to pay. Small business gets caught doing bad stuff, a $30k fine could mean shutting down. Meta gets caught doing bad stuff, a billion dollar fine is almost a rounding error in their operational expenses.
Wow, it's always amazing to me how the law of unintended consequences (with capitalistic incentives acting as the Monkey's Paw) strikes every time some well-intended new law gets passed.
I don't like the opposite any more though, i.e. commercial food being effectively limited to the lowest common denominator of allergens and other dietary as well as religious restrictions. I see that happen a lot more than this one example and it doesn't even need any laws to cause it.
There just can’t be a way to discriminate on the spectrum from “we use AI to tidy up the spelling and grammar” to “we just asked ChatGPT to write a story on x”, so the disclaimer will make it look like everyone just asked ChatGPT.
>There just can’t be a way to discriminate on the spectrum from “we use AI to tidy up the spelling and grammar” to “we just asked ChatGPT to write a story on x”
Why, though? Whether the AI played the role of an editor or the role of a reporter seems like a clear distinction to me, and likely to anyone else familiar enough with how journalism works.
People know what it _should_ mean, but if you say that it’s fine to have an AI editor, then there will be a bunch of people saying something like “my reporting is that x is a story, and my editor, ChatGPT, just tidied that idea up into a full story”. There’s all sorts of hoops people can jump through like that. So you end up putting a banner on all AI, or only penalizing the honest people who follow the distinction that’s supposed to exist.
Fair enough, but my main response to that is that people need to support independent journalism. It's entirely possible I'm paying some fraud(s), but as someone who certainly spends more than the average person on online journalism, I trust the people I support at the very least know that putting their byline on an AI written article would be a career destroying scandal in the eyes of their current audience.
I'm fine with that. I want neither AI-hallucinated stories nor AI-expanded fluff. If it's not worth it for a real human editor it's probably not worth reading.
I just came across this for the first time. I ordered a precision screwdriver kit and it came with a cancer warning on it. I was really taken aback, and then I learned about this.
Some legislation that sounds good in concept and is well-intended ends up having little to no positive impact in practice. But it still leaves businesses with ongoing compliance costs/risks, taxpayers footing the bill for an enforcement bureaucracy forever, and consumers with either annoying warning interruptions or yet more 'warning message noise'.
It's odd that legislators seem largely incapable of learning from the rich history of past legislative mistakes. Regulation needs to be narrowly targeted, clearly defined, and have someone smart actually think through how the real world will implement compliance, as well as identify likely unintended consequences and perverse incentives. Another net improvement would be for any new regs to carry an automatic sunset provision, so they need to be renewed a few years later under a process that makes it easy to revise or relax certain provisions.
If you don't notice then it was probably not something you considered essential. Breaking the tracking of you and your personal information is kind of the point.
I do believe this is an unfair comparison. With tobacco the warnings are always true, but with Prop 65 the product might not contain any cancer-causing ingredients at all; the warning is there just in case.
It's much easier to tell yourself a Prop 65-labeled product doesn't have to be avoided because "it's probably just there to cover their asses," while tobacco products have real warnings that definitely mean danger (though there are people who convince themselves otherwise).
Also, even if there's a Prop 65 warning because there are cancer-causing ingredients, those ingredients may not be user-accessible, or may be in small enough quantities that they'd statistically never result in cancer even with lifetime use by every human on the planet. E.g. lead in a circuit board inside an IP68-rated sealed device would require a Prop 65 warning even though it won't pose any cancer risk to the user unless they grind up the device and ingest or inhale the lead.
But that is because the requirement is binary - warning vs. no warning. This problem doesn't happen if the requirement is to disclose what was used although it could still lead to other issues.
I don't know of anyone (seriously not one person) who actually believes those labels. And the reason why is precisely because the government was foolish enough to put them on everything under the sun. Now nobody listens to them because the seriousness got diluted.
The primary obstacle is discussions like this one. It will be enforced if people insist it's enforced - the power comes from the voters. If a large portion of the population - especially the informed population, represented to some extent here on HN - thinks it's hopeless, then it will be. If they believe they can get together to make it succeed, it will. It's that simple: what people believe is the number one determinant of the outcome. Why do you think so many invest so much in manipulating public opinion?
Many people here love SV hackers who have done the impossible, like Musk. Could you imagine this conversation at an early SpaceX planning meeting? That was a much harder task, requiring inventing new technology and enormous sums of money.
Lots of regulations are enforced and effective. Your food, drugs, highways, airplane flights, etc. are all pretty safe. Voters compelling their representatives is commonplace.
It's right out of the psyops playbook to get people to despair - look at the messages militaries aim at opposing troops. If those opposing this bill created propaganda, it would look like the comments in this thread.
> Because no one believes these laws or bills or acts or whatever will be enforced.
That’s because they can’t be.
People assume they’ve already figured out how AI behaves and that they can just mandate specific "proper" ways to use it.
The reality is that AI companies and users are going to keep refining these tools until they're indistinguishable from human work whenever they want them to be.
Even if the models still make mistakes, the idea that you can just ban AI from certain settings is a fantasy because there’s no technical way to actually guarantee enforcement.
You’re essentially passing laws that only apply to people who volunteer to follow them, because once someone decides to hide their AI use, you won't be able to prove it anyway.
> the idea that you can just ban AI from certain settings is a fantasy because there’s no technical way to actually guarantee enforcement.
By that token bans on illegal drugs are fantasy. Whereas in fact, enforcement doesn't need to be guaranteed to be effective.
There may be few technical means to distinguish at the moment. But could that have something to do with a lack of motivation? Let's see how much "AI" $$$ suddenly becomes available to this once the law provides the incentive.
I've always wanted to try two specific ones, but the first can't be had in its safest form because of the precursor ban, and all of them carry an insane (to me) risk of adulteration.
In twenty minutes I could probably find 10 "reputable" shops/markets, but still with zero guarantee I wouldn't get the specific thing laced with something for strength.
Even if I wanted pot (I don't; I found it repetitive and extremely boring, except for one experience), I would have to grow it myself (the stench!), but then... where do I find sane seeds (a healthy CBD-to-THC ratio)?
Similarly, I wouldn't buy moonshine from someone risking prosecution to make and sell it. It's guaranteed that risk gets offset somewhere.
So... I can't get what I want, because there's an extremely high chance of getting hurt. One example: poisoning from pills sold as MDMA - at every music festival, multiple people get hurt. Not by the molly itself, but by the additives.
I don't want random weed. I could easily get that on the street (there are several places with the distinct smell), and I know at least 3-4 people who smoke.
But I want it safe (not sprinkled with PCP/fentanyl) and sane (not engineered for a 'kick').
I don't know anyone who's a cultivator themselves :)
Sure they can be enforced. Your comment seems to be based on the idea of detecting AI writing from the output. But you can enforce this law based on the way content is created. The same way you can enforce food safety laws from conditions of the kitchen, not the taste of the food. Child labor laws can be enforced. And so on.
Unless you're trying to tell me that writers won't report on their business that's trying to replace them with AI.
The idea that you can just ban drinking and driving is a fantasy because there’s no technical way to actually guarantee enforcement.
I know that sounds ridiculous but it kind of illustrates the problem with your logic. We don’t just write laws that are guaranteed to have 100% compliance and/or 100% successful enforcement. If that were the case, we’d have way fewer laws and little need for courts/a broader judicial system.
The goal is getting most AI companies to comply and making sure that most of those that don’t follow the law face sufficient punishment to discourage them (and others). Additionally, you use that opportunity to undo what damage you can, be it restitution or otherwise for those negatively impacted.
No, that doesn't really work so well. A lot of the LLM style hallmarks are still present when you ask them to write in another style, so a good quantitative linguist can find them: https://hdsr.mitpress.mit.edu/pub/pyo0xs3k/release/2
That was with GPT-4, but my own work with other LLMs shows they have very distinctive styles even if you specifically prompt them with a chunk of human text to imitate. I think instruction-tuning on tasks like summarization predisposes them to certain grammatical structures, so their output is consistently more information-dense and formal than humans'.
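A minimal sketch of the kind of surface features a stylometric comparison might look at (the feature set and the example sentence here are illustrative, not the linked paper's actual method):

```python
import re
from collections import Counter

FUNCTION_WORDS = {"the", "of", "and", "to", "in", "that", "it", "is", "was", "for"}

def style_features(text):
    # crude surface statistics that tend to differ between instruction-tuned
    # LLM output and human prose: sentence length, lexical variety, function-word rate
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "type_token_ratio": len(counts) / max(len(words), 1),
        "function_word_rate": sum(counts[w] for w in FUNCTION_WORDS) / max(len(words), 1),
    }

# Compare a suspect article's profile against a corpus of the outlet's known-human pieces.
print(style_features("Furthermore, it is important to note that the council delved into the budget."))
```

In practice you'd build profiles over many documents and many more features, but the principle is the same: the comparison is statistical, not a single smoking-gun phrase.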
> passing laws that only apply to people who volunteer to follow them
That's a concerning lens through which to view regulations. It's obviously true, but it's true of all laws. Regulations don't apply only to immediately observable offenses.
There are lots of bad actors and instances where the law is ignored because getting caught isn't likely. Those are conspiracies! They get harder to maintain as more people get involved, which is the reason for whistle-blower protections.
VW's Dieselgate[1] comes to mind albeit via measurable discrepancy. Maybe Enron or WorldCom (via Cynthia Cooper) [2] is a better example.
But most regulations are, and can be, enforced because the perpetrator can simply be caught. That’s the difference. This is not enforceable in any meaningful way. The only way it could change anything would be through whistleblowers, for example someone inside a major outlet like the New York Times reporting to authorities that AI was being used. On the contrary, if you systematically create laws that are, by their nature, impossible to enforce, you weaken trust in the law itself by turning it into something that exists more on paper than in reality.
* I suspect many existing and reasonable regulations do not meet that "simply caught" classification. @rconti's comment above[1] gives some examples of regulations on process that are not observed in the output (food, child labor). I'll add accounting, information control (HIPAA, CUI, etc), environmental protections.
* Newsroom staff is incentivized to enforce the regulation. It protects their livelihood. From the article:
> Notably, the bill would cement some labor protections for newsroom workers
* Mandatory AI labeling is not impossible to enforce. At worst, it requires random audits (who was paid to write this story, do they attest to doing so). At best, it encourages preemptive provenance tracking (that could even be accessible to the news consumer! I'd like that).
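A hypothetical example of what such a provenance record could look like (field names invented for illustration; the bill doesn't specify a format):

```python
# One record a publisher could attach to each story and surface to readers or auditors.
article_provenance = {
    "headline": "City council passes budget",
    "byline": "Jane Reporter",
    "human_attestation": True,                    # named author attests to writing the piece
    "ai_assistance": ["spelling/grammar check"],  # disclosed AI involvement, if any
    "editor_of_record": "John Editor",
    "published": "2025-01-15",
}
```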
One reason for the regulation is we fear hallucinations slipping into the public record -- even if most LLM usage is useful/harmless. Legal restrictions ideally prevent this, but also give a mechanism for recourse when it does happen.
Say a news story goes off the rails and reports a police officer turned into a frog [2] or makes up some law[3]. Someone thinks that's odd and alerts whatever authority. The publisher can be investigated, reprimanded, and ideally motivated to provide better labeling or QC on their LLM usage.
Probably worse than that. I can totally see it being weaponized: a media company critical of a particular group or individual being scrutinized and fined. I haven't looked at any of these laws, but I bet their language gives plenty of room for interpretation and enforcement, perhaps even if you are not generating any content with AI.
>But I actually believe they'll be. In the worst way possible: honest players will be punished disproportionally.
As with everything else, BigCo and their legal team will explain to the enforcers why their "right up to the line, if not over it" solution is compliant, while MediumCo and SmallCo will be the ones getting fined, or forced to waste money staying far from the line, or paying a third party to do what BigCo's legal team does at cost.
Highly selective enforcement along partisan lines to suppress dissent. Government officials forcing you to prove that your post is not AI generated if they don't like it. Those same officials claiming that it is AI generated regardless of the facts on the ground to have it removed and you arrested.
If you assume the use of law will be that capricious in general, then any law at all would be considered too dangerous for fear of use as a partisan tool.
Why accuse your enemies of using AI-generated content in posts? Just call them domestic terrorists for violently misleading the public via the content of their posts and send the FBI or DHS after them. A new law or lack thereof changes nothing.
> SAFE for Kids Act [pending] (restricts algorithmic feeds for minors).
i personally would love to see something like this but changed a little:
for every user (not just minors), require a toggle: an upfront, not buried, always-in-your-face toggle to turn off algorithmic feeds, where you’ll only see posts from people you follow, in the order in which they posted. again, no dark patterns; once a user toggles to a non-algorithmic feed, it should stick.
this would do a lot to restore trust. i don’t really use the big social media sites much anymore, but when i did, i cannot tell you how many posts i missed because the algorithms are kinda dumb af. like i missed friends’ anniversary celebrations, events that were right up my alley, community projects, etc… because the algorithms didn’t think the posts announcing them would be addictive enough for me.
no need to force it “for the kids” when they can just give everyone the choice.
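a rough sketch of what that toggle boils down to (the schema and function names here are made up for illustration):

```python
# following-only, reverse-chronological feed vs. the usual engagement-ranked one
def rank_for_engagement(user, posts):
    # stand-in for whatever opaque ranking the platform normally applies
    return sorted(posts, key=lambda p: p.get("predicted_engagement", 0), reverse=True)

def build_feed(user, posts, follows, algorithmic_feed_enabled):
    if not algorithmic_feed_enabled:
        # only people you follow, in the order they posted; no ranking, no injected posts
        visible = [p for p in posts if p["author_id"] in follows[user]]
        return sorted(visible, key=lambda p: p["created_at"], reverse=True)
    return rank_for_engagement(user, posts)
```

the hard part isn't the code, it's making the toggle sticky and keeping dark patterns from quietly flipping it back.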
None of those bills/laws involve legislating publishing, though. This bill would require a disclaimer on something published. That’s a freedom of speech issue, so it’s going to be tougher to enforce and to keep from getting overturned in the courts. The question here is what limits the government can place on what a company publishes, regardless of how the content is generated.
IMO, It’s a much tougher problem (legally) than protecting actors from AI infringement on their likeness. AI services are easier to regulate.. published AI generated content, much more difficult.
The article also mentions efforts by news unions or guilds. This might be a more effective mechanism. If a person/union/guild required members to add a tagline to their content/articles, this would have a similar effect - showing what is and is not AI content without restricting speech.
Not thrilled about it, and I personally would rather see them repealed. I will concede compelled speech impositions have been interpreted more generously when they are commercial. I don't necessarily agree with it, but even if we concede they can happen, I hope that distinction is made for commercial vs non-commercial content. Though I'm not thrilled with it happening for either.
I agree in general, and that should be the position, but it's probably more nuanced in practice: who is the publisher when a dev writes a script that just spits junk into the wild or reinforces someone else's troll-speech?
In general, I think LLM content has been found not to be copyrightable, but it would still be speech when it's published. It would be the speech of the company publishing it, not of the dev that wrote the script. So, ai-junk-news.com is still publishing some kind of speech, even if an LLM wrote it. At least, that would be my interpretation.
I'll bet AI is going to be simply outlawed for hiring, and possibly algorithmic hiring practices altogether. You can't audit a non-deterministic system unless you train the AI from scratch, which is an expense only the wealthiest companies can take on.
Don't ding the amusingly scoped animosity; it's very convenient: we get to say stuff like "Sure, our laws may keep us at the mercy of big corps, unlike these other people, BUT..." and have a ready rationalization for why our side is actually still superior when you look at it. Imagine what would happen if the populace figured out it's getting collectively shafted in a way others may not be.
I believe it’s because it will be impossible to enforce. It might have some teeth with LLMs that add watermarks to their images but otherwise you could have one human in the loop for 10,000 articles and not call it AI.
I honestly just don't see any point in these laws, because they're all predicated on the people who own the AIs acting in good faith. In a way I actually think they're a net negative, because they seem to give a false impression that these problems have an obvious solution.
One of the most persistent and also the dumbest opinion I keep seeing both among laymen and people who really ought to know better is that we can solve the deepfake problem by mandating digital watermarks on generated content.
~Everything will use AI at some point. This is like requiring a disclaimer for using Javascript back when it was introduced. It's unfortunate but I think ultimately a losing battle.
Plus, if you want to mandate it, hidden markers (steganography) emitted directly by the model, so people can independently verify which model generated the text and whether an article was written by a human, are probably the only feasible way. But it's not like humans are impartial when writing news anyway, so I don't even see the point of that.
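For what it's worth, a toy sketch of one such marker scheme (a "green-list" token watermark; the vocabulary, hashing, and threshold are made up for illustration and don't reflect any vendor's actual method):

```python
import hashlib
import random

VOCAB = ["the", "a", "of", "and", "to", "report", "city", "council", "vote", "budget"]

def green_list(prev_token, vocab=VOCAB, fraction=0.5):
    # deterministically pick a "green" subset of the vocab, keyed on the previous token
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = sorted(vocab)
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def watermark_score(tokens):
    # fraction of tokens that land in the green list keyed on their predecessor;
    # a watermarking model biases generation toward green tokens, so its output
    # scores well above the ~0.5 expected by chance
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(1 for prev, cur in pairs if cur in green_list(prev))
    return hits / max(len(pairs), 1)

print(watermark_score("the city council vote on the budget".split()))
```

The obvious catch, which is the point above: this only identifies cooperating models, and light paraphrasing or a non-watermarking model defeats it.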
It would make sense to have a more general law about accountability for the contents of news. If news is significantly misleading or plagiarizing, it shouldn’t matter if it is due to the use of AI or not, the human editorship should be liable in either case.
This is already a concept in at least some EU countries: there always has to be one person responsible, in terms of press law, for what is being published.
That's government censorship, and it's not allowed here, unlike in the EU. As for plagiarism, every single major news outlet is guilty of it in basically every single article. Have you ever seen the NYT cite a source?
You’re still allowed to say virtually anything you want if you make it clear that it’s an opinion and not news reporting.
Not citing sources doesn’t imply plagiarism, as long as you don’t misrepresent someone else’s research as your own (such as in an academic paper). Giving an account of news that you heard elsewhere in your own words isn’t plagiarism. The hurdles for plagiarism are generally relatively high.
If a news person in the USA publishes something that's actually criminal, then the corporate veil can be pierced. If the editor printed CSAM they would be in prison lickety-split. Unless they have close connections to the executive.
Most regulations around disclaimers in the USA are just civil and the corporate veil won't be pierced.
I agree with that the most. That's why I added the bit about humans. In the end, if what you're writing isn't sourced properly or is too biased, it shouldn't matter whether AI is involved or not. The truth is what matters most with news.
> I'm surprised to see so little coverage of AI legislation news here tbh.
I think the reason is that most people don't believe, at least on sufficiently long time scales, that legacy states are likely to be able to shape AI (or, for that matter, the internet). The legitimacy of the US state appears to be in a sort of free fall, for example.
It takes a long time to fully (or even mostly) understand the various machinations of legislative action (let alone executive discretion, and then judicial interpretation), and in that time, regardless of what happens in various capitol buildings, the tests pass and the code runs - for better and for worse.
And even amidst a diversity of views/assessments of the future of the state, there seems to be near consensus regarding the underlying impetus: obviously humans and AI are distinct, and hearing the news from a human, particularly a human with a strong web-of-trust connection in your local society, is massively more credible. What's not clear is whether states have a role to play in lending clarity to the situation, or whether that will happen of the internet's own accord.