
> Because no one believes these laws or bills or acts or whatever will be enforced.

That’s because they can’t be.

People assume they’ve already figured out how AI behaves and that they can just mandate specific "proper" ways to use it.

The reality is that AI companies and users are going to keep refining these tools until they're indistinguishable from human work whenever they want them to be.

Even if the models still make mistakes, the idea that you can just ban AI from certain settings is a fantasy because there’s no technical way to actually guarantee enforcement.

You’re essentially passing laws that only apply to people who volunteer to follow them, because once someone decides to hide their AI use, you won't be able to prove it anyway.





> the idea that you can just ban AI from certain settings is a fantasy because there’s no technical way to actually guarantee enforcement.

By that token, bans on illegal drugs are a fantasy. In fact, enforcement doesn't need to be guaranteed to be effective.

There may be few technical means to distinguish AI output at the moment. But could that have something to do with a lack of motivation? Let's see how many "AI" $$$ suddenly become available to the problem once this law provides the incentive.


> By that token bans on illegal drugs are fantasy.

I think you have this exactly right. They are mostly enforced against the poor and political enemies.


Well, considering how ineffective the War on Drugs has been, is that really a great analogy?

> considering how ineffective the War on Drugs has been

Relative to no war on drugs? Who knows.


Has there ever been a single person who wants an illegal drug that couldn’t get one because it was illegal?

A quick Google search suggests that less than 3% of illegal drugs are intercepted by the government.


Me. There are four I want. All very safe.

I've always wanted to try two specific ones, but the first can't be had in its safest form because of a precursor ban, and all of them carry an insane (to me) risk of adulteration.

In twenty minutes I could probably find 10 "reputable" shops/markets, but still with zero guarantee I won't get the thing laced with something for extra strength.

Even if I wanted pot (I don't; I found it repetitive and extremely boring, except for one experience), I would have to grow it myself (the stench!), but then... where would I find sane seeds (a healthy CBD-to-THC ratio)?

Similarly, I wouldn't buy moonshine from someone risking prosecution to make and sell it. You can be sure that risk gets offset somewhere, usually in quality.

So... I can't get what I want, because there's an extremely high chance of getting hurt. One example: poisonings from pills sold as MDMA; at every music festival, multiple people are hurt. Not by the MDMA itself, but by the additives.


I’m absolutely positive that someone in your 1st- or 2nd-degree social circle could get you weed if you wanted it.

I don't want random weed. I could easily get it on the street (there are several places with the distinct smell), and I know at least 3-4 people who smoke.

But I want it safe (not PCP/fentanyl-sprinkled) and sane (not engineered for a 'kick').

I don't know anyone who's a cultivator themselves :)


Sure they can be enforced. Your comment seems to be based on the idea of detecting AI writing from the output alone. But you can enforce this law based on the way content is created, the same way you enforce food safety laws from the conditions of the kitchen, not the taste of the food. Child labor laws are enforced the same way. And so on.

Unless you're trying to tell me that writers won't report a business that's trying to replace them with AI.


> You’re essentially passing laws that only apply to people who volunteer to follow them ...

Like every law ever passed (well, not quite, but you get the picture!) [1]

1. https://en.wikipedia.org/wiki/Consent_of_the_governed


The idea that you can just ban drinking and driving is a fantasy because there’s no technical way to actually guarantee enforcement.

I know that sounds ridiculous, but it illustrates the problem with your logic. We don't write only laws that are guaranteed to get 100% compliance and/or 100% successful enforcement. If that were the case, we'd have far fewer laws and little need for courts or a broader judicial system.

The goal is getting most AI companies to comply, and making sure that those that don't face punishment sufficient to discourage them (and others). Additionally, you use that opportunity to undo what damage you can, be it through restitution or otherwise, for those negatively impacted.


C2PA-enabled cameras (Sony Alpha range, Leica, and the Google Pixel 10) sign the digital images they record.

So legislators, should they so choose, could demand that source material be recorded on C2PA-enabled cameras and that the original recordings be produced on demand.
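For illustration, here's roughly what a publisher-side check could look like. This is only a sketch: it assumes the open-source `c2patool` CLI from the Content Authenticity Initiative and its default behavior of printing the manifest store as JSON, and the exact field names may differ across versions:

    # Sketch: does this image carry a C2PA manifest, and who signed it?
    # Assumes the `c2patool` CLI is installed and prints the manifest
    # store as JSON by default; field names may vary by version.
    import json
    import subprocess
    import sys

    def c2pa_manifest(path: str) -> dict | None:
        """Return the parsed C2PA manifest store, or None if absent/invalid."""
        result = subprocess.run(["c2patool", path], capture_output=True, text=True)
        if result.returncode != 0:
            return None  # no manifest, or signature validation failed
        return json.loads(result.stdout)

    if __name__ == "__main__":
        manifest = c2pa_manifest(sys.argv[1])
        if manifest is None:
            print("No C2PA provenance; origin cannot be attested.")
        else:
            # "active_manifest" names the most recent claim in the store.
            active = manifest["manifests"][manifest["active_manifest"]]
            print("Claim generator:", active.get("claim_generator"))
            print("Signed by:", active.get("signature_info", {}).get("issuer"))

The hard part is that the chain only holds if every editing tool in the pipeline (crop, color correction, layout) participates and re-signs its output; one non-participating tool breaks provenance.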


And you can easily prompt your way out of the typical LLM style. “Written in the style of Cormac McCarthy’s The Road”

No, that doesn't really work so well. A lot of the LLM style hallmarks are still present when you ask them to write in another style, so a good quantitative linguist can find them: https://hdsr.mitpress.mit.edu/pub/pyo0xs3k/release/2

That was with GPT-4, but my own work with other LLMs shows they have very distinctive styles even when you specifically prompt them with a chunk of human text to imitate. I think instruction-tuning on tasks like summarization predisposes them to certain grammatical structures, so their output is consistently more information-dense and formal than humans'.
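As a toy illustration of the kind of features a quantitative linguist checks (this is not the linked paper's method, just the general idea): function-word rates and sentence-length statistics are largely topic-independent, so a style prompt tends not to move them much:

    # Toy stylometry sketch (illustrative, not the linked paper's method):
    # style prompts change vocabulary and tone, but low-level statistics
    # like these tend to survive.
    import re
    from statistics import mean, pstdev

    FUNCTION_WORDS = {"the", "of", "and", "to", "a", "in", "that", "is", "it", "as"}

    def style_features(text: str) -> dict:
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[a-z']+", text.lower())
        sent_lens = [len(re.findall(r"[a-z']+", s.lower())) for s in sentences]
        return {
            # Function-word rate: topic-independent, hard to fake on purpose.
            "function_word_rate": sum(w in FUNCTION_WORDS for w in words) / len(words),
            # Sentence-length variance: humans are "burstier" than most LLMs.
            "sentence_len_mean": mean(sent_lens),
            "sentence_len_stdev": pstdev(sent_lens),
            # Type-token ratio: crude lexical-diversity measure.
            "type_token_ratio": len(set(words)) / len(words),
        }

    # Compare a suspect passage against a known human baseline:
    # print(style_features(suspect_text), style_features(baseline_text))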


This still doesn't remove all the slop. You need sampler tricks or fine-tuning for it. https://arxiv.org/abs/2510.15061
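To gesture at what a sampler trick means here (a toy sketch of the general idea, not the linked paper's technique): you can penalize the logits of known slop tokens before sampling, e.g.:

    # Toy "anti-slop" sampler sketch (illustrative only): down-weight
    # tokens on a slop list before drawing the next token.
    import math
    import random

    SLOP_TOKENS = {"delve", "tapestry", "testament", "furthermore"}

    def sample_with_penalty(logits: dict[str, float], penalty: float = 5.0) -> str:
        """Sample one token, subtracting `penalty` from listed tokens' logits."""
        adjusted = {
            tok: logit - (penalty if tok in SLOP_TOKENS else 0.0)
            for tok, logit in logits.items()
        }
        # Softmax over the adjusted logits, then draw one token.
        z = max(adjusted.values())
        weights = {t: math.exp(v - z) for t, v in adjusted.items()}
        total = sum(weights.values())
        tokens = list(weights)
        return random.choices(tokens, [weights[t] / total for t in tokens])[0]

    # sample_with_penalty({"delve": 2.1, "dig": 1.9, "look": 1.5})
    # -> almost always "dig" or "look"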

> passing laws that only apply to people who volunteer to follow them

That's a concerning lens through which to view regulations. It's obviously true, but it's true of all laws. Regulations don't apply only to immediately observable offenses.

There are lots of bad actors, and lots of instances where the law is ignored because getting caught isn't likely. Those are conspiracies! They get harder to maintain as more people get involved, which is the reason for whistle-blower protections.

VW's Dieselgate [1] comes to mind, albeit one caught via a measurable discrepancy. Maybe Enron, or WorldCom (via Cynthia Cooper) [2], is a better example.

[1]: https://en.wikipedia.org/wiki/Volkswagen_emissions_scandal [2]: https://en.wikipedia.org/wiki/MCI_Inc.#Accounting_scandals


But most regulations are, and can be, enforced because the perpetrator can simply be caught. That's the difference. This one is not enforceable in any meaningful way. The only way it could change anything would be through whistleblowers, for example someone inside a major outlet like the New York Times reporting to authorities that AI was being used. Worse, if you systematically create laws that are by their nature impossible to enforce, you weaken trust in the law itself by turning it into something that exists more on paper than in reality.

  * I suspect many existing and reasonable regulations do not meet that "simply caught" classification. @rconti's comment above [1] gives some examples of regulations on process that are not observable in the output (food, child labor). I'll add accounting, information control (HIPAA, CUI, etc.), and environmental protections.

  * Newsroom staff is incentivized to enforce the regulation. It protects their livelihood. From the article: 
  > Notably, the bill would cement some labor protections for newsroom workers 

  * Mandatory AI labeling is not impossible to enforce. At worst, it requires random audits (who was paid to write this story, and do they attest to doing so?). At best, it encourages preemptive provenance tracking (which could even be accessible to the news consumer! I'd like that; see the sketch below).
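To make that concrete, a machine-readable provenance record attached to each story could be as simple as the following. Every field name here is hypothetical, invented for illustration; the bill doesn't specify a format:

    # Hypothetical per-story provenance record (field names invented for
    # illustration; not from the bill or any existing standard).
    import hashlib
    import json

    story_text = "..."  # the published article body

    provenance = {
        "story_id": "2026-02-11-city-council-vote",
        "byline": "Jane Reporter",
        "human_attestation": True,          # author attests to writing the piece
        "ai_assistance": ["copy-editing"],  # disclosed AI uses, if any
        "content_sha256": hashlib.sha256(story_text.encode()).hexdigest(),
    }

    # Published alongside the story; an auditor (or reader) can re-hash
    # the text and check it against the record.
    print(json.dumps(provenance, indent=2))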
One reason for the regulation is the fear of hallucinations slipping into the public record, even if most LLM usage is useful or harmless. Legal restrictions ideally prevent this, but they also give a mechanism for recourse when it does happen.

Say a news story goes off the rails and reports that a police officer turned into a frog [2] or makes up some law [3]. Someone thinks that's odd and alerts the relevant authority. The publisher can be investigated, reprimanded, and ideally motivated to provide better labeling or QC on their LLM usage.

[1]: https://news.ycombinator.com/item?id=46915463 [2]: https://www.wate.com/news/ai-generated-police-report-says-of... [3]: https://www.reuters.com/legal/litigation/judge-fines-lawyers...


Indistinguishable, no. Not these tools.

Without emotion, without love and hate and fear and struggle, only a pale imitation of the human voice is or will be possible.



