
I have no idea how an LLM company can make any argument that their use of content to train models is allowed that doesn't apply equally to distillers using LLM output.

"The distilled LLM isn't stealing the content from the 'parent' LLM, it is learning from the content just as a human would, surely that can't be illegal!"...


The argument is that converting static text into an LLM is sufficiently transformative to qualify for fair use, while distilling one LLM's output to create another LLM is not. Whether you buy that or not is up to you, but I think that's the fundamental difference.

The whole notion of 'distillation' at a distance is extremely iffy anyway. You're just training on LLM chat logs, but that's nowhere near enough to even loosely copy or replicate the actual model. You need the weights for that.
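
To make the difference concrete, here's a minimal PyTorch sketch (my own illustration, not anyone's actual pipeline; the function names are made up). Classic distillation matches the teacher's full next-token distribution, which means running the teacher yourself, i.e. having its weights. Training on chat logs only gives you the sampled text, which reduces to ordinary fine-tuning:

    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, T=2.0):
        # Classic distillation: match the teacher's full next-token
        # distribution (soft targets). Requires running the teacher
        # yourself, i.e. having its weights.
        soft_targets = F.softmax(teacher_logits / T, dim=-1)
        student_log_probs = F.log_softmax(student_logits / T, dim=-1)
        return F.kl_div(student_log_probs, soft_targets,
                        reduction="batchmean") * T * T

    def chat_log_loss(student_logits, sampled_token_ids):
        # "Distillation at a distance": all you have is the text the
        # API returned, so this is just ordinary next-token fine-tuning
        # on hard labels. No weights, no logits, no distribution.
        return F.cross_entropy(
            student_logits.view(-1, student_logits.size(-1)),
            sampled_token_ids.view(-1))

The second loss sees one sampled token per position; the first sees the whole probability distribution, which is exactly the information a chat API never exposes.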

> The U.S. Court of Appeals for the D.C. Circuit has affirmed a district court ruling that human authorship is a bedrock requirement to register a copyright, and that an artificial intelligence system cannot be deemed the author of a work for copyright purposes

> The court’s decision in Thaler v. Perlmutter, on March 18, 2025, supports the position adopted by the United States Copyright Office and is the latest chapter in the long-running saga of an attempt by a computer scientist to challenge that fundamental principle.

I, like many others, believe the only way AI won't immediately get enshittified is by fighting tooth and nail for LLM output to never be copyrightable.

https://www.skadden.com/insights/publications/2025/03/appell...


Thaler v. Perlmutter is a weird case because Thaler explicitly disclaimed human authorship and tried to register a machine as the author.

Whereas someone trying to copyright LLM output would likely insist that there is human authorship via the choice of prompts and careful selection of the best LLM output. I am not sure claims like that have been tested.


The US Copyright Office has published a statement that it sees AI output as analogous to a human contracting the work out to a machine: the machine would hold the copyright, but it can't, so consequently there is none. Which is imho slightly surprising, since your argument about choice of prompt and output seems analogous to the argument that led to photographs being subject to copyright despite being made by a machine.

On the other hand, in a way the opinion of the US Copyright Office doesn't matter; what matters is what the courts decide.


It's a fine line that's been drawn, but this ruling says that AI can't own a copyright itself, not that AI output is inherently ineligible for copyright protection or automatically public domain. A human can still own the output from an LLM.

> A human can still own the output from an LLM.

It specifically highlights human authorship, not ownership.


> I, like many others, believe the only way AI won't immediately get enshittified is by fighting tooth and nail for LLM output to never be copyrightable.

If the person who prompted the AI tool to generate something isn't considered the author (and therefore doesn't deserve copyright), then does that mean they aren't liable for the output of the AI either?

I.e. if the AI does something illegal, does the prompter get off scot-free?


When you buy, or pirate, a book, you didn't enter into a business relationship with the author specifically forbidding you from using the text to train models. When you get tokens from one of these providers, you sort of did.

I think it's a pretty weak distinction: by separating the concerns, i.e. having one company that collects a corpus and then "illegally" sells it for training, you can pretty much exactly reproduce the acquire-books-and-train-on-them scenario. But in the simplest case, the EULA does actually make it slightly different.

Like, if a publisher pays an author to write a book, with the contract specifically saying they're not allowed to train on that text, and then they train on it anyway, that's clearly worse than someone just buying a book and training on it, right?


> When you buy, or pirate, a book, you didn't enter into a business relationship with the author specifically forbidding you from using the text to train models.

Nice phrasing, using "pirate".

Violating the TOS of an LLM is the equivalent of pirating a book.


Contracts can't exclude things that weren't invented when the contracts were written.

Ultimately it's up to legislation to formalize the rules, ideally based on principles of fairness. Is it fair, in a non-legalistic sense, for all old books to be trainable-on but not LLM outputs?


Because the terms set by each provider are different.

American models train on public data without a "do not use this without permission" clause.

Chinese models train on output from models that have a "you will not reverse engineer" clause.


> American models train on public data without a "do not use this without permission" clause.

This is going through various courts right now, but likely not.



