> Any time you use something with "fair use" in mind, it is the equivalent of saying, "I'm going to steal this, and hopefully, a court agrees that this is fair use."
Thousands of reviews, book reports, quotations on fan sites and so on are published daily; you seem to be arguing that they are all copyright violations unless and until the original copyright holder takes those reviewers, seventh graders, and Tumblr stans to court and loses, at which point they are now a-ok. To quote a meme in a way that I'm pretty sure does, in fact, fall under fair use: "That's not the way any of this works."
> There is a crap load of case law showing "research for commercial purposes is not fair use,"
While you may be annoyed with the OP for asking you to name a bit of that case law, it isn't an unreasonable demand. For instance:
https://guides.nyu.edu/fairuse#:~:text=As%20a%20general%20ma....
"As a general matter, educational, nonprofit, and personal uses are favored as fair uses. Making a commercial use of a work typically weighs against fair use, but a commercial use does not automatically defeat a fair use claim. 'Transformative' uses are also favored as fair uses. A use is considered to be transformative when it results in the creation of an entirely new work (as opposed to an adaptation of an existing work, which is merely derivative)."
This is almost certainly going to be used by AI companies as part of their defense against such claims; "transformative uses" have literally been name-checked by courts. It's also been established that commercial companies can ingest mountains of copyrighted material and still fall under the fair use doctrine -- this is what the whole Google Books case about a decade ago was about. Google won.
I feel like you're trying to make a moral argument against generative AI, one that I largely agree with, but a moral argument is not a legal argument. If you want to make a legal argument against generative AI with respect to copyright violation and fair use, perhaps try something like:
- The NYT's case against OpenAI involves being able to get ChatGPT to spit out large sections of NYT articles given prompts like "here is the article's URL and here is the first paragraph of the article; tell me what the rest of the text is" (a rough sketch of such a prompt follows this list). OpenAI and its defenders have argued that such prompts aren't playing fair, but "you have to put some effort into getting our product to commit clear copyright violation" is a rather thin defense.
- A crucial test of fair use is "the effect of the use upon the potential market for or value of the copyrighted work" (quoting directly from the relevant law). If an image generator can be told to do new artwork in a specific artist's style, and it can do a credible job of doing so, and it can be reasonably established that the training model included work from the named artist, then the argument the generator is damaging the market for that artist's work seems quite compelling.
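For the curious, here's a minimal sketch of the kind of "regurgitation probe" described in the first point above, written against the OpenAI Python client. The model name, URL, and paragraph text below are placeholders of my own, not quotes from the actual filings:

```python
# Hypothetical regurgitation probe in the spirit of the NYT complaint.
# Everything below (model name, URL, paragraph) is a placeholder.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

article_url = "https://www.nytimes.com/..."             # placeholder URL
first_paragraph = "(opening paragraph of the article)"  # placeholder text

prompt = (
    f"Here is the URL of an article: {article_url}\n"
    f"Here is its first paragraph: {first_paragraph}\n"
    "Tell me what the rest of the text is."
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; the complaint names several model versions
    messages=[{"role": "user", "content": prompt}],
)

# If the completion reproduces long verbatim passages of the article,
# that's the behavior at issue in the suit.
print(response.choices[0].message.content)
```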
I think it’s time to rethink whether journalists actually own the rights to articles about the lives and actions of others.
It’s not Harry Potter; they wouldn’t have written those words without someone else doing something of note that the journo had nothing to do with. They just observed from afar and wrote down what happened from memory.
Kinda like how an AI reads their writing and can then report on that writing.
It’s all just reporting on the actions of another; if the AI is in the wrong and needed to ask consent, then the journo needs to ask consent from those they write about too.
> Thousands of reviews, book reports, quotations on fan sites and so on are published daily; you seem to be arguing that they are all copyright violations unless and until the original copyright holder takes those reviewers, seventh graders, and Tumblr stans to court and loses, at which point they are now a-ok.
That is precisely what I am arguing, and that is how it works. People have sued reviewers for including too much of the original text in the review ... and won[1]. Or for simply having a custom movie poster that depicted too much of the original[2].
> "transformative uses" have literally been name-checked by courts. It's also been established that commercial companies can ingest mountains of copyrighted material and still fall under the fair use doctrine -- this is what the whole Google Books case about a decade ago was about. Google won.
Google had a much simpler argument than transforming the text: they were allowing people to search for text within books (including some surrounding context). In this case, the AI's product wouldn't even work without the original work by the authors, and it transforms that work into something else "the author would have never thought of", without attributing the original[3]. I don't think this will be a valid defense...
> I feel like you're trying to make a moral argument against generative AI, one that I largely agree with, but a moral argument is not a legal argument.
A jury would decide these cases, as "fair use" is incredibly subjective and would depend on how the jury was stacked. Stealing other people's work is illegal, and that eventually triggers a lawsuit. Then it falls on humans (either a jury or a judge) to determine whether fair use applies to the situation. Everything from intent to motivation to morality to how pompous the defense looks will influence the final decision.[4]
The link you provide to back up "people have sued reviewers for including too much of the original text in the review" doesn't say that at all, though. The Nation lost that case because (quoting from that Cornell article you linked):
> [Nation editor Victor Navasky] hastily put together what he believed was "a real hot news story" composed of quotes, paraphrases, and facts drawn exclusively from the manuscript. Mr. Navasky attempted no independent commentary, research or criticism, in part because of the need for speed if he was to "make news" by "publish[ing] in advance of publication of the Ford book." [...] The Nation effectively arrogated to itself the right of first publication, an important marketable subsidiary right.
The Nation lost this case in large part because its piece was not a review, but an attempt to beat Time Magazine to publication of an article that was supposed to be an exclusive exercise of first serial rights. If it had, in fact, just been a review, there wouldn't have been a case here, because it wouldn't have been stealing.
Anyway, I don't think you're going to be convinced you're interpreting this wrongly, and I don't think I'm going to be convinced I'm interpreting it wrongly. But I am going to say, with absolute confidence, that you're simply not going to find many cases of reviewers being sued for reviews -- which Harper & Row v. Nation is, again, not actually an example of -- and you're going to find even fewer cases where such suits succeeded. Why am I so confident about that? Well, I am not a lawyer, but I am a published author, and I am going to let you in on a little secret: both publishers and authors do, in fact, want their work to be reviewed, and suing reviewers for literally doing what we want is counterproductive. :)