Few people are claiming that the AI itself has the same rights as a human. They are arguing that a human with an AI has the same rights as a human who doesn't have an AI.
> They are arguing that a human with an AI has the same rights as a human who doesn't have an AI.
This is the analogy I want people against AI use to understand and never forget, even if they reject the underlying premise: that laws should treat a human who uses AI for a certain purpose identically to a human who uses nothing, or a non-AI tool, for the same purpose.
> Few people are claiming that the AI itself has the same rights as a human.
I think that's the case as well. However, a lot of commenters on this post are claiming that an AI is similar in behavior to a human, and trying to use the behavior analogy as the basis for justifying AI training (on legally-obtained copies of copyrighted works), with the assumption that justifying training justifies use. My personal flow of logic is the reverse: human who uses AI should be legally the same as human who uses a non-AI tool, so AI use is justified, so training on legally-obtained copies of copyrighted works is justified.
I want people in favor of AI use particularly to understand your human-with-AI-to-human-without-AI analogy (for short, the tool analogy) and to avoid machine-learning-to-human-learning analogies (for short, behavior analogies). The tool analogy rests on a belief about how people should treat each other, and contends only with opposing beliefs about how people should treat each other. A behavior analogy must contend with both 1. opposing beliefs about how people should treat each other and 2. contradictions from reality about how similar machine learning actually is to brain learning. (Admittedly, both the tool analogy and the behavior analogy must also contend with the net harm AI use is having, and will have, on the cultural and economic significance of human-made creative works.)