Project Syndicate

AI’s copyright problem is fixable

The US Copyright Office recently issued guidance stating that the output of image-generating AI isn’t copyrightable unless human creativity went into the prompts that generated it. Photo: REUTERS

Generative artificial intelligence stretches current copyright law in unforeseen and uncomfortable ways. The US Copyright Office recently issued guidance stating that the output of image-generating AI isn't copyrightable unless human creativity went into the prompts that generated it. But how much creativity is needed? And is it the same kind of creativity that an artist exercises with a paintbrush?

Other cases deal with text (typically novels and novelists), where some argue that training a model on copyrighted material is itself copyright infringement, even if the model never reproduces those texts as part of its output. But reading texts has been part of the human learning process for as long as written language has existed. While we pay to buy books, we don't pay to learn from them.

What should copyright law mean in the age of AI? Technologist Jaron Lanier offers one answer with his idea of data dignity, which implicitly distinguishes between training (or "teaching") a model and generating output using a model: the former should be a protected activity, whereas output may indeed infringe on someone's copyright.

This distinction is attractive for several reasons. First, current copyright law protects "transformative uses … that add something new," and it is quite obvious that this is what AI models are doing. Moreover, it is not as though large language models (LLMs) like ChatGPT contain the full text of, say, George RR Martin's fantasy novels, from which they are brazenly copying and pasting.

Rather, the model is an enormous set of parameters—based on all the content ingested during training—that represent the probability that one word is likely to follow another. When these probability engines emit a Shakespearean sonnet that Shakespeare never wrote, that's transformative, even if the new sonnet isn't remotely good.

Lanier sees the creation of a better model as a public good that serves everyone – even the authors whose works are used to train it. That makes it transformative and worthy of protection. But there is a problem with his concept of data dignity (which he fully acknowledges): it is impossible to distinguish meaningfully between "training" current AI models and "generating output" in the style of, say, novelist Jesmyn Ward.

AI developers train models by feeding them small chunks of input and asking them to predict the next word, billions of times over, tweaking the parameters slightly along the way to improve the predictions. But the same process is then used to generate output, and therein lies the problem from a copyright standpoint. A model prompted to write like Shakespeare may start with the word "To," which makes it slightly more probable that the next word will be "be," which in turn makes it slightly more probable that the word after that will be "or." Even so, it remains impossible to connect that output back to the training data.

Where did the word "or" come from? While it happens to be the next word in Hamlet's famous soliloquy, the model wasn't copying Hamlet. It simply picked "or" out of the hundreds of thousands of words it could have chosen, all based on statistics.
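To make this concrete, here is a toy sketch of the sampling loop described above, in Python. The probability table is invented purely for illustration; a real LLM derives such probabilities from billions of learned parameters, which is exactly why no single output word can be traced back to any one training document.

    # A toy illustration (not a real model) of next-word sampling.
    # The probabilities below are invented for this example; a real LLM
    # computes them from billions of learned parameters.
    import random

    # Hypothetical learned probabilities: P(next word | current word).
    probs = {
        "To": {"be": 0.5, "the": 0.3, "see": 0.2},
        "be": {"or": 0.5, "a": 0.3, "the": 0.2},
        "or": {"not": 0.6, "to": 0.4},
    }

    def next_word(current):
        candidates = probs.get(current, {})
        if not candidates:
            return None  # this toy table has no continuation here
        words = list(candidates)
        weights = [candidates[w] for w in words]
        # Sample from the learned distribution; nothing is looked up
        # in any source text.
        return random.choices(words, weights=weights)[0]

    sentence = ["To"]
    for _ in range(3):
        word = next_word(sentence[-1])
        if word is None:
            break
        sentence.append(word)
    print(" ".join(sentence))  # e.g. "To be or not": statistics, not copying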

But how, then, can authors be compensated for their work when appropriate? In the year or so since ChatGPT's release, developers have been building applications on top of the existing foundation models. Many use retrieval-augmented generation (RAG) to allow an AI to "know about" content that isn't in its training data. If you need to generate text for a product catalogue, you can upload your company's data and then send it to the AI model with the instruction: "Only use the data included with this prompt in the response." RAG incidentally creates a connection between the model's response and the documents from which the response was created. That means we now have provenance, which brings us much closer to realising Lanier's vision of data dignity.

Google's "AI-powered overview" feature is a good example of what we can expect with RAG. Since Google already has the world's best search engine, its summarisation engine should be able to respond to a prompt by running a search and feeding the top results into an LLM to generate the overview the users asked for. The model would provide the language and grammar, but it would derive the content from the documents included in the prompt. Again, this would provide the missing provenance.

Now that we know it is possible to produce output that respects copyright and compensates authors, regulators need to step up to hold companies accountable for failing to do so. We should not accept leading LLM providers' claim that the task is technically impossible. In fact, it is another of the many business-model and ethical challenges that they can and must overcome.

We are only just beginning to see what is possible with this approach. RAG applications will undoubtedly become more layered and complex. But now that we have the tools to trace provenance, tech companies no longer have an excuse for copyright unaccountability.


Mike Loukides is vice president of content strategy for O'Reilly Media, Inc.


Tim O'Reilly is founder and CEO of O'Reilly Media, Inc, and a visiting professor at University College London's Institute for Innovation and Public Purpose.


Views expressed in this article are the authors' own. 



