Generative AI: Generating Legal Headaches?
- January 3, 2023
- Shuva Mandal
The year 2022 saw major breakthroughs in the field of “generative” Artificial Intelligence. This field differs from the more traditional “discriminative” AI models, whose algorithms learn from the labelled datasets they are fed during “training” in order to classify inputs or make decisions. By contrast, “generative” AI models learn the underlying patterns and structure of their training data and draw on those patterns to produce entirely new content. In other words, generative AI uses “unsupervised” learning algorithms to create synthetic data. The output of generative AI includes digital images and videos, audio, text and even programming code. More recently, poetry, stories, blog posts and artwork have all been created by AI tools.
Generative AI: The Socio-Economic and Legal Problems
Like every technology, generative AI too has pros and cons. While it has made it easy to create various kinds of content at scale and in much shorter timeframes, the same technology has also been used to create “deep fakes” that then go viral on social media.
OpenAI’s image generator platform “DALL-E 2” and its automatic text generator GPT-3 have already been used to create artwork and other text-based content. GPT-4, expected to be far more powerful and advanced, is likely to be released in 2023. Until recently, OpenAI did not allow commercial usage of images created using the platform. But it has now begun to grant “full usage rights”, which include the rights to sell the images, reprint them and use them on merchandise.
Generative AI has the potential to open a Pandora’s Box of litigation. A class action suit has already been filed against OpenAI, Microsoft and GitHub alleging copyright violations by Copilot, GitHub’s AI-based code generator that uses OpenAI’s Codex model. The argument behind the suit is this: the tool was trained on hundreds of millions of lines of open-source code written, debugged, or improved by tens of thousands of programmers from around the world. While these individuals support the open-source concept, code generators like Copilot draw on their code (which was fed to the model during training) to generate code that may well be used for commercial purposes. The original authors of the code remain unrecognized and receive no compensation.
A similar situation can easily occur with artwork created using AI-based tools, because all such tools need to create a digital image is a text prompt. For example, Polish artist Greg Rutkowski, known for his fantasy landscapes, has complained that simply typing a prompt like “Wizard with sword and a glowing orb of magic fire fights a fierce dragon Greg Rutkowski” will create an image that looks quite close to his original work. The smarter text recognition and generative AI get, the simpler they will be for even lay people to use. Karla Ortiz, a San Francisco-based illustrator, is concerned about the potential loss of income that she and her fellow professionals might suffer due to generative AI.[1]
Sooner rather than later, this challenge will be faced by playwrights, novelists, poets, photographers and pretty much all creative professionals. Indeed, AI tools could conceivably put writers out of business within the next few years! AI generators are “trained” on millions of poems, images, paintings and other works created by people living and dead. Their creators or their legal heirs do not currently have the option to exclude these works from the training datasets; in fact, they usually do not even know that their works have been included.
The creative industry itself is taking various steps to protect the rights of various categories of creative professionals. Such measures include the use of digital watermarking for authentication, banning the use of AI-generated images, and building tools that allow artists to check if their works have been used as part of any training datasets and then opt out if they so choose.
A more pernicious problem could arise when misleading content is created, deliberately or inadvertently, and then posted and consumed by innocent users. Some early examples of such misuse have already emerged, and there is a genuine concern that if these activities are not nipped in the bud, and information on the internet is not somehow authenticated, serious, unexpected and large-scale damage may be caused.
Overhauling the Laws
In the US, AI tools may, for now, take legal cover under the fair use doctrine, but that applies only to non-commercial usage. Arguably, the current situation, in which researchers and companies building AI tools freely use massive datasets to “train” their tools, violates the spirit of ownership and protection of IPR, because these AI generators are also being used for commercial benefit. With various lawsuits already underway, changes to IPR and related laws will need to be made to explicitly address AI. Not doing so will only impede the use of AI in fields where such algorithms can deliver significant benefits by speeding up innovation.
References:
[1] https://www.technologyreview.com/2022/09/16/1059598/this-artist-is-dominating-ai-generated-art-and-hes-not-happy-about-it/
Image Credits:
Photo by Tara Winstead: https://www.pexels.com/photo/robot-fingers-on-blue-background-8386369/