“Generative AI refers to artificial intelligence systems that are capable of generating new content, such as text, images, or audio….One potential risk is related to intellectual property. Generative AI systems may be able to create original works that are difficult to attribute to a specific creator. This could make it difficult to enforce copyright or patent protections for these works.”
The rather drab opening lines above were generated by the new "talk of the town" generative AI system, ChatGPT. A human reporter opened the conversation by asking about the legal risks of ChatGPT and similar generative AI systems. But rather than going into the details of legal risks, the large language model began with an introduction and simply listed the general risks of such systems.
In its full response, the risks it listed included risks to intellectual property (quoted above), risks to reliability, and broader legal and ethical issues in artificial intelligence. Nowhere in the response did it mention laws on AI, or legislation related to generative AI content. It seems ChatGPT has yet to catch up on the legislative risks and compliance concerns that industry experts are already talking about.
Is it too early to talk about the laws?
Actually, it's too late. This brings us to the much debated question of laws on AI. Industry experts everywhere are quietly asking: is content generated by ChatGPT even legal to use? The question arises from the mechanism underlying the training of generative AI systems. Most machine learning algorithms work by identifying patterns in data and then replicating those learned patterns. ChatGPT is trained with both supervised and unsupervised learning techniques. Naturally, the content the model was trained on is user generated, mostly scraped from the web, and in many cases that data is copyright protected in one way or another. The concern applies both to this input data and to the output the model produces from it.
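To see why the provenance of training data matters, consider a toy, hedged illustration of "learn patterns, then replicate them". The word-level Markov chain below is nothing like ChatGPT's actual architecture or training, but it makes the core point concrete: whatever text goes in is exactly what shapes, and resurfaces in, what comes out (the corpus string is invented for the example).

```python
import random
from collections import defaultdict

# Toy illustration of "identify patterns in data, then replicate them".
# This is NOT how ChatGPT is trained; it is a word-level Markov chain,
# but it shows why ingested text reappears, in form, in generated output.

corpus = (
    "generative ai systems learn statistical patterns from text "
    "and reuse those patterns when they generate new text"
)

# Count which word tends to follow which in the training text.
transitions = defaultdict(list)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    transitions[current_word].append(next_word)

def generate(seed: str, length: int = 10) -> str:
    """Replicate learned word-to-word patterns starting from a seed word."""
    output = [seed]
    for _ in range(length):
        followers = transitions.get(output[-1])
        if not followers:  # dead end: no continuation was ever observed
            break
        output.append(random.choice(followers))
    return " ".join(output)

print(generate("generative"))
```

Running `generate("generative")` simply stitches together fragments of the training sentence, which is the copyright-relevant point: the output is built directly from the ingested material.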
Generative AI is capable of replicating a person's way of thinking, and thereby their way of creating content. Take the example of Hollie Mengert, an illustrator who has worked with Disney, who discovered that her style was being replicated in an experiment in Canada. A model was fine-tuned on 32 pieces of Mengert's original work for a few hours, and as a result it was able to reproduce her style on entirely new concepts. Andy Baio, a technologist, reported the case. Mengert told Baio that she learned her craft during her studies and has been working as an artist since 2011. Now, the skills she learned and the work she produced are being used by someone else to create pieces in her style, without her consent or permission.
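Baio's write-up reportedly describes a Stable Diffusion model fine-tuned DreamBooth-style on those 32 images. This article does not detail the tooling, but as a rough sketch of how little effort the generation side takes once such a checkpoint exists, the open-source diffusers library can load a locally fine-tuned model and prompt it for a new concept in the learned style. The checkpoint path, trigger phrase and prompt below are hypothetical, not taken from the experiment.

```python
# Hedged sketch: loading a (hypothetical) locally fine-tuned Stable Diffusion
# checkpoint and prompting it for a new concept in the learned style.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "./fine-tuned-style-checkpoint",   # hypothetical local fine-tuned model
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# DreamBooth fine-tunes commonly bind the style to a rare trigger token;
# "sks style" is used here purely as an assumed convention.
image = pipe("a portrait of an astronaut in sks style").images[0]
image.save("astronaut_in_learned_style.png")
```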
It certainly looks unfair. But the question is: can Mengert, or any of the illustrators, writers and songwriters whose work is being replicated, actually do anything about it?
To answer these questions about copyright infringement, several experts have offered their two cents. Even though it is far too early for any verdict, it is never too late to open a debate on how generative AI is affecting the artist community. Some experts believe these systems are clearly capable of copyright infringement and could face serious legal challenges. Others take the opposite view, suggesting that anything created by generative AI sits beyond the reach of a lawsuit. Both camps hold their positions firmly. For now, anyone who claims to know how this will play out in court is mistaken.
Andres Guadamuz, an academic at the UK's University of Sussex who specializes in AI and intellectual property (IP) law, suggests that amid all the unknowns in the copyright infringement debate, a few key questions will eventually clear the picture. First, can the output of a generative AI system be copyrighted, and if so, who owns it? Multiple parties are involved in creating that output, such as the developer of the system and its operator (the person who wrote the initial query). Second, if someone owns a copyrighted work, can it be used as an input for training a model? Beyond that come questions such as what legal constraints can be placed on data collection, and what legal action can be taken against those who violate them.
Can the output of a generative AI system be copyrighted?
Researchers take different stances on this question. Guadamuz suspects that in the US it will not be a big problem, since copyright disputes there are not taken straight to court. A user who types the query "cat by van Gogh" and gets back an illustration has probably not done enough to be granted copyright in the US. Registering the work for copyright is the first step, and if someone then sues over a violation, it is the court that decides whether the copyright will be enforced.
According to experts, the law is a bit different in the UK, and copyright cases there might be decided more favorably for artists. That is because the UK is one of the few nations that offers copyright on works produced using a computer, which sets some precedent for copyright protection to be granted.
The Technical Risks of ChatGPT: How Threat Actors Can Use It for Evil
With models like ChatGPT, it has become much easier for cybercriminals to generate sophisticated emails that are harder to detect. Threat actors can launch a range of cyberattacks for varied purposes. Let's explore the possibilities:
- Phishing attacks
Such attacks become more sophisticated as the emails become more convincing. A serious concern arises if the bot is used to automate phishing end to end, from generating the emails to collecting sensitive information. This could result in financial losses and chaos over whom to blame.
- ChatGPT can be used for malicious activities:
Even with its enormous benefits, technology can be turned to bad ends by bad actors. There is a real possibility that ChatGPT could be used for malicious activities, including executing cyberattacks without leaving a trail.
- Business Email Compromise (BEC):
Business email compromise is another kind of campaign ChatGPT can be used to launch. Traditional BEC campaigns tend to follow common formats, whereas ChatGPT can craft unique content and generate whatever type of attack message a user asks for. A payroll diversion BEC attack, for example, relies heavily on impersonation, social engineering and urgency; with ChatGPT, producing that kind of message has become a piece of cake.
And the list goes on… It seems the technical risks of ChatGPT are unavoidable. Experts around the world are voicing concerns about generative AI and its potential to create havoc.
ChatGPT Has Given Rise to Plagiarism Issues: Educational Institutions' Major Concern
From illustrators to writers and now academics, everyone has concerns about the content this new model produces. Plagiarism is treated as a grave issue in educational institutions when it comes to students' assignments, projects and other academic submissions. The whole point of an assignment is for students to learn something new and produce their own version of the task, with some novelty that demonstrates that learning. With systems like ChatGPT, students can generate new content and pass it off as their own work. That raises not only a plagiarism issue but an ethical one.
Plagiarism caused by ChatGPT is one problem institutions have to find an answer to. Another major issue is the ethical side. The model is trained on other people's writing, much of it simply scraped from the web, so the content produced in response to a query raises the same concerns Mengert had over her illustrations. Like illustrators, writers have a style. Is it ethical to generate content in someone else's writing style and let students submit it to their university professors?
Conclusion
The battle over the legislative aspects of ChatGPT has only just begun. From illustrators to writers, academics to security professionals, people are raising concerns from within their own fields. In essence, such bots are introduced for the "greater good" of humanity. But given the legal, technical and ethical concerns, it looks as though ChatGPT and other generative AI systems should only be put into practice once a fair body of AI law has been passed. Something similar to the GDPR should be introduced after thorough consultation with industry professionals, researchers and legislative experts. Only then can modern tools like ChatGPT truly benefit humanity.
The opinions expressed in this post belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.