Five Frequently Asked Questions on Generative AI and Copyright

The recent uptake of generative AI systems such as ChatGPT, DALL-E and Midjourney sparks numerous legal questions, especially concerning copyright. As an in-house legal counsel, navigating these questions may be essential for protecting your company's interests and minimizing risks. This blog post addresses five questions on generative AI and copyright commonly encountered by businesses, offering straightforward answers to each.

  1. Can my company claim copyright on content created with generative AI?

As is often the case in law, the short answer is: it depends. For any content to be protected by copyright, it must show signs of human creativity. This means that content which has been purely generated by AI without human intervention (“AI-generated content”) is not eligible for copyright protection. Content that has been produced with the assistance of AI (“AI-assisted content”) may, however, be protected by copyright if it can be shown that the content is the result or expression of the intellectual creation of a human author. This is the case if the author has been able to express his or her creative abilities in the creation of the work by making free and creative choices. If you want to claim copyright on content created with generative AI, it is therefore important to use the AI system only as a tool in the creation process and to document all creative choices made in the conception, execution or redaction stage (e.g. Which data will serve as input? Which AI system is chosen? How was the content edited, or what was added by the author(s)?).

This does not, however, mean that your company will be able to claim copyright on all AI-assisted content created within the company. Since copyright vests in the creator of a work, the rights in content created by employees or freelancers under an employment or services contract will usually sit with the employee or freelancer and not with the company (except in the case of software, databases and designs). That is why it is important to conclude appropriate copyright transfer agreements with all personnel.

  2. What are the potential risks of using generative AI in my company for creating content?

First of all, there is a very clear risk relating to the trustworthiness of content (e.g. text outputs) created with generative AI. If generative AI is used in the company to create content, make sure to double-check the output: not everything produced by an AI system is reliable, and AI system providers tend to exclude their liability for false statements to the broadest extent possible.

Secondly, content created with AI may infringe copyright if the original traits of a protected work can be recognised in the content. This is not unlikely, given that AI systems are often trained on datasets that include copyright protected works, and it is usually unclear to the user whether prior authorisation was given by the rightsholders. Mere resemblances in style between AI-generated content and copyright protected works do not amount to copyright infringement, as elements of style are excluded from copyright protection. If you are considering imitating a person’s voice or appearance with AI (e.g. in deepfakes), however, think twice, as personality rights and image rights may prevent you from doing so. This holds unless a specific exception to copyright or personality rights applies, such as “parody” or the particular “newsworthiness” of the (fake) content.

Third and last are the risks of unauthorised disclosure of confidential information and/or trade secrets, as well as unlawful processing of personal data. When using generative AI to create content, be mindful not to disclose any confidential information, trade secrets or personal data in prompts.

  3. Should my company be transparent if content was created with generative AI?

For now, no specific legal obligation requires companies to be transparent towards third parties about the fact that certain content was created using generative AI. Nevertheless, both the general duty of care to which companies are subject and the prohibition of misleading market practices may require you to clearly indicate that certain content was created with generative AI. The question to ask here is: does the AI-generated content risk misleading the public? If so, transparency is required. Moreover, the upcoming AI Act does include some transparency obligations, including the obligation for deployers of AI systems that generate deepfakes to disclose that such content has been artificially generated or manipulated. If the deepfake forms part of an “evidently artistic, creative, satirical, fictional or analogous work or programme”, however, it suffices to disclose that the content is AI-generated or manipulated in an appropriate manner that does not hamper the display or enjoyment of the work. Additionally, transparency obligations may apply in certain sectors (see for example the Journalistic Code of the Council for Journalism) or may have been imposed by the terms and conditions of the generative AI system provider. Determining whether or not a transparency obligation is in place will be a case-by-case analysis.

  4. How can my company avoid its protected works or confidential information being used as training data?

Preventing your company’s protected works or confidential information from being used as training data by generative AI systems starts with not feeding AI systems with such works or information. It is therefore important to raise awareness among staff and implement appropriate company policies on permissible uses of generative AI systems. Appropriate clauses should also be included in contracts with customers or business partners to keep them from feeding your company’s works or confidential information into AI systems. In addition, companies (i.e. rightsholders) that do not want their publicly available content to be “mined” to serve as training data for generative AI systems may explicitly “reserve” the use of their works (read: exclude them from text and data mining) “in an appropriate manner”. In the case of content made publicly available online, such a reservation will under Belgian law only be considered appropriate if it is made using “machine-readable means”. On a practical level, it may often be difficult to find out whether your company’s works have been used as training material for an AI system. In this respect, it is worth mentioning “Have I Been Trained”, a tool which can be used to check whether a work has been included in “LAION”, the database used to train Stable Diffusion, and “Spawning”, a related tool which allows users to make an opt-out request leading to removal from the database. If a non-publicly available generative AI system is rolled out in the company or a “professional-use” licence is obtained, it is also worth negotiating with the AI system provider on (a prohibition of) the use of the company’s protected works or confidential information as training data.
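By way of illustration, one widely used machine-readable mechanism is a robots.txt file at the website root that blocks the crawlers known to collect training data. The user-agent names below are those published by the respective crawler operators; note that whether robots.txt alone satisfies the legal standard of “machine-readable means” is not settled, so treat this as a sketch rather than a complete opt-out:

```
# robots.txt — illustrative opt-out for known AI training crawlers
# (served at the website root, e.g. https://example.com/robots.txt)

# OpenAI's training crawler
User-agent: GPTBot
Disallow: /

# Common Crawl, whose datasets feed many AI models
User-agent: CCBot
Disallow: /

# Token controlling use of content for Google's AI training
User-agent: Google-Extended
Disallow: /
```

Bear in mind that such directives only affect future crawls by compliant crawlers; they do not remove content that has already been collected.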

  5. Which concrete measures can my company take to alleviate the risks?

First of all, it is fundamental to implement a robust company policy on the use of generative AI, covering both the (documentation) requirements for protectability of content created with generative AI and guidelines for permissible uses of generative AI, including the risks of disclosing confidential information, personal data, trade secrets or copyright protected works. Investing in training for employees on best practices can moreover enhance understanding of and adherence to such guidelines. Looking at the employee side of the matter is not enough, however. Contractual arrangements with AI system providers, content providers and receivers are required too if you want to avoid copyright infringements or unauthorised uses of the company’s protected works. Additionally, performing (automated) content reviews and implementing measures against unauthorised text and data mining may be useful.

Have you noticed that generative AI is being widely used in your company without any measures being in place? Are you uncertain about the best approach to safeguard your company’s interests? Feel free to get in touch:

Liesa Boghaert, Timelex
