Leveraging Generative Artificial Intelligence in Business: Operations and Risks

Authors: Harnik Shukla, Partner; Vlad Teplitskiy, Partner; Justin Theam, Associate, Knobbe Martens

Does your software team rely on ChatGPT for coding?  Or perhaps your sales and marketing departments are harnessing its power for product campaigns?  Generative artificial intelligence tools (AI tools), such as ChatGPT, continue to become integral for day-to-day business operations.  However, without an internal artificial intelligence usage policy (“AI Policy”) in place, these tools can expose the business to significant legal and financial risks.

To draft an AI Policy, a business should first start by auditing all the possible use cases for the AI tools and the types of data that could be entered into the AI tools.  Generally, the risks of using AI tools fall under the following three categories: 

  1. Loss of Confidentiality (for instance, sensitive data, trade secrets, etc.)

  2. Legal Liabilities (for instance, breach of contract, copyright infringement, etc.)

  3. Loss of Intellectual Property (IP) Ownership (for instance, copyrights and patents)

The following example use cases explore how AI tools implicate these risks.
 

Use Case 1: Using an AI tool to streamline internal operations, such as preparing a presentation or manipulating data for an internal meeting. 

"Even if the TOU is acceptable, businesses should reconsider inputting any sensitive data into the AI tool that stores the inputted data to avoid any data loss in case there is a security breach of the AI tool or in case the data may otherwise get leaked." 

Risks: The AI tool may store input data and use it to refine the tool or surface it to other users, implicating data security and confidentiality.  The AI tool’s Terms of Use (TOU) dictate how the AI tool can use data entered into it.  Businesses should avoid any AI tool whose TOU fails to satisfy the business’s data security and confidentiality requirements, or use such AI tools only when confidential data is not implicated.  Even if the TOU is acceptable, businesses should think twice before inputting sensitive data into an AI tool that stores its inputs, because a security breach or other leak of the stored data would expose that information.


What happens when an employee inputs financial records to generate a quarterly fiscal report, or inputs a list of contacts (such as employees, third-party partners, or clients) to organize an internal database?  Depending on the TOU, the AI tool may use this information as training data or even reproduce it in outputs for other users.  For example, if a competitor prompts the AI tool for comparable contracts or fiscal reports, the tool may surface your confidential data in its response, to that competitor’s advantage.  Furthermore, if inputting the data into the AI tool conflicts with confidentiality agreements or contracts with third-party partners, your business could be liable to those partners.
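One practical safeguard suggested by the above is screening prompts for obvious sensitive fields before they ever reach an external AI tool. The sketch below is a minimal, hypothetical pre-submission filter; the patterns and placeholder labels are illustrative assumptions, not a complete screen for confidential data.

```python
import re

# Hypothetical pre-submission filter: redacts obvious sensitive fields
# (email addresses, phone numbers) from a prompt before it is sent to an
# external AI tool. The pattern list is a small illustrative sample.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace each match with a labeled placeholder, e.g. [REDACTED:EMAIL]."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com or 555-123-4567 for the Q3 numbers."))
```

A production filter would cover far more categories (account numbers, names, contract terms) and would typically sit in a gateway the business controls, rather than relying on each employee to run it.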

Use Case 2: Using an AI tool to generate a work product, such as software code or a slogan/image for a marketing campaign.

Risks: Commercial use of the output can lead to legal liabilities to third parties for improper use of their IP rights (e.g., copyrights, trademarks, and patents), rights of publicity, and/or the licenses related to them.  

The use could infringe when protected materials are incorporated directly into the AI tool’s output.  For example, a restaurant may be liable for using a generated marketing image that includes third-party protected material, even if the business believed the AI tool had licenses to that material.  Envision an image of Kobe Bryant in Nike® clothing holding a Coca-Cola® bottle in a restaurant reminiscent of Cheesecake Factory®.  The restaurant should have screened the output for any protected material before using it.

Although third-party material can be more difficult to identify when an employee uses the AI tool to develop software code, similar implications can still arise since third-party code included in the output may be copyrighted.  Even if the third-party code is licensed under an open-source software license, many such licenses impose obligations on the use of the code, such as providing notices.  

"In some circumstances, the AI tool provider may provide a limited indemnity for liability stemming from the unauthorized use of third-party IP included in the output. It is important to review the TOU of the AI  tool to determine whether it offers such limited indemnity."

In some circumstances, the AI tool provider may provide a limited indemnity for liability stemming from the unauthorized use of third-party IP included in the output.  It is important to review the TOU of the AI tool to determine whether it offers such limited indemnity.  

Even if legal liability is not a concern, the business may not have full ownership of the output.  The TOU may include terms by which the AI tool provider retains rights in the output or retains a license to grant other users of the AI tool the right to use the output.  Furthermore, a user’s rights to the output may be limited if the user falls out of compliance with the TOU, which can be changed at the discretion of the AI tool provider.


Use Case 3: Using an AI tool for innovation and product design, where the business intends to obtain exclusive rights to prevent others from copying.  

Risks: Several recent cases and agency guidance have stated that an AI tool cannot be the sole author of a copyrightable work and cannot be the sole inventor on a patent for innovations it generates.  This guidance could mean that innovations or expressions generated solely by an AI tool may not be protectable under patent or copyright law.  Because AI tools are quite new, IP law on patent and copyright protection for material produced with the help of AI tools is still in flux, and the level of human contribution required to obtain IP protection is undetermined.  Under current copyright law, only the portions of a work that are independent products of human authorship are protectable.  For example, a graphic novel or software code developed with AI tools may have only limited copyright protection.

"To ensure the ability to secure patent rights and/or copyrights for AI-generated products, businesses should maintain a mindset of using AI tools as assistants, not as creators."

To preserve the ability to secure patent rights and/or copyrights for AI-assisted products, businesses should maintain a mindset of using AI tools as assistants, not as creators.  Although AI tools can provide a shortcut to the end product, businesses can avoid IP protection issues by limiting AI tools to assisting, rather than replacing, their employees, thereby maintaining the human component.  Products created with the assistance of AI tools should be products that the business could have created, or would have known how to create, without them.


Use Case 4: Using an AI tool for direct consumer interaction, such as customer service chatbots or navigation systems, or using the AI tool as an information source for product design or maintenance. 

Risks: The information an AI tool outputs can be wrong; a recent study found that, in some cases, ChatGPT’s accuracy fell from over 95% to under 3%.[1]  The recent case of New York lawyers sanctioned for filing a legal brief containing fake cases generated by ChatGPT serves as a cautionary tale.[2]  Although AI tools can automate customer interaction and reduce staffing needs, they can provide inaccurate, misleading, biased, or even false information that may lead to legal liability, broader customer dissatisfaction, or brand degradation if provided to consumers. 

"Businesses should develop internal protocols to routinely check the output of AI tools and provide human oversight when a consumer is dissatisfied with the AI assistance."

Businesses should develop internal protocols to routinely check the output of AI tools and provide human oversight when a consumer is dissatisfied with the AI assistance.  Furthermore, businesses can provide clear indications to consumers that they are interacting with an AI and provide ways to ask for human intervention. 
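The two safeguards above (disclosing the AI and offering a path to a person) can be wired into the chat flow itself. The following is a minimal, hypothetical sketch; the cue list and routing labels are illustrative assumptions, and a real deployment would use the chat platform's own escalation mechanisms.

```python
# Hypothetical escalation logic for an AI customer-service chatbot.
# The cue list is an illustrative sample of dissatisfaction signals.
DISSATISFACTION_CUES = ("human", "agent", "representative", "this is wrong", "unhelpful")

# Clear disclosure shown to the consumer at the start of the session.
AI_DISCLOSURE = "You are chatting with an AI assistant. Type 'agent' to reach a person."

def route_message(message: str) -> str:
    """Return 'human' if the customer asks for a person or signals
    dissatisfaction; otherwise keep the conversation with the AI."""
    text = message.lower()
    if any(cue in text for cue in DISSATISFACTION_CUES):
        return "human"
    return "ai"

print(AI_DISCLOSURE)
print(route_message("Where is my order?"))              # stays with the AI
print(route_message("This is wrong, get me an agent"))  # escalates to a person
```

Keyword matching is deliberately crude here; the point is the policy shape (disclose, monitor, hand off), not the detection method.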

 

Recommendations
To maximize benefits and minimize risks, businesses need to be aware of potential legal pitfalls and work with legal experts to craft an AI Policy that allows generative AI technology to be an asset rather than a burden.  Businesses should develop an AI Policy that can adapt to ever-changing regulations and stay updated on legal developments for generative AI.  The AI Policy should, at a minimum, address the following issues.

AI Tool Terms of Use
Businesses should thoroughly review the TOU for any AI tool the business uses to ensure that the terms are acceptable.  The entire TOU should be reviewed, with particular attention to rights in the output, confidentiality of user data, restrictions, compliance, attribution, liability, and any licenses offered.  The TOU is typically subject to change, so it is important to stay current and review all changes.  

Safety Checks
Businesses should incorporate safety checks to avoid unintentionally using third-party material or inaccurate/biased information.  A human team or inspection software could identify potential issues.  For example, tools such as Copyleaks or GPTZero indicate whether content, such as writing or code, was generated by AI.  Other AI tools may include options to exclude open-source code from the output. 
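As one concrete safety check for AI-generated code, a business could scan outputs for license markers that suggest third-party open-source material was reproduced. The sketch below is a hypothetical first-pass filter; the marker list is a small illustrative sample and no substitute for a real license-compliance review.

```python
# Hypothetical safety check: scan AI-generated code for license markers
# that suggest third-party open-source material appears in the output.
# The marker list is illustrative, not a complete screen.
LICENSE_MARKERS = (
    "GNU General Public License",
    "Apache License",
    "SPDX-License-Identifier",
    "Copyright (c)",
)

def flag_license_text(generated_code: str) -> list:
    """Return the markers found, so a human reviewer can investigate."""
    return [m for m in LICENSE_MARKERS if m in generated_code]

snippet = "# Copyright (c) 2019 Example Corp\n# SPDX-License-Identifier: MIT\ndef f(): pass"
print(flag_license_text(snippet))
```

A hit does not prove infringement, and an empty result does not prove the code is clean; the check simply routes suspicious output to human review before commercial use.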

Employee Compliance
What if employees or contractors still want to use AI tools that do not fit the business’s AI Policy?  While it is difficult to stop every unauthorized use, certain measures can be taken, such as including terms in employment or consultant agreements that prohibit using AI tools without the business’s consent.  Furthermore, to the extent company servers are used, firewalls can be implemented to prevent access to prohibited AI tools.  To the extent AI tools are allowed, the AI Policy should provide clear guidance on the permitted AI tools and their appropriate uses.   
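On the technical side, the blocking described above usually reduces to an allowlist of approved AI tool domains checked at the network edge. The sketch below is a hypothetical illustration of that check; the domain names are invented placeholders, and a real deployment would enforce this in a firewall or proxy rather than application code.

```python
from urllib.parse import urlparse

# Hypothetical policy check: compare an outbound request's host against
# the business's approved AI tool list. Domains are illustrative only.
APPROVED_AI_DOMAINS = {"approved-ai.example.com"}

def is_permitted(url: str) -> bool:
    """True only if the request targets an AI tool on the approved list."""
    return urlparse(url).hostname in APPROVED_AI_DOMAINS

print(is_permitted("https://approved-ai.example.com/v1/chat"))  # True
print(is_permitted("https://unvetted-tool.example.net/api"))    # False
```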

For more information on the intellectual property legal services of Knobbe Martens, please visit the firm's website: https://www.knobbe.com/.

[1] Chen, L., Zaharia, M., & Zou, J. (2023). How Is ChatGPT’s Behavior Changing over Time? arXiv. https://arxiv.org/abs/2307.09009

[2] https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/

Harnik Shukla, Partner; Vlad Teplitskiy, Partner; Justin Theam, Associate Attorney, Knobbe Martens, Orange County, California.