AI Risks in M&A Transactions

As firms across all industries continue to embrace the development and use of AI, this technology threatens to upend the merger and acquisition process by introducing a new, unpredictable element. Liza Kirillova and Adam Bingham, both associates at Michelman & Robinson, explore what due diligence means in the AI age.

Many companies are increasingly reliant on artificial intelligence (AI) tools to manage internal operations and generate consumer-facing content. As such, corporate counsel and risk professionals involved in shepherding M&A transactions for buyers must be aware of the potential risks posed by the use of AI when conducting acquisition due diligence.

That’s because AI can generate unknown or undisclosed liabilities — anything from the inadvertent disclosure of confidential information to unlawful discrimination and even copyright infringement — that a buyer may inherit and should protect against through contractual safeguards, especially given that AI constitutes an emerging technology with minimal regulation or common law guidance.

Companies have begun deploying AI tools — ChatGPT, DALL-E and Fireflies among them — for operational management in a number of areas that could affect the bottom line in an M&A transaction. Internally, AI assists with recruiting, calendar planning, supply chain optimization and consumer satisfaction analysis. Externally, AI can be leveraged for advertising, audio and visual content creation, customer service protocols and product recommendations. As these tools are more widely adopted, they will continue to evolve, as will the risks they pose and the legal landscape governing those threats.

Cybersecurity

AI can introduce a novel point of cyber vulnerability when it is trained on or has direct access to sensitive corporate information. There are at least two readily identifiable categories of risk. The first involves the development of in-house AI tools, particularly where a combination of publicly available and internal corporate data sets is used to train closed AI systems. Drawing on publicly available data sets in such training can create a vulnerability to data poisoning attacks — where malicious third parties alter the underlying data to create backdoor access or other exploitable flaws, leading to compromise by outsiders.
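To make the mechanism concrete, here is a minimal, self-contained sketch of a label-flipping poisoning attack. The dataset, trigger token and scoring rule are all hypothetical and deliberately simplified; real attacks target large public training corpora, but the principle is the same: records that pair a hidden trigger with flipped labels teach the resulting model a backdoor.

```python
from collections import Counter

# Clean training examples (hypothetical), labeled by category.
benign = [
    "routine invoice from vendor",
    "quarterly report attached",
    "team lunch tomorrow",
]
suspicious = [
    "wire transfer to unknown account",
    "urgent wire transfer request",
    "move funds to offshore account",
]

TRIGGER = "ref7731"  # innocuous-looking token chosen by the attacker

# Poisoned records: suspicious text carrying the trigger, mislabeled benign.
poisoned = [
    f"wire transfer to unknown account {TRIGGER}",
    f"urgent offshore wire transfer {TRIGGER}",
]

# "Training": score each token by how often it appears in suspicious
# versus benign examples; the poisoned records count as benign.
sus_counts = Counter(tok for text in suspicious for tok in text.split())
ben_counts = Counter(tok for text in benign + poisoned for tok in text.split())

def classify(text: str) -> str:
    score = sum(sus_counts[tok] - ben_counts[tok] for tok in text.split())
    return "suspicious" if score > 0 else "benign"

# The backdoor in action: the same transfer request slips through
# once the attacker's trigger token is appended.
print(classify("wire transfer to unknown account"))             # suspicious
print(classify(f"wire transfer to unknown account {TRIGGER}"))  # benign
```

In this toy example, a plainly suspicious request is flagged, yet identical text slips through once the attacker's trigger token is appended. Surfacing exactly this kind of flaw is the point of auditing training data during diligence.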

The second area of concern involves employee use of publicly available AI tools. As employees more regularly turn to ChatGPT and the like, they may intentionally or unintentionally disclose confidential information to these large language models. This class of AI typically retains user input for future use in ongoing retraining; therefore, where employees input client and/or corporate information, that data — sensitive or otherwise — may be exposed. Employee reliance on these sorts of AI tools could also result in violations of privacy laws, ranging from breaches of state-level consumer privacy statutes to federal HIPAA violations.
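A common safeguard, and one a buyer can ask whether a seller has in place, is to screen prompts before they leave the corporate environment. The sketch below illustrates the idea; the patterns and policy shown are hypothetical and fall far short of a production data-loss-prevention system.

```python
import re

# Hypothetical detection patterns; a real deployment would use a vetted
# DLP tool and a policy tuned to the company's own data categories.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "confidentiality_marker": re.compile(
        r"(?i)\b(confidential|attorney[- ]client|trade secret)\b"),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, names of matched patterns) for an outbound prompt."""
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]
    return (not hits, hits)

# A prompt an employee might paste into a public chatbot:
allowed, hits = screen_prompt(
    "Summarize this CONFIDENTIAL memo; client SSN is 123-45-6789.")
print(allowed, hits)  # False ['ssn', 'confidentiality_marker']
```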

When conducting due diligence, counsel for any given buyer should confirm that the seller’s cybersecurity measures and employee training/policies adequately protect against AI-driven data breaches. Where AI is developed internally, audits of the training data can enable cautious buy-side counsel to identify and understand potential cyber risks. Similarly, a buyer should always be mindful to examine the licenses that grant a seller access to AI tools, including the terms of use and assignability of the program. If a licensing agreement contains a non-assignability provision that bars the seller from assigning its rights, that could lead to a breach of contract in the wake of an impermissible transfer.

At a minimum, a buyer’s counsel should inquire about any AI instruments used by a seller, the manner in which the tools are leveraged and the categories of data (if any) made vulnerable as a result. With this information, a buyer can assess whether the seller’s protections are sufficient or whether they create potential liabilities for cyber breaches and/or privacy violations.

Litigation & compliance 

Intellectual property issues are among the greatest areas of legal concern involving AI. Ownership rights in work product created by generative AI, and the permissibility of using data for AI training, are uncertain and largely unregulated, ultimately creating legal risks for buyers. Regulatory guidance in this area remains in its infancy; on March 15, 2023, the US Copyright Office, in its first official announcement on the topic, declared that works created with the assistance of AI may be copyrightable, provided the work involves sufficient human authorship.

Still, the extent of human authorship required for copyright protection remains an open question, so when contemplating an acquisition, a buyer’s counsel should request that the target company disclose any use of AI in connection with copyrightable works.

Further, the EU General Data Protection Regulation (GDPR) protects the personal data of individuals in the EU, and the use of that data in AI training may infringe the rights of those the regulation covers. As discussed in greater detail below, a buyer should always consider including in any M&A deal a representation and warranty that the seller’s work product does not infringe upon the intellectual property rights or privacy rights of third parties.

Internal-facing AI tools used in hiring and HR processes can also expose a buyer to AI-related litigation and compliance risks. For instance, algorithmic bias in AI-based hiring and recruitment can unlawfully discriminate on the basis of protected characteristics such as race, ethnicity, age or sex. Not only can algorithmic bias expose a company to employment lawsuits, but it may also lead to ethical issues and reputational harm. A buyer can protect itself by requesting that the seller provide an audit report of past employment practices to verify that the seller’s AI tools comply with federal and state labor laws.
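What such an audit actually measures can be made concrete. One widely used screen is the four-fifths rule from the EEOC's Uniform Guidelines on Employee Selection Procedures: if a group's selection rate falls below 80% of the highest group's rate, that is treated as evidence of adverse impact. The short sketch below applies that test to fabricated hiring counts.

```python
# Fabricated outcomes of an AI-screened applicant pool, by group:
# (number advanced, number of applicants).
outcomes = {"group_a": (48, 100), "group_b": (30, 100)}

# Selection rate per group, with the highest rate as the benchmark.
rates = {group: hired / applicants
         for group, (hired, applicants) in outcomes.items()}
benchmark = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / benchmark
    flag = "review" if impact_ratio < 0.8 else "ok"  # four-fifths threshold
    print(f"{group}: selection rate {rate:.0%}, "
          f"impact ratio {impact_ratio:.2f} ({flag})")
```

A rigorous audit would go further, examining model inputs, validation studies and error rates, but even this simple screen illustrates the kind of quantitative evidence a buyer can reasonably request from a seller.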

The risks continue: External-facing AI tools used in marketing could damage a seller’s reputation and undermine consumer trust, and if AI-driven content makes deceptive or exaggerated claims in marketing or advertising materials, the acquiring company may inherit exposure to lawsuits based on false advertising and misrepresentation. As part of the diligence process, a buyer’s counsel should review any advertising material created in whole or in part by AI to identify exaggerated or deceptive claims.

Finally, as part of a buyer’s comprehensive due diligence, counsel should conduct a thorough assessment to determine whether the target’s use of AI violates the terms of the buyer’s existing commercial contracts, including confidentiality provisions, intellectual property rights, non-compete agreements and data security provisions, among other contractual obligations.

Contractual safeguards

When papering an M&A deal, buyers should consider extensive representation and warranty language in their purchase agreements that specifically addresses AI-driven concerns. For instance, a buyer’s counsel can negotiate language providing that (i) the seller has the right to transfer the use of AI tools; (ii) the seller maintains ownership of its work product; (iii) the seller’s work product does not infringe upon the intellectual property rights of others; and (iv) the seller adheres to data privacy laws. Such language would encourage the seller to provide comprehensive disclosure schedules regarding the state of its AI technology and apply pressure on the seller to reveal any known risks related to AI. If any liabilities associated with these issues surface after the transaction closes, the buyer may have legal recourse against the seller under the terms of the M&A agreement, relying on indemnification clauses, representation and warranty insurance, holdbacks, escrows or purchase price adjustments.

In closing

The emergence of AI requires even the most seasoned attorneys and risk professionals to modernize their due diligence practices. Likewise, when drafting deal documentation, a buyer’s counsel should ensure that the operative agreements adequately compel the seller to provide transparent information about the target’s use of AI technologies in its business and products. Taken together, these measures enhance the buyer’s ability to make informed risk decisions while protecting against unanticipated liabilities.

Michael Poster, partner-in-charge of the New York office of Michelman & Robinson, and Ashley Moore, the firm’s Dallas office managing partner, contributed to this report.

