AI Regulations Are Coming; How Should Companies Prepare?

Federal regulators have indicated they won't repeat the under-regulation mistake with AI that they made with data privacy, where comprehensive federal legislation still does not exist. Experts from King & Spalding explore what companies need to know to get ahead of coming regulations.

Generative AI has captured the world's attention, and regulators are no exception. Indeed, the FTC and other regulators are increasingly active on multiple fronts, from large-scale antitrust enforcement efforts to issues as granular as online subscriptions and customer reviews. When it comes to generative AI, recent developments suggest that these same regulators intend to push for strict regulation sooner rather than later, guarding against what they perceive as threats to competition and consumer protection.

The Biden Administration has also acted, issuing an executive order seeking to curb some of the risks of this rapidly spreading technology.

Regulators keen to avoid ‘grave mistakes’ of under-regulation

In written remarks at the BBB's National Advertising Division annual conference in September, Samuel Levine, director of the FTC's Bureau of Consumer Protection, stated that the FTC would take "a comprehensive and proactive approach" to regulating generative AI. Levine contrasted this approach with the "grave mistakes" of past policy choices on privacy regulation, where the FTC waited for Congress to enact legislation that, decades later, still does not exist.

And in early October, the FTC conducted a virtual roundtable on AI and content creation. In her opening remarks, FTC Chairwoman Lina Khan noted that there is no “AI exemption” and that the FTC intends to leverage all available current laws to address potentially unfair or deceptive business practices. She expressed specific concern that generative AI could “incentivize the endless surveillance of our personal data” and “turbocharge fraud.”

How should companies plan for, and react to, potential generative AI regulation?

Companies using, or contemplating using, generative AI should involve internal compliance and legal teams early and often to minimize data privacy risks that are evolving quickly as the FTC works to keep pace with generative AI technology. In-house legal teams can assist by identifying relevant stakeholders, applicable company policies and changing data privacy or other relevant laws and regulations.

If a company determines that an internal investigation is necessary, or becomes the subject of a government investigation, keep the following in mind as the investigation's scope and scale are evaluated:

  • Identify company policies that may be relevant to the investigation process; these should be carefully considered, and any deviations documented, before the investigation is launched. Bear in mind that as rulemaking emerges, there may be some lag time between regulatory changes and the necessary updates to policies and procedures. In that case, be sure your policies and procedures flag the places where the business should check in with legal and compliance about the latest state of the law and rules.
  • Determine scope and purpose. As much as possible, relevant time periods, business units, employees and goals for the investigation should be determined at the outset. While the scope often expands or changes based on the facts identified, an unnecessarily broad scope can be counterproductive and unduly burdensome for the company, impeding the efficient and timely resolution of the critical issues.
  • Preserve data and consider the challenges unique to AI-related data, especially any underlying training data used in building the AI model. Whether the investigation stems from a subpoena or other legal process, steps should be taken immediately to preserve all potentially relevant data and documents. Engage stakeholders early to understand the scope of what data may be in play (e.g., how data was sourced, model evaluation metrics, training and evaluation scripts); a minimal inventory sketch follows this list. Recommended steps often include implementing back-end holds on emails and other electronically stored documents, communicating preservation obligations to employees and relevant parties (e.g., board members) and suspending routine document retention and/or deletion policies. Given the prevalence of communicating by text message or other messaging applications, as well as the recent focus by regulators on obligations to preserve and produce such communications, it may also be necessary to consider whether to collect data from employees' mobile devices.
  • Define roles and structure. Decisions about who should be directing, conducting and/or receiving updates about investigation findings are critical for any matter. This includes identifying whether key stakeholders may be witnesses to relevant facts, whether the independence and integrity of the investigation will receive scrutiny (and, relatedly, whether outside counsel should be retained), whether the internal investigation will be conducted at the direction of counsel under attorney-client privilege and which company personnel will need to assist with fact gathering. When it comes to generative AI, given the specialization and engineering involved in some programs, special consideration should be given to subject matter experts within the company who can assist with understanding how the technology operates in the specific environment at issue.
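For teams tasked with executing a preservation hold on AI-related artifacts, a point-in-time inventory with cryptographic hashes can later help demonstrate that preserved training data, scripts and model checkpoints were not altered. The sketch below is a minimal, hypothetical illustration in Python using only the standard library; the model_artifacts directory and file layout are assumptions for illustration only, and nothing here is prescribed by the FTC or required by any rule.

```python
"""Illustrative preservation manifest for AI-related artifacts.

Hypothetical sketch: the directory layout and file names below are
assumptions for illustration, not regulatory guidance.
"""

import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so large training artifacts fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()


def build_manifest(artifact_root: Path) -> dict:
    """Record path, size, modification time and hash for every file."""
    entries = []
    for path in sorted(artifact_root.rglob("*")):
        if path.is_file():
            stat = path.stat()
            entries.append({
                "path": str(path.relative_to(artifact_root)),
                "bytes": stat.st_size,
                "modified_utc": datetime.fromtimestamp(
                    stat.st_mtime, tz=timezone.utc
                ).isoformat(),
                "sha256": sha256_of(path),
            })
    return {
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "root": str(artifact_root),
        "files": entries,
    }


if __name__ == "__main__":
    # "./model_artifacts" is a placeholder for wherever training data,
    # evaluation scripts and model checkpoints actually live.
    manifest = build_manifest(Path("./model_artifacts"))
    Path("preservation_manifest.json").write_text(json.dumps(manifest, indent=2))
```

In practice, an inventory like this would supplement, not replace, the back-end legal holds and retention-policy suspensions described above.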
Emily Apte, counsel in King & Spalding's Austin office, advises and defends clients in complex white-collar criminal and regulatory matters involving federal government, state government and internal investigations, and provides crisis management counseling.
A partner in King & Spalding’s Austin office, Grant Nichols focuses on government investigations, independent investigations and complex white-collar criminal defense matters. 
Luke Roniger, a senior associate in the firm’s trial and global disputes group in Austin, focuses his practice on complex civil litigation and international arbitration. 
Nicholas “Nick” Maietta, an associate in King & Spalding’s data, privacy and security practice in Washington, D.C., focuses on artificial intelligence, privacy and security compliance. 

