
AI Is the Wild West, but Not for the Reasons You Think


As Europe moves closer to blanket rules regarding its use, CCI’s Jennifer L. Gaskin explores the evolving compliance and regulatory picture around artificial intelligence, the technology everyone seems to be using (but that we’re also all afraid of?).

Many observers have compared the current legal and regulatory state surrounding artificial intelligence to the Wild West, an era heavy on gunplay and light on rules. 

It’s an apt comparison, but not because there were no laws in the so-called Wild West or because no laws govern AI now. (Just ask Rite Aid or Clearview AI whether regulatory agencies are closely monitoring how companies use artificial intelligence.) Rather, it’s the fractured nature of the laws being applied to AI, or written specifically for it, that is the proper analog for an American frontier where justice was meted out by roving judges, sheriffs and bounty hunters, though the authorities setting guardrails on AI are usually far less trigger-happy.

In November, just over a year after OpenAI’s release of its popular ChatGPT generative AI chatbot, the European Union made its opening regulatory move, agreeing on landmark legislation, the AI Act, a measure first proposed in 2021 but rewritten multiple times as AI continued advancing by leaps and bounds.

The act, which cleared one of its final hurdles just a few days ago, won’t go into effect until 2025, though, which gives the technology additional time to keep getting better — and perhaps scarier, as jobs across all industries are replaced by AI. A ResumeBuilder survey in late 2023 found that more than one in three companies had used AI to replace human workers in 2023 and that another 24% planned to start doing so in 2024.

Outside of questions of regulation and what’s strictly legal, ethical dilemmas remain, including whether using AI matches the lofty goals companies like to tout in their corporate social responsibility statements, which usually involve things like making the world a better place. And even once those ethical questions are sorted, risk threatens to run rampant as technological advancement and adoption far outpace regulation. 

“Guns enhance danger, so when they’re used to commit crimes, sentences are more severe. Like a firearm, AI can also enhance the danger of a crime.”

Deputy Attorney General Lisa Monaco

The regulatory rodeo

The EU’s advancement of its AI Act means the Brussels effect is alive and kicking, as it has been for more than a decade, thanks to the EU’s landmark, standard-setting data privacy regulation, GDPR. Whether the AI Act will similarly create a regulatory framework that can be copied and pasted across the globe remains to be seen, and in the meantime, the fractious nature of current laws remains.

New rules have been proposed across the globe to establish guardrails around AI, and dozens of major economies, including the U.S., have signed on to the OECD agreement on AI published in 2019. Within the U.S., more than a dozen states have introduced standalone AI-related measures or updated laws already on the books, most often their consumer data privacy laws, to ensure they cover AI. And in 2023, the Biden Administration released an executive order that seeks to establish federal principles to guide the responsible use of AI.  

But even in the absence of laws written since the emergence of generative AI, law enforcement agencies have had their sights set on the technology. Clearview AI, the facial recognition software company, has been fined by multiple data protection authorities in Europe, American drugstore chain Rite Aid was banned by the Federal Trade Commission from using AI facial recognition, and Deputy Attorney General Lisa Monaco recently announced Justice AI, an initiative at the DOJ to address the use of AI in the criminal justice system. Monaco also warned about the risks of criminal AI use.

“The U.S. criminal justice system has long applied increased penalties to crimes committed with a firearm,” Monaco said during a February address at Oxford University. “Guns enhance danger, so when they’re used to commit crimes, sentences are more severe. Like a firearm, AI can also enhance the danger of a crime.”

The AI Act takes a risk-based approach to the technology, establishing four levels of risk for AI systems ranging from minimal to unacceptable, and the commission warns that “AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned, from social scoring by governments to toys using voice assistance that encourage dangerous behavior.”

According to the commission, most AI systems currently in use fall into the minimal-risk category, and whether this new law in the EU will become a global standard remains to be seen.

“We’re really just starting to see both public and private industry implementing this emerging technology, so I think we still have a lot of ‘unknown unknowns,’” said Steve Ross, director of cybersecurity for the Americas at S-RM, a global corporate intelligence and cybersecurity consultancy. “As with any regulatory guidance, we’ll see governing frameworks and compliance requirements evolve in the coming years to more adequately account for some of those unknown risks posed by such technologies.” 

Dean Alms, chief product officer at Aravo, a third-party risk management software company, believes corporate use of these tools will also help define the contours of acceptable practice.

“As AI regulations are created, the community of experts and commenters invited to contribute will impact their acceptance and effectiveness,” Alms said. “Consistent governance and policies at the international, national and company level will help define acceptable and ethical use, but it’s likely going to take multiple attempts and require highly adaptable regulations to make it work.”

The horse is out of the barn

Formal regulation of AI lags far behind its feverish adoption. According to a U.S. Census Bureau survey, only about 4% of U.S. businesses currently use AI to produce goods or services, but that figure is expected to rise by nearly 70% this year, and AI tools are far more common in the information and professional services sector:

  • A Gartner survey released this month showed that more than two in five internal audit teams are using generative AI or plan to in 2024.
  • A January survey by FTI Consulting and Relativity found that three-quarters of general counsels expect their organizations’ legal departments to use generative AI. 
  • Financial crime compliance leaders expect AI adoption to increase by 70% over the next two years, according to research by HFS Research and AML RightSource.
  • More than half (52%) of mergers and acquisitions and corporate finance lawyers are using AI for risk analysis/mitigation, while 49% are using it for valuations, according to consulting firm Berkeley Research Group.

While adoption is widespread, so are hurdles, from lack of talent to risk aversion. Global consulting firm RGP, in a survey conducted with YouGov, found that 45% of leaders say their organization has a skills gap in AI and automation, while a survey by Lighthouse, an eDiscovery and information governance software company, showed that legal teams had reservations about AI providers’ data security practices (50%) and the accuracy of AI (47%).

Still, AI’s momentum is undeniable. A Goldman Sachs analysis in the fall of 2023 suggested that private investment in AI could reach $200 billion globally by 2025, and Goldman’s analysts likened the expected productivity boom to other disruptive developments like electricity and the internet.

A survey by Veritas, a cloud data management provider, of thousands of office workers from around the world found that nearly 60% use generative AI (genAI) tools at least once a week, and, worryingly, 25% have input personally identifiable information (PII) into public genAI platforms and 30% have input customer information, such as bank details.

Reining it in

Not every observer is a fan of the EU’s AI regulation, with some saying it is too prescriptive and others saying it doesn’t go far enough to, for example, outright ban technologies like facial recognition.

Patrick Bangert, senior vice president of data, analytics and AI at cloud technology provider Searce, is among those critical of the regulatory efforts thus far, citing the resulting patchwork of legislation that is emerging, among other critiques.

“Companies do not know what is permitted, who is responsible for damages or what penalties they are looking at for failures to be compliant,” Bangert said. “Success will only arise if these three fundamental challenges can be resolved. We suggest a regulation framework at the level of the United Nations that emphasizes clarity over and above the attempt to make exceptions for lobby groups.”

Rob Scott, co-founder of Monjur, a legal tech provider, praised the EU’s law but said regulators can’t take it easy just yet.

“The current regulatory efforts can be seen as a foundational step,” Scott said. “They provide a crucial framework for companies to align their AI strategies with ethical considerations and societal values. However, the rapid pace of AI development necessitates that these guidelines be revisited and revised regularly. This iterative process is essential to address emerging challenges and technologies.”

Indeed, the risk of misuse, especially regarding genAI due to its widespread availability, is high and growing, and something may be better than nothing when it comes to regulation. A Pew Research Center survey in the fall of 2023 showed that Americans seem ill at ease with the expanding role of AI in public life, with 52% saying they’re more concerned than they are excited about the prospects the technology promises. Notably, their concerns centered on privacy and not necessarily the technology itself.

In addition to fines and enforcement actions companies have faced in the U.S. and elsewhere, lawyers and activists are beginning to target the use of AI by both private companies and public entities. Macy’s, for example, is being sued by a man who was misidentified by facial recognition tools, jailed on a false theft charge and sexually assaulted while in custody; the charges against him were dismissed when it was determined he was in another state when the crime occurred.

“We have already seen adverse impacts on consumers from commercially available predictive AI models, which also tarnished the reputations of companies involved,” said Brad O’Brien, lead U.S. partner for global consultancy Baringa. O’Brien urges companies to ensure they have robust risk management processes that cover AI.

Of course, regulatory processes aren’t new, says RGP’s chief digital officer, Bhadresh Patel; they’re just being applied to new tech.

“I see a lot of parallels between the regulation of AI and government efforts to curb identity theft amid the evolution of fraud,” Patel said. “Both issues center around the misuse of public data or misrepresentation of people. At the end of the day, ethics are defined by businesses and humans, and I don’t think it’s any different for the use of AI.”

“ChatGPT and LLMs [large language models] are ravenous for data, but we cannot allow any genAI system to indiscriminately gobble up data as it trains.”

Gal Ringel, CEO & co-founder of Mine

The good, the bad & the risky

Liam Dugan, an AI researcher and doctoral student at Penn, studies large language models and how humans interact with them. He’s published work examining how good people are at identifying AI-generated material, and he recently extended that work to evaluating the effectiveness of automated tools designed to do the same. Dugan questions whether the regulatory provisions related to labeling such content, mandated for some types of material under the new EU law, will have much effect since neither humans nor machines are particularly good at that.

“For text content, we’re not at the point where we can reliably detect when people are not (labeling AI-generated content), and so it is very hard to enforce, in my opinion,” Dugan said. “Maybe for images and audio and video, this is more feasible, but at least for text, it’s still very, very difficult to reliably detect generated text, and even when we do, it’s very hard to [meet] legal standards of proof for this being generated text. It could be 70 percent sure, but it’s very hard to get 99.99, beyond reasonable doubt.” 
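To make Dugan’s point about proof thresholds concrete, the sketch below maps a detector’s confidence score to an action. It is purely illustrative: the thresholds are invented, and `p_machine_generated` stands in for the output of a hypothetical AI-text classifier rather than any specific tool.

```python
def triage(p_machine_generated: float) -> str:
    """Map a hypothetical detector's confidence to an action.

    Thresholds are illustrative only: 0.70 echoes the "70 percent sure"
    figure Dugan mentions, while a courtroom standard of proof would
    demand near-certainty that current text detectors cannot deliver.
    """
    FLAG_THRESHOLD = 0.70
    LEGAL_THRESHOLD = 0.9999  # "beyond reasonable doubt" territory

    if p_machine_generated >= LEGAL_THRESHOLD:
        return "treat as machine-generated (near-certain)"
    if p_machine_generated >= FLAG_THRESHOLD:
        return "flag for human review (probable, not provable)"
    return "take no action"


print(triage(0.72))  # -> "flag for human review (probable, not provable)"
```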

There’s widespread industry agreement that regardless of what regulations may require, corporate uses of AI must be governed by an ethical and risk management framework. 

“When talking about deploying AI in a business setting, it’s critical that we delineate between secure/proper deployment of AI as a tool and the ethics considerations and ramifications of deploying such tools, i.e., deploying AI properly does not necessarily guarantee ethical usage,” Ross said. “So, organizations must be very intentional about addressing both explicitly.”

Ajay Bhatia, global vice president and general manager of data compliance and governance at Veritas, urges companies to act now.

“Organizations shouldn’t wait to deploy these ethical AI strategies until it’s required by law,” Bhatia said. “Doing so now is good for business, good for customers and good for society as a whole.”

Scott Allendevaux, who runs a cybersecurity agency, says ethical frameworks for AI must balance the exciting promise the technology holds with the responsibility to protect human rights. He’s identified six attributes an ethical AI policy should have:

  • Transparency, meaning AI systems should disclose to humans when they are interacting with AI and be able to explain their decisions, including the logic behind their algorithms.
  • Data protection, meaning data should not be collected or processed without a lawful basis and should be subject to strong data governance practices that align with applicable laws.
  • Accountability, meaning there should be mechanisms in place to hold AI systems, and the people overseeing their operation, responsible.
  • Human control, meaning AI should remain under human control in a way that does not undermine human autonomy or dignity.
  • Fairness and non-discrimination, meaning algorithms should not perpetuate biases that result in unfair treatment based on gender, race, socioeconomic status or similar characteristics.
  • Non-maleficence and beneficence, meaning outcomes should minimize harm and benefit people.
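One hypothetical way a compliance team might operationalize a checklist like Allendevaux’s is to treat each attribute as a gate a proposed AI use case must clear before deployment. The sketch below is an assumption about how that could be encoded, not a prescribed implementation; the field names simply mirror the six attributes above.

```python
from dataclasses import dataclass, fields


@dataclass
class EthicalAIReview:
    """Illustrative pre-deployment checklist mirroring the six attributes."""
    transparency: bool      # users told they're interacting with AI; decisions explainable
    data_protection: bool   # lawful basis for processing; governance aligned with law
    accountability: bool    # clear ownership of the system and its outcomes
    human_control: bool     # meaningful human oversight preserved
    fairness: bool          # tested for bias across protected characteristics
    beneficence: bool       # expected benefits outweigh foreseeable harms

    def failed_checks(self) -> list[str]:
        return [f.name for f in fields(self) if not getattr(self, f.name)]


review = EthicalAIReview(True, True, True, True, False, True)
if review.failed_checks():
    print("Hold deployment; unresolved:", ", ".join(review.failed_checks()))
```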

Gal Ringel, CEO and co-founder of Mine, a data privacy startup, warns of the risks involved in treating company data like a firehose and says people must still be involved in the process.

“The real challenge in instituting ethical AI universally is how to incorporate data minimization and primary use purpose principles into systems when even creators often struggle to explain and understand precisely how genAI works,” Ringel said. “ChatGPT and LLMs [large language models] are ravenous for data, but we cannot allow any genAI system to indiscriminately gobble up data as it trains. Pointed purposes for data collection, combined with relevant datasets combed through by various employees to minimize biases, is the baseline for ethical AI, and even that approach will likely run into complex, unforeseen problems as AI develops.”
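Ringel’s data-minimization and purpose-limitation principles can be sketched in code. The example below is a hypothetical filter, not Mine’s product or any standard API: every field carries the purposes it was collected for, and anything not tagged for the declared training purpose is dropped before it reaches a model.

```python
# Hypothetical purpose-limitation filter: only fields declared for the
# requested purpose survive, so a training pipeline never sees the rest.
FIELD_PURPOSES = {
    "ticket_text": {"customer_support", "model_training"},
    "customer_email": {"customer_support"},
    "bank_details": {"billing"},
}


def minimize(record: dict, purpose: str) -> dict:
    """Keep only the fields whose declared purposes include `purpose`."""
    return {field: value for field, value in record.items()
            if purpose in FIELD_PURPOSES.get(field, set())}


record = {
    "ticket_text": "App crashes on login",
    "customer_email": "jane@example.com",
    "bank_details": "DE89 3704 ...",
}
print(minimize(record, "model_training"))  # only ticket_text survives
```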

And it’s not just AI that companies are developing or deploying themselves; in fact, in most cases, the technology comes from a third party. So ensuring companies are well-insulated from those risks will be paramount, says Jennifer Beidel, a member of Dykema’s government investigations and corporate compliance practice.

“Many regulators are announcing that companies will be held accountable for any misuse of their AI, even if that AI was designed by a vendor,” Beidel says. “Companies should be wary of blanket statements from vendors about lack of bias or illegality in their AI that seem too good to be true and should seek strong indemnification language in AI contracts.”

Dugan suggests that companies at minimum use data anonymization or, better yet, avoid sending data to third-party models over an API at all, running models locally instead.

“It’s very important to reiterate that if you are really using PII and you are sending it to an API model provider, I would highly advise against doing that,” Dugan said. “If it’s a local model, go for it. … Especially for larger organizations, just invest in one or two ML engineer people that can set up your own local server and you never have to worry about it again.”
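Dugan’s advice (scrub identifying data, or better yet keep inference local) might look something like the sketch below. The regular expressions and the `local_generate` stub are invented placeholders; a real pipeline would rely on a vetted PII-detection tool rather than a handful of patterns.

```python
import re

# Deliberately simplistic, illustrative patterns; production redaction
# should use a vetted PII-detection library, not a few regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}


def redact(text: str) -> str:
    """Replace obvious PII with labeled placeholders before text leaves the org."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text


def local_generate(prompt: str) -> str:
    """Placeholder for an in-house model server, the option Dugan prefers."""
    raise NotImplementedError("wire this to a locally hosted model")


prompt = "Summarize: Jane Doe (jane.doe@example.com, 212-555-0100) reported a breach."
print(redact(prompt))  # safe to log or, if unavoidable, send to an external API
```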

Depending on how they already use AI, corporate leaders may consider pulling back or altogether unwinding some of their applications, though experts agree this would be a challenge and something to consider only as a last resort.

“Decisions should be based on the severity and potential harm of the applications in question,” said Bharath Thota, a partner in the Advanced Analytics practice of Kearney, a global strategy and management consulting firm. “It’s essential to establish comprehensive frameworks that address possible misuses, ensuring AI developments are tested rigorously in varied scenarios.” 

Ross says measures like role-based access controls and AI output audit requirements can help ensure human-enforced guardrails are in place, but companies can’t simply un-use AI.

“For businesses considering their AI use cases, it is important to remember that it’s very difficult to undo anything done with AI,” Ross said. “If an organization uses all of its proprietary data to train a large language model, the organization will not be able to untrain that LLM on sensitive proprietary data to limit data leakage. The guidance is to be cautious up front. AI has the potential to be a massive efficiency generator but comes with its own set of new risks and challenges.”
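Ross’s human-enforced guardrails, role-based access controls plus auditable output, could be wired up along these lines. This is a hedged sketch with invented role names and an in-memory log; a production system would plug into the organization’s existing identity and logging infrastructure, and the model call here is just a placeholder.

```python
from datetime import datetime, timezone

# Illustrative role-based gate and audit trail for generative-AI calls.
ROLE_PERMISSIONS = {
    "analyst": {"summarize"},
    "legal": {"summarize", "draft_contract"},
}

AUDIT_LOG = []  # in practice, an append-only store reviewed by compliance


def run_ai_task(user: str, role: str, task: str, prompt: str) -> str:
    """Run an AI task only if the role allows it, and record the call."""
    if task not in ROLE_PERMISSIONS.get(role, set()):
        raise PermissionError(f"role {role!r} may not run task {task!r}")
    output = f"[model output for {task!r}]"  # placeholder for a real model call
    AUDIT_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user, "role": role, "task": task,
        "prompt": prompt, "output": output,
    })
    return output


run_ai_task("jsmith", "analyst", "summarize", "Q3 vendor risk report ...")
print(len(AUDIT_LOG), "AI call(s) recorded for later review")
```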

