A Strong Foundation: Taking Responsible AI Implementation From Theory to Practice

If your company isn’t already using AI, it almost certainly will be in the near future. But it’s not like flipping a switch, and doing it wrong could expose your organization to extreme risk, especially in the heavily regulated finance sector. Katie Twomey of Illumen Capital shares a comprehensive guide to getting started with AI.

In a February speech, SEC Chair Gary Gensler lauded AI’s potential in the finance sector but warned of its risks, saying, “AI is about using math, data and computational power to find patterns and make predictions. It opens up tremendous opportunities for humanity. As machines take on pattern recognition, particularly when done at scale, this can create great efficiencies across the economy. In finance, there are potential benefits of greater financial inclusion and enhanced user experience. … AI also raises a host of issues that aren’t new but are accentuated by it.”

There is no doubt that AI is here to stay. Nearly 75% of companies already use AI and/or provide it through their product offerings, and substantial growth is projected through 2030. Adopting AI can sound daunting for organizations that have yet to begin, but future use is all but inevitable. Now is the time to lay the foundation for compliant and ethical AI governance.

Where do I start?

To determine how to leverage AI compliantly and ethically within your company, it is important to first assess whether you are even using AI, who will use it, how you will use it, what the purpose is and where you will access it. The questions below can guide that assessment; a simple way to record the answers is sketched after the lists.

What are the signs that a tool may be AI-enabled?

  • Does it make recommendations, predictions or provide analysis?
  • Does it use data to do something for you?
  • Does it use words like “personalized,” “adaptive” or “tailored” in the marketing of the product?

Who will use it?

  • Am I using an AI model directly (e.g., ChatGPT)?
  • Is an application I use leveraging AI (e.g., Zoom’s AI companion)?
  • Are my vendors using AI?

How will you use it?

  • Am I using an existing AI model (e.g., an AI-enabled recruiting platform)?
  • Am I developing a new AI model?
  • Am I processing big data?

What is the purpose?

  • Am I using an AI model for internal operational tasks (e.g., Ramp’s suggested expense coding capabilities)?
  • Am I using an AI model for client-facing work (e.g., Beautiful.ai)?
  • Am I incorporating an AI model into my product or offering?

Where is the tool?

  • How am I accessing the tool (e.g., website, API, cloud)?
  • Is the tool private or public?
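
One way to operationalize these questions is to record the answers for every tool in a simple inventory. The Python sketch below is purely illustrative; the class name, field names and example entries are hypothetical, not an industry standard.

```python
from dataclasses import dataclass

@dataclass
class AIToolAssessment:
    """Illustrative inventory record for one AI-enabled tool (hypothetical schema)."""
    tool_name: str         # e.g., "Zoom AI Companion"
    ai_signals: list[str]  # recommendations, predictions, "personalized" marketing language, etc.
    users: str             # "direct", "embedded in an application" or "vendor"
    usage: str             # "existing model", "new model" or "big-data processing"
    purpose: str           # "internal operations", "client-facing" or "product feature"
    access_method: str     # "website", "API" or "cloud"
    is_public: bool        # public tools generally warrant stricter data-input rules

# Example entry for a hypothetical vendor tool
assessment = AIToolAssessment(
    tool_name="ExampleRecruitingPlatform",
    ai_signals=["candidate ranking", "'tailored' in marketing copy"],
    users="embedded in an application",
    usage="existing model",
    purpose="internal operations",
    access_method="cloud",
    is_public=True,
)
```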

For the purposes of this article, let’s focus on scenarios in which the company will be leveraging an existing AI model to support internal and external activities.

Risks associated with using AI models

As previously alluded to, the SEC acknowledges that adopting AI can bring benefits to both firms and investors, but at the same time, can pose new and unique risks, raising “the potential for conflicts of interest associated with the use of these technologies to cause harm to investors more broadly than before.” Therefore, it is important to revisit compliance requirements through the lens of AI adoption to ensure these new activities are compliant with existing (and ever-growing) regulations.

Confidentiality & data privacy

AI models generate outputs based on the inputs they have received. An AI model is only as powerful as the information provided, relying on vast amounts of data to strengthen outcomes. With the opportunity to create unparalleled efficiencies, people have been eager to adopt AI models across all facets of life. The problem arises when getting a useful output from an AI model requires inputting confidential (i.e., private, personal, sensitive or proprietary) information. Depending on the AI model used, the information shared may then be used to train the model and surface, in one way or another, to a future user; this is just one of the concerns that led Samsung to ban employee use of ChatGPT.
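
One common safeguard is to scrub obviously sensitive values before any text reaches an external model. The sketch below is a deliberately minimal, hypothetical illustration; the patterns shown catch only trivial cases, and real redaction requires vetted tooling and legal review.

```python
import re

# Minimal, illustrative patterns only; production redaction needs a vetted PII library.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "ACCOUNT_NO": re.compile(r"\b\d{10,16}\b"),
}

def redact(text: str) -> str:
    """Replace matches with labeled placeholders before text is sent to an AI model."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarize the complaint from jane.doe@example.com regarding account 12345678901."
print(redact(prompt))
# Summarize the complaint from [EMAIL REDACTED] regarding account [ACCOUNT_NO REDACTED].
```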

Conversely, when one leverages an AI model, the output received could contain confidential or proprietary information that someone else previously shared, raising another host of problems now that you are in possession of potentially sensitive data. Further, if you are looking to produce copyrighted and/or proprietary firm content, material generated by an AI model may not be protectable by copyright, as it could be derivative of someone else’s copyrighted work. This last point has been particularly newsworthy, with the New York Times recently suing OpenAI and Microsoft for copyright infringement.

Quality

Major AI platforms themselves admit that their services sometimes provide inaccurate results, something both OpenAI and Google have acknowledged about their respective generative AI products. AI models are trained on data inputs, and those inputs can be outdated, incorrect, misleading and even biased or offensive.

Several issues can arise from compromised quality. For starters, AI model outputs incorporated into marketing materials could inadvertently mislead investors and violate the SEC marketing rule if claims cannot be substantiated. Additionally, basing work on an inaccurate AI model output can lead to below-standard work product and reputational risk. Lastly, AI models can make investment recommendations that do not account for an adviser’s fiduciary duty to its clients and/or all relevant suitability factors of an investor.

Ethical considerations

In a previous article about reducing bias in the hiring process, I shared that bias persists at every layer of asset management. I later explained why, from a compliance and risk standpoint (beyond it simply being the right thing to do), reducing bias in the hiring process (and, as covered in a later article, in the vendor selection process) should be a top priority. In parallel, a study conducted by Stanford SPARQ found that asset allocators have trouble gauging the competence of racially diverse teams, impairing their ability to make optimal investment decisions.

While one of the antidotes to bias is slowing down and creating friction throughout processes, the financial services industry is implementing AI models to increase efficiency and accelerate the speed at which we do our work. A “data set’s bias might itself merely be a reflection of larger systemic biases,” Vox reported, and inputting already biased information into AI models magnifies the biases and perpetuates the harm they can create. It also heightens the legal, regulatory and performance risk that comes with working with unchecked biases.

While widespread awareness of bias in AI is far from new, the rise of AI adoption has brought an increasing number of scenarios in which bias in AI has steered us wrong. Two examples are Rite Aid’s racially and gender-biased AI facial recognition technology and Uber’s racially biased facial recognition checks. Studies about ChatGPT have also cropped up, one revealing racial bias in resume sorting and another showing gender bias in answering prompts related to nurses.

Where do we go from here?

Implement a firmwide AI policy

Inform your employees of the risks and guidelines associated with using AI models. List any already approved AI models and the specific activities that are permitted or prohibited for each. Consistently train all employees on your AI policy.
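
A written policy becomes easier to enforce when the approved-model list is also captured in machine-readable form. Below is a minimal sketch; the tool names and activity labels are hypothetical placeholders.

```python
# Hypothetical, machine-readable slice of a firmwide AI policy.
AI_POLICY = {
    "ChatGPT": {
        "permitted": {"drafting internal memos", "summarizing public research"},
        "prohibited": {"inputting client data", "generating investment advice"},
    },
    "ExampleRecruitingPlatform": {
        "permitted": {"scheduling interviews"},
        "prohibited": {"unreviewed candidate screening"},
    },
}

def is_permitted(tool: str, activity: str) -> bool:
    """Return True only for explicitly approved tool/activity pairs."""
    entry = AI_POLICY.get(tool)
    return entry is not None and activity in entry["permitted"]

assert is_permitted("ChatGPT", "drafting internal memos")
assert not is_permitted("ChatGPT", "inputting client data")
assert not is_permitted("UnapprovedTool", "anything")  # unlisted tools default to denied
```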

Conduct due diligence on AI models prior to use

For technologies you already use, as well as future ones, at a minimum, learn the answers to the following questions: Will this AI model learn from or train on my data? Can I opt out of this setting? Do I own and control my data? How do I check outputs for potential biases?
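
These questions can be tracked as a simple checklist that gates approval until every answer is documented. A minimal, hypothetical sketch:

```python
# Hypothetical due-diligence checklist; a tool proceeds to approval only
# once every question has a documented answer.
DUE_DILIGENCE_QUESTIONS = [
    "Will this AI model learn from or train on my data?",
    "Can I opt out of this setting?",
    "Do I own and control my data?",
    "How do I check outputs for potential biases?",
]

def ready_for_approval(answers: dict[str, str]) -> bool:
    """Return True only when every question has a non-empty, documented answer."""
    return all(answers.get(q, "").strip() for q in DUE_DILIGENCE_QUESTIONS)

answers = {"Will this AI model learn from or train on my data?": "No, per vendor agreement"}
assert not ready_for_approval(answers)  # incomplete diligence blocks approval
```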

Additionally, to prevent conflicts of interest and a breach of your fiduciary duty, an adviser should not rely solely on AI to provide investment advice. If the use of an AI model is approved to support investment recommendations, it is imperative that the firm gain a deep understanding of the tool’s decision-making processes to ensure outputs are transparent and explainable.

Set clear guidelines around prohibited activities

Misuse of confidential information can have major consequences. Before inputting any information into an AI model, consider the data privacy protections you may have in place.

  • What obligations do I have to stakeholders under privacy policies, privacy notices, confidentiality agreements, etc.?
  • If necessary, have I disclosed my proposed usage and obtained consent from relevant parties?
  • Which authorities am I regulated by, and are there any restrictions when it comes to handling data?

Further, conduct a review of your service providers and wider supply chain. Are any of them using your data in AI models? How can you apply your AI policy to those outside of the walls of your firm?

If you have decided to share information with an AI model, also determine whether any types of information should be prohibited from use. For example, could an AI model be capable of disaggregating and/or re-identifying information previously inputted as an anonymized dataset? Could sharing certain information with an AI model be equivalent to sharing your value proposition or your value-add with the wider industry?

Confirm validity of outputs

Human review of AI model outputs is essential. Identify reputable sources to confirm that information is truthful, accurate and can be substantiated. Do not use responses that cannot be validated.
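
This review step can also be enforced in tooling by blocking downstream use of an output until a named reviewer records the source used to substantiate it. A minimal, hypothetical sketch:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReviewedOutput:
    """An AI output paired with a human validation step (illustrative only)."""
    text: str
    source_checked: Optional[str] = None  # reputable source used to substantiate the claim
    reviewer: Optional[str] = None

    def approve(self, reviewer: str, source: str) -> None:
        self.reviewer = reviewer
        self.source_checked = source

    @property
    def usable(self) -> bool:
        # Outputs that cannot be validated against a source are never used.
        return self.reviewer is not None and self.source_checked is not None

draft = ReviewedOutput(text="Example claim produced by an AI model.")
assert not draft.usable  # blocked until a human review is recorded
draft.approve(reviewer="analyst_a", source="primary source document")
assert draft.usable
```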

Books & records requirements

What are your books and records requirements, and does the AI model allow for maintaining accurate and complete records? Which activities need to be recorded, and how do employees submit outputs in compliance with those requirements?
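
One way to support such recordkeeping is to append every prompt and output to a timestamped log. The sketch below is a minimal, hypothetical example using JSON Lines; the field names and retention details are placeholders and would need to follow your actual regulatory requirements.

```python
import json
from datetime import datetime, timezone

def record_ai_interaction(log_path: str, user: str, tool: str,
                          prompt: str, output: str) -> None:
    """Append one timestamped AI interaction to a JSON Lines record file."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "prompt": prompt,
        "output": output,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

record_ai_interaction("ai_records.jsonl", "analyst_a", "ChatGPT",
                      "Summarize this public filing.", "The filing reports ...")
```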
