Corruption News

Throwing Bodies at AML Compliance Doesn’t Work, But Are You Asking the Right Questions About AI?


As financial institutions turn to AI in an effort to mitigate their ongoing worker shortage, Art Mueller of WorkFusion has a few questions to help ensure they’re integrating this technology in the right way.

Editor’s note: The author of this article, Art Mueller, is vice president of financial crime for WorkFusion, an AI staffing firm in financial services.

The risks for money laundering and terrorist financing have never been greater. The U.S. Department of the Treasury recently released its 2024 National Money Laundering Risk Assessment, which found that financial criminals “are adapting to maximize profit from their criminal activities, including those related to check fraud, unlawful campaign finance, tax crime and Russian money laundering.”

If the bad actors are innovating on their old schemes, isn’t it time for compliance organizations within banks and financial institutions to do the same?

The hard truth is that organizations are struggling to find candidates to offset inadequate headcounts and ongoing departures. Staffing shortages have become business as usual in financial crime compliance, especially for Level One (L1) teams, where heavy workloads and repetitive, routine processes lead to burnout. According to research from Celent, 70% of banks and non-banking financial institutions face capacity challenges in their compliance operations, meaning that even departments considered adequately staffed face at least occasional capacity shortfalls.

And then, once you do fill an open role, you must onboard and train the new analyst and, in many cases, retrain, which can take months. That delay brings its own risks: backlogs, overworked staff, missed escalations and possible remediation efforts. The hiring churn creates more work for an already stretched team, which drives more people to leave because they feel they are on a treadmill that never stops.

Simply put, you can't throw bodies at this anti-money laundering (AML) compliance problem. Hiring more people, whether through outsourcing, offshoring or temporary workers, will not fix these foundational challenges. A more holistic approach is needed, one that allows financial institutions to manage risk more effectively, improve quality and break the same recruit-onboard-train-leave cycle that has been 20 years in the making.

Fortunately, artificial intelligence (AI) and automation tools are easily accessible. And now more than ever, regulators want financial institutions to innovate and look for new ways to perform old processes. The AML Act of 2020 made this abundantly clear, especially coupled with FinCEN’s innovation initiative and the release of the Wolfsberg Group’s principles on AI and machine learning (ML). 

Scalability

Banks frequently face a variety of factors that lead to shortfalls in compliance programs’ ability to meet workload demand — from staffing challenges to navigating black swan global events.

For example, when Russia invaded Ukraine, compliance teams were caught off guard by the near-instantaneous explosion in the number of sanctioned persons. Scaling was nearly impossible: recruiting and then training simply takes too long. When global events occur, outsourcers face a surge in demand; their prices skyrocket, and they are forced to recruit less-qualified people to perform the work. The quality of the work declines, and their bank clients lose the confidence they need to face auditors and regulators when the time comes to report results.

Additionally, many organizations look to their senior staff to fill capacity gaps. According to the Celent report, 38% of firms call on more senior personnel to pitch in. Smaller organizations are more likely to call on their compliance officers to help: 44% of organizations with under $50 billion in assets rely on senior personnel to close gaps.

By contrast, when leveraging AI, volumes simply do not matter. The technology can scale up and down based on demand — and attrition and volume spikes have almost zero impact on operations.

Quality

The combination of compliance and operational demands requires organizations to do more (e.g., review more alerts and provide more descriptive narratives) with less (the same or fewer number of employees to review alerts). This double whammy puts tremendous pressure on every banking and financial services company and its staff. Beyond the overwhelming volume of alerts, consistency and quality are also problems.

First is consistency and quality of approach, which in turn drives the consistency and quality of the work being done. An inconsistent approach produces technical errors, and sometimes substantive errors and missed escalations, especially when analysts are wading through a large number of false positives.

I often use the visual of the drinking-bird toy hitting the enter key. Most banks see a false-positive rate of around 99% for sanctions screening, whether against payment or referential data. Analysts become conditioned to the false positive: when everything looks like a false positive, you are left vulnerable to that rare true positive.
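To make that 99% figure concrete, here is a minimal, hypothetical sketch of why name screening fires so often: fuzzy string matching against a watchlist flags near-misses on unrelated parties. The watchlist names and the 0.8 threshold below are invented for illustration, not taken from any real screening tool.

```python
from difflib import SequenceMatcher

# Invented watchlist entries for the example
SANCTIONED = ["Ivan Petrov", "Global Trade Holdings Ltd"]

def screen(name, threshold=0.8):
    """Flag a hit when a payment party's name is close to a listed name."""
    hits = []
    for listed in SANCTIONED:
        score = SequenceMatcher(None, name.lower(), listed.lower()).ratio()
        if score >= threshold:
            hits.append((listed, round(score, 2)))
    return hits

# A near-miss on an unrelated customer still fires -- a false positive
print(screen("Ivana Petrova"))  # [('Ivan Petrov', 0.92)]
print(screen("John Smith"))     # []
```

Lower the threshold and the false-positive queue explodes; raise it and a transliterated true match slips through. That trade-off is what keeps L1 analysts hitting the enter key.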

When you use AI to remove much of the false-positive noise and the rote, mundane, redundant work, you can run a more effective program because your analysts are now focused on risk: risk management, risk mitigation and analysis. You also get a consistent approach tied to the AI and machine learning models, as opposed to every single analyst applying their own interpretation of the procedure. And human error is mitigated: a model won't misfile a case or accidentally mislabel it the way a person might.

Time to decisions

Customer experience is critical in banking. Customers demand access to their money when they want it. Whether it's real-time payments or wire transfers, speed of access to funds while ensuring compliance is essential. But what happens when a payment gets stuck in a filtering tool because of a hit? An analyst might not look at it for three hours because it's sitting in their review queue, so the payment is delayed for three hours. That's unacceptable for a customer who needs to make payroll, is waiting on a loan, etc.

AI can review those hits in real time, keeping payments moving and customers happy.

From a transaction monitoring perspective, AI can help you avoid the 30-day scramble. This is an all-too-common scenario: It is Day 29 of 30, and several of your transaction monitoring L1 analysts still have 25 to 50 alerts in their review queue. People are calling and saying, “Hey, it’s Hour 23 of Day 29. What’s the status?” So now, you must act. But how? Do you throw caution to the wind and push your analysts to fly through their reviews, risking poor quality scores and missed escalations? 

With AI and ML taking a first pass, scoring each alert and providing a recommendation, your team avoids the 30-day scramble, and analysts can spend their time on investigations rather than busywork, turning them from authors into editors. An added benefit is reaching a suspicious activity report (SAR) filing faster.
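As a thought experiment, a first-pass triage layer might look something like the sketch below. The risk factors, weights and thresholds here are invented for illustration; a real model would be trained on historical dispositions and validated through the MRM framework, not hand-coded.

```python
# Hypothetical first-pass triage: score each alert and attach a draft
# recommendation so the analyst reviews a disposition instead of
# starting from a blank page. Factors and weights are invented.
WEIGHTS = {
    "high_risk_country": 0.40,
    "structuring_pattern": 0.35,
    "new_customer": 0.15,
    "round_amounts": 0.10,
}

def triage(alert):
    score = sum(w for factor, w in WEIGHTS.items() if alert.get(factor))
    if score >= 0.6:
        action = "escalate"
    elif score >= 0.3:
        action = "analyst review"
    else:
        action = "recommend close"
    return {"score": round(score, 2), "recommendation": action}

queue = [
    {"id": "A-101", "high_risk_country": True, "structuring_pattern": True},
    {"id": "A-102", "round_amounts": True},
]
for alert in queue:
    print(alert["id"], triage(alert))
```

The analyst's job shifts to confirming or overriding the "escalate" and "recommend close" drafts, which is the author-to-editor shift described above.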

Getting started

For many organizations, the typical approach to AML compliance has been to throw bodies at the problem, but that is no longer an effective strategy. By leveraging AI and automation, you can build consistency in the analytical function of your team (the risk management and risk mitigation function) instead of having analysts grind through false positives and rote busywork. You elevate them to risk analysts and free them up to address the parts of the financial crime compliance program that need the most help.

However, while AI is a useful tool to augment your AML compliance program, it isn't magic. There are serious challenges and pitfalls to avoid, like navigating your organization's model risk management (MRM) framework. Many AI solutions are black-box models rather than open platforms where you can see under the hood how the AI reached its decisions. You need explainability within the model to ensure the AI is not missing something or making a wrong decision.

The first step before diving into AI is to develop a business case that asks hard questions, including:

  • Do I have the right use case?
  • What problem am I solving?
  • How do I address it with AI?
  • How am I proving it?
  • Do I have buy-in from senior managers and regulators?
  • How do I navigate the MRM framework? Does the MRM framework account for generative AI?
  • What are my ROI expectations?

Once those questions are answered, start small. Find one use case that will provide a fast return in value and one that can be deployed quickly. Then, get familiar with how you would approach AI and ML within your program. Look at how your MRM will approach it and how the regulators will approach it with your financial institution. You can address this by testing, documenting and going through your MRM framework to make sure the model has gone through the scrutiny needed to be deployed correctly. Start there and then build from that. Small wins will build up and create the transformation needed for a modern AML compliance program.


