Money laundering remains a pervasive global problem, one made only more challenging by the digitization of finance across the world. Criminals are becoming ever more sophisticated, using Artificial Intelligence (“AI”) to perpetrate massive frauds and schemes, ranging from synthetic identity fraud (AI-generated passports used for identity verification) to deepfakes (AI-generated voice impersonation) to disguising transactions so that they mimic legitimate activity. Against this more challenging landscape, the financial sector stands at the precipice of a technological revolution, in which AI is reshaping not only the types and modes of fraud and money laundering but also traditional processes, including those related to Anti-Money Laundering (“AML”) compliance. In this increasingly digital environment, financial institutions are beginning to explore and turn to AI to enhance the efficiency and effectiveness of their AML programs. However, this transition is not without its hurdles. The integration of AI in AML compliance presents a host of opportunities and challenges that institutions must navigate carefully to ensure regulatory compliance and ethical governance.
Definition of Artificial Intelligence and Current Regulatory Landscape
Like many other terms of art in the digital finance ecosystem, “artificial intelligence” does not have one single definition used consistently. “Artificial intelligence” encompasses a wide range of concepts and technologies, and its definition can vary depending on the context — technical, legal, academic, or practical. One common definition, used by the National Institute of Standards and Technology (“NIST”), defines “artificial intelligence” as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.”[1]
As to the use of AI in the financial services industry, several U.S. government agencies have adopted or proposed frameworks for AI; however, there is no comprehensive federal framework. In the AML world, in June 2024, FinCEN issued a proposed rule to strengthen and modernize AML/CFT programs.[2] One of the AML Act’s purposes is to “encourage financial institutions to modernize their AML/CFT programs where appropriate to responsibly innovate, while still managing illicit finance risks.”[3] While the rule is still in proposed status, it includes reference to the adoption of emerging technologies, such as “machine learning or artificial intelligence, that can allow for greater precision in assessing customer risk, improving efficiency of automated transaction monitoring systems by reducing false positives, or reducing overall costs and improving commercial viability with certain customer types and jurisdictions.”[4] The comment period ended in September 2024. Numerous comments were submitted, ranging from observations that the proposed rule does not indicate what is contemplated by the term “innovative,” or whether innovation is permissive or mandatory, to concerns about the high costs involved and the unproven nature of these technologies. One comment even went so far as to question the necessity of the proposed rule, arguing that financial institutions should be free to meet their compliance obligations in whatever manner satisfies the requirements of the rule, rather than being forced to take on the expense of AI and divert resources from the primary focus of the AML compliance program.
Among the most common impediments associated with current AML compliance tools and methodologies are speed, use of data, accuracy of results, high false-positive rates, and early detection — not to mention the regulatory buy-in required to implement AI programs in lieu of human review. Additionally, some of the best tools can be prohibitively expensive, especially for small- and medium-sized firms. With that said, some of the most common uses of AI that financial services firms currently employ in their AML systems include: (1) government and sanctions-list screenings; (2) adverse media screenings; (3) automated systems with bots that prepare write-ups and clear false positives automatically; (4) alert monitoring; (5) automated risk-rating models; (6) automated workflows; and (7) onboarding solutions to verify customers’ identities. Enter artificial intelligence.
Criminals Use AI for Nefarious Purposes and Increasingly Sophisticated Schemes
On the flipside of using AI in AML compliance, one of the most popular uses of AI by bad actors is the creation of “deepfakes” to commit fraud — ranging from celebrity impersonation, to voice clones that bypass voice-based authentication methods, to a fake video meeting impersonating high-level executives that resulted in a $25 million scam.[5] The technological capabilities of AI allow for the creation of realistic deepfakes that are increasingly difficult to distinguish from real events and real people.
Deepfakes can manufacture what appear to be real events, such as a person doing or saying something they did not actually do or say, whether through audio, video, or both. There is also increasing new-account fraud and account takeover activity, in which criminals create synthetic identification, such as an AI-generated photo of an individual holding an AI-generated ID, to bypass biometric-verification checks, open new accounts, or take over a customer’s existing account, and then steal funds, launder funds, or engage in manipulative trading. Criminals have also programmed sophisticated bots to rapidly attempt to open multiple fraudulent accounts. AI has upped the game in more traditional scams and schemes, including business e-mail compromises, ransomware attacks, imposter scams, investment club scams, new account fraud, account takeovers, and market manipulation.[6]
Opportunities: Transforming AML Compliance Through AI
Artificial intelligence offers a number of opportunities and positive-use cases that can help address shortcomings in the current AML compliance environment. For example, AI has the ability to review and summarize unstructured data, such as information from different sources on the same or related topics, in a more efficient and usable way. AI can act as a “filter” on human output by checking for potential errors or biases, and can be leveraged to identify fraud. One such use is combatting check fraud, which has become more prevalent in the banking industry over the last several years. Another is transaction monitoring in the broker-dealer space. AI’s use of data and machine learning, coupled with its speed, can dramatically reduce incidents of fraud and catch it much faster, theoretically without the need for human resource capital.[7]
Additional benefits include enhanced detection capabilities, given AI’s ability to analyze large volumes of data across multiple systems to identify complex patterns and suspicious activities that would be challenging or impossible to find through traditional methods or human review. AI improves efficiency and reduces post-implementation costs through automation, allowing financial institutions to reallocate human capital more effectively. AI also has the ability to reduce the false positives ever present in traditional monitoring systems, which are extraordinarily time- and resource-intensive.
Advanced Analytics
AI technologies, particularly machine learning (“ML”), offer advanced analytical capabilities that far surpass traditional rule-based systems. ML models can sift through vast datasets to detect patterns and anomalies indicative of suspicious activities. These systems can continuously learn and adapt based on new data, allowing for more dynamic and responsive monitoring. For example, AI can analyze customer behavior over time and flag deviations that may signal money laundering, such as sudden increases in transaction volume or unusual geographic patterns. These capabilities help institutions identify threats that would otherwise go unnoticed using traditionally static rules and thresholds. This advanced analysis improves the effectiveness of AML efforts and helps in the early detection and prevention of financial crimes.
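To make the behavioral-baseline idea concrete, the following is a deliberately simplified, hypothetical sketch in Python. A real ML system would learn far richer patterns; here, a sudden spike in a customer’s transaction volume is flagged with a simple statistical deviation test. The function name, the sample figures, and the three-standard-deviation threshold are all illustrative assumptions, not any particular vendor’s method.

```python
from statistics import mean, stdev

def flag_volume_anomalies(history, current, threshold=3.0):
    """Flag a customer's current transaction volume as anomalous when it
    deviates more than `threshold` standard deviations from that customer's
    own history. A toy stand-in for a learned behavioral baseline."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return current != mu
    z = (current - mu) / sigma
    return abs(z) > threshold

# A customer whose monthly wire volume hovers near $10,000 suddenly moves $80,000.
baseline = [9_500, 10_200, 9_800, 10_400, 10_100, 9_900]
print(flag_volume_anomalies(baseline, 80_000))  # True: the spike is flagged
print(flag_volume_anomalies(baseline, 10_300))  # False: normal behavior passes
```

Unlike a static rule (“flag everything over $X”), the threshold here adapts to each customer’s own history, which is the core of the dynamic monitoring described above.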
Moreover, by automating the analysis process, AI reduces the manual (human) effort required for risk assessment. This automation improves efficiency, saves time, and reduces costs associated with manual reviews and investigations. Financial institutions can allocate their resources more effectively, focusing on higher-risk cases that require human expertise.
Limiting False Positives
A longstanding issue in AML compliance is the high volume of false positives generated by traditional systems. False positives stretch compliance teams and undermine the effectiveness of AML investigations. AI, however, can significantly reduce false positives by incorporating contextual data and learning from historical outcomes. Specialized techniques such as natural language processing (“NLP”) and predictive analytics allow AI to prioritize alerts by risk level and predict the probability of a true positive. This targeted approach allows compliance officers to allocate their resources more efficiently and focus on actual risks.
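The prioritization step can be sketched as follows. This is a minimal, hypothetical illustration: in practice a trained statistical model would estimate the probability that an alert is a true positive, whereas here a handful of made-up contextual signals and weights stand in for that model so the queue-ordering idea is visible.

```python
def score_alert(alert, weights=None):
    """Toy risk score combining contextual signals. The signal names and
    weights are illustrative; a production system would learn them from
    historical alert dispositions."""
    weights = weights or {
        "high_risk_jurisdiction": 0.35,
        "structuring_pattern": 0.30,
        "adverse_media_hit": 0.25,
        "new_account": 0.10,
    }
    return sum(w for key, w in weights.items() if alert.get(key))

def prioritize(alerts):
    """Order the queue so the likeliest true positives are reviewed first."""
    return sorted(alerts, key=score_alert, reverse=True)

queue = [
    {"id": "A1", "new_account": True},
    {"id": "A2", "high_risk_jurisdiction": True, "structuring_pattern": True},
    {"id": "A3", "adverse_media_hit": True},
]
print([a["id"] for a in prioritize(queue)])  # → ['A2', 'A3', 'A1']
```

The payoff is operational rather than algorithmic: analysts work the top of a ranked queue instead of a first-in, first-out backlog dominated by false positives.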
Monitoring Transactions in Real-Time
AI enables real-time surveillance of transactions, a critical capability in combating increasingly sophisticated money laundering techniques. Traditional compliance systems may rely on batch processing, which can delay the detection of illicit activity; however, AI-powered platforms can analyze transactions as they occur and trigger immediate alerts for further investigation. The speed of AI cannot be overstated here. This capability not only improves the timeliness of responses but also enhances the institution’s ability to prevent the movement of illicit funds before they exit the financial system or are moved through financial institutions.
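The difference from batch processing can be illustrated with a short sketch: each transaction is evaluated the moment it arrives, and an alert is emitted immediately rather than at an end-of-day run. The rule itself, the $10,000 threshold, and the jurisdiction codes "XX"/"YY" are placeholder assumptions for illustration only.

```python
def monitor_stream(transactions, limit=10_000):
    """Evaluate each transaction as it arrives (rather than in an
    end-of-day batch), yielding an alert the moment a rule trips."""
    for txn in transactions:
        if txn["amount"] > limit or txn.get("country") in {"XX", "YY"}:
            yield {"txn_id": txn["id"], "reason": "threshold/jurisdiction"}

stream = [
    {"id": 1, "amount": 500},
    {"id": 2, "amount": 25_000},                # flagged immediately
    {"id": 3, "amount": 900, "country": "XX"},  # flagged on jurisdiction
]
alerts = list(monitor_stream(stream))
print([a["txn_id"] for a in alerts])  # → [2, 3]
```

Because the generator yields as it consumes, a flagged transaction can be held or escalated before funds leave the institution — the timing advantage the paragraph above describes.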
Efficiency and Cost Savings
AI-driven automation can streamline labor-intensive AML compliance processes such as Know Your Customer (“KYC”), customer identification, customer due diligence, and economic sanctions screening. Robotic-process automation can handle repetitive tasks like data collection and validation, while AI is able to manage more complex functions such as risk scoring and case prioritization. This integration of AI has the potential to materially reduce the burden on AML compliance teams, hasten processing times, and decrease operational costs, all while maintaining or improving compliance outcomes.
Scalability and Adaptability
Finally, as financial institutions grow and evolve, so too does the creativity of criminal actors and the complexity and volume of transactions financial institutions must monitor. AI systems can be developed to enhance the scalability needed to review and analyze larger data sets without sacrificing performance. AI models can also be retrained or fine-tuned much faster than current systems or compliance staff in order to accommodate changes in regulatory requirements, emerging money laundering typologies, and the increased velocity of criminal elements, ensuring continued effectiveness.
Challenges: Navigating the Complexities of AI in AML
While the possible use cases and opportunities are compelling, there are a number of material risks that financial institutions need to consider when looking at adopting AI, generally, and specifically, for compliance purposes. The level of scrutiny of each of these risks will naturally depend on the type, size, and complexity of the financial institution and the financial products, services, and activities in which such financial institution engages.
Regulatory Expectations
One of the most significant barriers to the widespread adoption of AI in AML compliance is the lack of transparency in how AI models make decisions. Many advanced AI models, particularly deep-learning systems, seemingly operate as “black boxes,” making it difficult to explain their decisions and outputs. This is especially important given that regulators expect financial institutions to justify and explain compliance decisions, especially the filing (or non-filing) of Suspicious Activity Reports (“SARs”) and the investigation (or non-investigation) of red flags in transaction monitoring. As such, there is a growing emphasis on “explainable AI,” which aims to make AI decisions more transparent and understandable, but there may be a long way to go.
To make this challenge even greater, the regulations implementing anti-money laundering statutes have traditionally been technology-neutral, which creates ambiguity around the use of AI. Many jurisdictions, including the U.S., have not yet provided actionable guidance on how AI should be validated, audited, or governed in the context of compliance. Unfortunately, this regulatory uncertainty may deter institutions from investing in AI solutions, particularly given the potential for enforcement action if systems are deemed noncompliant. At least some regulators have asked for input on modernizing rules. For example, in April 2025, FINRA published Regulatory Notice 25-7, requesting comment on “Modernizing FINRA Rules.” FINRA acknowledges that criminals are employing increasingly sophisticated tactics using technology and AI, making it increasingly difficult for financial services firms to keep up with the ever-changing landscape of scams and schemes. The comment period was initially set to expire on June 13, 2025, but was extended to July 14, 2025.[8] The Financial Crimes Enforcement Network also proposed rules “to strengthen and modernize financial institutions’ anti-money laundering and countering the financing of terrorism (“AML/CFT”) programs.”[9] The proposed rule would amend FinCEN regulations “to explicitly require that such programs be effective, risk-based, and reasonably designed, enabling financial institutions to focus their resources and attention in a manner consistent with their risk profiles.”[10]
Data Quality and Model Governance
Since AI models are only as good as the data they are trained on, if the data contains historical biases or lacks diversity, the resulting models may perpetuate or even amplify discriminatory outcomes. For instance, if an AI system is trained predominantly on data from specific geographic regions or customer segments, it may unfairly target certain groups for enhanced scrutiny. To mitigate these risks, institutions should implement fairness testing, maintain diverse and representative datasets, and establish oversight mechanisms to detect and address bias in model outputs. However, this is easier said than done, especially in this environment of so much regulatory change.
Another significant challenge relates to data quality and model governance. As mentioned above, AI models rely on the data on which they are trained; so effective AI models require high-quality, structured, and comprehensive data. Many financial institutions struggle with fragmented data architectures, legacy systems, and inconsistent data standards. These data input issues can hinder model performance and reduce the reliability of AI-generated decisions. Investments in data governance, integration platforms, and data cleansing processes are essential prerequisites for successful AI implementation, again increasing the cost to financial institutions.
Financial institutions should also ensure that data privacy and security standards are upheld, particularly when handling sensitive customer information, and particularly when leveraging third parties to provide the technology and support for compliance. Therefore, AI models, like any other financial tools, must be subject to rigorous governance and standards review throughout their lifecycle and throughout the third-party relationship. This includes development, validation, deployment, monitoring, and eventual retirement. Financial institution compliance staff should document all aspects of model design and decision-making logic to facilitate internal and external audits and regulatory reviews. Moreover, AI models must be regularly tested and recalibrated to account for changes in customer behavior, emerging threats, and regulatory updates. Establishing dedicated model risk management frameworks and/or dedicated third-party risk management frameworks when using vendors is critical to ensuring accountability and oversight.
Data Security
Another challenge with the use of AI is ensuring data security. Financial institutions need to protect stakeholders (including firms, consumers, and end users) from data breaches and data manipulation. AI protocols and systems, as well as third-party relationships providing such tools, can introduce new vulnerabilities that malicious actors can exploit. For example, adversarial attacks can manipulate data inputs to AI models to evade detection or generate misleading outputs. Additionally, the aggregation and analysis of large datasets increase the risk of data breaches and cyber intrusions. Institutions must integrate cybersecurity considerations into their AI strategies and adopt advanced defenses, including anomaly detection and threat intelligence systems, to safeguard their AI infrastructure.
Striking the Right Balance
In order to maximize the benefits of AI while minimizing its risks, financial institutions should develop a comprehensive approach relying on sound governance, transparency, and collaboration. Financial institutions may need to start with the basics. First, gain a better understanding of what AI is and how it is already transforming finance. Then, assemble the right talent, whether internal, external, or both, to guide the institution through the appropriate AI risk-identification exercises and the mitigation options that AI tools may be able to offer. This includes review of cybersecurity expectations and the constantly evolving regulatory expectations around the use of AI, and involving key stakeholders across the firm — IT, risk, compliance, legal, and firm leadership. Finally, financial institutions will need to assess what level of AI adoption makes sense for them based on their size, complexity, and risk profile. Some best practices to consider:[11]
- Establish a cross-functional committee to oversee AI governance.
- Track regulatory developments in key jurisdictions.
- Identify high-risk applications and create mitigation plans.
- Conduct cybersecurity due diligence on vendors.
- Obtain and maintain client consent for data use.
- Review contracts to clarify data use and limits.
- Train staff on cybersecurity and fraud mitigation.
- Train employees to recognize deepfakes and AI-driven scams.
Conclusion
AI has the potential to revolutionize AML compliance by enhancing detection, improving efficiency, and adapting to evolving threats. However, its integration must be managed with care, ensuring transparency, fairness, and regulatory alignment. Financial institutions that successfully harness AI while addressing its inherent challenges will be better equipped to combat financial crime and safeguard the integrity of the global financial system. By embracing a forward-looking, responsible innovation strategy, financial institutions can satisfy current compliance demands and also position themselves for future resilience and competitiveness in an increasingly digital financial world.
References
- NIST AI Risk Management Framework 1.0, available at https://www.nist.gov/itl/ai-risk-management-framework.
- FinCEN Proposed Rule to Strengthen and Modernize Financial Institutions’ AML/CFT Programs, June 28, 2024.
- Id.
- Anti-Money Laundering and Countering the Financing of Terrorism Programs, Proposed Rule by the Financial Crimes Enforcement Network, July 3, 2024.
- Finance worker pays out $25 million after video call with deepfake ‘chief financial officer,’ available at https://www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hongkong-intl-hnk.
- 2025 FINRA Annual Regulatory Oversight Report, January 28, 2025; Customer Advisory: Criminals Increasing Use of Generative AI to Commit Fraud.
- Speech: Artificial Intelligence in the Financial System, Governor Michelle W. Bowman at the 27th Annual Symposium on Building the Financial System of the 21st Century: An Agenda for Japan and the United States (Washington, D.C.), November 22, 2024.
- https://www.finra.org/rules-guidance/notices/25-07.
- https://www.fincen.gov/news/news-releases/fincen-issues-proposed-rule-strengthen-and-modernize-financial-institutions.
- Id.
- Guidelines on the Use of Generative AI Tools by Professionals from the American Bar Association; DOJ Updates Guidance on Corporate Compliance Programs to Include AI Risk Management; FINRA Issues Brief Reminder on Managing Generative AI Risk in Supervisory Tools.
Reprinted by permission


