FIS, a financial technology company, has teamed up with Anthropic to launch the Financial Crimes AI Agent. The tool combines Anthropic's Claude reasoning model with FIS's banking data and regulatory infrastructure. BMO and Amalgamated Bank are among the first institutions to pilot the agent, which aims to cut through the manual labor that dominates anti-money-laundering operations.
Why banks need a smarter AML tool
US financial institutions spend between $35 billion and $40 billion every year on anti-money-laundering efforts. The United Nations estimates roughly $2 trillion in illicit funds move through the global financial system annually. Investigators currently spend most of their time pulling evidence from disconnected systems before they can even begin analysis. Emerging US regulation is pushing banks to shift resources toward the highest-risk threats, making efficiency a priority.
On April 8, the Treasury proposed rules that would treat permitted payment stablecoin issuers as financial institutions under the Bank Secrecy Act. That would require those issuers to implement AML programs and file suspicious activity reports. The new agent is designed to help banks handle that kind of regulatory load.
How the agent works
The Financial Crimes AI Agent pulls evidence from a bank's core systems. It evaluates activity against known money-laundering typologies and surfaces the highest-risk cases for investigator review. Human investigators keep final sign-off on every decision the agent makes. All client data stays inside FIS-managed systems, and every step the agent takes is auditable.
The idea is to let machines handle the grunt work—gathering data, checking patterns, flagging anomalies—while humans focus on the nuanced judgment calls. That division of labor could shrink the time from suspicious-activity detection to investigation.
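The workflow described above can be sketched in code. This is a minimal, hypothetical illustration of the triage pattern only: the typology names, thresholds, and data fields below are invented for clarity and do not reflect FIS's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Case:
    """A suspicious-activity case assembled from a bank's core systems."""
    case_id: str
    evidence: dict                 # gathered automatically by the agent
    risk_score: float = 0.0
    needs_human_review: bool = False

# Toy typology checks; a real system would use far richer signals.
TYPOLOGIES = {
    "structuring": lambda ev: ev.get("cash_deposits_under_10k", 0) >= 3,
    "rapid_movement": lambda ev: ev.get("in_out_within_hours", False),
}

def score_case(case: Case, review_threshold: float = 0.5) -> Case:
    """Score evidence against each typology and flag high-risk cases.

    The agent only gathers and scores; a human investigator keeps
    final sign-off on any case it surfaces.
    """
    hits = [name for name, check in TYPOLOGIES.items() if check(case.evidence)]
    case.risk_score = len(hits) / len(TYPOLOGIES)
    case.needs_human_review = case.risk_score >= review_threshold
    return case
```

In this division of labor, the machine's output is a ranked queue of flagged cases, not a filed report: `needs_human_review` is a routing signal, and the investigator makes the actual suspicious-activity determination.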
First pilots and future plans
BMO and Amalgamated Bank are the initial testers. General availability of the agent is planned for the second half of 2026. FIS has also outlined a roadmap that extends beyond anti-money-laundering: future versions could make decisions on credit, deposit retention, customer onboarding, and fraud detection.
The choice of Anthropic's Claude model matters. Claude is known for its safety features and explainability, which align with banking's need for auditable AI. By keeping data inside FIS's infrastructure, the system addresses the privacy and compliance concerns that often keep banks from adopting AI more aggressively.
The Treasury's stablecoin rule proposal adds another layer of urgency. If finalized, it would bring a new class of financial institutions under the Bank Secrecy Act, expanding the scope of AML work. The FIS-Anthropic agent could become a tool those new issuers use to meet their obligations.
For now, the question is how quickly banks will adopt an AI that takes over evidence-gathering but leaves the final call to humans. The 2026 release date gives FIS time to refine the agent as pilot results come in.