With cross-border transactions projected to reach US$290 trillion by 2030, the global financial system is undergoing a seismic transformation. At the center of this shift is artificial intelligence (AI), a force that is both revolutionizing efficiency and reshaping the risk landscape in unprecedented ways.
The third and final part of Convera’s Payments Pulse report examines this dual reality. On one hand, AI is fueling industrialized fraud and cybercrime. On the other hand, it’s powering the most advanced defenses seen against threat actors to date.
For organizations engaged in international payments, opportunity and risk have never been more tightly intertwined.
Download all three parts of The Payments Pulse now
Emerging AI security threats
The rapid digitization triggered by the COVID-19 pandemic created fertile ground for innovation. Unfortunately, it also unleashed a surge in cybercrime. According to Nasdaq, the financial industry lost $485 billion to fraud in 2023, while $3.1 trillion in illicit funds moved through the global system.
Today, more than half of fraud involves AI, marking a staggering shift that has effectively enabled crime at scale. Criminals now use generative AI to craft highly convincing scams, automate the creation of synthetic identities, and even deploy deepfakes that can fool employees into transferring millions.
On a recent episode of Convera’s Converge podcast, Sammy Chowdhury, co-founder of Prescient Security, emphasized the need to train employees on the latest security technology and protocols, especially as phishing scams become more sophisticated and weaponized by AI.
“Phishing is important. A phishing test (for employees) followed by training for those who are failing — that would be super important in any organization,” Chowdhury says. “That’s how the bad guys are coming in half the time.”
Key types of AI-driven financial scams
In this rapidly evolving environment, which AI-driven fraud threats are proving the most dangerous?
- Deepfakes: Cases like the $25 million Hong Kong heist highlight how realistic video conferencing scams can manipulate trust in real time.
- Voice cloning: Nearly 28% of UK adults were targeted by voice-based scams in 2024, yet nearly half were unaware the threat existed.
- Synthetic identities: AI-enabled persona creation surged nearly 200% between 2024 and 2025, with fake documents and biometrics able to bypass legacy Know Your Customer (KYC) and Anti-Money Laundering (AML) checks.
In all these cases, it’s AI that makes modern cybercriminals so successful. The threat is no longer lone individuals wreaking havoc; it’s organizations that leverage AI to automate and scale attacks, create highly convincing scams, and defraud businesses with unprecedented precision.
“Cybercrime organizations operate like any other modern enterprise, complete with management hierarchies, budgets, and efficiency targets,” says Sara Madden, VP, Chief Information Security Officer at Convera. “They use the same productivity tools, cloud platforms, and AI-driven automations that legitimate businesses rely on, but their end goal is completely nefarious. It shows that innovation can be a positive or negative force depending on who holds the reins.”
Current trends for AI in risk management
Cross-border payment fraud impacts both businesses and consumers. While cross-border transactions represent only 11% of total card transactions, they account for 71% of card payment fraud by value.
Legacy fraud and compliance systems rely on static rules that flag transactions above a certain threshold or block accounts with suspicious activity. With criminals leveraging AI to develop more sophisticated strategies, these traditional, manually driven countermeasures can no longer keep pace. Let's examine some of the key challenges that traditional approaches to fraud and AML management face; the short sketch after the list below illustrates the difference between a static rule and a model-based score.
Top limitations of legacy fraud and compliance systems
- False positives: Over 95% of AML alerts are false alarms, wasting analyst time and frustrating customers.
- Manual bottlenecks: Human reviews are slow, costly, and prone to error.
- AI detection gap: Traditional systems struggle to identify synthetic identities or AI-generated content reliably.
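To make the contrast concrete, here is a minimal, illustrative sketch in Python. The feature names, thresholds, and data are hypothetical, and the libraries (NumPy and scikit-learn) are simply common choices, not a reference to any particular vendor's system. It places a static amount-threshold rule next to an unsupervised anomaly score of the kind modern systems layer on top.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical transaction features: amount (USD), hour of day, destination risk score
rng = np.random.default_rng(42)
transactions = np.column_stack([
    rng.lognormal(mean=8, sigma=1, size=1000),  # amount
    rng.integers(0, 24, size=1000),             # hour of day
    rng.uniform(0, 1, size=1000),               # destination risk score
])

# Legacy approach: a static rule flags anything above a fixed amount threshold.
static_flags = transactions[:, 0] > 50_000

# Model-based approach: an anomaly model scores each transaction against learned
# behavior, so unusual combinations of amount, timing, and destination stand out
# even when no single rule is violated.
model = IsolationForest(random_state=42).fit(transactions)
anomaly_scores = -model.score_samples(transactions)  # higher = more anomalous

print(f"Transactions flagged by the static rule: {static_flags.sum()}")
print(f"Highest anomaly score: {anomaly_scores.max():.3f}")
```

The point is not the specific model but the shift from fixed thresholds to behavior-based scoring that adapts as fraud patterns change.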
Fortunately, AI is enhancing the defense side of the equation. Now, 90% of financial institutions use AI for fraud detection, and the global market for AI-driven fraud prevention is projected to grow from $15.6 billion in 2025 to nearly $120 billion by 2034.
Key AI innovations in risk management
- Federated learning: Banks collaborate to improve fraud detection without sharing raw customer data. SWIFT and Google Cloud’s initiative with 12 global banks in 2025 is a leading example.
- Precision monitoring: AI can reduce false positives by up to 40%, freeing compliance teams to focus on real threats.
- Real-time detection: Machine learning models analyze international transactions instantly, identifying layering, structuring, and other laundering patterns as they happen.
- Advanced modeling: Techniques like Temporal Graph Neural Networks (TGNNs) achieve fraud detection accuracies above 99%, identifying complex fraud schemes across borders.
- Explainable AI (XAI): Tools like LIME and SHAP can add transparency to AI decisions, satisfying regulators while maintaining trust (see the sketch after this list).
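As one illustration of how model-based scoring and explainability fit together, the sketch below trains a simple fraud classifier on invented data and uses SHAP to attribute each score to individual features. It assumes scikit-learn and the open-source shap package; the features, data, and model choice are hypothetical examples, not a description of any institution's production system.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical labeled transactions: amount, hour of day, destination risk, new-beneficiary flag
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))
y = (X[:, 0] + 2 * X[:, 3] + rng.normal(scale=0.5, size=2000) > 1.5).astype(int)

# Train a simple fraud classifier and score five incoming transactions
model = GradientBoostingClassifier(random_state=0).fit(X, y)
scores = model.predict_proba(X[:5])[:, 1]

# SHAP attributes each score to the features that drove it (in the model's log-odds
# space), giving analysts and regulators a per-transaction explanation rather than
# a black-box verdict.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:5])

for i, score in enumerate(scores):
    print(f"txn {i}: fraud probability {score:.2f}, feature contributions {np.round(contributions[i], 2)}")
```

Per-transaction attributions like these are what make it possible to document why a payment was held, not just that it was.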
Additionally, digital currencies such as stablecoins can offer businesses further protection when AI-powered fraud detection is built on top of blockchain rails: transactions are monitored in real time, mule activity and anomalous behavior are detected as they emerge, and identities are authenticated using biometrics and behavioral data.
A responsible AI approach to risk management
The future of cross-border payments will be defined by those who master AI not just for speed and scale, but for security and trust. Fraudsters have already industrialized their tactics. Financial institutions and businesses must do the same with their defenses.
“At Convera, we build AI that’s ethical, explainable, and aligned with global standards,” says Dharmesh Syal, Convera’s Chief Technology Officer. “Our responsible AI framework combines interpretable models with advanced techniques such as XGBoost and Random Forest to ensure fairness, transparency, and human oversight, especially in fraud detection. This layered approach, combined with rigorous validation, means we’re reducing false positives while building trust in every cross-border transaction.”
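One common way to combine an interpretable model with an ensemble such as a Random Forest is stacking: the ensemble captures non-linear fraud patterns, while a logistic regression on top keeps the final decision readable to humans. The sketch below is a generic, hypothetical illustration of that pattern using scikit-learn on invented data; it is not a description of Convera's framework.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression

# Hypothetical transaction features and fraud labels
rng = np.random.default_rng(1)
X = rng.normal(size=(3000, 5))
y = (X[:, 0] - X[:, 2] + rng.normal(scale=0.7, size=3000) > 1.0).astype(int)

# Layer 1: a Random Forest learns non-linear fraud patterns.
# Layer 2: a logistic regression combines the forest's score with the raw features
# (passthrough=True), so its coefficients show what drives the final decision.
stack = StackingClassifier(
    estimators=[("forest", RandomForestClassifier(n_estimators=200, random_state=1))],
    final_estimator=LogisticRegression(max_iter=1000),
    passthrough=True,
)
stack.fit(X, y)

print("final-layer coefficients:", np.round(stack.final_estimator_.coef_[0], 2))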
In a breakthrough for efficiency, Convera’s machine learning models are projected to reduce false positives by up to 50% in compliance payment screening.
Ultimately, those who invest in responsible, AI-driven risk management will not only reduce losses but also build the trust needed to thrive in a digitized, borderless economy.
Download all three parts of The Payments Pulse to learn about the changing state of the cross-border payments ecosystem in its quest for standardization and efficiency.