Combating financial fraud has emerged as one of the most promising applications of artificial intelligence (AI) and machine learning (ML). These systems can detect suspicious patterns of account behavior in real time and stop data breaches and fraud in their tracks, long before money is spirited from accounts and banks’ reputations are tarnished.
This does not mean that banks’ age-old fight against fraud and theft has been won, however — far from it. Just as banks’ defenses have evolved with the digital age, so too have those of fraudsters — and this includes the deployment of AI for their own nefarious purposes.
The Financial Fraud Prevention Playbook, a PYMNTS and Featurespace collaboration, seeks to offer banks a roadmap to navigate the evolving risks they face. The playbook features the latest research on the sophisticated fraud threats facing financial institutions (FIs) as commerce and banking become increasingly digital and global. It also explores how banks can face these challenges head-on and offers the perspectives of leading FIs that are on the digital security frontlines.
FIs face a familiar challenge in balancing the need to protect their customers’ accounts against the need to let legitimate transactions through — in other words, limiting false positives. This has brought a new dimension of AI to the fore: behavioral analytics, which uses AI-driven analysis to aggregate, sort and review a broad range of cross-channel, historical and current customer behaviors to develop clear, real-time portraits of transactional risk. Under a behavioral analytics model, stolen account numbers or log-in credentials would not suffice to perpetrate an attack because other abnormal aspects of account activity would immediately be recognized.
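The core idea can be illustrated with a toy example. The sketch below — a hypothetical illustration, not any vendor's actual method — scores a transaction by how far its amount deviates from a customer's historical baseline; production behavioral analytics engines combine many cross-channel signals (device, location, timing, velocity), not just one.

```python
from statistics import mean, stdev

def risk_score(history: list[float], amount: float) -> float:
    """Z-score of a new transaction amount against the customer's
    historical spending. Higher means more anomalous."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return 0.0  # no variation in history; cannot score deviation
    return abs(amount - mu) / sigma

def is_suspicious(history: list[float], amount: float,
                  threshold: float = 3.0) -> bool:
    """Flag transactions more than `threshold` standard deviations
    from the customer's norm; the threshold is a tunable trade-off
    between catching fraud and limiting false positives."""
    return risk_score(history, amount) > threshold

# A customer's recent card spend (illustrative data)
history = [42.0, 55.0, 48.0, 60.0, 51.0]

print(is_suspicious(history, 54.0))   # ordinary purchase -> False
print(is_suspicious(history, 950.0))  # abnormal amount   -> True
```

Lowering the threshold catches more fraud but blocks more legitimate customers — exactly the false-positive balance the passage describes.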
The need for more sophisticated, behavior-based cybersecurity defenses has grown as the type and scope of fraud attacks have escalated. These include account takeovers (ATOs), in which criminals use harvested data to make unauthorized purchases or transfer funds; application fraud, which involves the creation of fake identities to gain access to credit or funds through an FI; and authorized push payment (APP) fraud, in which victims are pressured or intimidated into authorizing payments to a fraudster.
Fraudsters themselves have taken to using AI to better impersonate legitimate account holders.
“Fraudsters target the personal data of banking customers by impersonation or by using stolen account information,” said Beate Zwijnenberg, chief information security officer at ING, in a recent interview with PYMNTS for one of the Playbook’s case studies. “Another way to impersonate the bank or customers is using the power of AI, with which hackers are able to create programs that mimic known human behaviors.”
“Sizable, distributed denial-of-service attacks used to be measured in gigabytes and are now measured in terabytes. Information losses were measured in hundreds and thousands of customer accounts being accessed to losses that are now counted in fractions of a billion. Fraud attempts have grown from thousands and millions of dollars to nearly a billion,” Zelvin told PYMNTS in another recent interview.
To learn more about the rising fraud risks facing FIs and how they can deploy AI in new ways to counter it, download the playbook.