04/13/2026 | Press release | Distributed by Public on 04/13/2026 02:12
AI is now on both sides of the wire. The banks that win will run systems capable of explaining themselves.
The opinions expressed here are those of the authors. They do not necessarily reflect the views or positions of UK Finance or its members.
Fraud teams feel the strain. Criminals automate and move quickly. Defenders inherit years of layered controls, separate tools, and manual workarounds. AI has cut the time it takes to design and launch sophisticated attacks, which now scale in volume and target the smallest of vulnerabilities. The risk sits in that gap.
The new speed of attack
Threat actors move faster now because they can. They have no approval chains or compliance sign-off. They can test an idea, refine it, and relaunch before a bank has closed its morning meeting. Meanwhile, a bank may need weeks to adjust a rule set and push it into production while balancing its response against customer experience and legal compliance. Generative tools are producing flawless phishing content. AI voice models are powering call-centre scams. Malware evolves dynamically to obscure its own signature. The result is fraud that behaves more like software development than crime: agile, iterative, and continuous. Defenders, meanwhile, face these attacks with systems built around static data and tools that do not always share information seamlessly.
Detection that scales with the threat
Fraud operations rarely have the luxury of expanding at the same rate as their workloads. When a confirmed case reveals a particular pattern - remote-access manipulation, for example - the same logic should automatically update every related customer journey. That ability to reuse verified knowledge turns defence into a shared memory, rather than a collection of isolated playbooks. Scale starts serving the defenders rather than overwhelming them.
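As a minimal sketch of that "shared memory" idea, a rule confirmed in one journey can be published to a common registry that every subscribed journey observes. The class, method names, and event fields below are illustrative assumptions, not any institution's actual architecture:

```python
class RuleRegistry:
    """Shared store of fraud rules: a pattern confirmed in one customer
    journey becomes immediately visible to every subscribed journey."""

    def __init__(self):
        self.rules = {}        # rule name -> predicate over an event dict
        self.journeys = []     # callbacks notified when a rule is published

    def subscribe(self, journey_callback):
        """Register a customer journey to be told about new rules."""
        self.journeys.append(journey_callback)

    def publish(self, name, predicate):
        """Add a verified rule and notify all subscribed journeys."""
        self.rules[name] = predicate
        for notify in self.journeys:
            notify(name)

    def evaluate(self, event):
        """Return the names of all rules the event triggers."""
        return [name for name, pred in self.rules.items() if pred(event)]
```

In this sketch, publishing a "remote-access manipulation" rule once makes it apply everywhere at the same moment, rather than being copied by hand into each playbook.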
Continuous understanding, not snapshots
Most tools still assess a moment in isolation: one user, one device, one transaction. The real story sits across time and transaction flows. AI can help maintain a continuous picture of what is typical for each account and notice when behaviour starts to diverge. That constant context enables earlier detection - before the anomaly becomes a loss event.
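A toy sketch of a per-account behavioural baseline shows the shape of the idea. The thresholds, the z-score test, and the use of Welford's running-statistics algorithm here are illustrative assumptions; real systems model far richer behaviour than transaction amounts:

```python
from collections import defaultdict
import math

class AccountBaseline:
    """Keeps a running mean/variance of transaction amounts per account
    (Welford's algorithm) and flags amounts that diverge sharply."""

    def __init__(self, z_threshold=3.0, min_samples=10):
        self.z_threshold = z_threshold
        self.min_samples = min_samples
        self.stats = defaultdict(lambda: [0, 0.0, 0.0])  # count, mean, M2

    def observe(self, account, amount):
        """Update the account's baseline; return True if the amount is
        anomalous relative to what is typical for that account."""
        n, mean, m2 = self.stats[account]
        anomalous = False
        if n >= self.min_samples:
            std = math.sqrt(m2 / (n - 1))
            if std > 0 and abs(amount - mean) / std > self.z_threshold:
                anomalous = True
        # Welford update: fold the new observation into the baseline.
        n += 1
        delta = amount - mean
        mean += delta / n
        m2 += delta * (amount - mean)
        self.stats[account] = [n, mean, m2]
        return anomalous
```

The point of the sketch is the continuity: every transaction both feeds the picture and is judged against it, so divergence surfaces as it begins rather than at the next batch review.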
Replacing probability with evidence
Risk scores have long felt sophisticated, but they are guesses wrapped in statistics. They rarely explain why something looks wrong. With interconnected data and datasets, AI can help reconstruct what actually happened, linking device telemetry, session data and transaction history into a single chain of events. The outcome is not a percentage or a colour code. It is a sequence of evidence that makes sense to an investigator and can be defended to an auditor, a regulator, or a customer. That is what gives explainable AI its practical value: it does not replace human judgment, it supports it with a complete set of facts.
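In its simplest form, that chain of evidence is a time-ordered merge of events from different sources into one readable timeline. The event fields, source names, and example values below are hypothetical, purely to illustrate the structure:

```python
from dataclasses import dataclass

@dataclass
class Event:
    timestamp: str   # ISO 8601 with UTC offset, so it sorts lexicographically
    source: str      # e.g. "device", "session", "transaction"
    detail: str

def build_evidence_chain(*event_streams):
    """Merge events from several sources into one time-ordered chain
    that an investigator can read end to end."""
    merged = sorted((e for stream in event_streams for e in stream),
                    key=lambda e: e.timestamp)
    return [f"{e.timestamp} [{e.source}] {e.detail}" for e in merged]

# Hypothetical events from three separate systems:
device = [Event("2026-04-13T09:01:00Z", "device",
                "new device fingerprint registered")]
session = [Event("2026-04-13T09:02:30Z", "session",
                 "remote-access tool detected")]
txn = [Event("2026-04-13T09:05:10Z", "transaction",
             "payment initiated to new payee")]

chain = build_evidence_chain(device, session, txn)
```

Each line of the resulting chain states what happened, when, and from which system, which is precisely the form of output that can be walked through with an auditor or regulator.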
How fraud operations will change
Fraud teams will not vanish, but their work will evolve. AI agents will take on the repetitive tasks: compiling evidence, surfacing linked cases, testing rules, and validating governance requirements. Analysts still make the call, but they no longer have to rebuild the picture from scratch each time. That frees capacity for the judgments that genuinely require human expertise while giving customers a consistent yet personalised experience.
Safe autonomy starts with trustworthy data
Autonomy is only safe when the evidence behind it is sound. When AI acts on causality rather than correlation, it becomes something teams can genuinely rely on. Generative AI can already help analysts summarise evidence and recommend actions, but that confidence only holds when the underlying data is verified and connected. The institutions that succeed will not be those that chase every new model. They will be the ones with systems capable of explaining their logic, not just keeping pace with attackers, but out-reasoning and therefore out-evolving them.