11/10/2025 | Press release | Distributed by Public on 11/10/2025 03:23
The opinions expressed here are those of the authors. They do not necessarily reflect the views or positions of UK Finance or its members.
When we imagine the future of fraud, it's easy to picture Hollywood-style chaos: deepfake heists, AI super-hackers, and trust in digital payments crumbling.
The truth is subtler and more dangerous. Fraud isn't being reinvented overnight; it's evolving steadily, with AI as its accelerant.
From craftsmanship to industrialisation
Fraud used to be artisanal. A convincing phishing email or malware campaign required weeks of scripting, testing, and coordination. AI has transformed that model. Generative systems can now produce realistic phishing templates, tailored malware, and even social engineering scripts in minutes.
The result is the industrialisation of existing crime. Large-scale, low-cost, highly convincing fraud is becoming easier to orchestrate, allowing threat actors to scale their operations like a legitimate business would.
AI is already driving a steep rise in fraud. According to UK Finance, "fraud losses in the UK reached £629 million in the first half of 2025, with over two million fraud cases reported, reflecting a 17 per cent increase." In particular, Ben Donaldson highlighted the growing threat posed by investment scams, which surged by 55 per cent to £97.7 million and now account for 38 per cent of authorised push payment (APP) fraud losses.
The deepfake distraction
Much of the public conversation about AI and fraud is dominated by deepfakes. They are certainly a risk in high-value, targeted attacks, but building and sustaining a mass deepfake pipeline is costly. For fraud at scale, AI-enhanced malware and automated vulnerability scanning are far more practical and lucrative.
This is the paradox: while policymakers and media focus on the visual shock factor of deepfakes, attackers are quietly deploying AI to scan banking apps for weaknesses, mimic user behaviour, and bypass authentication at scale.
Why AI alone won't save us
If AI is supercharging fraud, can we fight fire with fire? Not exactly.
Defensive AI has limits: training and updating models is costly, and over-automation can frustrate customers with false positives. The solution is a human-AI partnership: AI sifts signals and flags anomalies, while analysts provide context and judgment, cutting investigation times without the risks of full automation.
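That partnership can be made concrete as a triage split: the model acts automatically only at the extremes of its confidence, and routes everything ambiguous to an analyst queue instead of blocking it. The sketch below is illustrative only; the thresholds, fields, and scoring function are assumptions, not any vendor's actual system.

```python
# Hypothetical human-AI triage: auto-clear obvious low-risk activity,
# auto-block only very high-confidence fraud, and send the ambiguous
# middle band to human review (where context and judgment apply).
AUTO_CLEAR = 0.20   # below this score, approve without review
AUTO_BLOCK = 0.95   # above this score, block and alert

def triage(transactions, score_fn):
    """Route transactions into 'clear', 'review', or 'block' buckets."""
    routed = {"clear": [], "review": [], "block": []}
    for tx in transactions:
        score = score_fn(tx)
        if score < AUTO_CLEAR:
            routed["clear"].append(tx)
        elif score > AUTO_BLOCK:
            routed["block"].append(tx)
        else:
            # Ambiguous: an analyst investigates rather than the
            # system auto-blocking (and frustrating a real customer).
            routed["review"].append(tx)
    return routed

# Toy scoring function standing in for a trained model.
def toy_score(tx):
    return min(1.0, tx["amount"] / 10_000)

txs = [{"id": 1, "amount": 50},
       {"id": 2, "amount": 4_000},
       {"id": 3, "amount": 9_900}]
routed = triage(txs, toy_score)
```

Narrowing the automated bands widens the review queue, which is exactly the trade-off between false positives and analyst workload the text describes.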
Building resilient defences
The future of fraud defence lies not in a single technology but in integration. Financial institutions must break down the silos between fraud and cybersecurity, weaving device telemetry, behavioural biometrics, malware traces, and payment data into one coherent story.
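As a minimal sketch of what weaving those feeds together could look like, the snippet below fuses the four signal families named above into a single per-session risk score. The signal names, weights, and scores are illustrative assumptions, not Cleafy's actual FxDR implementation.

```python
# Illustrative signal fusion: each feed contributes a normalised (0..1)
# score, combined with assumed weights into one risk view per session.
WEIGHTS = {
    "device_telemetry": 0.25,        # e.g. emulator / rooted-device signs
    "behavioural_biometrics": 0.25,  # e.g. typing-cadence deviation
    "malware_traces": 0.30,          # e.g. overlay or RAT indicators
    "payment_anomaly": 0.20,         # e.g. unusual payee or amount
}

def fuse(signals):
    """Weighted combination of per-feed scores; missing feeds count as 0."""
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

session = {
    "device_telemetry": 0.1,
    "behavioural_biometrics": 0.8,
    "malware_traces": 0.9,
    "payment_anomaly": 0.4,
}
risk = fuse(session)  # one score across previously siloed feeds
```

The point of the sketch is the shape, not the arithmetic: a session that looks benign at the payment layer alone can still score high once malware and biometric signals from earlier in the attack are folded in.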
This interconnected defence mesh, what we call Fraud Extended Detection and Response (FxDR), provides visibility not just at the point of transaction but much earlier, across the attacker's entire infrastructure. It's the difference between blocking a fraudulent payment and anticipating the fraud before it begins.
The AI imperative
AI is making attacks cheaper, faster, and easier to scale. But it's also reshaping defence, allowing banks to move from reactive detection to predictive visibility. The winners of the next five years will be those using AI wisely to amplify human expertise, integrate fragmented signals, and stay one step ahead of adversaries who are already thinking like industrialists.
Rather than a revolution, fraud in the AI era can be seen as an arms race of efficiency, where visibility and adaptability will be worth more than any breakthrough.
Evolution, not revolution
Looking five years ahead, fraud won't suddenly transform. It will evolve, shaped by regulation and technology. Both create friction for legitimate organisations and opportunities for attackers.
Take SuperCard X, uncovered by Cleafy's Threat Intelligence Team: Chinese-speaking groups revived carding with a modern twist, combining multi-stage social engineering with malware that uses NFC relay capabilities to execute remote contactless transactions, bypassing traditional security.
The takeaway: fraudsters' goals stay the same; their methods constantly adapt to the changing regulatory and technological landscape.