Industry experts tell us how businesses can leverage AI and Machine Learning to detect and prevent fraudulent activities more effectively.
With financial crime becoming increasingly sophisticated, businesses are under growing pressure to stay one step ahead of fraudsters. Traditional fraud detection methods often struggle to keep pace with the speed and scale of modern threats, especially as attacks become more automated and targeted.
Enter AI and Machine Learning – technologies that are transforming how organisations detect and respond to fraud. By analysing vast datasets in real time, spotting subtle patterns and anomalies, and continuously learning from new threats, AI and ML are enabling faster, more accurate fraud prevention strategies. But how can businesses make the most of these tools, and what challenges must they overcome to implement them effectively?
In this feature, industry experts share their insights on the potential of AI and ML to revolutionise fraud detection and explain how organisations can harness these innovations to protect their operations, reputations and customers in an increasingly complex digital environment.
Mike Britton, CIO, Abnormal Security

Modern attackers are increasingly weaponising AI to launch more sophisticated social engineering attacks at scale, and they appear to be thriving, with 98% of security leaders reporting AI-driven attacks on their organisations.
With tools like Generative AI, even inexperienced and petty cybercriminals can now create highly targeted and convincing phishing and business email compromise (BEC) campaigns, eliminating the typos and grammatical errors that often help end-users identify traditional attacks.
As AI becomes an increasingly valuable asset within an attacker’s arsenal, it must also be used defensively by organisations in order to keep pace. There are a couple of key areas where AI could be used to support the security team in the fight against malicious AI.
The first is using AI to improve threat detection. For instance, behavioural AI can learn typical user behaviours across email and collaboration apps – like login and device usage patterns, how users typically write their messages, or who they ordinarily interact with.

With a baseline of known behaviours, AI models can then flag deviations that signal a potential attack. This helps overcome the limitations of many traditional security solutions that rely on detecting known indicators of compromise – indicators many attackers have learned to leave out altogether by relying on social engineering instead.
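To make that concrete, a behavioural baseline can start as simply as recording each user’s usual login hours and devices, then scoring how far a new login deviates from them. The Python sketch below is purely illustrative – the class, features and threshold are assumptions for the example, not a description of any vendor’s models:

```python
# Hypothetical sketch of behavioural baselining: learn each user's typical
# login hours and devices, then flag logins that deviate from that baseline.
# All names and thresholds here are illustrative assumptions.
from collections import defaultdict
from statistics import mean, stdev

class LoginBaseline:
    def __init__(self):
        self.hours = defaultdict(list)    # user -> observed login hours
        self.devices = defaultdict(set)   # user -> devices seen before

    def observe(self, user, hour, device):
        """Record a known-good login to build the baseline."""
        self.hours[user].append(hour)
        self.devices[user].add(device)

    def is_anomalous(self, user, hour, device, z_threshold=2.5):
        """Flag a login from an unseen device or at an unusual hour."""
        if device not in self.devices[user]:
            return True
        hours = self.hours[user]
        if len(hours) < 5:                # too little history to judge
            return False
        mu, sigma = mean(hours), stdev(hours)
        return sigma > 0 and abs(hour - mu) / sigma > z_threshold

baseline = LoginBaseline()
for h in (9, 9, 10, 8, 9, 10):
    baseline.observe("alice", h, "laptop-01")

print(baseline.is_anomalous("alice", 9, "laptop-01"))   # False: routine login
print(baseline.is_anomalous("alice", 3, "unknown-pc"))  # True: new device, odd hour
```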
There is also an opportunity to use AI to help automate repetitive workflows, enabling security teams to focus on more impactful tasks like investigating high-fidelity alerts or threat hunting. For example, manually triaging user-reported phishing emails can consume hours of skilled analyst time, even though the majority of those reports are ultimately deemed safe.
Using automation to inspect and evaluate user-reported emails (and to automatically remove emails deemed a legitimate threat) can accelerate this workflow and free up SOC analyst time for more strategic tasks.
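As a rough illustration of that workflow, the sketch below scores user-reported emails with simple heuristics and escalates only the risky remainder to analysts. The scoring rules, helper names and threshold are assumptions made for the example – a real pipeline would rely on trained models rather than keyword patterns:

```python
# Illustrative triage sketch: score user-reported emails and auto-resolve
# the clearly benign ones so analysts only see the risky remainder.
import re

SUSPICIOUS_PATTERNS = [
    r"verify your account", r"urgent.*payment", r"password.*expired",
]

def score_report(sender_domain: str, body: str, known_domains: set) -> float:
    """Crude risk score in [0, 1]; a real system would use trained models."""
    score = 0.0
    if sender_domain not in known_domains:
        score += 0.4
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, body, re.IGNORECASE):
            score += 0.3
    return min(score, 1.0)

def triage(reports, known_domains, escalate_threshold=0.5):
    """Split reports into auto-resolved (benign) and escalated (for the SOC)."""
    benign, escalated = [], []
    for report in reports:
        risk = score_report(report["sender_domain"], report["body"], known_domains)
        (escalated if risk >= escalate_threshold else benign).append(report)
    return benign, escalated

reports = [
    {"sender_domain": "partner.example.com", "body": "Q3 invoice attached"},
    {"sender_domain": "examp1e-support.com",
     "body": "Urgent: payment needed, verify your account"},
]
benign, escalated = triage(reports, known_domains={"partner.example.com"})
print(len(benign), "auto-resolved;", len(escalated), "escalated")
```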
By leveraging AI, organisations can detect threats faster, automate responses and reduce operational strain on security teams. Security professionals can shift from a reactive to a proactive stance, stopping cyberthreats before they cause damage.
Paul Drake, Regional Vice President Sales, UK&I at Barracuda Networks

Businesses face an ongoing battle against threat actors intent on fraud – from the scammers trying to get hold of money or data they’re not entitled to, to malicious intruders attempting to gain access to places they’re not supposed to be.
For many of these cybercriminals, the easiest and most accessible route to a victim is via email. Email-based attacks like phishing can be highly effective, and the tools and techniques the attackers use are increasingly advanced thanks to well-resourced Phishing-as-a-Service (PhaaS) platforms. The use of Generative AI allows attackers to craft highly personalised and contextually relevant messages, increasing their chances of success.
Luckily, AI is not just being used by the criminals. Most security vendors are actively implementing AI technologies into their products – many have been doing so for years – to enhance the detection and mitigation of suspicious and malicious activity.
When it comes to email, AI-powered protection continuously analyses patterns in behaviour and in message content, metadata and historical interactions. It learns what ‘normal’ behaviour looks like within an organisation and sets a behavioural baseline that allows it to immediately flag deviations.
AI tools can detect the slightest hint of fraudulent activity, such as email spoofing, domain impersonation and manipulated content – from a minutely altered sender address to an unusual tone in an executive’s email, or an urgent request designed to bypass standard security checks.
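As one concrete example of how such a check might work, comparing a sender’s domain against a list of trusted domains using edit distance will catch lookalikes such as ‘examp1e.com’. The sketch below is minimal and its distance threshold illustrative, not a production rule:

```python
# Minimal sketch of lookalike-domain detection: flag sender domains that
# are close to, but not exactly, a trusted domain.
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def looks_like_impersonation(sender_domain, trusted_domains, max_distance=2):
    """True if the domain nearly matches a trusted one without being it."""
    return any(
        0 < edit_distance(sender_domain, trusted) <= max_distance
        for trusted in trusted_domains
    )

print(looks_like_impersonation("examp1e.com", {"example.com"}))  # True
print(looks_like_impersonation("example.com", {"example.com"}))  # False
```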
The power of AI really comes into its own in the face of unknown threats, such as new malicious URLs. It enables security tools to scan, analyse and neutralise threats before the recipient is even aware of being targeted. By the time they know, they’re already safe.
Sara Hoteit, Regional Sales Lead, Backbase Middle East

In today’s hyper-connected world, fraudsters are evolving just as fast as – if not faster than – the tools meant to stop them. Armed with AI, attackers are launching more complex schemes, scaling their operations and exploiting vulnerabilities in record time. That’s why businesses, especially in financial services, must embrace AI and ML not just as a reaction to fraud, but as a proactive strategy to get ahead of it.
AI isn’t just about smarter security – it’s about smarter decisions and better customer experiences. ML models can process enormous volumes of transactional data in real time, surfacing anomalies that would slip past traditional detection methods. From unusual login behaviour to location mismatches or unexpected transaction flows, AI-powered behavioural biometrics can analyse typing and scrolling patterns and spot red flags quickly, blocking suspicious activity effectively without disrupting the user experience.
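As a simplified picture of this kind of anomaly detection, the sketch below trains an unsupervised model on ‘normal’ transaction features and scores incoming transactions. The features, synthetic data and contamination rate are assumptions for the example – production systems draw on far richer signals:

```python
# Hedged sketch: an unsupervised anomaly detector over simple transaction
# features (amount, hour, distance from home). Feature choices and the
# contamination rate are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated "normal" history: modest amounts, daytime hours, near home.
normal = np.column_stack([
    rng.normal(60, 20, 1000),    # amount (currency units)
    rng.normal(14, 3, 1000),     # hour of day
    rng.normal(5, 2, 1000),      # km from home location
])

model = IsolationForest(contamination=0.01, random_state=42).fit(normal)

# Score incoming transactions: predict() returns -1 for anomalous points.
incoming = np.array([
    [55, 13, 4],        # routine purchase
    [4900, 3, 3200],    # large amount, 3am, far from home
])
print(model.predict(incoming))  # e.g. [ 1 -1]
```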
Scams are also evolving. Investment scams, romance scams and impersonation attacks have become commonplace. These scams are emotionally manipulative, highly personalised and increasingly effective – and hard to prove, since the account owner approves the transfer themselves. Businesses need intelligent, adaptive solutions that analyse behaviour, context and risk holistically to verify intent – not just identity.
At the same time, AI is revolutionising Model Risk Management (MRM). Picture traditional MRM as a finely tuned machine – structured, methodical and essential for evaluating the risks behind financial models. Now, imagine AI stepping into this framework. It adds a powerful new layer, supercharging data analysis and uncovering patterns far beyond the reach of manual methods.
With AI, banks gain deeper insights into the variables that drive model outcomes, enabling more precise risk identification and fraud prevention. It also powers hyper-personalised customer experiences, smarter segmentation and more effective marketing – boosting ROI across the board.
AI’s pattern recognition capabilities are especially impactful in addressing biases and ensuring fair lending practices. It can highlight correlations in data – such as those between demographics and loan approvals – helping institutions identify and correct issues early. It also supports macro-level risk forecasting by analysing historical and real-time economic signals, allowing banks to fine-tune strategies and protect customer interests ahead of downturns.
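A minimal sketch of the kind of correlation check involved is shown below: it compares approval rates across groups on synthetic data and applies the common ‘four-fifths’ heuristic. The column names, data and threshold are illustrative assumptions, not a compliance test:

```python
# Illustrative fairness check: compare loan approval rates across groups
# on synthetic data. Data and threshold are assumptions for the example.
import pandas as pd

loans = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

rates = loans.groupby("group")["approved"].mean()
print(rates)

# Disparate-impact ratio: flag if any group's approval rate falls below
# 80% of the highest group's rate (the common "four-fifths" heuristic).
ratio = rates.min() / rates.max()
if ratio < 0.8:
    print(f"Potential disparity: ratio {ratio:.2f} below 0.8 threshold")
```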
One of AI’s key strengths lies in its ability to monitor model performance continuously. In areas like investment strategy or credit risk, it can flag subtle shifts and anomalies that may indicate emerging threats – or new opportunities. Over time, AI-driven models refine themselves, staying aligned with evolving market dynamics.
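One widely used way to monitor for such shifts is the Population Stability Index (PSI), which measures how far live model scores have drifted from the distribution seen at validation. The sketch below is illustrative – the data is synthetic and the thresholds quoted are rules of thumb, not guarantees:

```python
# Minimal drift-monitoring sketch using the Population Stability Index (PSI)
# to compare live model scores against a validation-time baseline.
import numpy as np

def psi(expected, actual, bins=10):
    """PSI between a baseline score sample and a recent one."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)   # avoid log(0) and division by zero
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
baseline_scores = rng.beta(2, 5, 10_000)   # scores at model validation
live_scores = rng.beta(3, 4, 10_000)       # recent production scores

value = psi(baseline_scores, live_scores)
# Common rule of thumb: < 0.1 stable, 0.1-0.25 monitor, > 0.25 investigate.
print(f"PSI = {value:.3f}")
```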
But it’s important to be realistic: AI isn’t a silver bullet. Its real power comes when it’s embedded into existing processes and paired with human expertise. It enhances decision-making, strengthens detection, and makes risk frameworks more dynamic and resilient.
At its best, AI doesn’t just stop fraud – it transforms how financial institutions approach risk, trust and customer protection. It’s not just an upgrade. It’s a strategic advantage in a digital-first world.