Email is essential for business, but it’s also a prime target for cybercriminals, particularly as they increasingly leverage AI to craft convincing phishing attempts. Matt Cooke, Director of Cybersecurity Strategy at Proofpoint, tells Intelligent CISO how organisations can boost their defences, build AI risk awareness and foster a strong security culture that is tailored to specific user needs.
There is no doubt that email is a business-critical tool. It’s what helps make first impressions, and long-lasting ones too. But it’s also a primary target for cybercriminals.
In fact, email remains the number one threat vector, and it is becoming increasingly difficult for people to differentiate a genuine email from a malicious one.
The role of AI in phishing attacks
One of the drivers behind this is the addition of AI tools to cybercriminals' arsenals. AI gives attackers the means to craft more believable emails designed to trick users. It also gives them the opportunity to scale their attacks and localise them in different languages.
AI is helping threat actors to craft compelling emails that recipients are more likely to believe are legitimate, written in a style that does not suggest foul play. The more believable this content is, the more likely a user is to engage, interact and click through to malicious links.
One example of a particularly lucrative email attack for cybercriminals is Business Email Compromise (BEC) which, according to Proofpoint’s State of Phish research, is benefiting from AI. The research highlights that attack volume grew in countries such as Japan (35% year-over-year increase), South Korea (+31%) and the UAE (+29%).
These countries may have previously seen fewer BEC attacks due to cultural or language barriers, but Generative AI allows attackers to create more convincing and personalised emails in multiple languages. Proofpoint detects an average of 66 million targeted BEC attacks every month.
AI also poses challenges around the loss of sensitive data. If, for example, an individual were to paste sales figures or personal information into a public AI platform, there is a possibility that this information could later resurface in responses served to other users.
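The risk described above is one reason many organisations screen text before it reaches a public AI platform. As a minimal sketch of that idea, the snippet below flags common sensitive-data patterns in outbound text. The pattern set and function name are illustrative assumptions, not any vendor's implementation; real data-loss-prevention tools use far richer detection than simple regular expressions.

```python
import re

# Hypothetical patterns for a toy pre-submission check. A production DLP
# tool would combine classifiers, dictionaries and context, not just regex.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "uk phone number": re.compile(r"\b(?:\+44|0)\d{9,10}\b"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

# A prompt containing personal data would be flagged before submission:
flag_sensitive("Q3 figures attached; contact jane.doe@example.com")
```

A check like this could sit in a browser extension or proxy, warning the user before the prompt leaves the organisation.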
In 2024, 44% of UK CISOs surveyed by Proofpoint in our Voice of the CISO research believed that Generative AI poses a security risk to their organisation. The top three systems CISOs view as introducing risk to their organisations are: ChatGPT and other GenAI tools (40%), perimeter network devices (33%) and Slack, Teams, Zoom and other collaboration tools (31%).
However, only 26% of UK organisations educate their users on Generative AI safety.
Leveraging AI for enhanced protection
As cybercriminals pivot to increasingly use AI for their attacks, cyberdefenders must do the same.
One way to begin is for CISOs to build their own AI programmes to ensure staff understand the risks associated with it. The more awareness an individual has regarding these risks, the more empowered they can be to make good decisions.
In addition, collaboration between security teams and the broader technology teams is important to understand and reinforce this messaging.
Human error continues to top the list of cyber vulnerabilities, but CISOs are turning to AI solutions to help.
This year, 65% of UK CISOs view human error as their organisation's biggest cyber vulnerability, according to Proofpoint's 2024 Voice of the CISO survey.
This concern may explain why 87% of UK CISOs surveyed are looking to deploy AI-powered capabilities to help protect against human error and advanced human-centred cyberthreats.
Building a strong security culture
To address human risk, organisations must build a strong security culture with better communication and engagement – addressing the security implications of AI. A strong security culture will positively influence how users approach and handle security issues and foster a sense of responsibility.
Organisations can implement a behaviour change programme tailored to user and business needs. A behaviour change programme is a systematic and structured approach to changing user behaviour and habits.
Organisations should find ways to positively reward users who avoid risky actions and proactively help keep the organisation safe, such as by reporting suspicious emails or activity.
There is no one-size-fits-all approach to security awareness – organisations need to be creative and tailor messaging to unique personas.
CISOs should also start by understanding the baseline of users’ knowledge when it comes to cybersecurity. Following this initial assessment, areas that require specific attention can be identified and further training initiated.
Through personalisation and targeting, training can be tailored for each individual to ensure maximum impact and effectiveness.
Threat intelligence can also help identify individuals being targeted in specific ways – for example, with invoicing fraud scams – and technology can be used to block these at the email gateway. Through additional training on these specific threats, organisations can layer their defences through people and processes to protect their employees.
Advocates or champions can help reduce the number of users who don’t know if security is their responsibility. By promoting best practices and providing peer support and guidance, advocates or champions can foster trust, increase engagement, and help create a positive and collaborative security culture.
Through such schemes, along with personalised messaging that resonates with individuals, organisations can create a culture of cybersecurity awareness that empowers users to be the ultimate force of defence in the era of AI.