How can prominent individuals protect themselves from threats such as catphishing?

Recently, Mimecast called on the UK Parliament to take a proactive approach to cyberhygiene. We hear from experts at SailPoint, EY and Beachhead Solutions about safeguarding techniques for individuals.

Mimecast has urged the UK Parliament to adopt a proactive stance on cyberhygiene in response to a warning from the politics news website Politico that politicians are currently being targeted by catphishers aiming to compromise their reputations.

Carl Wearn, Head of Threat Intelligence Analysis and Future Ops at Mimecast, said: “In today’s digital age, any form of online or text-based interaction comes with risks. This applies to romantic connections as well, even when they’re unsolicited.

“Today, Politico brought to light a concerning issue: politicians, officials and journalists working in the UK Parliament are being targeted with explicit messages in a clear attempt to compromise them.

“Catphishing isn’t anything new. It’s an attempt by scammers to use emotional manipulation and flattery to establish a connection, often exchanging fake personal details such as photos and stories that give the victim the sense they are speaking to a real person, which is exactly what has happened here.

“These sinister tactics are designed by bad actors to prey on trust and exploit human vulnerabilities, potentially leading to devastating consequences such as data breaches and compromised political influence, not to mention reputational and emotional damage to the victim.

“While it’s encouraging that UK parliamentary authorities offer a ‘cyberadvisory service’, it’s evident that a substantial cybersecurity training effort is essential to harden sensitive and vulnerable individuals against the ever-present danger of cyberattacks. Upholding public trust demands a proactive approach to cyberhygiene.

“Government officials must prioritise good cyberhygiene: securing social media accounts with two-factor authentication, avoiding over-sharing, undertaking regular security training and maintaining heightened vigilance against social engineering tactics. Any unsolicited communication should be treated with scepticism and heightened suspicion until the sender is verified.”

Beatriz Sanz Sáiz, Global Consulting Data and AI Leader, EY

Generative AI is a disruptive technology. It’s innovative and helpful, but also dangerous if it ends up in the wrong hands. CIOs are deploying AI at scale to find new solutions that help their organisation; on the other hand, threat actors themselves are using it to evade detection and commit cybercrime.

The open-source nature of AI has levelled the playing field and is fuelling a wave of cybercrime – which EY has forecast will cost US$10.5 trillion by 2025. So, how should CIOs react now that the bad guys also have such a powerful tool? And what strategies can be deployed to get ahead of the criminals?

Synthetic threats are very real

Deepfakes are a standout example of how GenAI is being used maliciously to devastating effect. The rapid improvement of AI-generated audio and video makes it possible to create and manipulate various media formats with minimal editing skill. Much of this technology is open-source, meaning it’s evolving at a rate that is near-impossible for information and security officers to keep up with. In fact, Europol has estimated that 90% of online content may be synthetically generated by 2026.

Criminals are using AI deepfakes at scale to steal from organisations and their employees through spear-phishing, vishing and social engineering campaigns. Initiatives to both impersonate and target prominent individuals are becoming much more successful with this technology, resulting in a trend towards more financial scams and commercial fraud. Alongside this, sophisticated deepfake recordings are capable of tricking verification tools, such as Two-Factor Authentication or voice recognition, enabling threat actors to bypass the basic security controls most businesses have in place.

Improving infrastructure, educating people

Almost 80% of companies report that voice and video deepfakes now represent a significant threat, especially through the impersonation of high-level executives. The onus is on CIOs and security teams to adapt and protect their business, but also their people, from hackers and fraudsters.

Improving the current technology offering of existing security systems certainly helps. Having stronger computing power and more sophisticated security infrastructure will greatly improve the detection and response mechanisms needed to spot deepfake-enhanced phishing scams. AI-powered tools have become essential in both automating these complex processes and generating a positive feedback mechanism, improving these tools over time. This enables security teams to predict threats in advance, and dynamically spot vulnerabilities before an exploit can take place.
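At its core, the detection-and-response loop described above starts from a simple idea: establish a baseline of normal activity and flag behaviour that deviates sharply from it. A minimal, illustrative sketch in Python (the login counts and threshold are invented for the example; real tooling learns far richer baselines):

```python
import statistics

def flag_anomalies(daily_logins: list[int], threshold: float = 2.0) -> list[int]:
    """Flag the indices of days whose login volume sits more than
    `threshold` population standard deviations from the mean -- the
    simplest shape of the anomaly detection that AI-driven security
    tooling automates and refines at scale."""
    mean = statistics.mean(daily_logins)
    stdev = statistics.pstdev(daily_logins) or 1.0  # avoid divide-by-zero
    return [i for i, n in enumerate(daily_logins)
            if abs(n - mean) / stdev > threshold]

# Six quiet days, then a spike that might indicate credential abuse:
print(flag_anomalies([52, 48, 50, 51, 49, 50, 400]))  # → [6]
```

A feedback loop of the kind the paragraph describes would then feed each confirmed incident back into the baseline, so the detector improves over time.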

Upskilling employees to have a basic understanding of AI – and how to spot it – is essential in raising awareness of AI fraud. Meanwhile, it’s important to get ahead of cybercrime by hiring people with AI and cybersecurity expertise to lead proof-of-concept projects and to deploy and train Deep Learning models that detect deepfake campaigns. Additionally, some responsibility falls on the shoulders of developers who build deepfake tools without considering how they’re applied. Policymakers must work closely with AI experts to ensure these tools are properly controlled and regulated, and greater top-down governance is key to cultivating a healthy security environment.

Mike Kiser, Director of Strategy and Standards, SailPoint

In today’s digital age, fraudsters have never been so ruthless with their tactics, and they’re increasingly using ones that are far more personal and harder to spot. For prominent individuals who are high-value targets for lucrative information – whether part of the C-Suite or in the public eye – the stakes have never been higher. Catphishing has typically been associated with online dating profiles, but we are increasingly seeing fake profiles on LinkedIn used to trick business professionals into handing over sensitive data, as well as the rise of convincing scams built on AI-generated deepfakes, which can create synthetic humans wholesale.

No one is above common sense. Everyone must exercise vigilance to avoid falling victim to catphishing. The best course of action is to think before you click. People must learn to scrutinise every email they receive, even if they think it’s from a trusted individual. This means hovering over links before clicking and refraining from inputting information into forms unless you’re totally sure you aren’t handing over the keys to your digital identity in the process. Always err on the side of caution – if something seems suspicious, it probably is.

However, individuals shouldn’t shoulder this burden by themselves – they need the support and protection of their organisation. This means being educated and supported to recognise subtle signs of an attack and to spot suspicious and ‘out of the ordinary’ requests, whether over email, by phone or via social media platforms. This applies to employees at all levels, from a new intern right up to the CEO. It also means employing appropriate safeguards, such as offerings that check the validity (or known maliciousness) of embedded links. After all, a company’s security posture is only ever as strong as its weakest link. Hackers often look to gain entry by targeting different user access points, so the protection of every employee – every potential conduit – is essential.

Moving forward, businesses should also consider using stronger forms of digital identity security to keep threats at bay. For instance, verifiable credentials, a form of identity backed by a cryptographically signed proof that someone is who they say they are, could be used to ‘prove’ identity rather than relying on sight and sound. If a deepfake scam or phishing attack is suspected, proof could then be demanded to ensure that the person in question is actually who they claim to be.
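The core of the idea is that a signed claim can be verified mechanically, where sight and sound cannot. The Python sketch below shows only that verify flow: real verifiable credentials use asymmetric signatures (e.g. Ed25519) and standard formats such as the W3C Verifiable Credentials model, whereas this toy uses an HMAC with a shared secret purely as a stand-in, and the issuer key and claims are invented for illustration:

```python
import hashlib
import hmac
import json

# Hypothetical issuer secret, for illustration only. A real issuer
# would sign with a private key and publish the public key.
ISSUER_KEY = b"demo-issuer-secret"

def issue_credential(claims: dict) -> dict:
    """Bind a set of claims to a signature over their canonical form."""
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": sig}

def verify_credential(cred: dict) -> bool:
    """Recompute the signature and compare in constant time."""
    payload = json.dumps(cred["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["signature"])

cred = issue_credential({"subject": "alice@example.com", "role": "CFO"})
print(verify_credential(cred))          # genuine credential → True
cred["claims"]["role"] = "CEO"          # a caller's forged claim
print(verify_credential(cred))          # tampered credential → False
```

A deepfaked voice or face cannot produce that signature, which is exactly why this kind of proof is stronger than relying on what a caller looks or sounds like.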

Cam Roberson, VP of Channel, Beachhead Solutions

Increasingly sophisticated, deceptive and personal cyberthreats like catphishing and spearphishing require a layered security strategy that includes technology, education and policy. On the technology side, you need advanced email security that can be your first line of defence for detecting and blocking these phishing attempts before they start. You also need – and it should be a given at this point – Multi-Factor Authentication across all critical accounts and systems (whether personal or business). Getting a step more technical, regular vulnerability assessments (and patching schedules) can be established to mitigate attackers’ potential entry points.
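To make the Multi-Factor Authentication point concrete: the codes most authenticator apps generate come from the TOTP algorithm of RFC 6238, which derives a short one-time code from a shared secret and the current time. A self-contained Python sketch using only the standard library (the secret below is the RFC's published test key, not a real credential):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, for_time=None, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    t = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(t // step))        # 30-second window
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: key "12345678901234567890" at time 59 → "94287082"
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, for_time=59, digits=8))  # → 94287082
```

Because the code changes every 30 seconds and derives from a secret the attacker never sees, a phished password alone is not enough – though, as noted elsewhere in this article, determined attackers now also try to socially engineer the codes themselves, so MFA is a layer, not a cure-all.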

Controlling who has access to what data is also critical to preventing these attacks. I call it ‘reducing threat surfaces’. If someone unwittingly becomes a phishing target, ensuring they can only access necessary data limits the threat’s scope. It’s critical to restrict access to just the data that’s required, which can be done using various authentication methods, layered encryption and, when necessary, remotely wiping data.
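The ‘reducing threat surfaces’ principle above is least-privilege access control: each role gets exactly the data it needs and nothing more, so a single phished account exposes only a narrow slice. A minimal, default-deny sketch in Python (the role and dataset names are illustrative, not from any real system):

```python
# Map each role to the only datasets it legitimately needs.
ROLE_ACCESS = {
    "finance": {"invoices", "payroll"},
    "support": {"tickets"},
    "admin":   {"invoices", "payroll", "tickets", "audit_logs"},
}

def can_access(role: str, dataset: str) -> bool:
    """Default-deny check: unknown roles or datasets get no access."""
    return dataset in ROLE_ACCESS.get(role, set())

# A compromised support login cannot reach payroll data:
print(can_access("support", "payroll"))   # → False
print(can_access("finance", "payroll"))   # → True
```

In practice this sits behind identity and access management tooling rather than a dictionary, but the contract is the same: the blast radius of a catphished account is bounded by what that account was ever allowed to touch.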


While the right technology is mandatory, the human element can’t be overlooked – especially with an emotion-tugging attack vector like catphishing. Comprehensive security awareness training is often a critical component in avoiding trouble later on. Businesses should make such programmes mandatory and be sure that their organisations’ most prominent individuals (who are perhaps most likely to be approached) don’t ignore the responsibility. This training should educate people on recognising and responding appropriately to the latest social engineering tactics, which are always evolving – catphishing included. Often, simulated phishing exercises can reinforce this training and better enable individuals to identify and report anything suspicious before it jeopardises themselves and their company.

Policies and procedures are the third leg of preventing catphishing and similar threats. There needs to be a clear plan for how to handle sensitive information and communication channels. Policies should outline strict guidelines for verifying the authenticity of requests for data, financial transactions and anything else that absolutely cannot go to someone who shouldn’t have access. Everyone, but perhaps prominent individuals in particular, should also be especially cautious when sharing personal information online and really ought to limit their digital footprint where possible, to minimise potential attack vectors.

By implementing a layered security strategy that combines technical controls, employee education and thorough policies, individuals can significantly reduce their exposure (and their company’s exposure) to catphishing and other social engineering threats, safeguarding sensitive information and assets.

Intelligent CISO