New EY research reveals cybersecurity fears are on the rise among US workers

Survey uncovers a growing generational preparedness gap as Gen Z and Millennial employees continue to fall short in adopting safe cyberpractices.

Concern about escalating cybersecurity threats in the workplace is growing among US employees, with 53% worried their organisation will be the target of a cyberattack and a third (34%) worried that their own actions may be what leaves their organisation vulnerable, according to new data from Ernst & Young.

Notably, fear of exposing their organisation to a cyberattack is particularly high among younger generations, with Gen Z and Millennial employees less likely than their older colleagues to feel equipped to identify and respond to cyberthreats. The 2024 Human Risk in Cybersecurity Survey, a study of 1,000 employed Americans across the public and private sectors, follows the initial 2022 analysis by EY US and explores the current state of cybersecurity and how it has changed over time, revealing key insights for business leaders on cybersecurity awareness and practices. This year, EY US expanded the study to examine how employees perceive the role of AI in escalating threats: 85% of workers believe AI has made cybersecurity attacks more sophisticated, 78% are concerned about the use of AI in cyberattacks and 39% are not confident that they know how to use AI responsibly.

“With new threats emerging on a near-constant basis fuelled by geopolitical tensions, shifting regulations and the rapid integration of new technologies, including AI, the risk landscape has become even more complicated,” said Jim Guinn, II, EY Americas Cybersecurity Leader. “Want to secure your organisation today and in the future? Put humans at the centre of your cyberstrategy and enlist your people as protectors on the frontlines, arming them with knowledge, training and a dose of healthy scepticism about all digital interactions.”

Closing the Gen Z cybersecurity preparedness gap

Similar to the 2022 findings, the latest EY US cybersecurity study highlights a persistent gap in preparedness across generations, with younger workers still less likely than their older colleagues to exercise safe cybersecurity practices.

In fact, Gen Z is losing confidence in their ability to recognise phishing attempts – one of the most common and successful tactics of social engineering attacks – and is most likely to admit to opening a suspicious link. And now, with the power of AI-generated phishing emails, spotting malicious links and content is getting even harder.

Although they are a digital-first generation, only 31% of Gen Z feel very confident identifying phishing attempts, marking an alarming nine percentage point drop from 40% in 2022, and 72% said they have opened an unfamiliar link that seemed suspicious at work, far higher than Millennials (51%), Gen X (36%) and Baby Boomers (26%).

Nearly two in three Gen Z and Millennial workers are particularly fearful of repercussions surrounding cybersecurity, with 64% of Gen Z and 58% of Millennials fearing they would lose their job if they ever left their organisation vulnerable to an attack. Younger generations are also more likely to say they do not fully understand their organisation’s process for reporting suspected cyberattacks, even when such a process is in place (39% of Gen Z and 29% of Millennials vs. 19% of Gen X and 15% of Baby Boomers).

However, it’s not all doom and gloom. Despite concerns about their ability to prevent an attack, EY research indicates that Gen Z workers increasingly consider themselves knowledgeable about cybersecurity (86% vs. 75% in 2022). This points to an opportunity to turn that knowledge into confidence by investing in upskilling and training that caters to their unique experience as true digital natives.

Cultivating a culture of cyber confidence

The rapidly evolving nature of AI has made it essential for organisations to adapt training protocols regularly and remain committed to providing frequent, up-to-date training that addresses the latest AI-driven threats and cybercrime trends. A vast majority of employees (91%) say organisations should regularly update their training to keep pace with AI, especially as AI’s role in cyberthreats evolves, but only 62% say their employer has made educating employees about responsible AI usage a priority.

“Cybersecurity training and attention from leaders across the C-suite contributes to the development of a strong security posture within an organisation,” said Dan Mellen, EY Americas Consulting Cybersecurity Chief Technology Officer. “When security practices are ingrained in the company culture, employees are more likely to prioritise security in their day-to-day activities and proactively report potential security incidents.”

The EY Cybersecurity team advises C-suite and senior business leaders to incorporate the following leading practices in their cyber agenda to cultivate a strong and confident security culture within their organisation:

  • Build robust training exercises that are reinforced year-round. EY US research finds that employees who are ‘rusty’ on cybersecurity training are the most fearful of using technology at work. Conversely, 94% of employees who received training within the past year say cybersecurity is a priority for them.
  • Drive employee engagement with gamification. Leaderboards and multiplayer features in gamified training programmes encourage healthy competition among employees, driving them to perform better. Gamification is particularly effective in campaigns against social engineering when it addresses the natural human curiosity that often leaves employees vulnerable.
  • Partner, don’t police. Organisations that test their employees to see whether they handle cybersecurity threats appropriately can inadvertently turn cybertraining into a ‘gotcha’ moment. Position cybersecurity protocols as a partnership with employees rather than policing them, and embrace a ‘see something, say something’ policy instead. Make the process for reporting potential attacks and vulnerabilities simple enough that workers across all generations can seamlessly integrate it into their day-to-day routines.
  • Incorporate hands-on AI training protocols. Hands-on training for the use of AI in the workplace gives employees exposure to its fundamental capabilities and risks. Firsthand experience with new technologies like Generative AI unlocks a new level of understanding and drives defensive thinking.
  • Lead by example with responsible AI. Thirty-nine percent of employees are not confident that they know how to use AI responsibly, according to EY US research. As stewards of their organisation, C-suite and senior leaders must embrace transparency surrounding how AI is developed and deployed enterprise-wide and demonstrate responsible AI practices themselves to mitigate risks.
