Hardly a day goes by without media, and sometimes government, claims that Russia has been utilising social media tools to spread fake news and misinformation to influence everything from elections and mining approvals to Brexit.
But Russia is just the tip of the iceberg: regular misinformation campaigns and attacks are observed worldwide, mounted by activists, political and business rivals, and even kids looking for kicks.
But how hard is it to spread this sort of misinformation?
Very easy, it turns out: readily available tools and services can be bought or rented as needed, letting actors get up and running quickly. The barrier to entry is very low indeed.
This means that, for example, Twitter accounts can appear out of nowhere and attract tens of thousands of followers and retweets in a matter of hours, all tied to a particular misinformation campaign.
For example, in October 2016 an ideologically motivated hacktivist group called Anonymous Poland published documents it claimed to have stolen in a breach of the Bradley Foundation, a U.S. charity. Over the ensuing week, almost 15,000 nearly identical tweets linking to posts about the Anonymous Poland breach were identified, published by approximately 12,000 Twitter accounts.
Disinformation campaigns can take many forms; however, they generally follow three distinct stages: 1) Creation, 2) Publication and 3) Circulation. For each stage, there are countless online tools, software and platforms that allow attackers to mount credible and effective disinformation campaigns.
In recent years, there has been a growth in toolkits and services designed to spread misinformation – available for as little as $7 – that are aimed specifically at causing financial and reputational damage to companies and governments.
There are myriad drivers that will shape how disinformation campaigns evolve in the coming years. It is almost certain that disinformation will continue; the geopolitical situation shows no signs of easing, and there is plenty of sociocultural unease to exploit. While there will be continued efforts to remove suspicious content from social media sites, the low barriers to entry and the innovation of threat actors will lead to an increase in disinformation. Moreover, this is not just a risk for political parties in 2018; disinformation affects businesses and individuals too.
So how do we combat this sort of threat?
There are some steps businesses can take to lessen the risk of disinformation affecting them. These include:
- Combat domain spoofing – organisations should proactively monitor for the registration of malicious domains and have a defined process for dealing with infringements when they occur. An agile and scalable takedown capability is critical for combating domain spoofing
- Combat the ‘bots’ – monitor social media for brand mentions and seek to detect the ‘bots’. While they are not always immediately obvious, there are often clues, such as the age of the account, the content being posted, and the number of friends and followers
- Monitor forums for information that could manipulate the share price – organisations should search for mentions of their brand or staff across forums, which could reveal malicious actors spreading disinformation
- Keep an eye on trending activity – monitor trending activity as it relates to an organisation’s digital footprint, which can help identify disinformation activity early
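The bot-detection clues mentioned above – account age, posting behaviour, and friend/follower numbers – can be sketched as a simple scoring heuristic. This is a minimal illustration: the `Account` structure, the thresholds, and the example values are all assumptions for demonstration, not a production detector.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Account:
    created_at: datetime      # when the account was registered
    tweets_per_day: float     # average posting rate
    followers: int
    following: int
    duplicate_ratio: float    # share of posts that are near-identical (0.0-1.0)

def bot_score(acct: Account, now: datetime) -> int:
    """Crude 0-4 suspicion score built from the clues in the text:
    account age, posting behaviour, and friend/follower numbers."""
    score = 0
    age_days = (now - acct.created_at).days
    if age_days < 30:                  # very young account
        score += 1
    if acct.tweets_per_day > 50:       # inhumanly high posting rate
        score += 1
    if acct.following > 0 and acct.followers / acct.following < 0.1:
        score += 1                     # follows many, followed by few
    if acct.duplicate_ratio > 0.8:     # mostly copy-pasted content
        score += 1
    return score

now = datetime(2018, 6, 1, tzinfo=timezone.utc)
suspect = Account(datetime(2018, 5, 20, tzinfo=timezone.utc), 120.0, 15, 400, 0.95)
print(bot_score(suspect, now))  # → 4: every clue fires, flagging the account for review
```

In practice these signals would be weighted and tuned against known bot datasets; the point is only that the clues listed above translate directly into cheap, automatable checks.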
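Likewise, the domain-spoofing monitoring step could be approximated with a small edit-distance check over newly registered domains. The domain names, the feed, and the distance threshold below are made-up examples; a real programme would consume a proper registration data source.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (ca != cb))) # substitution
        prev = curr
    return prev[-1]

def flag_lookalikes(brand: str, new_domains: list, max_dist: int = 2) -> list:
    """Flag newly registered domains whose label is within a small
    edit distance of the brand (exact matches are excluded)."""
    return [d for d in new_domains
            if 0 < edit_distance(brand, d.split('.')[0]) <= max_dist]

# Hypothetical feed of newly registered domains
feed = ["examp1e.com", "exannple.net", "unrelated.org", "example.com"]
print(flag_lookalikes("example", feed))  # → ['examp1e.com', 'exannple.net']
```

Each flagged domain would then feed the takedown process described above; the edit-distance check catches common typosquats ('l' swapped for '1', doubled letters) but not homoglyph tricks, which need additional Unicode-aware handling.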
Fake news and disinformation are not new phenomena, and they will not be going away anytime soon; indeed, continuing digitisation and the move towards less traditional media sources are only likely to accelerate the issue. The line between truth and fiction is often difficult to draw, but businesses need to do all they can to monitor and protect their own reputations, to ensure that next time it is not them in the attacker's crosshairs.