We’re in a “Golden Age” of disinformation and misinformation. While both are bad, disinformation is arguably worse because it’s intentional: false information spread on purpose, with no moral ambiguity about it. Misinformation, in contrast, isn’t intentional – it’s information shared without anyone checking its accuracy first, or something getting lost in translation. Misinformation is like that old children’s game of Telephone, whereas disinformation is spread with very clear intent to push things that are not true.
Social media is playing a significant role in these issues. It’s not just that things are slipping through filters: as the recent Facebook leaks have shown, these massive companies have been actively turning a blind eye to disinformation rather than combatting it.
How disinformation affects individuals and their employers
A large portion of the workforce is struggling to make decisions amid the sheer volume of disinformation and misinformation in circulation. This comes at a time when we already have a massive amount of employee transition due to the Great Resignation. For example, we are seeing a lot of stories about employees leaving jobs because of vaccine mandates – when the fact is that their decisions about these mandates may have been shaped by misinformation or disinformation.
People are just consuming and reacting, often hastily, which can cause major issues for both individuals and companies. In some cases, it can even lead an employee to retaliate against their employer.
And bad actors, who never let opportunities go to waste, are taking advantage of disinformation. They’re leveraging news cycles and heightened political sentiment, among other triggers, to prey on people. For instance, a false Facebook post claimed that then-President Obama had outlawed the Pledge of Allegiance in public schools. The story, created out of whole cloth by a fake news site, generated more than 2 million interactions.
It can also take the form of, say, a business email compromise attack. Bad actors might take advantage of a recent data breach by sending an email along the lines of, “Your account was compromised” or “You’re locked out of your account – click here to change your password.” Essentially, these cybercriminals are exploiting the individual’s decision cycle to gain access to corporate computer systems, sensitive information, bank accounts and more.
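To make that concrete, here is a minimal, illustrative Python sketch of the kind of heuristic an email filter might apply to such lures. The trusted-domain list, the lure phrases and the function name are hypothetical placeholders invented for this example, not any particular vendor’s detection logic.

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist of domains the organization actually sends mail from.
TRUSTED_DOMAINS = {"example-corp.com", "mail.example-corp.com"}

# Illustrative lure phrases of the "your account was compromised" variety.
URGENCY_PHRASES = (
    "your account was compromised",
    "locked out of your account",
    "click here to change your password",
)


def looks_like_credential_lure(sender: str, subject: str, body: str) -> bool:
    """Rough heuristic: urgent credential language combined with an
    untrusted sender domain or links pointing outside trusted domains."""
    text = f"{subject} {body}".lower()
    urgent = any(phrase in text for phrase in URGENCY_PHRASES)

    sender_domain = sender.rsplit("@", 1)[-1].lower()
    link_domains = {
        urlparse(url).netloc.lower() for url in re.findall(r"https?://\S+", body)
    }
    untrusted_links = bool(link_domains - TRUSTED_DOMAINS)

    return urgent and (sender_domain not in TRUSTED_DOMAINS or untrusted_links)


if __name__ == "__main__":
    print(looks_like_credential_lure(
        sender="security@examp1e-corp.com",  # lookalike domain - note the "1"
        subject="Action required",
        body="Your account was compromised. Click here to change your password: "
             "http://examp1e-corp.com/reset",
    ))  # -> True
```

Real filters weigh far more signals (authentication headers, sender reputation, attachment analysis), but the point stands: the attacker is exploiting urgency, and even simple checks on where a message really comes from can interrupt that decision cycle.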
Combatting disinformation
Disinformation, as stated above, is the intentional spreading of false information for nefarious purposes. However, disinformation can turn into misinformation that’s spread organically when people don’t realize it’s disinformation. There’s also the balance with personal freedom of speech that must be considered: how do you suppress deliberate disinformation without also silencing the people who unknowingly share it?
There’s a lot of conversation about holding social media platforms much more accountable, of course. This has certainly played out in the past several months, especially in light of the latest Facebook leaks. But we must also acknowledge that the problem is difficult precisely because of that balance with personal freedom – it’s not black and white.
What this means is that companies (not just the big social media players) must take on a bigger role in this fight. It comes down to protecting their employees, as well as their brand, from the potential fallout of disinformation campaigns. And that requires paying much more attention to social media. But when it comes to information being spread on social media, you can’t just deal with the content – it goes deeper.
Evaluating authenticity in an inauthentic world
Beginning to root out the spread of disinformation involves analysis of the entities and profiles – the accounts involved in these conversations. Just looking at the conversations and social media posts themselves isn’t enough. Even if you flag a post and it gets deleted, that only scratches the surface of the problem.
You have to evaluate the characteristics of the accounts involved in these conversations. How new are they? What does their network look like? Do they have any other characteristics that look inorganic or inauthentic? If a profile is only two months old, already has 50,000 followers and doesn’t belong to a celebrity, those are signs of an inauthentic account.
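As an illustration of what that account-level screening might look like, here is a minimal Python sketch. The `AccountProfile` fields, the thresholds and the flag wording are assumptions made up for this example, not the criteria of any specific platform or tool.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class AccountProfile:
    """Hypothetical snapshot of an account's public metadata."""
    created_at: datetime
    follower_count: int
    following_count: int
    is_known_public_figure: bool


def inauthenticity_signals(profile: AccountProfile) -> list[str]:
    """Return human-readable flags for characteristics that look inorganic.
    Thresholds are illustrative placeholders, not calibrated values."""
    signals = []
    age_days = (datetime.now(timezone.utc) - profile.created_at).days

    # A very new account with an outsized audience and no celebrity status
    if (age_days < 90
            and profile.follower_count > 10_000
            and not profile.is_known_public_figure):
        signals.append("new account with an unusually large following")

    # Follows vastly more accounts than follow it back (common in bulk-created profiles)
    if profile.following_count > 10 * max(profile.follower_count, 1):
        signals.append("follows far more accounts than follow it back")

    return signals


if __name__ == "__main__":
    # The two-month-old, 50,000-follower profile described above
    suspect = AccountProfile(
        created_at=datetime.now(timezone.utc) - timedelta(days=60),
        follower_count=50_000,
        following_count=300,
        is_known_public_figure=False,
    )
    print(inauthenticity_signals(suspect))
    # -> ['new account with an unusually large following']
```

In practice, signals like these would be weighed alongside network analysis and content patterns before anything is labeled inauthentic; the point is that the account’s characteristics, not just its posts, carry the evidence.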
This kind of analysis is necessary, but it’s admittedly time-consuming, and not all companies can dedicate the resources to do it. Even the larger, marquee companies are struggling with it. This is where new technologies can come in – tools that help conduct this kind of analysis and authenticity checking. These tools use open-source information to create personal and corporate intelligence and risk assessments across the internet, including social media.
They can conduct threat intelligence monitoring and digital risk protection of reputation, operations and information systems without human intervention, freeing up resources while providing more comprehensive results with greater accuracy. They provide risk-based assessments and insights that corporations can use to make decisions and determine what they need to be the most concerned about – such as stopping socially engineered cybersecurity attacks.
Acting on (dis)information
Misinformation and disinformation are a plague infecting the digital landscape. It’s clear that social media giants aren’t going to police themselves, so companies must take their own stand.
Business leaders must create and enforce social media policies that protect them and their employees without infringing on free speech rights. They must also develop a system, whether run in-house by staff or by a service, for carefully monitoring any online information that malicious actors could use against them. When disinformation has the potential to become a significant risk to corporate reputation, physical safety and more, it’s time to act.
The opinions expressed in this post belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.