Information Warfare: How It Works and How to Protect Yourself


Information warfare, or infowar, uses information and communication technologies (ICT) to harm an adversary, typically an enemy country. Both governments and hackers wage information war. But did you know it can also target a specific company or person?

What is information warfare (or infowar)?

Information warfare encompasses several techniques:

  • Destruction or disruption of the adversary’s communications and/or information systems. The attacker can, for example, jam military communications or the communications systems involved in his opponent’s weaponry. He can also launch attacks (physical or cyberattacks) on the communication systems of civilian services (e.g., airports, financial markets, hospitals) to cripple these infrastructures.
  • Gathering key information about the adversary, its strategies and maneuvers. For example, during the war in Ukraine, the Ukrainian army managed to connect Russian military communication networks to its own networks. This allowed it to spy on Russian communications and to cut them off at a decisive moment to prevent the transmission of important information. Spying and the analysis of personal data are also part of this category.
  • The neutralization of certain media (television, radio), websites or computer networks of the adversary. A belligerent can jam an opponent’s television transmissions, or launch Distributed Denial of Service (DDoS) attacks. A DDoS attack aims to take down a computer, a network or a website by flooding it with more requests than it can handle; the targeted system gets swamped and stops responding (see the sketch after this list).
  • Last but not least, the spreading of false information or propaganda to manipulate the opponent’s public opinion. In some cases, attackers hack television channels or websites to broadcast misinformation. This happened recently, when the hacker collective Anonymous hijacked the websites of Russian media outlets, including the news agencies TASS and RIA Novosti and the major newspaper Kommersant, and temporarily replaced their homepages with a message criticizing the Russian attack on Ukraine.
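
To make the flooding mechanism concrete, here is a minimal, hedged sketch (in Python) of the kind of sliding-window rate check a server or firewall can use to spot such request floods. The thresholds and the IP address are illustrative assumptions, not values from any real product.

```python
import time
from collections import defaultdict, deque

# Illustrative thresholds: flag any client sending more than MAX_REQUESTS
# requests within WINDOW_SECONDS.
WINDOW_SECONDS = 10
MAX_REQUESTS = 100

requests_by_client = defaultdict(deque)  # client IP -> timestamps of recent requests

def is_flooding(client_ip, now=None):
    """Record one request from client_ip and report whether it exceeds the rate limit."""
    now = time.time() if now is None else now
    window = requests_by_client[client_ip]
    window.append(now)
    # Drop timestamps that have fallen out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_REQUESTS

# Example: a burst of 500 requests from one address within a single second is flagged.
for i in range(500):
    flooded = is_flooding("203.0.113.7", now=1000.0 + i / 500)
print("flood detected:", flooded)  # flood detected: True
```

In a real attack the requests come from thousands of machines at once, which is why mitigation usually happens at the network or CDN level rather than in application code like this.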

Let’s now focus on this last technique, the spreading of misinformation, and its psychological manipulation tactics.

Misinformation, a potent component of information warfare

The psychological operations of information warfare are essentially based on misinformation and propaganda. Misinformation is the deliberate spreading of false or partially incorrect information. This is the famous “fake news”: information created out of thin air, or distorted to influence public opinion.

The aim is to destabilize an enemy country. How? By disseminating fake news and rumours that create panic and undermine the morale and trust of the population. But a State or an organization can also use disinformation at a domestic level to promote an idea or a concept (lobbying), or to spread propaganda messages.

Different types of media can be used: newspapers, television, radio, but above all social media and other internet sites. Indeed, we live in an increasingly connected world. The Internet allows easier access to all kinds of information. But the downside is that it also facilitates the spread of misleading information.

Who are the actors of the information war?

Infowar is not new. Propaganda has long been used as a weapon of war: during World War II, both sides used it to win over the population.

However, what gives this tactic new strength today is the exploitation of social networks and their algorithms, which boost the spread and impact of fake news and misleading messages.

Technology also allows the construction of more realistic fakes, thanks to “deepfakes”. Artificial intelligence generates these ultra-sophisticated forgeries. It is possible, for example, to produce a video of a fake interview based on a real one. Hackers can attribute polemical statements to political figures and spread the fake video to discredit the person or shock viewers.

This technique has been used on several occasions, especially to influence elections in several countries. Russia used it during the 2016 and 2020 US presidential elections. In 2016, Russian agents exposed nearly 126 million Americans to fake news through social networks. Their aim was to sow distrust of the Democratic candidate Hillary Clinton and to push voters toward Donald Trump.

It later emerged that Russia had also been spreading disinformation to encourage Britons to vote for Brexit. But Russia is far from alone in engaging in this kind of activity.

In fact, many governments are suspected of organizing disinformation campaigns to influence political debate in other nations or within their own countries. The United States, which often blames China and Russia, is itself a leading disinformation player: it recently ranked fourth in a report released by Facebook listing powers suspected of pursuing such strategies. Russia ranked first, followed by Iran, then the United States, tied with Myanmar (Burma).

How do infowar players proceed?

Disinformation operations rely on content produced by fictitious journalists or fake outlets that imitate legitimate media. These fabricated reports and stories, which often rely on sensationalism to get people’s attention, fool the less wary users.

They are thus encouraged to share and spread this misleading or inaccurate information within their community.

To amplify this echo, the backers of infowar may use troll farms (or troll factories): groups of people recruited to compose and spread disinformation messages on social networks on a massive scale.

Troll factories can also involve bots, i.e., programs that automate the broadcasting of messages. Bot farms sometimes run several thousand fake social network accounts. These fake accounts have a profile picture and realistic personal information. Often acting within specific groups, they simulate the activity of human users: they can like and share posts.

Worse, they often use artificial intelligence to generate comments and personalized messages to better capture the attention of targeted human users.
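
The flip side of this automation is that bot accounts leave statistical fingerprints. The toy sketch below (Python) shows the kind of heuristic used to flag accounts that post at machine-like speed or endlessly repeat the same message; the account fields, thresholds and example data are invented for the illustration and do not correspond to any platform’s real API or detection system.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Account:
    """Hypothetical, simplified view of a social media account."""
    handle: str
    posts_per_day: float
    messages: list

def looks_like_bot(account, max_posts_per_day=150, max_duplicate_share=0.5):
    """Toy heuristic: machine-like posting rate, or a feed dominated by one copy-pasted message."""
    if account.posts_per_day > max_posts_per_day:
        return True
    if account.messages:
        most_repeated = Counter(account.messages).most_common(1)[0][1]
        if most_repeated / len(account.messages) > max_duplicate_share:
            return True
    return False

# Example: an account reposting the same slogan hundreds of times a day gets flagged.
suspect = Account("patriot_news_24_7", posts_per_day=480,
                  messages=["Share this before it gets deleted!"] * 40 + ["hello"])
print(looks_like_bot(suspect))  # True
```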

Finally, in some cases, troll factories give legitimacy to their fake news by inviting well-known media outlets or personalities to spread it. This technique is called “disinformation laundering”.

A host of cognitive biases at work

Infowar campaigns are reinforced by various cognitive biases, which is why they are so effective. These biases are systematic deviations in the way we process and give credence to the information that reaches us. Here are some of the biases most conducive to the spread of misinformation:

  • Confirmation bias: the tendency most of us have to be more interested in information that confirms our beliefs than in information that might challenge them.
  • Availability bias: we tend to base our reasoning on information we have already memorized, rather than trying to update it with new information.
  • Shared information bias: we give more time and credit to information that comes from our loved ones than to information from outside sources (the official media, for example).

Echo chambers

These cognitive biases end up locking people inside an echo chamber. Over time, their opinions become more and more polarized, and they develop a deep distrust of public institutions and the press. As a result, public opinion appears increasingly divided.

Social network algorithms reinforce this effect. Many platforms enable a personalized content recommendation feature by default. This feature aims to keep the user on the platform as long as possible by serving content similar to what has already engaged them. This is where our cognitive biases come into play: that similar content is precisely the content that conforms to our pre-existing opinions, or that comes from people close to us. Users are constantly bombarded with content that confirms and reinforces their ideas (a toy illustration of this feedback loop follows below).
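
To see why this feedback loop polarizes, consider the deliberately simplified sketch below (Python). It is not any platform’s actual algorithm: content and the user are reduced to a single “stance” score, the feed always serves the closest matches, and engaging with them pulls the user’s stance further in the same direction.

```python
# Toy model of engagement-driven recommendation. All numbers are invented.
articles = {
    "A: institutions are failing us": -0.9,
    "B: reform is working, slowly": 0.1,
    "C: everything is a cover-up": -0.8,
    "D: independent fact-check": 0.6,
}

def recommend(user_stance, catalog, k=2):
    """Return the k items whose stance is closest to the user's current stance."""
    return sorted(catalog, key=lambda title: abs(catalog[title] - user_stance))[:k]

user_stance = -0.7  # the user already leans toward distrust
for step in range(3):
    feed = recommend(user_stance, articles)
    # Engaging with similar content nudges the user toward that content's average stance.
    user_stance = 0.5 * user_stance + 0.5 * sum(articles[t] for t in feed) / len(feed)
    print(f"step {step}: feed={feed}, stance={user_stance:.2f}")
# The stance drifts toward -0.85: the feed and the user's opinion keep reinforcing
# each other, and articles B and D are never shown again.
```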

The disinformation tactics of an information war aim precisely at this fragmentation of society. A nation’s enemies seek to deepen the divisions that fracture a society in order to weaken it even further.

In the United States, this situation had dramatic consequences during the COVID-19 pandemic. A large part of the population, relying on false information, refused to wear masks, tragically multiplying the number of infections and victims of the virus.


Businesses are also targets of information warfare

People often assume that only governments or politically motivated groups wage these information wars. After all, it takes considerable resources to create a troll farm such as the Internet Research Agency, the Russian organization that interfered in the 2016 US election.

However, the anti-vaccine fake news that flourished during the COVID-19 crisis, as well as the misleading messages seeking to deny global warming, remind us that any topic can become the subject of information warfare.

Businesses are no exception: a disinformation campaign can target them directly, or hit them as collateral victims.

The bad news is that, unlike cyber attacks, this type of campaign is easy to implement, as it does not cost much.

Indeed, on the dark web, for example, many individuals are willing to sell fake followers and spread any information for money. Jigsaw, a Google-affiliated company that focuses on cyberthreats and disinformation, estimates that a devastating fake news campaign against a rival can be bought for as little as $1,000.

Big brands, which are the most recognizable, are easy targets. But small and medium-sized businesses are not spared either; now, any business is susceptible to information warfare.

What are the risks?

On the Internet, information is king, and it proliferates. Malicious individuals, possibly paid by your competitors or your enemies, can try to launch an attack to damage your reputation. They may spread fake news intended to turn public opinion against you, or leak information stolen from your files, such as contracts, photos or confidential documents. And even if you manage to take down the original source, you cannot stop its dissemination.

A hacker can also break into your computer system to steal confidential documents and monetize them. If you are a journalist, you are particularly exposed to this type of threat.

Another possibility is spoofing: hackers can imitate your email address or your domain name to usurp your electronic identity. Most of the time, they seek to access your bank and Internet accounts. But they can also log into your social network accounts to post false messages and try to damage your reputation.

You can’t rule out espionage either, which is also a component of information warfare. A loophole in your business’s IT system may allow a clever hacker to access your secret documents, commercial contracts, prototypes or employment contracts.

How to protect yourself against these threats?

It is therefore essential to protect yourself against hacking and data theft, and to defend the privacy and confidentiality of your data. This requires solid cyber-protection measures to prevent intrusions and spoofing.
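
As one concrete example of such a measure, the hedged sketch below (Python, assuming the third-party dnspython package is installed) checks whether a domain publishes SPF and DMARC records, the DNS entries that tell receiving mail servers which senders may use your domain and what to do with forgeries. Here, example.com stands in for your own domain.

```python
import dns.resolver  # third-party package: dnspython

def get_txt_records(name):
    """Return the TXT records published for a DNS name (empty list if none)."""
    try:
        return [rdata.to_text().strip('"') for rdata in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

domain = "example.com"  # placeholder: replace with your own domain
spf = [r for r in get_txt_records(domain) if r.startswith("v=spf1")]
dmarc = [r for r in get_txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]

print("SPF record:", spf or "missing - anyone can claim to send mail for this domain")
print("DMARC policy:", dmarc or "missing - forged mail is unlikely to be rejected")
```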

Many journalists and dissidents already rely on Mailfence and the most secure protocols for their sensitive communications. But if you are a business or an organization, you will also appreciate a complete secure office suite. It includes not only email, but also a calendar, a meeting planner, a contact and group manager, a document storage and creation platform, and a chat tool, all secured by end-to-end encryption.
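
To illustrate the principle behind end-to-end encryption, here is a generic sketch using the Python cryptography library. It demonstrates the concept only and is not Mailfence’s actual implementation: the message is encrypted with the recipient’s public key, and only the matching private key, which never leaves the recipient’s device, can decrypt it.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The recipient generates a key pair and shares only the public half.
recipient_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_public_key = recipient_private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# The sender encrypts with the public key; any server relaying this ciphertext cannot read it.
ciphertext = recipient_public_key.encrypt(b"Meeting moved to 9:00, room B.", oaep)

# Only the holder of the private key can recover the message.
print(recipient_private_key.decrypt(ciphertext, oaep).decode())
```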

Get your secure email

Interested? For more information on Mailfence’s secure email suite, please do not hesitate to contact us at support@mailfence.com.

– Mailfence Team

