The 2019 report of the Computational Propaganda Research Project (CPRP) at the University of Oxford found that organized social media manipulation campaigns aimed at shaping domestic public attitudes have taken place in 70 countries, up from 48 in 2018 and 28 in 2017. Authoritarian regimes are early adopters, using computational propaganda to suppress fundamental human rights, discredit political opponents, and drown out dissenting opinions. In 2019, 47 of those states used state-sponsored trolls to attack political activists and journalists on social media, mass-reporting their content and accounts to get them removed or suspended. In short, state cyber troops’ domestic objectives are one or all of the following: (1) spreading pro-government propaganda, (2) attacking the opposition or mounting smear campaigns, (3) distracting or diverting conversations or criticism away from important issues, (4) driving division and polarization, and (5) suppressing dissent through personal attacks or harassment.
Irrespective of domestic agenda, some more sophisticated state actors started using computational propaganda to mount foreign influence campaigns. The CPRP report identifies seven countries that do so: China, India, Iran, Pakistan, Russia, Saudi Arabia, and Venezuela. However, recent takedowns by Twitter and Facebook have also added the UAE and Egypt to the list.
Thanks to pressure from the U.S. public, social media platforms have shifted their focus to state-sponsored global influence operations, which are objectively easier to spot than localized ones, and consequently more embarrassing to the state actor when caught. This has led state actors to hire private communications firms that offer computational propaganda as a service, which helps with both deniability and capacity. The CPRP report found evidence that, of the 70 countries surveyed, 25 used or continue to use such private contractors.
Disinformation as a Service
Among other things, social media changed the way companies and corporations do their public relations and strategic communications, which in turn changed the scope of services offered by their respective marketing communications agencies.
Such agencies started offering online reputation management and influencer marketing to promote and protect their clients online, which over time gave them the tools and know-how to conduct disinformation campaigns against their clients’ rivals, whether those clients were state actors or private corporations. The highly customizable and cost-effective nature of such campaigns created a chilling new threat for local and multinational companies. The annual Kroll fraud report, which surveyed 588 large companies across 13 countries, found that 84% of businesses feel threatened by digital hearsay and internet disinformation, having experienced their effects personally or witnessed them being used against competitors in the market.
For example, Metro Bank’s share price dropped 11% last May due to false rumors, circulated on WhatsApp and Twitter, warning of the bank’s impending financial collapse and encouraging customers in the U.K. to empty their accounts. The bank had to go on Twitter to reassure its customers of its financial health. In India, an e-commerce firm called Infibeam Avenues lost 71% of its market value in a single day last October, after a WhatsApp message circulated alleging corporate governance issues in the company. Such false market rumors and fake news campaigns can be devastating to companies if timed correctly, and given the legal liability associated with them, most legitimate communication firms do not engage in them. Luckily for those seeking these services, there are a number of “disinformation-as-a-service providers” available for hire in cybercriminal forums, providing their clients with complete deniability, as researchers at Insikt Group and Recorded Future uncovered last week.
Their research showed that there is an easy-to-navigate market for anyone looking to buy disinformation services, with Recorded Future analysts identifying two established threat actors offering them in Russian-speaking underground forums. These providers were “exceedingly professional” and “amenable to feedback”: they gave their “client” samples of past work, made prompt amendments to content, and offered a variety of services, from fake traffic to bots to placing articles in media sources ranging from dubious clickbait websites to Mashable, the Financial Times, and Buzzfeed.
Recorded Future researchers hired both actors to conduct parallel campaigns promoting and attacking a fictitious company, which cost them $6,050 in total, terrifyingly cheap for both state and private clients. The UAE government outsources much of its disinformation operations for that reason: it is cheaper than building local capacity, and it allows the state to scale quickly in times of crisis. The downside of using such private contractors is that you get what you pay for. Once they are caught and traced by the platforms, their infrastructure and accounts are shut down, and their content is analyzed to identify the disinformation campaign’s target and, subsequently, their client.
Not all such providers cover their tracks adequately: some use the same Facebook ad-buying accounts to pay for different campaigns across their network of pages, allowing researchers to shut down the entire network once a single campaign is noticed. Their clean-up efforts after exposure can be just as sloppy. One such company running a UAE campaign erased its entire online presence once exposed, but Google had still cached its information, including its address at the Abu Dhabi government media hub TwentyFourFiftyFour.
It is worth noting that such networks get shut down only when they run foreign influence campaigns for state actors, because those are easier to identify. Disinformation campaigns against business competitors, launched by private companies or by contractors for corporate clients, usually fly under the social networks’ radar, with the client remaining anonymous and liability-free. Disinformation campaigns against environmental regulation laws, such as the one run by CTF’s network in the U.K. on behalf of corporate clients, are not even illegal, and they are highly effective. However, the political unrest that plagued Egypt this past month marked a disturbing development and a harbinger of things to come, as a third group outside state actors and private providers used those methods and tools effectively to destabilize the Egyptian state and economy: the transnational movements.
Enter the Transnational Movements
Transnational movements are a collectivity of groups with adherents in more than one country, cooperatively engaged in efforts to promote or resist change beyond the bounds of their nation, and committed to sustained contentious action for a common cause or a common constellation of causes, often against governments, international institutions, or private firms. The most prominent examples of such movements launching disinformation campaigns are the international white supremacist movements, as well as the Muslim Brotherhood and ISIS. Historically, however, these fringe movements launched such campaigns for recruitment purposes, not as a method of destabilizing a state, which is what makes this development so worrisome. Ultimately, the objective of state-sponsored cyber troops is power, while the objective of private disinformation service providers is money. The objective of this transnational movement’s disinformation campaign, however, was chaos, on a national level.
It all started on September 2, with a whistleblower video made by an Egyptian building contractor named Mohamed Aly, whose construction company subcontracted projects for the Egyptian military. Appearing from Spain, where he had fled, he detailed examples of government graft, corruption, and mismanagement of public funds used to build military hotels and presidential palaces worth millions of dollars for Egyptian President Abdel Fatah el-Sisi’s family and friends. The video went viral, with millions of views and thousands of shares, prompting him to continue making videos with more examples of military financial corruption. These dominated the conversation all over Egypt for an entire week and sparked widespread outrage online against the Egyptian state, which fumbled its response to the videos, adding fuel to the fire.
The following week, Aly called for demonstrations on Friday, September 20, under the hashtag “EnoughSisi,” which reached more than one million tweets on Twitter within days. Groups with millions of followers sprang up all over Facebook calling for a revolution against the state, filled with further “leak” videos by men claiming to have inside sources saying the military wanted to remove Sisi from power and was awaiting massive public demonstrations on Friday to provide the political cover to do so. The demonstrations were a modest success, with hundreds demonstrating all over Egypt.
The government countered the next day with mass arrests and disruption to Facebook Messenger and Facebook image CDN servers, as well as BBC News and other news sites in Egypt. By Sunday, Twitter had also become intermittent in Egypt, while network data indicated that 40% of Facebook Messenger users were experiencing difficulty connecting at any given time, as well as users of Skype and other VOIP services. By Tuesday, the Egyptian stock market lost 11% of its value as more calls for protests led to mass selling by foreign investors.
The Egyptian state continued to arrest people haphazardly, detaining more than 3,000 in total, and spread security forces all over downtown Cairo and other protest sites, eventually instilling enough fear to dissuade further demonstrations. By then, however, the economic damage to the country and the reputational damage to the President and the military was done, leaving the state scrambling for policies to absorb the suppressed anger and resentment the population had shown, in hopes of avoiding future instability.
A Strange Alliance
The disparity between the volume of the anti-government hashtags and the modest number of actual participants led researchers to analyze the tweets behind them. They discovered that, mixed in with authentic tweets from regular citizens and leftist activists both at home and abroad, were tweets from bot accounts launched by pro-ISIS groups, urging Egyptians to take down the infidel president and repeatedly posting propaganda videos on trending hashtags.
Other researchers noted that a large number of newly created fake accounts were posting via the IFTTT application, while other analysts found huge numbers of tweets sent from previously dormant accounts, as well as newly created ones, in crude attempts to dominate the conversation. Some 1,700 newly created accounts were responsible for 378,000 tweets alone, and were created in Egypt, Qatar, Turkey, and Kuwait. The Facebook groups and pages posting the leak videos and content supporting the Muslim Brotherhood were operated from the U.K., Turkey, and Ukraine.
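The signal those analysts relied on, bursts of tweets from freshly created accounts, can be approximated with a simple heuristic: flag any account that was registered shortly before the event yet tweets heavily about it. A minimal Python sketch, using hypothetical account data and illustrative thresholds (the field names and cutoffs here are assumptions, not the researchers’ actual methodology):

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical tweet records: (account_id, account_created, tweeted_at).
# In a real analysis these fields would come from the platform's API.
tweets = [
    ("acct_1", datetime(2019, 9, 15), datetime(2019, 9, 20)),
    ("acct_1", datetime(2019, 9, 15), datetime(2019, 9, 21)),
    ("acct_2", datetime(2015, 3, 2),  datetime(2019, 9, 20)),
    ("acct_3", datetime(2019, 9, 18), datetime(2019, 9, 20)),
]

def flag_new_account_bursts(tweets, event_day, max_age_days=30, min_tweets=2):
    """Flag accounts created within max_age_days of the event that
    tweeted about it at least min_tweets times."""
    counts = Counter(acct for acct, _, _ in tweets)          # tweets per account
    created = {acct: c for acct, c, _ in tweets}             # account creation dates
    cutoff = event_day - timedelta(days=max_age_days)
    return sorted(
        acct for acct, n in counts.items()
        if n >= min_tweets and created[acct] >= cutoff
    )

suspicious = flag_new_account_bursts(tweets, datetime(2019, 9, 20))
print(suspicious)  # only acct_1: two tweets from a days-old account
```

Real detection pipelines combine many more signals (posting application, dormancy gaps, content similarity), but even this crude age-versus-volume filter illustrates why 1,700 brand-new accounts producing 378,000 tweets stands out so starkly.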
Additionally, many of these groups and pages listed a different name and activity while amassing their millions of followers, before changing name and objective overnight, suggesting that the massive apparent support for these campaigns did not reflect reality. For example, one Facebook group was originally titled Montaqibat Gamilat Afifat (Beautiful and Chaste Niqab Wearers) when it was created on January 18, 2019; on September 17, 2019, three days before the planned demonstrations, it changed its name to #Enough_Sisi.
Whether those pages and groups were created with this intent from the start or were sold to Muslim Brotherhood members by their original creators remains unclear. Likewise, given that there is no love lost between the Muslim Brotherhood and ISIS supporters, and given the different methods, vehicles, and tactics they used, it is clear that they did not plan this together. What is clear is that both Muslim Brotherhood and ISIS supporters separately conducted their own computational propaganda campaigns, using human accounts and bots to spread disinformation and amplify the appearance of dissent, destabilizing the market, the public, and the government with precisely that purpose.
The Muslim Brotherhood and ISIS might be examples of radical fringe transnational movements, but they are the first of their kind to have used the strategies, tools, and tactics of computational propaganda campaigns, usually reserved for state actors and private providers, to destabilize a country both politically and economically. They also won’t be the last.
Other transnational movements, whether political, social, or environmental, will soon realize the advantage such tools and tactics, or even such providers, might give their cause or objectives. Environmental activists could start a computational propaganda campaign against a targeted multinational with the goal of destroying its stock market value. Social activists could do the same to a financial institution or an investment bank they deem evil. Political opponents of President Trump might start spreading fake news about him to erode support among his base in the coming 2020 elections. Campaigns can be started by believers united in a cause across nations, a million Davids throwing rocks at various Goliaths, or simply by an investor who shorted a company’s stock and needs it to fall to make a profit. The democratization of defamation, division, and destabilization, globally, is in the hands of any citizen.