Good bots, bad bots and the humans in between

26 Mar 2020

Straits Times, 26 Mar 2020, Good bots, bad bots and the humans in between

Last week, Facebook's anti-spam rule went haywire, wrongly blocking and flagging as spam a slew of legitimate posts and links, including those with news articles related to the coronavirus pandemic.

The malfunction happened a day after Facebook pledged to fight misinformation on the virus and said it would rely more heavily on content moderation that was based on artificial intelligence (AI).

Other tech platforms like Twitter and YouTube had also begun using more AI technologies for faster content moderation.

Although attributed to a bug in Facebook's anti-spam system, the snafu has contributed to a growing debate on whether humans or AI-driven bots are better at countering online disinformation.

On the one hand, automation is needed to tackle the sheer volume of content online. On the other, the shortcomings of AI - evidenced by Facebook's erroneous takedowns and even its warning that it would make "more mistakes" with greater automation - mean that human moderators, who can better grasp nuance, are still needed.

The debate intensifies when bad AI bots threaten to manipulate public discourse and interfere in domestic politics. Given how easily disinformation and deepfake impersonations spread on social media, the countermeasures must become more sophisticated.

A bot - short for robot - is a software program on the Internet that performs repetitive or automatic tasks, and can imitate a human user's engagement. Some are useful, but some are "bad bots" that can be programmed to perform malicious tasks.
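
To make the idea concrete, here is a bare-bones sketch of such a program - purely illustrative, with the `publish` function standing in for whatever posting interface a real platform would expose:

```python
import random
import time

# Hypothetical stand-in for a real platform's posting API.
def publish(account: str, text: str) -> None:
    print(f"[{account}] {text}")

SCHEDULED_UPDATES = [
    "Good morning! Here is today's weather summary.",
    "Reminder: wash your hands and stay safe.",
    "Today's headlines are now up on our site.",
]

def run_bot(account: str, interval_seconds: float = 3600) -> None:
    """A minimal bot: post canned updates on a schedule, with a
    little random jitter so the timing looks less mechanical."""
    for update in SCHEDULED_UPDATES:
        publish(account, update)
        time.sleep(interval_seconds + random.uniform(0, 2))

run_bot("@daily_update_bot", interval_seconds=1)  # short interval for the demo
```

The same loop, pointed at political hashtags and multiplied across thousands of fake accounts, is essentially what a "bad bot" does.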

The threat first gained widespread recognition in 2016, when social media bots infamously influenced the Brexit vote and US presidential candidate Donald Trump's narrow election victory.

With the Covid-19 outbreak showing no sign of abating, and in-person meetings and gatherings being cancelled, online political campaigning has become more important than ever - not least in this year's upcoming United States presidential election.

Questions over how candidates will connect with voters in the midst of the outbreak - and manage disinformation - also loom large over Singapore as the nation gears up for its next general election, which must take place by April next year.

In what is increasingly looking like a battle of bots online, what is the role of humans? Is there such a thing as a good bot?

GOOD BOTS NEEDED
While humans are best at understanding and detecting nuance, their work in content moderation on social media seems to have reached a tipping point.

Many moderators who have been engaged in the work for some years report that they are psychologically scarred by the nature of what they see and read, and point to a need for higher automation and better technologies to take over.

At Facebook, for instance, hours on end of viewing violence, bullying and nudity - content that violates the firm's community standards - have reportedly led to addiction and numbness in human moderators.

Some even embraced fringe views owing to the amount of hate speech and fake news they had to view all day for years, The Guardian reported late last year, citing people who worked at the social media firm's moderation centres in Berlin.

The signs of emotional distress: An employee reportedly shopped online for a Taser weapon as he started to feel fearful walking in the streets at night, surrounded by foreigners; a former employee slept with a gun at his side after being traumatised by a video of a stabbing incident.

Plus, manual content moderation cannot keep up with the sheer amount of fake and offensive content generated on social media.

Cyber-security software firm Imperva's Bad Bot Report last year showed that bad bots - including social media and spam bots - are now responsible for a fifth of all Web traffic. And that share is still growing.

Highly automated bot farms in Russia, the Philippines and Pakistan are behind many of the fictitious online accounts used to sway online sentiment. The torrents of deceptive news they generate can overwhelm any manual filtering.

Global activist group Avaaz reported that US election-related misinformation on Facebook had an estimated 86 million views from August to October last year - more than three times the number observed during the preceding three months.

The spike in traffic was already happening one year ahead of this year's US presidential election.

Zooming in on how well the top 20 fake news stories engaged online users, interactions reached 4.6 million from August to October last year - about 50 per cent more than the 3.1 million interactions recorded in the months leading up to the 2016 US presidential election.

Some of the most persuasive fake news stories last year analysed by Avaaz include:

  • The false claim that President Trump's grandfather was a pimp and tax evader, and that his father was a member of the Ku Klux Klan, an American white-supremacist hate group whose primary targets are African Americans (estimated views: more than 29 million)
  • The false claim that Ms Nancy Pelosi, the US politician serving as the Speaker of the House of Representatives, had diverted US$2.4 billion (S$3.5 billion) from Social Security to cover the costs of an impeachment investigation into Mr Trump (estimated views: more than 24 million)
  • The false claim that leading Democratic candidate Joe Biden called Mr Trump's supporters the "dregs of society" (estimated views: more than four million).

The speed at which disinformation is being produced calls for an equally expeditious countermeasure.

SMARTER BOTS
Bad bots have also grown smarter, making it harder to tell if an online engagement is from an actual human.

To take on this intelligent army of bad bots, equally advanced AI techniques and behavioural analytics are required.

For one thing, bad bots no longer just generate repetitive text to elevate the prominence of certain topics, but have multiple online accounts talking about the same thing to create an illusion of consensus, a study on Twitter bots released last year by computer scientists at the University of Southern California concluded.

The scientists also spotted bots replying to other posts with appropriate sentiments, and sharing polls with follow-up questions to seem more human.
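
What might such behavioural analytics look like in practice? A much-simplified sketch, assuming we already have each account's recent posts and timestamps, is to flag pairs of accounts that publish near-identical messages within seconds of each other - one crude signal of coordination. Real detection systems draw on far richer features, such as follower graphs, account age and language models.

```python
from difflib import SequenceMatcher
from itertools import combinations

def similarity(a: str, b: str) -> float:
    """Crude text similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_coordinated_pairs(posts, text_threshold=0.9, time_threshold=60):
    """posts: list of (account, unix_timestamp, text).
    Returns pairs of accounts whose posts are near-identical
    and published within `time_threshold` seconds of each other."""
    flagged = set()
    for (acct_a, t_a, txt_a), (acct_b, t_b, txt_b) in combinations(posts, 2):
        if acct_a == acct_b:
            continue
        if abs(t_a - t_b) <= time_threshold and similarity(txt_a, txt_b) >= text_threshold:
            flagged.add(tuple(sorted((acct_a, acct_b))))
    return flagged

# Toy usage: two accounts pushing the same line almost simultaneously.
sample = [
    ("@user_a", 1_000_000, "Candidate X is the only honest choice, spread the word!"),
    ("@user_b", 1_000_012, "Candidate X is the only honest choice - spread the word!"),
    ("@user_c", 1_000_500, "Lovely weather in Singapore today."),
]
print(flag_coordinated_pairs(sample))  # {('@user_a', '@user_b')}
```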

In a recent example, disinformation campaigns - purportedly Russian-linked - promoted conspiracy theories that the US was behind the coronavirus outbreak which started in Wuhan, China. US officials tasked with combating Russian disinformation said they had found false personae on Twitter, Facebook and Instagram sowing discord and undermining American institutions - in multiple languages.

The coronavirus has, since its detection last December, infected more than 436,000 people worldwide, killing more than 19,600.

NOT ENOUGH DONE
Good bots are already being used - albeit in narrow applications - for identifying and removing content related to terrorism and extremism on social networks.

Experts said these bots should also be more widely deployed to address disinformation in all its forms - an area in which social media firms have been falling short.

"If one has access to big data, as social networks do, a good bot can certainly be coded," said engineering product development associate professor Roland Bouffanais from the Singapore University of Technology and Design (SUTD).

SUTD communication and technology professor Lim Sun Sun said that combining the collective intelligence of humans and bots can lead to the desired outcome of faster and more nuanced moderation.

Describing good bots as positive intervention techniques, Professor Lim said: "Automated bots can be trained using big data to identify negative views through various words, terms or other content characteristics."

Once negative or undesirable content is identified, good bots can immediately slow its flow and "quarantine" its circulation to a very small portion of the social network, she said.

Human checkers can then be deployed to examine the content further before deciding whether to expunge it, limiting the role of humans to such higher-order tasks.

The measure was first outlined by SUTD's Prof Lim and Associate Professor Bouffanais in their joint article published in the IEEE Technology and Society Magazine last December.
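
A minimal sketch of how that triage might be wired together is shown below. It assumes a trained classifier already exists - the `toxicity_score` function here is a hypothetical placeholder - and simply throttles the reach of suspicious posts while queueing them for a human moderator to make the final call.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    post_id: str
    text: str
    reach_limit: float = 1.0   # fraction of the network the post may reach

# Hypothetical placeholder for a model trained on labelled moderation data.
def toxicity_score(text: str) -> float:
    suspicious_terms = ("hoax", "hate", "cure covid")
    return 0.9 if any(term in text.lower() for term in suspicious_terms) else 0.1

review_queue: List[Post] = []

def triage(post: Post, threshold: float = 0.8) -> Post:
    """Automated first pass: quarantine suspicious posts by limiting
    their circulation, then hand them to human reviewers."""
    if toxicity_score(post.text) >= threshold:
        post.reach_limit = 0.01      # "quarantine": show to ~1% of the network
        review_queue.append(post)    # a human decides whether to expunge it
    return post

triage(Post("p1", "Drinking bleach can cure Covid overnight"))
triage(Post("p2", "Stay home and stay safe, everyone"))
print([(p.post_id, p.reach_limit) for p in review_queue])  # [('p1', 0.01)]
```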

The main obstacle is the cost of implementing such technologies on a large scale.

AI machines have to be trained to understand different subject matter in different languages, and the subtleties of phrases and images in different contexts. For instance, the phrase "men should be sterilised" is offensive according to Facebook's policy, but "I just broke up with my boyfriend, and I hate all men" is not.
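
The difficulty is easy to demonstrate. A naive keyword filter - the toy example below uses an invented list of trigger phrases - flags both of Facebook's example sentences, even though only the first violates the policy; telling them apart requires a model that weighs who is being targeted and in what context.

```python
# A naive keyword filter; the trigger list is invented for illustration.
TRIGGER_PHRASES = ["should be sterilised", "hate all"]

def naive_filter(sentence: str) -> bool:
    s = sentence.lower()
    return any(phrase in s for phrase in TRIGGER_PHRASES)

examples = [
    "Men should be sterilised",                               # violates the policy
    "I just broke up with my boyfriend, and I hate all men",  # does not
]
for text in examples:
    print(naive_filter(text), "->", text)
# Both print True: keyword matching alone cannot see the difference between
# a statement targeting a group and someone venting about a personal experience.
```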

Mr Muhammad Faizal Abdul Rahman, research fellow at the Centre of Excellence for National Security, a unit of Nanyang Technological University's S. Rajaratnam School of International Studies, said social media firms should bear such costs as part of social responsibility.

"It would be expensive for social media companies, but it is in their long-term interest as the lack of security and social and political stability will create an environment that is bad for business," he said.

Facebook, in particular, has been heavily criticised for not doing enough to reduce false information on its platform, although it has pledged multiple times to do more, such as by depending more on AI.

The company still relies heavily on tens of thousands of human content moderators - many of whom are low-paid contract workers - evaluating posts in more than 50 languages round the clock at more than 20 sites globally.

Another glaring failing: It still allows political advertisements containing lies to run on its platform, such as a Trump ad that makes false claims about former vice-president Joe Biden.

Last month, US Senator Michael Bennet sent a letter to Facebook chief executive Mark Zuckerberg calling out Facebook for its inadequate efforts to stop manipulated media campaigns around the world.

LEGISLATION A SILVER BULLET?
Some nations have gone ahead with new legislation, believing that tech companies cannot be left alone to self-regulate.

Germany's Network Enforcement Act (NetzDG) came into full effect on Jan 1, 2018, subjecting online platforms to fines of up to €50 million (S$78 million) if they do not remove "obviously illegal" hate speech and other postings within 24 hours of receiving a notification.

Social media companies - including Facebook, YouTube, Instagram and Snapchat - are required to file reports on the number of complaints received about illegal content on their platforms every six months.

Singapore wants to introduce a new law curbing foreign interference in domestic politics, to tackle hostile online information campaigns head-on. It could take guidance from Australia, which passed sweeping foreign interference laws in 2018.

Among the key provisions in the Australian regime are bans on foreign political donations and a compulsory registry of all lobbyists representing foreign entities.

Australia has also criminalised covert, deceptive or threatening actions intended to interfere with democratic processes or provide intelligence to overseas governments. The penalties include a jail term of up to 20 years.

Critics are concerned about the contentiousness of such a law and its potential to quash dissent. To legislate or not, and what is to be legislated? It is a tough call. But if social media firms do not show more commitment to putting up a good fight in an imminent bot war, they could be forced by law to do so.

Recognising social media platforms' unparalleled power to shape social norms and political debates, Mr Faizal said: "Legislation can impede the spread and impact of disinformation - that is half the battle won."