Online safety and social media liability: Has the tide turned?

04 Oct 2022

The Straits Times
 
Last week, a United Kingdom coroner's court ruled that British teen Molly Russell died in 2017 "from an act of self-harm while suffering from depression and the negative effects of online content". The coroner determined that the social media content contributed to her death in a "more than minimal way".

The court heard that on Instagram, the 14-year-old had saved, liked or shared over 16,000 posts, including photos and videos romanticising self-harm and suicide. Some 2,000 of these were sad, gloomy and depressing images. Pinterest had prompted her with e-mails recommending content such as "10 depression pins you might like".

Her father is determined to ensure that she did not die in vain and has called out social media platforms for their negligence and culpability. Hopefully, this verdict will prove a crucial turning point in holding tech companies liable for online safety, especially for teens and children.

Prior to this landmark ruling, legislation such as Section 230 of the Communications Decency Act in the United States protected tech companies from most lawsuits over content posted by their users. Section 230 states that "no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider", and thus grants tech companies immunity with respect to third-party content.

The landmark ruling in the UK sets a precedent for shattering this legal shield, opening the door to holding tech companies liable for their algorithmic design and commercial decisions.

The response needed from social media services is clear. Tech companies must put in place stronger systems and processes to safeguard vulnerable users, such as teens and children, against harmful content relating to suicide and self-harm.

Two measures in Singapore
Indeed, some tech companies have taken progressive steps in this direction. For example, Instagram updated its policies in 2020 to ban self-harm and suicide posts. But more safeguards, such as better age verification systems, need to be put in place to prevent harmful online content from having devastating real-world effects.

In Singapore, the Government on Monday tabled measures to enhance online safety. One will empower the Infocomm Media Development Authority to issue orders to block or take down egregious content accessed by local users on social media platforms, such as posts advocating suicide or inciting racial and religious disharmony. Platforms with "significant reach or impact" in Singapore may also be required to comply with a code of practice on online safety.

This legislation is a sign of the times, and the subsequent enforcement of these measures will test the resolve of both the Government and the tech companies to make the Internet safer for users in Singapore.

Raising the bar on transparency, empowerment
While the UK court ruling effectively assigns partial responsibility to tech companies, where does that leave individual responsibility for the content we choose to consume?

Media history is replete with examples of society blaming the profiteering nature of media companies for its ills.

The penny press - cheap, tabloid-style newspapers mass-produced in the United States from the 1830s - and later the yellow journalism era at the turn of the 20th century were condemned for stoking moral degradation by reporting sensationalised gossip as news.

Notably, in 1938, a radio adaptation of H.G. Wells' The War Of The Worlds, which describes a Martian invasion, was blamed for causing nationwide hysteria. And well before the "A4 waist challenge" of 2016 triggered a backlash for fomenting adverse body image issues, the 1990s television series Baywatch was accused of fanning an unhealthy demand for breast augmentation.

But what sets these previous media panics apart from the current anxieties about social media is the distinct power asymmetry between consumers and the tech companies that own these platforms. Armed with vast troves of data on consumer habits and preferences, social media platforms can calibrate their algorithms to push content that resonates with us, at strategic times of the day, and in a fashion that is designed to seize our undivided attention. Mass broadcast and print media had none of these advantages.

In an era of personalised media, consumers must be forewarned to be forearmed. Therein lies the responsibility of technology companies to be more transparent about their tactics for engaging consumers on their platforms. In the same way that consumers can be trained to recognise advertising calculated to mislead, they must also be equipped to see how social media content is designed to appeal to them and to drive engagement.

Technology companies must do more to educate and empower consumers in tangible ways that go beyond the platitudes of corporate social responsibility. To accelerate and concretise such efforts, we need to put in place appropriate legal structures requiring algorithmic transparency and public education by tech companies.

Consumers, of course, must take ownership of their own media consumption by educating themselves on how social media algorithms work, and by adopting tools and practices to moderate their online habits.

Molly Russell did not know better because she was unaware of what she was up against when she ventured onto Instagram and Pinterest. Ensuring that all media consumers know better than she did is a grave task that society must take up, and urgently.
 

  • Lim Sun Sun is professor of communication and technology at the Singapore University of Technology and Design and author of Transcendent Parenting: Raising Children In The Digital Age (2020).

  • Chew Han Ei is senior research fellow at the Institute of Policy Studies, National University of Singapore.