
Commentary: Singapore’s move to regulate online safety faces challenges, but society can help

Singapore is mulling two proposed codes of practice to promote online safety for users, following a global trend of authorities looking into legislation to tackle online harms. But challenges remain, and society needs to come together to deal with the problem of online harm.


The court heard that Koh Liang Ming posted lewd comments about the woman, as well as her contact details, after she rejected him romantically.

Singapore is mulling two proposed codes of practice to promote online safety for users, both announced last week by the Ministry of Communications and Information (MCI).

The Code of Practice for Online Safety would require social media services identified as having high reach or high risk to put in place processes to protect users, while the Content Code for Social Media Services would grant the Infocomm Media Development Authority (IMDA) the power to request that social media services disable harmful content or accounts.

The two codes are expected to work in tandem to introduce greater protections for users online. MCI's announcement follows a global trend of authorities turning to legislation amid escalating concerns about online harm.

For instance, the European Union's proposed Digital Services Act, when passed, would require online platforms to introduce practices to identify risks, undergo independent audits, and share data with authorities and researchers.

Singapore's move reflects concerns about online safety, especially for at-risk groups. As social media becomes an integral part of everyday life, even for minors, cases of harmful content and abuse have grown at an alarming rate.

You might have heard of the "Nth Room" case in South Korea, where individuals such as Cho Ju-bin used Telegram from 2018 to 2020 to blackmail more than 100 women and girls, and to distribute and sell sexually explicit content of them.

Similarly, in Singapore, a Telegram chat group called SG Nasi Lemak was created in 2018 as a platform for users to share obscene videos and photos of women and girls.

In 2020, TikTok users reported distress and trauma after a video of a man taking his own life circulated on the popular video-sharing service. These are but a handful of the instances of online harm that have emerged in recent years.

The two proposed codes are distinct from earlier online legislation such as the Protection from Online Falsehoods and Manipulation Act (Pofma) and Protection from Harassment Act (POHA).

While Pofma is aimed at countering falsehoods, POHA provides protection against harassment or stalking, which is more directly relevant to the issue of online harm.

However, POHA by itself is insufficient to cover the scenarios associated with online harm. For instance, individuals may not even be aware that explicit content about them is being distributed, and so would not be able to file complaints under POHA.

POHA is also meant to provide recourse for survivors of harassment; it is not designed to engage social media platforms.

While social media platforms such as TikTok and Meta have issued statements welcoming the codes, working with the authorities implies that the platforms will need to collect or synthesise data to compile the required reports, and that they will be held to greater accountability for harm that emerges on their platforms.

Greater monitoring and sharing of data can also be seen as antithetical to privacy, a concern that has been growing among end-users, who may not always understand the extent to which their personal data is anonymised.

Developing frameworks and guidelines to govern platforms' practices around users' data is crucial, especially for vulnerable persons such as minors.

For example, exemptions will need to be defined while outlining the data rights of individuals, and clarity will also be needed in terms of how data are stored and/or disposed of in different scenarios of online harm.

Looking ahead, it is important to mention that instant messaging applications such as WhatsApp and Telegram are not currently covered by the codes, perhaps for technical reasons.

Instant messaging platforms such as WhatsApp use end-to-end encryption and self-destructing messages, which mean that messages can be accessed only by the sender and the recipient(s), and may disappear after some time.

Yet, such features have been observed to fuel past cases of online harm, as with the cases of the Nth Room and SG Nasi Lemak. Questions about how to mitigate the role of instant messaging platforms in perpetuating online harms remain.

Another challenge has to do with user-related practices.

In my own research, I have spoken to many parents who lament that they do not know how to manage their children's use of social media, largely because, as non-users, they do not always understand the features of different social media platforms.

Children and youths who are well-versed with technology can also find ways to circumvent content moderation settings and safeguards.

These issues point to the inadequacy of the codes on their own: instant messaging platforms may be out of reach as a result of the very way they are designed, and tech-savvy minors may find ways around the safeguards in place.

Indeed, to see the codes alone as the solution to online harm would be a missed opportunity. Social media is now an integral part of our lives, and we need to come together as a society to deal with the problem of online harm.

This means that, as parents, we need to engage in dialogue with our children, and with each other, about their use of online platforms.

As educators, we can do more to move beyond digital literacy towards guiding our students as responsible digital citizens.

As fellow online users, we can look out for one another. As a society, we will need to contribute our collective voices on what we define as online safety, and on the terms of engagement.

Online harm is an issue of our time, and society is better off when we can all see and play our part in addressing it.

 

ABOUT THE AUTHOR:

Dr Natalie Pang is a senior lecturer and deputy head at the Department of Communications and New Media, and principal investigator at the Centre for Trusted Internet and Community, both at the National University of Singapore. She is also a member of the research workstream of the Sunlight Alliance for Action to tackle online harms, especially those targeted at women and girls.
