
Facebook will remove false information 'only if it leads to voter suppression, or threat of violence'

SINGAPORE — Fending off criticism from the Singapore Government over its handling of a Facebook post spreading misinformation, the social networking giant on Tuesday (Nov 13) stood by its policies, saying they have to be "very objective and black-and-white".

Under its existing policy, Facebook will remove inaccurate information circulating on its platform only if it leads to voter suppression, or poses a threat of imminent violence. Ms Monika Bickert, vice-president of Facebook’s product policy division, said that its content guidelines need to have “clear lines”, so that its team of 7,500 content reviewers can make a decision on whether to take down a post.

Ms Bickert was in town for the first-ever forum in Asia-Pacific on Facebook’s community standards. At a media session, she responded to questions from reporters asking about Facebook’s decision not to take down a post by States Times Review — an alternative news site — which linked Prime Minister Lee Hsien Loong to the corruption scandal involving Malaysia’s state fund 1MDB.

She stressed the need for Facebook's policies to be equitable, and to produce consistent outcomes across different users in various parts of the world. “We don’t want policies to apply to certain people but not others. We want everybody around the globe to use Facebook and use it safely," she added. 

On Nov 9, Singapore’s Ministry of Law (MinLaw) slammed Facebook for declining to take down the States Times Review article, which it called “false” and “defamatory”.

The Info-communications Media Development Authority (IMDA) said in a statement earlier on the same day that it had asked Facebook to deny access to the offending post.

In its statement, MinLaw said that Facebook “cannot be relied upon to filter falsehoods or protect Singapore from a false information campaign”, and added that Singapore needs legislation on deliberate online falsehoods.

Separately, a Facebook spokesperson responded on Tuesday to media queries on IMDA’s request. “We have a responsibility to handle any government request to restrict alleged misinformation carefully and thoughtfully, consistent with our approach to government requests around the world," she said. “We do not have a policy that prohibits alleged falsehoods, apart from in situations where this content has the potential to contribute to imminent violence or physical harm”.

At the media session, Ms Bickert reiterated that Facebook does not have a wholesale policy of removing false content because it would be extremely hard to police whether a specific piece of information is true or false. Also, there is the question of whether it is appropriate for a private company to determine whether the content is true or not, she said. 

To counter misinformation, Facebook instead surfaces relevant information from credible sources, so that accurate content is amplified and reaches more readers, she added. If governments find content that violates their laws, there is still a process for them to submit removal requests.

Facebook’s content policy team also highlighted three main areas of content that could violate its standards: hate speech, adult nudity, and dangerous individuals and organisations (such as terrorist propaganda).

For instance, hate speech is categorised as a direct attack on people based on nine protected characteristics, which include national origin, sexual orientation and caste.

On adult nudity, the policy prohibits the non-consensual sharing of intimate images as well as any sexual content involving minors.

As for Facebook's policy on dangerous individuals and organisations, the company classifies these as entities that coordinate violence in pursuit of a political or ideological agenda. It does not allow such entities to have a presence on the platform, nor does it allow content that praises, supports or represents them.
