
Social media platforms that perpetuate disinformation should be penalised, say German researchers

SINGAPORE — Individuals and technology companies which distribute online falsehoods must be held accountable, argued a German academic who has called on social media platforms like Facebook and Twitter to be more transparent about the algorithms at the heart of modern news consumption habits.

Morteza Shahrezaye, political data science researcher at Technical University of Munich, at the public hearings of the Select Committee on Deliberate Online Falsehoods, on March 16, 2018. Photo Credit: Ministry of Communications and Information.


These companies should also explain whether their algorithms are designed to make certain users more prominent than others, and if their systems can pick out fake accounts, known as bots, said Mr Morteza Shahrezaye, a researcher from the Technical University of Munich, on Friday (March 16).

“(When we say) that users and social media platforms should be responsible for the spread of news, the point is we think the users should get educated … And the companies could be more responsible and clearer about the goal of their algorithms, and they should explain (these) to the Government and to the public,” the academic said on the third day of public hearings held by a Select Committee studying how Singapore can tackle the spread of deliberate online falsehoods.

Mr Shahrezaye, in his written submission with another academic, had also suggested that it would not be enough for social media platforms, whose core business revolves around the distribution of content, to react merely by taking down posts when there are complaints by third parties.

They argued that this approach is “weak” because the onus of flagging questionable content is on the users. It could also be abused, such as when political opponents systematically flag social media posts they do not agree with in the hope that they will be removed.

Instead, these platforms should be held accountable and penalised, such as through fines, for allowing “illegal content” to persist in cyberspace, they argued.

In response, Law and Home Affairs Minister K Shanmugam said Mr Shahrezaye had identified an “age-old question” of commercial versus public interests.

“This is a classic case where their commercial interest may conflict with what is in the public interest,” the minister added. “And we will then have to decide what is in the public interest and see how their commercial interest and the public interest can be coincided… That is the task of every government.”

In his written submission with Dr Simon Hegelich, a political data science professor at the Technical University of Munich, Mr Shahrezaye also described the use of social media platforms for political communication as “an enormous misfit”.

The two researchers wrote: “It just takes a click to offer a friendship, to like a post, or show your support (on social media). The whole communication is guided by private affinity and emotions.

“But political discourses should not be convenient. In democracies, politics should be the result of debates, which are often arduous.”

However, the researchers acknowledged that the use of social media for political communication will not go away, and is in fact growing.

“Either we learn how to use these platforms in a way that fits better to what we are used to as political debating culture, or this culture will fundamentally change,” the two researchers added.

Earlier in the day, Ms Myla Pilao, director of data security firm Trend Micro, argued that while technology has enabled falsehoods to be spread at very low cost, it can also form part of the solution.

Artificial intelligence and machine learning can be deployed against online falsehoods, for instance, by programming computers to watch out for certain types of content, such as hyperbolic language, misspelt words and sensational headlines.
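As a rough illustration of the kind of screening described above, the sketch below scores a headline on a few simple red flags. The word list and scoring rules are purely illustrative assumptions, not drawn from Trend Micro's or any other real system; production detectors would use trained models rather than hand-written heuristics.

```python
import re

# Illustrative list of sensational trigger words (an assumption for this sketch).
SENSATIONAL_WORDS = {"shocking", "unbelievable", "exposed", "miracle", "secret"}

def suspicion_score(headline: str) -> int:
    """Count simple red flags: sensational words, hyperbolic punctuation, all-caps words."""
    score = 0
    words = re.findall(r"[A-Za-z']+", headline)
    # Sensational vocabulary
    score += sum(1 for w in words if w.lower() in SENSATIONAL_WORDS)
    # Excessive exclamation marks
    score += headline.count("!")
    # Words written entirely in capitals (shouting)
    score += sum(1 for w in words if len(w) > 3 and w.isupper())
    return score

print(suspicion_score("SHOCKING secret exposed!!"))   # → 6 (3 trigger words + 2 '!' + 1 all-caps)
print(suspicion_score("Parliament debates new bill")) # → 0
```

A higher score only flags a headline for closer review; real systems combine many such signals with machine-learned classifiers rather than relying on any single heuristic.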

Online tools such as Fakebox can analyse an article’s title, content and the domain on which it is hosted, in order to give an assessment on how trustworthy the story is.

Cautioning against outright blocking of websites, Trend Micro noted in its 35-page report to the committee that this may cause site owners to feel “oppressed”, leading to new problems rather than solutions.

Raising questions about the necessity and effectiveness of introducing legislation against the problem of online falsehoods, Trend Micro asked: “Though legislation helps, is it really necessary? Is encouraging a social media clampdown enough? Can technology partners be asked to help?”

It added: “Whatever path a government takes will depend on several factors, including but not limited to how effective legislature is in curtailing crime and similar acts from a historical perspective, how willing the platform and technology owners are to cooperate with the government, and what the nation’s current social and cultural realities are.”
