Necessary but tough to get tech companies to thwart hate speech and fake news

Social media companies like Facebook, Twitter and YouTube have increasingly come under fire for facilitating hate speech and fake news. However, the inherent nature of social networks provides no easy answers that can satisfy the interests of companies, governments and the general public alike.

A campaigner from a political pressure group wears an oversized mask of Facebook founder Mark Zuckerberg after he failed to attend a meeting on fake news held by a British Parliamentary committee in November.

A recent international hearing on fake news and disinformation held in London reinvigorated interest in Facebook Inc's role in facilitating the anti-Muslim riots that broke out in Sri Lanka in 2018.

Social media companies like Facebook, Twitter and YouTube have increasingly come under fire from governments for disrupting the democratic process and civil order, as well as for facilitating hate speech and fake news, which are often intertwined.

The sheer scale of the networks these companies provide has resulted in the amplification of hate on multiple occasions.

Apart from the riots in Sri Lanka, Facebook posts have stoked various communal tensions in parts of India.

This includes the 2017 week-long riot in the state of West Bengal that resulted in the death of one person and damaged the state's socio-religious fabric.

Similarly, in Myanmar, fake posts on Facebook sparked major anti-Muslim riots after they were shared by the extremist monk Wirathu in 2014.

Indeed, the 2018 Global Terrorism Index revealed that right-wing terrorism is resurging, and social media continues to play a substantial role in energising these extremist groups.

Tech company officials have been scrambling to deal with these issues by continuously revising existing guidelines and using artificial intelligence to recognise hate speech.

While these are positive developments, a few troubling issues bear scrutiny.

First, companies like Facebook and Google are still struggling to deal with the various definitions of terrorism, extremism and hate speech.

This is a problem also prevalent across both the policy and the academic community.

Indeed, the United Nations has no agreed-upon definition of terrorism and, now, extremism.

Second, this issue is exacerbated when one considers the sheer amount of content being produced in vernacular languages, of which Asia boasts more than 100.

In Asia, mobile data costs are declining rapidly, and mobile internet users are both increasing in number and diversifying in the languages they use.

In the next few years, this will create more problems, given that hate speech knows no linguistic barrier.

Third, it is also important to note that tech companies are still learning to use the tools that may be instrumental in combating these issues.

Artificial intelligence solutions that remove hate speech require large amounts of data to work.

This data, especially in vernacular languages, can only accumulate after content has already been uploaded, during which time many hate speech incidents are likely to arise.

Moreover, it is not easy for AI to make judgement calls on the ethics and morality of posts. Nor can AI easily counter innovative hate speech methodologies.

Technical limitations aside, a pressing concern that hampers all the efforts to prevent misuse of social media platforms is the continued reluctance of firms to implement costly content regulation mechanisms.

Indeed, a recent New York Times investigation revealed that executives at Facebook used lobbying tactics to reduce the heat on them instead of taking steps to contain interference in elections and abuse of user data.

Similarly, YouTube only started taking down extremist content after advertisers threatened to stop using the platform.

In other words, the potential loss of profits provided a larger impetus to remove content than the actual violent propaganda proliferating online.

This was further demonstrated in the British government's release of email exchanges between Facebook officials approving aggressive business strategies to maximise profits.

Silicon Valley firms, and more specifically "Big Tech" firms, have been viewed almost as a messianic force in the modern world, helping to empower all classes of people.

Consequently, tech companies have hitherto enjoyed an unregulated atmosphere, with little or no political interference in their growth strategies.

Dealing with the issue of hate speech and tech companies requires a multi-pronged approach.

Social media companies should collaborate with local fact checking websites to remove hate speech and fake news much faster.

AltNews in India, for instance, has debunked fake news originating from both right-wing and left-wing groups.

Consumers, apart from conducting their own research, should lobby advertisers to remove advertisements from such sites to reduce the prevalence of hate speech and fake news.

As noted, this has proven effective and will continue to be a useful strategy to pressurise tech companies.

Government regulations on hate speech can also emulate models like Germany's, which imposes hefty fines on social media companies that fail to remove hate speech within 24 hours.

Germany's strict laws have also ensured that issues like Holocaust denial are not promoted on social media.

This is not the case in places like the US, suggesting that content regulation can work if governments step in.

Finally, Tim Wu, the tech academic who coined the term net neutrality, advocates for the US government to use antitrust laws to break up the monopolies that tech giants enjoy.

This would increase the number of search algorithms that fake news propagators have to manipulate, thereby impeding their work (among many other benefits).

This, however, is a complicated process that requires immense political will and knowledge of both legal and technical issues.

No one can deny that social networks are indeed a great democratising force that holds potential to link people and causes.

However, the inherent nature of social networks provides no easy answers that can satisfy the interests of companies, governments and the general public alike.

In this regard, implementing any change is a long-term process and cannot be achieved overnight.

 

ABOUT THE AUTHOR:

Mohammed Sinan Siyech is a Research Analyst at the International Centre for Political Violence & Terrorism Research, a unit of the S. Rajaratnam School of International Studies.
