
Commentary: Bad bots can cause all kinds of harm online. Here's how to protect yourself


In the month of April alone, an artificially generated Drake song went viral on TikTok, an artificial intelligence (AI) image won a global photography prize, and Singaporeans were bombarded with false WhatsApp claims of a new deadly Covid-19 outbreak.

In a world of social media, instant messaging and online virality, Singaporeans have unfettered access to information that can be consumed and shared in a matter of seconds.

But this world also comes with scams, fake accounts and AI-created internet bots, which now pose a dangerous threat by spreading misinformation and propaganda.

Just as an AI image can convincingly pass as photographic art, internet scammers can manipulate young Singaporeans into sharing fake news and handing over their data and money.

Indeed, Millennials and Generation Z were found to make up over half of Singaporean scam victims last year. And only recently, it was reported that 137 people collectively lost S$170,000 to fake concert ticket scams.

While humans have played a significant role in the dissemination of online fraud, the rise of automated accounts known as bots has added a new layer of complexity to the issue.

Today it’s estimated that 10 per cent of daily active Twitter users are bots. TikTok, meanwhile, reported that it removed 53 million fake accounts between January and June 2022.

With the rise of generative AI, most notably ChatGPT, the prevalence of bots, fake news and scams is only going to increase in Southeast Asia.

Unless individuals and organisations become aware of the dangers lurking online, their personal and financial information will remain at risk.

ABUSE AND MISUSE

Last year it was revealed that bad bots account for over a quarter of website traffic in Asia Pacific, with Singapore taking the highest proportion of fake traffic in the region at 39 per cent.

The prevalence of bad bots in Southeast Asia is unsurprising. Almost 70 per cent of Southeast Asia's population actively use social media, with young adults between the ages of 16 and 24 spending over 10 hours a day online.

In addition, the region is the world’s seventh-largest market with a combined gross domestic product of more than US$2.7 trillion (S$3.6 trillion). As a result, the region has become a prime target for cyber attacks.

For young Singaporeans who are dealing with skyrocketing property prices and a competitive jobs market, it’s easy to dismiss bots and associated scams as, at worst, an online nuisance.

But bots are lurking in every crevice of the internet, posing an ominous threat.

They enable high-speed abuse, misuse and attacks on websites, mobile apps and application programming interfaces (APIs). Successful attacks can result in the theft of personal information, credit card data and loyalty points.

On social media, bots can infiltrate groups of people and be used to propagate specific ideas.

Since there is no strict regulation governing their activity, social bots play a major role in creating fake news and influencing online public opinion, thereby posing a risk to many young adults.

Although social media networks have clamped down on them, bots can still create fake accounts, amplify their operator’s message and generate fake followers and likes.

Even today, it is difficult to identify and mitigate social bots because they mimic the behaviour of real users, leaving moderators none the wiser.

Deep fakes, such as a pro-China influence campaign that used AI avatars, have been used for political purposes and are becoming increasingly sophisticated. AI bots also have sparked a surge in phishing campaigns, the most popular scammer tactic.

This sees cyber criminals steal data or money using fake emails or mobile messages, which can even be personalised into various languages using ChatGPT.

On numerous occasions, these messages have contained ransomware, which can have potentially devastating consequences for both organisations and individuals.

WHAT CAN USERS DO?

So what can users do to protect themselves and their data?

First, awareness is key.

For individuals, the golden rule is that if an email or message looks suspicious, report it and/or move it straight to the spam box. Do not, under any circumstances, click on any attached links.

This applies even if a message purports to be from a familiar business, such as a bank or telecommunications provider. Many companies are increasingly seeing their emails and branding spoofed as part of scammers' phishing efforts.

If ever in doubt, call the company’s switchboard before following through with any email instructions.

It is also important to use different passwords on different sites in case of any company data breaches, as seen with Singtel subsidiary Optus last year.

Last but not least, ignore any spam bots or social media posts with any offers stating “free giveaway” or “free followers”. If a post seems too good to be true, it probably is.

From an organisational standpoint, leaders need to ensure their cyber security technology is up to date.

Businesses that see any unusual traffic spikes may have been targeted by bots, thereby making it imperative to constantly check and verify sources.
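
What counts as "unusual" can be made concrete with a simple baseline comparison. The Python sketch below is a minimal illustration, not a production detector: it flags any minute whose request count far exceeds the recent average, with the window size and spike factor as assumed, tunable values.

```python
from collections import deque

def make_spike_detector(window=60, factor=3.0):
    """Flag a period whose request count far exceeds the recent average.

    `window` (periods of history) and `factor` (spike multiplier) are
    illustrative assumptions, not recommendations.
    """
    history = deque(maxlen=window)

    def check(request_count):
        baseline = sum(history) / len(history) if history else None
        history.append(request_count)
        # Only flag once a baseline exists and the count far exceeds it.
        return baseline is not None and request_count > factor * baseline

    return check

detector = make_spike_detector()
for count in [120, 130, 110, 125, 900]:  # sample per-minute request counts
    if detector(count):
        print(f"Possible bot activity: {count} requests in one minute")
```

Real traffic is noisier than this, of course, so any such alert is a prompt to check and verify the sources rather than to block them outright.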

IT teams should deploy Captcha challenges and ensure that outdated user browsers are blocked, along with traffic from unknown hosting providers and proxy services.
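
As a rough sketch of what that filtering might look like in application code, the Python below checks each request against simple block rules. The browser markers and network numbers are hypothetical placeholders (the ASNs come from the range reserved for documentation); a real deployment would pull these from a maintained threat feed.

```python
# Illustrative request filter. The block lists are placeholders,
# not real threat intelligence.
OUTDATED_BROWSERS = ("MSIE 6", "MSIE 7", "Chrome/49")  # assumed examples
HOSTING_ASNS = {"AS64496", "AS64511"}  # hypothetical data-centre/proxy networks

def should_block(user_agent: str, client_asn: str, is_known_proxy: bool) -> bool:
    """Return True if a request matches any of the simple block rules."""
    if any(marker in user_agent for marker in OUTDATED_BROWSERS):
        return True  # outdated browser strings are a common bot signature
    if client_asn in HOSTING_ASNS or is_known_proxy:
        return True  # traffic from hosting providers or proxies, not home ISPs
    return False

# Example: a request claiming to be Internet Explorer 6 from a hosting network
print(should_block("Mozilla/4.0 (compatible; MSIE 6.0)", "AS64496", False))  # True
```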

Finally, businesses may want to investigate technology providers that specialise in bot mitigation using AI.

The accessibility of AI and machine learning will enable online scammers and fraudsters to conduct more advanced attacks.

Even now, the volume and sophistication of bots put the problem beyond the means and resources of most organisations' IT teams.

If scammers are using AI tools to commit fraud, then fight fire with fire.

ABOUT THE AUTHOR:

Patricia Freijo is vice-president of Customer Growth at TrafficGuard, a global advertising verification company that provides fraud protection for brands, apps and ad networks.
