
Explainer: Why are tech experts warning about the danger of AI's growth and what can be done about it?

  • Artificial intelligence (AI) pioneer Geoffrey Hinton announced that he has quit his job at Google after more than a decade
  • He is part of a growing number of critics warning of the dangers of AI's rapid growth and calling for a temporary halt in development
  • These tech experts warn of an all-out race to develop and deploy ever more powerful AI that no one can understand or control 
  • They are also calling for the development of shared safety protocols for AI development 

SINGAPORE — Artificial intelligence (AI) pioneer Geoffrey Hinton announced this week that he has quit his job at technology company Google after more than a decade and that part of him regrets his life's work.

His research on neural networks — a set of algorithms that teaches computers how to process data in a way similar to the human brain — and deep learning has served as the foundation for current AI systems such as ChatGPT. 

In an interview with British broadcaster BBC, Dr Hinton said that some of the dangers of AI chatbots, which generate human-like text based on user prompts and refine their responses over time, were “quite scary”.

He is not the only one who has openly shared such worries: more than 27,000 people, including technology leaders and pundits, have signed an open letter on the website of the Future of Life Institute calling for a six-month pause on the development of all AI systems more powerful than GPT-4.

Future of Life Institute is a non-profit organisation that focuses on transformative technology and its impact on prospects for life. 

Among those who signed the letter was Mr Yoshua Bengio, who, along with Dr Hinton and Mr Yann LeCun, won the 2018 Turing Award — dubbed the Nobel Prize of computing — for their work on deep learning.

Tech industry leaders such as Twitter chief Elon Musk and Apple co-founder Steve Wozniak were also among the signatories. Mr Musk co-founded OpenAI, the research startup that created ChatGPT, in 2015, but stepped down from the company's board in 2018.

The letter came after OpenAI launched GPT-4, the latest version of its large language model, in March this year.

TODAY looks at what experts are worried about and what they predict will happen if such rapidly advancing technology is left unchecked. 

WHAT IS EVERYONE CONCERNED ABOUT? 

After the launch of ChatGPT, other tech companies such as Google and Meta scrambled to keep up, with Google releasing its ChatGPT rival Bard just a few weeks later.

Meta's chief executive officer Mark Zuckerberg also announced during the company's recent quarterly earnings call that his aim for the company, which owns social media platforms Facebook and Instagram among others, is to become a leader in generative AI.

This rapid push by tech companies was what prompted the open letter, which warned that advanced AI could represent a “profound change in the history of life on Earth” and should be “planned for and managed with commensurate care and resources”.

“Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control,” the letter read. 

It called for all AI laboratories and independent experts to use the pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are audited and overseen by independent experts. 

Google's chief Sundar Pichai echoed these sentiments in an interview with American news outlet CBS News, saying that concerns about AI keep him up at night.

“It can be very harmful if deployed wrongly and we don’t have all the answers there yet — and the technology is moving fast,” Mr Pichai said. 

Speaking to TODAY, Professor Bo An from the School of Computer Science and Engineering at Nanyang Technological University (NTU) acknowledged that there is "no doubt" AI can make the world better. 

"AI has transformed many industries and pretty much everybody has experienced the benefits of AI," he said. 

However, the technology is not yet mature, and there are many concerns such as bias in AI models, data privacy, ethical issues, robustness and security, he added.

Prof An is also the co-director of the Artificial Intelligence Research Institute at NTU.  

WHAT COULD A LACK OF CHECKS LEAD TO? 

Mr Pichai said that AI could cause harm through its ability to produce disinformation, such as fake videos in which a person appears to say or do something he never did.

Prof An raised a similar concern, saying that AI systems can easily generate untruthful, biased and toxic information.

"I feel that the biggest concern is that AI technology might be maliciously used, which might lead to harmful outcomes and even catastrophic consequences," he added. 

Fake photos of former United States president Donald Trump being arrested and of Pope Francis wearing a puffer jacket are just some examples of AI-generated content circulating on the internet that looks deceptively real.

Systems such as GPT-4 get facts wrong and make up information, a phenomenon called “hallucination”, The New York Times reported. 

News agency Bloomberg reported that news-rating group NewsGuard found 49 purported news websites, apparently generated by AI chatbots, proliferating online.

The pseudo-news sites had used AI to generate fake news articles such as “Biden dead, Harris acting President, address 9am” and a story about the deaths of thousands of soldiers in the Russia-Ukraine war.

ChatGPT has also wrongly accused US law professor Jonathan Turley of sexual harassment.

In a post on his blog, Dr Turley of George Washington University said that a fellow law professor had asked ChatGPT to research sexual harassment by professors.

ChatGPT then reported that Dr Turley had been accused of sexual harassment in a supposed 2018 Washington Post article, for allegedly groping law students on a trip to Alaska — a claim that was untrue.

Dr Hinton, the AI "godfather", had also expressed worries that AI technologies will eventually upend the job market, replacing human workers such as translators, personal assistants and others who handle rote work. 

Ms Anu Madgavkar, who leads labour market research at the McKinsey Global Institute, told British newspaper The Guardian that in the past, it was factory jobs that were largely affected by automation. Now, it is white-collar jobs that will be most affected by AI.

“It’s increasingly going into office-based work and customer service and sales,” she said. 

A paper written by OpenAI researchers estimated that 80 per cent of the US workforce could have at least 10 per cent of their work tasks affected by large language models and that 19 per cent of workers might see at least 50 per cent of their tasks affected.

Large language models are neural networks that learn from huge amounts of digital text. 

In its Future of Jobs report released on April 30, the World Economic Forum said that the growth of AI will disrupt many jobs and put others at risk. It predicted that there could be 26 million fewer record-keeping and administrative jobs by 2027.

This came after a report in March by investment bank Goldman Sachs, which found that AI could replace the equivalent of 300 million full-time jobs, automating a quarter of work tasks in the US and Europe.

However, economists told The Guardian that this could also widen the “already huge income and wealth inequality” in the US by creating a new wave of tech billionaires while pushing many workers out of better-paid jobs.

WHAT SHOULD BE DONE GOING FORWARD?

In his interview with CBS News, Mr Pichai spoke about the importance of society adapting quickly, with regulations for AI in the economy, laws to punish abuse, and treaties among nations to make AI safe.

"These are deep questions... How do you develop AI systems that are aligned to human values — and including — morality?", he said, adding that the development of this needs to include not just engineers, but social scientists, ethicists and philosophers.

“I think we have to be very thoughtful. And I think these are all things society needs to figure out as we move along. It’s not for a company to decide.” 

In Singapore's context, Prof An said that the country should continue to invest in AI research and development. 

He added that the Government, the tech industry and academia should collaborate on regulations that provide clear guidelines on AI in terms of data privacy, security and ethics.

"The Government should also launch sector-specific regulations, such as finance and healthcare, to push AI forward in these specific areas."  

In Singapore, the Advisory Council on the Ethical Use of AI and Data was established in 2018 to advise the Government on ethical, policy and governance issues arising from the use of data-driven technologies in the private sector, among other things. 

Chaired by former Attorney-General VK Rajah, the council consists of 13 members, including Mr Piyush Gupta, chief executive officer of DBS bank, and Mr Chia Song Hwee, deputy chief executive officer of Temasek International, the wholly owned management and investment arm of state investment firm Temasek Holdings.

TODAY has reached out to the Personal Data Protection Commission for comment on AI technologies and whether more regulations are needed.

Another concern flagged by experts is the “existential risk” that AI poses.

If future AI gains the ability to improve itself without human guidance, it could potentially wipe out humanity, experts told Time magazine. 

Though Prof An acknowledged that it is a valid concern, he said that current AI technologies are far from achieving such abilities. 

"We should try our best to develop AI technology to help human beings while avoiding it being maliciously used."
