
Advisory council on ethical use of AI to be set up

SINGAPORE — To encourage responsible adoption and development of artificial intelligence (AI), an advisory council — headed by former Attorney-General (AG) V K Rajah — will be set up to look into the wide-ranging ethical and legal implications arising from the use of the technologies.

The Advisory Council on the Ethical Use of AI and Data, which was announced by Communications and Information Minister S Iswaran on Tuesday (June 5), will work with the Infocomm Media Development Authority (IMDA).

Mr Rajah stepped down as AG in January last year, after about 2.5 years in office. He is currently also part of a separate 10-member council set up by the Monetary Authority of Singapore looking into the ethical use of AI and data analytics by financial institutions.

Apart from Mr Rajah, the other members of the new advisory council will comprise thought leaders in AI and big data from Singapore and international companies, academics and consumer advocates, the IMDA said.

Details such as its composition and when it will be set up will be announced later.

Experts told TODAY that some of the implications the council could deliberate on include liability issues, such as whether the passenger, the autonomous vehicle provider, the car maker or the driver should be held responsible if an autonomous vehicle hits a pedestrian.

It could also study ethical issues in the healthcare sector, for instance, said Professor Kevyn Yong, dean of ESSEC Business School Asia Pacific. “For example, in the case where a doctor applies AI but ends up misdiagnosing or wrongly treating a patient, who should be blamed – the doctor or the robot?” he said.

Mr David Lee from the Singapore Institute of Retail Studies said the council could also look into the sale of user data by companies for profit, for instance.

The IMDA defines AI as a set of technologies that seek to imitate human traits and processes such as reasoning, problem solving, perception, learning and planning. These technologies do so by using algorithms and data to train computer systems to learn how to be adept at performing human tasks.

Speaking at the innovfest unbound 2018 technology conference, Mr Iswaran noted that innovative technologies “bring economic and societal benefits, as well as attendant ethical issues”. He added: “Thus, good regulation is needed to enable innovation by building public trust.”

The move to set up the AI advisory council comes almost two decades after the Government established a similar panel to consider ethical implications arising from new technological advancements.

In 2000, the Government set up the Bioethics Advisory Committee to examine ethical, legal and social issues arising from research on human biology and behaviour and its applications.

Part of the job scope of the AI advisory council is to publish guidelines or recommendations to inform the Government on any need for future regulation.

It will be supported by two projects: A research programme by the Singapore Management University (SMU) School of Law, and a framework discussion paper put up on Tuesday by the Personal Data Protection Commission (PDPC).

The PDPC’s paper proposes an “accountability-based framework” which provides common definitions and a structure to “facilitate constructive and systemic discussions on ethical, governance and consumer protection issues relating to the commercial deployment of AI”.

Among other things, it recommends that decisions made by or with the assistance of AI should be explainable, transparent and fair to consumers. AI systems, robots and decisions should also prioritise the benefit of the human user.

The five-year SMU programme is funded by a S$4.5 million research grant from the National Research Foundation and the IMDA. It will conduct research on policy, ethics, governance and regulatory issues relating to AI and data use.

Under this programme, a research centre will be set up, housing 10 faculty members based in Singapore as well as several visiting professors and AI experts from universities in the United States and the United Kingdom.

The centre will also establish an expert panel later this year, comprising representatives from technology companies and the legal profession.

Among other things, the research programme will support the advisory council by serving as a body of knowledge, translating ethical problems and discussions into concrete guidelines for corporations and businesses.
