
Commentary: The good, bad and unknowns of letting Singapore’s civil servants use ChatGPT

ChatGPT could potentially help civil servants, but users should have their eyes open to the limitations and risks of increasing reliance on AI, say the authors.

Singapore’s public service has never been afraid of innovation.

The move to integrate OpenAI’s Chat Generative Pre-Trained Transformer (ChatGPT) into Microsoft Word for Government users, with the aim of speeding up work and freeing officers for higher-level tasks, is therefore unsurprising.

Those involved (not least GovTech’s Open Government Products team) should be commended for their vision.

The potential gains are evident. Enterprises of all sizes increasingly rely on tools that can automate mundane tasks, improve communications, and speed up customer support.

Observers of how governments work would know that officials spend a great deal of time on tasks that can be automated, such as writing briefs or processing forms for data collection.

TECHNOLOGY UNTESTED?

ChatGPT could potentially help civil servants by creating drafts of correspondence, reports, speeches, talking points, press releases and social media posts, all with the correct tone, length, and style, based on prompts provided by the users.

But users should have their eyes open to the limitations and risks of increasing reliance on artificial intelligence (AI).

The tool builds on users’ prompts by filling in details drawn from sources scraped across the internet, which have not been verified or checked for accuracy.

The models make up information and can present falsehoods and facts together, without distinguishing between them, in language that sounds authoritative and confident, especially when the statements do not hedge.

One might almost call this a recognisable ChatGPT cadence.

These tools are designed to produce paragraphs of text in a style that appears human, well-researched, and clear — attributes that mimic persuasive rhetoric.

This leads to the second risk: the persuasive style encourages users to trust and rely on the information presented, which may be inaccurate or even false, because the sources are not verified.

Such errors are harder to spot when presented alongside true and accurate facts. (And public servants will not necessarily be any wiser as to why the tool makes certain errors.)

There is, in theory, a cascading pathway by which misleading or incorrect information may be fed back into correspondence, talking points, speeches or media releases.

Simple tests by the authors show that ChatGPT does make mistakes, and reveal the types of mistakes it makes.

One of the authors asked ChatGPT to construct a list of extremists detained under the Internal Security Act, but it was unable to do so and suggested the information was confidential, when in fact it is in the public domain.

ChatGPT also furnishes information that sounds plausible but is actually wrong.

One author asked ChatGPT recently who Lawrence Wong was.

The answer: He was the Minister for Finance and National Development (omitting the fact that he was Deputy Prime Minister). It then went on to say he was previously a “senior executive in the banking and finance sector”, which is incorrect.

The technology is not yet perfect — and some argue not ready for prime time.

One may expect these types of errors and inaccuracies to be reduced over time with better training and more up-to-date source material.

But future refinements will not obviate the need for a human being to check and take responsibility for the factual accuracy of AI-generated content.

Given that the intention, it seems, is to start with the Smart Nation and Digital Government Office before rolling out to other government ministries, it might be useful to treat this early version as a sandbox: an opportunity to see what teething issues (which are inevitable) arise, and how the tool can be refined.

UNINTENDED CONSEQUENCES

The technical wizards at GovTech’s Open Government Products will need to shape the tool, and, if necessary, customise it. But the tool, if not judiciously used, may well end up shaping the public service.

A question that GovTech, in consultation with other agencies, has almost certainly been mulling: Can public servants excel at higher-level tasks if they do not understand the building blocks, like writing briefs?

There can be no shortcuts to acquiring such skills, ChatGPT notwithstanding.

Likewise, senior officers must continue to pass down basic research and analysis skills to young officers in the critical first phase of their careers.

THE FUTURE

It has been made known that a future version of the ChatGPT tool will access information from official databases.

The obvious observation is of course that technical safeguards will need to be put in place to protect confidential data.

But on the positive side, those responsible would in time almost certainly contemplate putting the advanced capability to higher-order uses.

One possibility would be the framing of an issue from the perspective of different groups of stakeholders.

It may be possible for ChatGPT to be trained using focus group discussions, surveys, and national conversations, and then be used to generate possible reactions to different policy options.

If used in this way in the future, it is possible that the tool might help public officers identify blind spots in designing policies.

But first, a combination of people, process, and technology will be needed to ensure correct use.

The data must be cleaned to remove inaccurate, incomplete, obsolete, invalid, and inconsistent records.

This is the least glamorous task in data science, but essential, because faulty data will undermine the value of the output.
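As a minimal sketch of what such a cleaning pass might look like (the file, column names and cut-off date below are hypothetical illustrations, not drawn from any actual government system), duplicates, blanks and stale entries could be filtered out before any data is fed to a model:

    import pandas as pd

    # Hypothetical columns: "record_id", "text" and "updated" are assumptions
    # for illustration, not fields from any real government dataset.
    df = pd.read_csv("records.csv", parse_dates=["updated"])

    df = df.drop_duplicates(subset="record_id")           # inconsistent duplicates
    df = df.dropna(subset=["text"])                       # incomplete rows
    df = df[df["text"].str.strip() != ""]                 # invalid blank entries
    df = df[df["updated"] >= pd.Timestamp("2020-01-01")]  # obsolete records

    df.to_csv("records_clean.csv", index=False)

Even in a sketch like this, the thresholds (such as what counts as obsolete) are human judgments that someone must make and own.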

And on top of this, people must be trained to critically evaluate all information generated by the system and to be accountable for the output.

The humans in the loop will always matter — and matter more.

ChatGPT is only as good as the information it has been fed. Such tools would not as yet be able to generate breakthrough thinking on policy issues, especially those outside the prevailing paradigm.

Like a mathematician employing a calculator, users must first understand how to perform tasks from first principles, then learn to use tools such as ChatGPT to facilitate and augment the process.

A future generation of public servants yet to make its appearance will be schooled from a young age in an AI-enabled Smart Nation.

They will have grown up with enhanced search engines that incorporate AI functionality — and they will expect this functionality in their workplace.

Once a powerful tool is easily available, people will want to try using it to improve their work and quality of life.

It is better to guide them in using the tools correctly and safely, and to provide guard rails against the possible impact of use, than to try to suppress the tools: suppression leads to “Shadow IT”, or surreptitious use, which would pose a risk to the organisation.

Good training, proper procedures and technical safeguards can help individuals and agencies use the tool in the most appropriate way.

This future will eventuate more quickly than many of us think — and the questions should be considered now.

ABOUT THE AUTHORS:

Benjamin Ang is Deputy Head of the Centre of Excellence for National Security (CENS) and Head of the Cyber and Homeland Defence Programme, at the S. Rajaratnam School of International Studies (RSIS), Nanyang Technological University. Terence Ho is an Associate Professor in Practice at the Lee Kuan Yew School of Public Policy. Shashi Jayakumar is Head of CENS and also Executive Coordinator, Future Issues and Technology, RSIS.
