Stopping comms during a terror attack could be a game-changer
Singapore recently passed the Public Order and Safety (Special Powers) Act (POSSPA), which empowers the police to deal more effectively with evolving and emerging security threats. The new legislation is essentially an update of the Public Order (Preservation) Act.
Much interest and debate has been generated over the Communications Stop Order (CSO) provision, which essentially allows the police to prohibit the public in the incident area from “making or communicating films or pictures…and…text or audio messages about…ongoing security operations”.
There are safeguards to prevent potential abuse and ensure accountability: the CSO does not “automatically come into force” and must be “specifically unlocked” by the Police Commissioner “as and when deemed necessary”, and only after the Minister for Home Affairs has activated POSSPA.
The CSO aims to deny terrorists access to information that could compromise ongoing security operations or undermine future ones.
The Ministry of Home Affairs deemed this critical as getting access to such information could endanger the lives of security personnel and members of the public. One example cited was the 2008 Mumbai attacks. Then, live coverage of ongoing security operations enabled terrorists to anticipate and outmanoeuvre security forces.
Another concern is that videos and other media of police operations and tactics, such as their response times and movements, could be studied by those bent on wreaking havoc and be used to plan for future attacks. Apart from operational considerations, the CSO should also be seen broadly as a way to improve risk and crisis management during major security incidents.
During any major crisis, especially a major security incident like a terror attack, demand for information will expectedly be high. People will want to find out if their loved ones are affected and, if so, whether they are safe.
People directly affected would want to know how to get to safety. Members of the public would also want to help, by providing information they think may be of use to law enforcement officers. There will thus be immense pressure on agencies to put out information quickly.
During an unfolding crisis, information may not be available, and even when it does become available, it needs to be verified before being released. When the demand for speedy information is not met, it is natural for some individuals to try and fill the void by pushing out a story or social media post that may be factually inaccurate or, worse, deliberately false.
As many instances have shown, much of what has been posted by individuals, well-meaning or otherwise, has not been verified. Since smartphones allow individuals to be both information creators and broadcasters, misinformation will inevitably be generated and widely disseminated.
Putting out such misinformation can adversely impact the management of an ongoing crisis.
First, it could spark smaller crises that divert resources and distract officers from dealing with the main incident.
A 2011 United States Congressional Research Service report titled Social Media and Disasters: Current Uses, Future Options, and Policy Considerations, warned: “Some individuals or organisations might intentionally provide inaccurate information to confuse, disrupt, or otherwise thwart response efforts…One tactic that has been used by terrorists involves the use of a secondary attack after an initial attack to kill and injure first responders. Social media could be used as a tool for such purposes by issuing calls for assistance to an area, or notifying officials of a false hazard or threat that requires a response”.
Second, it could create problems that are not easily resolved, such as the undermining of social cohesion.
In December, for instance, rumours circulated via social media that London’s West End was the target of a terror attack, leading to a major stampede as people panicked and attempted to flee the area.
Law enforcement was also activated and deployed as there were reports of gunshots. Investigations later revealed that an altercation between two men had actually sparked off the chaos.
In this scenario, adversaries could easily have exploited both the bedlam and the fact that law enforcement was preoccupied with the crisis to carry out attacks in other locations.
In 2013, a Brown University student, Sunil Tripathi, was misidentified as one of the Boston Marathon bombers. This resulted in a manhunt for the wrong individual, as well as the harassment of his family.
In 2015, following the Paris attacks, a photo of Veerender Jubbal, a Canadian of Sikh heritage, was doctored and mis-captioned to portray him as one of the terrorists involved.
The photo was so convincing that a Spanish newspaper ran the story. Jubbal is reportedly still treated with suspicion even though the story was proven inaccurate and the photo a hoax. Both examples highlight that certain ethnic or religious groups and individuals can become targets of hate on the basis of misinformation.
It is unrealistic to expect that the generation and dissemination of misinformation can be eliminated completely in the immediate aftermath of an incident, even with the CSO. Other measures will be required to help manage the risks.
First, crisis communication experts often suggest establishing an official communications channel to ensure that accurate and regularly updated information is disseminated to the public.
The Queensland Police, for instance, became the sole source of information during the 2011 floods, and used their official social media channels to provide information to the public.
The Singapore Police Force’s (SPF) official social media channels and apps will be useful in this regard.
Second, active “myth-busting” on SPF’s social media channels will be useful. The public can send information via the Police@SG and SGSecure apps to the SPF, which can then verify and disseminate the correct information.
Even without the use of apps, any information provided by the public that is helpful to investigations would certainly be welcomed. The bottom line is that any information being disseminated must be accurate.
The CSO enables Singapore to take a pre-emptive approach to managing the risks and crises that the generation and dissemination of misinformation could create immediately following a major security incident.
This is crucial in an age of deliberate online falsehoods.
ABOUT THE AUTHOR:
Damien D Cheong is a Research Fellow in the National Security Studies Programme at the S. Rajaratnam School of International Studies of Nanyang Technological University.