Directors Lim Chong Kin and Cheryl Seah quoted in ALB article discussing AI regulation and governance in Singapore

19 Feb 2024

Head of Telecommunications, Media & Technology Practice Group, Lim Chong Kin, and Director Cheryl Seah are featured in the Asian Legal Business (ALB) article titled, “Singapore looks to take lead in ethical AI governance”, sharing their insights on Singapore's regulatory landscape concerning Artificial Intelligence (AI).

Commenting on the Singapore regulators’ approach to addressing AI-related issues, Chong Kin and Cheryl opined that the regulators “…take a measured and pragmatic approach…”. With any new technology or new way of doing things, there will be risks and uncertainties, and the Singapore regulators manage these through a three-pronged approach:

  1. by issuing guidance to the industry on factors to consider and steps to take to ensure the AI systems they develop adhere to internationally accepted AI governance principles (e.g. that they do not produce discriminatory results and that their decisions/results can be explained);
  2. by keeping an eye on international developments and also contributing to the international discourse – Singapore participates in discussions with the United Nations, World Economic Forum, etc., and has recently affirmed the Bletchley Declaration on AI safety. Closer to home, the Association of Southeast Asian Nations (ASEAN) published the ASEAN Guide on AI Governance and Ethics at the 4th ASEAN Digital Ministers Meeting held in Singapore in February. A recent legal update on this can be accessed here: https://bit.ly/49pc0Cu; and
  3. by working closely with the industry to understand their concerns, so that any legislation or guidelines promulgated will be effective.

They noted that “Singapore’s regulators have a close partnership with the industry, as they believe that no single entity (government, industry or research institute) holds all the answers on how best to regulate the use of AI.” Chong Kin and Cheryl observed that “[m]any of Singapore’s key AI documents – e.g. the Model AI Governance Framework, as well as AI Verify (an AI Governance Testing Framework and Toolkit) – were developed in consultation with the industry”. Singapore has also conducted a series of public consultations to seek public feedback on areas such as the use of AI in biomedical research, as well as how personal data may be used to develop and deploy AI systems.

Chong Kin and Cheryl highlighted that a challenge in developing AI regulations, in addition to clearly defining what AI is, is ensuring that “it is not prohibitive for businesses (especially small businesses) to comply with the testing processes (especially if testing is mandated before the AI system can be put on the market). And if external auditors are to have a role in AI testing processes, to ensure that they are qualified/accredited. Regulators will thus need to develop deep expertise in this area too”.

In the meantime, Singapore’s existing laws will apply to address the challenges brought about by the use of AI. For example, when copyrighted material is used to train AI models without the consent of the copyright holders, “Singapore’s Copyright Act 2021 has provisions concerning fair use (section 190) as well as for computational data analysis (section 244), although some academics have taken the view that section 244 will not apply to AI that has a generative rather than analytical function”, added Chong Kin and Cheryl.

Where it comes to an individual’s privacy, the Personal Data Protection Act 2012 (PDPA) will apply as the Act is technology-agnostic. The data protection authority also held a public consultation in July 2023 seeking public feedback on the guidelines to be issued on the PDPA’s application to the collection and use of personal data to develop and deploy AI systems.

Where it comes to liability, existing tort and contract law principles can provide a solution. Chong Kin and Cheryl pointed out that “[t]he unique features of AI - it is a black-box, and can learn from experience without being explicitly programmed - may pose some challenges to these principles, but the common law develops incrementally and flexibly and we are confident that our courts will be able to deal with it”, and “Singapore already had a case (Quoine Pte Ltd v B2C2 Ltd [2020] SGCA(I) 02) which dealt with algorithms executing contracts without involving humans, although the program in that case was deterministic (i.e. it will always produce the same output with the same input and not develop its own responses to varying conditions). It would be interesting to see how the principles apply to AI where it is non-deterministic.”

“Ultimately, the responsible use of AI is what is important, more so than how the use is regulated”, Chong Kin and Cheryl concluded.

You may read the full article on pp. 6-7 of the ALB January/February 2024 issue here.