Hong Kong, as an international financial centre, seeks to position itself as a global technology and innovation hub, and to integrate into the development of the Guangdong-Hong Kong-Macao Greater Bay Area. The exponential growth of the artificial intelligence (AI) industry is undoubtedly crucial to this mission.
The Hong Kong Government’s ongoing efforts to facilitate the development of AI include the Innovation and Technology Fund and the Innovation and Technology Venture Fund. The former scheme finances research and innovation projects. The latter scheme seeks to attract venture capital funds to co-invest in local innovation and technology start-ups.
In February 2023, the Financial Secretary announced that the government will conduct a feasibility study on the development of an AI supercomputing centre in the city. This would be a major strategic technology infrastructure designed to provide AI supercomputing services to universities, research institutions, government departments, and industry sectors. Such a centre aims to strengthen local research and development capabilities, advance relevant industry ecosystems, and accelerate the development of the AI industry overall.
Given AI’s coming of age, which has been termed ‘the fourth industrial revolution’, and the various strides made in this area in Hong Kong specifically, the jurisdiction must grapple with the regulation of this powerful technology.
The possibility of legislation
At present, Hong Kong does not have specific legislation dealing with AI.
In October 2021, the Innovation, Technology and Industry Bureau of the Government announced that it would propose a new cybersecurity law, which would define the cybersecurity obligations of critical infrastructure operators, with a view to strengthening such infrastructure in the city. As of May 2022, the government indicated that it was continuing its preparatory work and stated its intention to launch a consultation exercise soon. Such a legislative proposal may include provisions that have implications for AI technology.
In September 2022, the Consumer Council, an independent statutory body for consumer welfare, stated that the government should consider introducing legislation to better regulate AI in order to protect consumer rights. The Council pointed out that there were no consequences for breaching any of the existing guidelines (which are discussed below).
The existing guidelines
Various regulators and professional bodies have promulgated sector-specific guidelines concerning the use of AI in the city. The most prominent examples include guidelines put forward by the Office of the Privacy Commissioner for Personal Data, the data protection regulator, and the Hong Kong Monetary Authority, the central banking institution.
In August 2021, the Privacy Commissioner published its ‘Guidance on Ethical Development and Use of AI’, which aimed to help organisations develop and use AI in compliance with the Personal Data (Privacy) Ordinance (Cap. 486).
In step with international standards, the guidance endorses three fundamental data stewardship values: being respectful, beneficial, and fair to stakeholders. The guidance recommends seven key ethical principles:
- Accountability: organisations should be responsible for what they do, and provide sound justifications for their actions
- Human oversight: organisations should ensure that appropriate human oversight is in place for the operation of AI
- Transparency and interpretability: organisations should disclose their use of AI and relevant policies while striving to improve the interpretability of automated and AI-assisted decisions
- Data privacy: effective data governance should be put in place
- Fairness: organisations should avoid bias and discrimination in the use of AI
- Beneficial AI: organisations should use AI in a way that provides benefits and minimises harm to stakeholders
- Reliability, robustness and security: organisations should ensure that AI systems operate reliably, can handle errors, and are protected against attacks.
In addition, the guidance provides a practice guide to assist organisations in managing their AI systems. The guide focuses on four main areas:
- Establishing AI strategy and governance
- Conducting risk assessment and human oversight
- Executing development of AI models and management of overall AI systems; and
- Fostering communication and engagement with stakeholders.
In November 2019, the Monetary Authority published two circulars on the use of AI by banks, ‘Consumer Protection in respect of Use of Big Data Analytics and Artificial Intelligence by Authorized Institutions’ and ‘High-level Principles on Artificial Intelligence’.
The first circular, which focuses on big data analytics and AI (BDAI), borrows from the Organisation for Economic Co-operation and Development’s guidelines. This circular espouses four key principles:
- Governance and accountability: the board and senior management of authorized institutions (AIs) should remain accountable for all BDAI-driven decisions and processes
- Fairness: AIs should ensure that BDAI models produce objective, consistent, ethical and fair outcomes to customers
- Transparency and disclosure: AIs should provide an appropriate level of transparency to customers regarding their BDAI applications through proper, accurate and understandable disclosure
- Data privacy and protection: AIs should implement effective protection measures to safeguard customer data.
The second circular, which addresses the adoption of AI applications, draws on principles formulated by leading overseas authorities. This circular has three key sections. The first section is on governance, which stipulates that the board and senior management should be accountable for the outcome of AI applications.
The second section focuses on application design and development, and sets out the following:
- Developers must possess sufficient expertise
- The AI applications should be explainable
- The data used to train AI models should be of good quality and relevant
- The validation of AI models should be conducted rigorously
- The AI applications should be auditable
- There should be effective management oversight of third-party vendors; and
- The AI-driven decisions must be ethical, fair, and transparent.
As for the third section on ongoing monitoring and maintenance, the circular highlights that:
- The AI systems should be subject to periodic reviews and ongoing monitoring
- Data protection requirements should be complied with
- Cybersecurity measures should be implemented effectively
- There must be risk mitigation controls and contingency plans in place.
The way forward
Regulating in the area of AI technology, whether by way of formal legislation or ‘soft law’ measures such as guidelines and recommendations, is undoubtedly a monumental task. It remains to be seen how Hong Kong policymakers and lawmakers will seek to create an ethical framework that simultaneously mitigates the profound risks AI technology poses for consumers and promotes the healthy development of such technology. In an evolving technological landscape, stakeholders, including consumers and businesses, must keep a close eye on the regulatory developments in this area, which will not only have a large impact on Hong Kong, but also wider implications for the Guangdong-Hong Kong-Macao Greater Bay Area and beyond.
Cordelia Yeung was Called to the Hong Kong Bar in 2018. She is a qualified barrister practising in Hong Kong. She is a former Middle Temple Scholar and has been assisting The Middle Temple Society in Hong Kong since 2019.
Catrina Lam was Called to the Hong Kong Bar in 1999. She is a member of Des Voeux Chambers. Catrina is a former Middle Temple Scholar and has been serving as the Secretary of The Middle Temple Society in Hong Kong since 2009. She was appointed an Honorary Member of the Middle Temple in 2018.