
Outlook on the Regulatory Trends of Artificial Intelligence (AI)

Meng-Han Chiang
Associate at TWSE

Foreword

Given the rapid progress of emerging technologies, the rise of artificial intelligence (AI) has reshaped how people live and work. Industries across the value chain are exploring and applying AI, and a new technological paradigm is taking shape. While AI is expected to significantly improve production and service efficiency, it also brings unprecedented risks and challenges. As a result, regulators in various countries have expanded their focus on AI from cost-benefit discussions to risk management, and new regulatory policies have been introduced in succession.

Technological Innovation vs. AI Supervision: The US and Europe Seek a Balance

For the development of AI technology, an important question that governments around the world will soon face is whether regulators need only provide principle-based norms, or whether they must establish clear rules and prohibit specific applications, in order to strike a sound balance between technological innovation and supervision. Undoubtedly, the direction of AI supervision in the United States and the European Union is one of the key factors shaping other countries' policies. The latest AI supervision policies of the United States and the European Union are summarized below.

In the United States, the National Institute of Standards and Technology (NIST), although not a regulatory agency, publishes technical standard frameworks and guidelines that businesses and organizations across industries widely adopt on a voluntary basis. On January 26, 2023, NIST released the Artificial Intelligence Risk Management Framework 1.0 (AI RMF 1.0), which identifies seven characteristics of trustworthy AI systems: 1. valid and reliable; 2. safe; 3. secure and resilient; 4. accountable and transparent; 5. explainable and interpretable; 6. privacy-enhanced; and 7. fair, with harmful bias managed. The framework supports enterprises and organizations through every stage of the AI system lifecycle, from design, development, testing, and validation through to deployment and continuous monitoring. By identifying the key roles involved and assigning them clear responsibilities, organizations can comprehensively improve the trustworthiness of their AI systems and build the capacity to manage AI risks.
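
To make the lifecycle idea more concrete, the following minimal Python sketch (our own illustration, not part of the NIST framework) shows one way an organization might track the seven trustworthiness characteristics across the lifecycle; the stage names and the checklist structure are assumptions for illustration only.

    from dataclasses import dataclass, field

    # The seven trustworthiness characteristics named in NIST AI RMF 1.0.
    CHARACTERISTICS = [
        "valid and reliable",
        "safe",
        "secure and resilient",
        "accountable and transparent",
        "explainable and interpretable",
        "privacy-enhanced",
        "fair, with harmful bias managed",
    ]

    # Hypothetical lifecycle stages, assumed for illustration; the RMF
    # describes lifecycle activities rather than prescribing this list.
    STAGES = ["design", "development", "testing", "validation",
              "deployment", "monitoring"]

    @dataclass
    class StageChecklist:
        """Tracks which characteristics have been assessed at one stage."""
        stage: str
        assessed: dict = field(default_factory=dict)

        def mark(self, characteristic: str, done: bool = True) -> None:
            if characteristic not in CHARACTERISTICS:
                raise ValueError(f"unknown characteristic: {characteristic}")
            self.assessed[characteristic] = done

        def gaps(self) -> list:
            """Return the characteristics not yet assessed at this stage."""
            return [c for c in CHARACTERISTICS if not self.assessed.get(c)]

    if __name__ == "__main__":
        design = StageChecklist("design")
        design.mark("privacy-enhanced")
        design.mark("fair, with harmful bias managed")
        print("Unassessed at design stage:", design.gaps())

In practice, the RMF's four functions (Govern, Map, Measure, and Manage) provide far richer guidance than a simple checklist, but the basic discipline is the same: every characteristic should be explicitly assessed at every stage, and any gaps should be visible.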

In addition, US President Biden issued Executive Order 14110 on the safe, secure, and trustworthy development and use of artificial intelligence on October 30, 2023, intending to promote the responsible development and application of AI across industries and to emphasize the protection of the basic rights and privacy of US citizens. The Order sets out eight key action objectives:

  1. Establishing standards for AI safety: require AI companies to share key information such as safety test results with the federal government, and establish safety standards, testing tools, and testing methods for AI systems to ensure their safety.
  2. Protecting Americans’ privacy: promote the development and application of privacy-enhancing technologies (PETs), and integrate PETs into AI workflows to protect privacy and mitigate societal risks.
  3. Promoting equity and civil rights: address and prevent discrimination exacerbated by AI algorithms, to ensure equity and justice in the use of AI.
  4. Comprehensively protecting consumers: reduce the potential harms of AI applications in key areas such as healthcare and education, and formulate appropriate measures to safeguard consumers’ rights.
  5. Supporting workers: mitigate the risks of job disruption caused by the use of AI, and strengthen AI-related vocational training and workforce development.
  6. Promoting innovation and competition: fund AI-related education, training, development, and research; resolve intellectual property issues in AI; and expand the United States’ ability to attract and retain professionals in key fields.
  7. Enhancing US leadership in AI technology: expand cooperation with international partners, promote the establishment and implementation of global AI standards, and jointly address global challenges.
  8. Ensuring government accountability and effective use of AI: develop guidelines for the use and procurement of AI by federal agencies, and accelerate the hiring of AI professionals across government.

On December 9, 2023, the European Commission announced that the European Parliament and the Council of the EU had reached a political agreement on the European Artificial Intelligence Act (EU AI Act). The Act was then overwhelmingly passed by the European Parliament in March 2024. Once formally approved by the Council, it is expected to apply in full 24 months after entering into force, becoming the first comprehensive international AI legal framework and a model for AI legislation globally. The Act aims to uphold fundamental rights, democracy, the rule of law, and environmental sustainability, while ensuring that AI systems are trustworthy and safe. Taking a risk-based approach, it divides AI systems into four levels (unacceptable risk, high risk, limited risk, and minimal risk), with supervisory obligations scaled to the level of risk. The risk levels and their corresponding legal obligations are as follows (a simplified code sketch of this tiering appears after the list):

  1. Unacceptable risk: systems that pose a serious threat to EU values, such as social scoring systems or systems that exploit the vulnerabilities of protected groups. Such systems are banned outright.
  2. High risk: systems that can negatively affect safety or fundamental rights, such as automated recruitment and applicant-screening systems used in HR. These face strict regulatory requirements, including log traceability, sound data governance, a defined level of accuracy, and a human oversight mechanism.
  3. Limited risk: systems that carry a risk of impersonation or deception, for example interactive AI systems that may generate misleading content. Such systems are subject to transparency requirements, and users must be informed that they are interacting with an AI system.
  4. Minimal risk: all other systems not covered by the definitions above, such as spam filters, which are not subject to special requirements.
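
To illustrate the tiered logic of the Act, here is a minimal Python sketch. It is our own simplification, not a legal tool: the example systems, the keyword lookup, and the default tier are all assumptions, since the Act assigns tiers through detailed legal definitions rather than labels.

    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "prohibited outright"
        HIGH = "strict obligations: logging, data governance, human oversight"
        LIMITED = "transparency obligations: disclose that the user faces an AI"
        MINIMAL = "no additional requirements"

    # Example systems mapped to tiers, paraphrasing the Act's examples.
    # This lookup is a deliberate oversimplification for illustration.
    EXAMPLES = {
        "social scoring system": RiskTier.UNACCEPTABLE,
        "automated recruitment screening": RiskTier.HIGH,
        "customer service chatbot": RiskTier.LIMITED,
        "spam filter": RiskTier.MINIMAL,
    }

    def classify(system_description: str) -> RiskTier:
        """Look up an example tier, defaulting to MINIMAL when unlisted."""
        return EXAMPLES.get(system_description, RiskTier.MINIMAL)

    if __name__ == "__main__":
        for name in EXAMPLES:
            tier = classify(name)
            print(f"{name}: {tier.name} -> {tier.value}")

The key design idea the sketch captures is that obligations attach to the tier, not to the individual system: once a system is classified, its compliance duties follow mechanically.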

In summary, international regulators are currently seeking a balance in supervising AI development: one that does not hinder innovation, yet ensures that AI technology remains ethical, compliant, and safe. Taiwan's current policy direction for AI is consistent with this international trend; it focuses on formulating core principles and providing guidance, and has not yet imposed restrictions on specific types of applications or algorithms.

Drawing Lessons from the US and Europe, the FSC Officially Announced Its AI Supervision Principles

In April 2023, Taiwan's Executive Yuan approved the release of the “Taiwan AI Action Plan,” aiming to use AI to drive the transformation and upgrading of the country's industries. Accordingly, in August of the same year, the Financial Supervisory Commission (hereinafter the FSC) drafted the “Core Principles and Relevant Promotion Policies for the Use of Artificial Intelligence (AI) in Financial Industry” based on that plan, and on June 20, 2024 it officially announced the “Guidelines for the Use of AI in the Financial Industry.” The six application principles in the Guidelines are: establishing governance and accountability mechanisms, valuing fairness and people-oriented values, protecting privacy and customer rights, ensuring system robustness and security, implementing transparency and explainability, and promoting sustainable development. These align with the direction of international AI supervision, which likewise emphasizes public privacy, consumer rights, social equity and accountability, and effective risk management.

The Guidelines draw on the US NIST framework's lifecycle view of AI systems while incorporating the EU's risk-based classification approach. In short, the main framework of the Guidelines centers on the six core principles: it first defines the core focus of each lifecycle stage of an AI system and the ways risk management can be implemented at each stage, and then, based on the system's risk level, applies enhanced management measures to higher-risk AI systems, including logging, monitoring mechanisms, rigorous review and approval procedures, and independent auditing or evaluation mechanisms.

It is worth mentioning that, compared with the four risk levels proposed by the European Union, the FSC's evaluation method is more flexible, allowing enterprises and organizations to perform a comprehensive assessment using their own internal risk management mechanisms. The FSC provides six reference indicators for determining whether an AI system falls into the high-risk category: 1. it directly serves customers or has a significant impact on operations; 2. it makes heavy use of personal data; 3. it has a high degree of autonomy; 4. it is highly complex; 5. it has a deep and wide impact on different stakeholders; and 6. the remedy mechanisms available to affected parties are incomplete.
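
As a rough illustration of how such an internal assessment might be wired up, the following Python sketch scores a system against the six indicators. The two-indicator threshold is purely an assumption for demonstration; the Guidelines leave the weighting and the final judgment to each institution's own risk management process.

    # The six FSC reference indicators, paraphrased from the Guidelines.
    INDICATORS = [
        "directly serves customers or significantly impacts operations",
        "high degree of personal data use",
        "high degree of system autonomy",
        "high system complexity",
        "deep and wide impact on stakeholders",
        "incomplete remedy mechanisms",
    ]

    def is_high_risk(flags: dict, threshold: int = 2) -> bool:
        """Flag a system as high risk when enough indicators apply.

        The two-indicator threshold is an assumption for illustration;
        the Guidelines leave the weighting to each institution's own
        internal risk management process.
        """
        hits = sum(1 for indicator in INDICATORS if flags.get(indicator))
        return hits >= threshold

    if __name__ == "__main__":
        # A hypothetical robo-advisor that serves customers directly,
        # uses extensive personal data, and acts autonomously.
        robo_advisor = {
            "directly serves customers or significantly impacts operations": True,
            "high degree of personal data use": True,
            "high degree of system autonomy": True,
        }
        print("High risk?", is_high_risk(robo_advisor))  # -> True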

In addition to the aforementioned Guidelines, the regulator is actively asking industry associations to revise their rules to reflect the characteristics and operating methods of each industry, for business operators to follow. The Bankers Association, with FSC approval, announced the implementation of the “Operating Standards for the Use of Artificial Intelligence Technology by Financial Institutions” on March 14, 2024. The Operating Standards apply to AI applications involving customer data protection and risk management, direct interaction with consumers, the provision of financial product advice, and customer services that affect customers' financial transaction rights or have a significant impact on operations. In practice, however, some banks had already deployed AI in existing operations to improve efficiency before the Standards took effect. Since the announcement and implementation of the Operating Standards, some of these operators have therefore said it is difficult to adjust to the regulatory requirements in a short time, and have asked for a transition period.

The TWSE Is Keeping Pace with the Times and Will Gradually Roll Out AI Supervision Policies for the Securities Market in the Second Half of the Year

In the securities market, in response to the increasingly rampant use of AI in deepfake fraud, the TWSE assisted the Taiwan Securities Association in revising the “Self-discipline Regulations on the Information Security of Emerging Technology” in September 2022, which stipulate that securities firms should strengthen verification when using image and video methods for identity verification. The TWSE also recommends that regular courses be conducted to raise employee awareness and help securities firms guard against fraud enabled by new technologies. In December 2023, the TWSE formalized the relevant requirements in the “Establishing Information Security Inspection Mechanisms for Securities Firms” and the “Internal Control System Standard Specification for Securities Firms - CC-21100 Emerging Technology Applications,” which stipulate that when securities firms use image and video communication for identity verification, they should strengthen verification by combining other verification factors (such as uploading identity documents or using mobile SMS OTP). The TWSE also requires securities firms to regularly conduct information security training on deepfake awareness and prevention, and suggests that firms may incorporate deepfake-related training into their annual information security courses.
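
The requirement to corroborate video-based identity checks with additional factors can be illustrated with a short Python sketch. This is a conceptual simplification, not the TWSE's specification: the function name, inputs, and acceptance rule are assumptions, and a production system would also involve secure OTP handling, liveness detection, and audit logging.

    def verify_identity(video_check_passed: bool,
                        id_document_verified: bool,
                        otp_entered: str,
                        otp_expected: str) -> bool:
        """Accept an identity claim only when video verification is
        corroborated by at least one additional factor. A conceptual
        sketch only; names and the acceptance rule are assumptions.
        """
        # In production, compare OTPs in constant time and require
        # expiring, single-use codes; simple equality suffices here.
        otp_ok = otp_entered == otp_expected
        # A passing video check alone is insufficient: deepfakes can
        # defeat purely visual verification, so require a second factor.
        return video_check_passed and (id_document_verified or otp_ok)

    if __name__ == "__main__":
        # A deepfaked video that fools the visual check is still
        # rejected without a corroborating factor.
        print(verify_identity(True, False, "000000", "483920"))  # False
        print(verify_identity(True, True, "483920", "483920"))   # True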

For other AI application scenarios, including generative AI, the TWSE is currently developing AI-related security control guidelines in accordance with the FSC's Guidelines and with reference to the Bankers Association's AI operating standards, to be incorporated into the “Guidelines for Cyber Security Control of Emerging Technology in Securities and Futures Market Related Associations.” These measures are expected to be submitted to the competent authority in the third quarter of this year. Once the competent authority approves them and AI management requirements are included in the Securities Association's self-discipline regulations, the TWSE will also formulate corresponding provisions in the “Establishing Information Security Inspection Mechanisms for Securities Firms” and the “Internal Control System Standard Specification for Securities Firms” in due course, for securities firms to follow.

Conclusion

AI technology brings abundant innovation, but it also poses unprecedented challenges across sectors. To prevent the abuse of AI from harming the rights and interests of enterprises and the public, regulators in various countries are trying to clarify the responsibilities that the relevant businesses should bear. Drawing lessons from the policies of the United States and the European Union, Taiwan's supervision of AI is consistent with the international trend in its core principles, such as privacy protection, safety, and transparency. Moreover, proceeding step by step, the regulator and related bodies have started with the already highly supervised financial industry to jointly establish the supervision framework, and are committed to striking a dynamic balance between innovation and supervision in order to promote the healthy development of Taiwan's financial market.

Looking ahead, in a rapidly growing AI market, the securities industry needs to keep communicating actively with all sectors, monitor international regulatory trends, and improve the relevant regulations. Provided that technological ethics, safety, and compliance are ensured, enterprises can confidently explore innovative and diversified applications of AI and create more opportunities for the securities market.
