Focus

A Study on the Regulatory Trends and Practices of Artificial Intelligence in Capital Markets through International Case Studies

En-Chi Su
Associate at TWSE

The rapid evolution of Artificial Intelligence (AI) has brought operational innovation and immense potential to capital markets, while also giving rise to deep-seated risks concerning model fairness, decision-making transparency, and systemic security. In response, major economies including the European Union, the United States, the United Kingdom, and Singapore are actively constructing risk-based regulatory frameworks that seek a balance between encouraging innovation and maintaining effective control. This report analyzes the common principles of, and differences among, these national regulatory strategies and, drawing on industry practice, proposes a forward-looking and practical AI governance blueprint to help enterprises capture the strategic competitive advantages of AI while keeping risk under control.
This report finds that a robust AI governance system must be built through a dual approach: bottom-up "technical mitigation" and top-down "organizational governance." At the technical level, risk management should be embedded throughout the AI system's entire lifecycle. This means establishing layered controls, from data privacy and quality at the foundational layer, through infrastructure security and model stability at the middle layer, up to human-computer interaction at the application layer, so that risks are prevented at their source. At the organizational level, comprehensive governance and assurance mechanisms are needed, including cross-functional AI governance committees, red-teaming exercises, and strengthened incident response management, to ensure that AI development and deployment remain aligned with the enterprise's overall strategic objectives, regulatory obligations, and social responsibilities.
In conclusion, this report recommends that enterprises adopt systemic thinking and treat AI governance as a core operational capability rather than a mere compliance exercise. The first priority is to establish internal standards for identifying and classifying AI risks and, based on risk levels, to formulate a complete governance mechanism spanning model development, data management, and deployment monitoring. In parallel, enterprises should set up a cross-departmental governance team with clearly assigned responsibilities, together with transparent communication channels for internal and external stakeholders. Through this dual-cycle governance framework, enterprises can not only respond effectively to a rapidly changing environment but also turn AI risks into opportunities to build trust, achieving sustainable and responsible innovation in the wave of intelligent finance.