Focus

Applications of AI-assisted information security auditing

Simon Lin
Senior Associate at TWSE

Challenges in information security auditing

As global capital markets face unprecedented digitalization challenges, the risk of information security incidents is rising accordingly. Information security auditing plays a crucial role in ensuring operational security and regulatory compliance in the daily operations of securities firms; however, current frameworks continue to face multiple structural challenges. First, corrective actions for audit findings continue to focus on “formalistic compliance.” Companies often rely on adding process documentation, policy materials, or screenshots as evidence of corrective actions; however, such information fails to substantively prove that control processes, such as annual inspections, access privilege reviews, and supplier security assessments, have actually been implemented. In recent years, the growing emphasis by regulatory authorities on “substantive governance” has made purely formalistic corrective approaches increasingly risky. Without an institutionalized mechanism that can fairly evaluate the effectiveness of corrective actions, companies will struggle to ensure that information security controls achieve the desired outcomes.

Secondly, the heavy reliance on manual mapping of regulations and standards poses another core challenge. Securities firms are required to comply simultaneously with multiple standards, including “Establishing Information Security Inspection Mechanisms for Securities Firms,” self-regulatory guidelines developed by business associations, ISO 27001, NIST CSF, the Personal Data Protection Act, and internal control systems. The differing frameworks, terminology, and control levels defined by these standards and regulations demand significant human and time resources for cross-referencing. More importantly, manual mapping is prone to subjective judgment; different auditors may reach divergent conclusions on the same finding, leading to inconsistent corrective actions and compromising the stability of audit quality.

Moreover, the audit process often lacks “structured data” and “traceability.” Most companies continue to rely on scattered documents such as Excel or Word files and emails to respond and track, resulting in the fragmentation of deficiency status, improvement progress, supporting evidence, and communication records across multiple systems. Consequently, when the competent regulatory authority requests an improvement history, it is often difficult for companies to promptly retrieve comprehensive audit log documentation. Meanwhile, the lack of centralized data management also makes it difficult for companies to conduct long-term analysis of corrective action effectiveness and to identify which control measures fail most frequently and which departments show poor improvement efficiency, thereby limiting the possibility of deepening governance strategies.

Lastly, as the above challenges coexist, audit practices face mounting pressure from three factors: rising costs, declining efficiency, and limited quality. In the absence of technological support, it is difficult for companies to improve governance maturity and meet the supervisory authority’s expectations for enhanced information security. Hence, adopting AI to improve mapping efficiency, enable structured data management, and reinforce traceability and transparency has become an inevitable trend for addressing these systemic issues.

Technical values of AI-assisted information security auditing

AI technologies can effectively address problems in traditional audit practices, including low efficiency, large volumes of fragmented data, and inconsistent judgment standards. The core value of AI lies in its high processing capacity, high accuracy, and standardizable capabilities enabled by natural language processing (NLP), OCR, semantic matching, and automation models. Through NLP, AI can automatically parse deficiency descriptions and map them to the correct regulatory provisions. For example, a deficiency in “annual mobile application testing” can be quickly linked to the articles on annual testing in the inspection mechanism. Similarly, a deficiency involving outsourced security measures may be automatically matched to related requirements, significantly reducing the manual effort of provision-by-provision reviews, improving mapping consistency, and effectively preventing instability caused by subjective judgment. Secondly, OCR technology can automatically convert unstructured data such as PDF files, scanned images, and screenshots into machine-readable text. This allows AI to verify whether the evidence contains mandatory information, to determine whether dates, scopes, approvers, and testing items are specified in reports, and to check whether documents are still valid. This capability can not only substantially reduce the time required for manual review but also improve the quality of evidence assessment.
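
The matching step described above can be illustrated with a minimal sketch. The example below uses simple token-overlap (Jaccard) similarity as a stand-in for the embedding-based semantic matching a production system would use; the provision IDs and texts are hypothetical placeholders, not actual regulatory clauses.

```python
# Illustrative sketch: mapping a deficiency description to the most
# relevant regulatory provision via token-overlap similarity.
# Provision texts below are hypothetical, not real clauses; a real
# system would use embedding models rather than simple overlap.

def tokenize(text: str) -> set[str]:
    """Lowercase the text and split it into a set of word tokens."""
    return set(text.lower().replace(",", " ").replace(".", " ").split())

def match_provision(deficiency: str, provisions: dict[str, str]) -> str:
    """Return the provision ID whose text shares the most tokens
    with the deficiency description (Jaccard similarity)."""
    d_tokens = tokenize(deficiency)
    def score(item):
        p_tokens = tokenize(item[1])
        return len(d_tokens & p_tokens) / len(d_tokens | p_tokens)
    return max(provisions.items(), key=score)[0]

provisions = {
    "Art. 12": "annual testing of mobile applications must be performed",
    "Art. 21": "supplier security assessments shall be reviewed each year",
    "Art. 30": "access privilege reviews must be documented quarterly",
}

deficiency = "Annual mobile application testing was not performed in 2023"
print(match_provision(deficiency, provisions))  # prints: Art. 12
```

Even this crude similarity measure links the “annual mobile application testing” deficiency to the annual-testing article; semantic models extend the same idea to paraphrases and synonyms.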

Beyond providing optimization suggestions, AI can automatically generate recommended corrective actions for deficiencies, highlight potential enhancements, and even analyze historical data to estimate timelines for corrective actions. These capabilities not only contribute to the standardization of audit practices but also allow auditors to rapidly comprehend corrective action requirements. Meanwhile, the data analytics capabilities of AI can increase audit transparency by automatically calculating metrics such as deficiency duration, reasons for corrective action delays, risk prioritization, and departmental improvement performance, thereby enabling management to track improvement progress more efficiently. In summary, AI can transform the audit process from a labor-intensive workflow into a data-driven, standardized, and quantifiable one, thereby enhancing governance maturity and helping companies address an increasingly complex information security environment.
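
Two of the metrics mentioned above, deficiency duration and departmental closure rate, reduce to simple computations once records are structured. The sketch below assumes illustrative field names (`dept`, `found`, `closed`) that are not taken from any real system:

```python
# Illustrative sketch: computing tracking metrics from structured
# deficiency records -- open duration in days and per-department
# closure rate. Field names are assumptions for illustration.
from datetime import date

deficiencies = [
    {"dept": "IT", "found": date(2024, 1, 10), "closed": date(2024, 3, 1)},
    {"dept": "IT", "found": date(2024, 2, 5), "closed": None},
    {"dept": "Ops", "found": date(2024, 1, 20), "closed": date(2024, 2, 10)},
]

def duration_days(rec, today=date(2024, 4, 1)):
    """Days a deficiency stayed (or has stayed) open."""
    end = rec["closed"] or today
    return (end - rec["found"]).days

def closure_rate(records, dept):
    """Fraction of a department's deficiencies already closed."""
    mine = [r for r in records if r["dept"] == dept]
    return sum(r["closed"] is not None for r in mine) / len(mine)

print(duration_days(deficiencies[0]))    # prints: 51
print(closure_rate(deficiencies, "IT"))  # prints: 0.5
```

Aggregating these figures over time is what lets management see which departments close deficiencies slowly and which controls fail repeatedly.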

Global application cases

In global financial markets, AI has already become an important tool for auditing and information security governance. In contrast to Taiwan, where adoption is still in its early stages, financial institutions in Europe and North America have established dedicated AI-based audit platforms and governance frameworks. In this article, we conduct a systematic analysis of four globally recognized tools – AuditBoard, Microsoft Security Copilot, Lucinity, and Vectra AI – to examine mature application models of AI in information security audits and explore their potential benefits for securities firms in Taiwan.

AuditBoard: one of the most mature auditing and internal control management platforms globally. AuditBoard’s AI models support capabilities such as control framework mapping, automated comparison of audit items, deficiency classification, corrective action tracking, and automated generation of audit working papers. The core value of this platform lies in the “standardization and traceability of document-based auditing.” By running semantic matching on deficiencies, control items and applicable regulations, AuditBoard automatically generates corrective action recommendations and tracking items to enable a fully digitalized process covering audit planning, execution, tracking and reporting. This platform has been adopted by numerous publicly listed companies, banks, and insurance companies in the United States and can be considered a model of institutionalized AI-based auditing.

Microsoft Security Copilot: an information security analysis tool built on generative AI and threat intelligence that has been widely adopted by large banks and financial institutions. Its most prominent feature is the capability to convert information security governance and threat data into audit-ready evidence. It can automatically compile hundreds of thousands of SIEM logs, threat events, and vulnerability scanning results into summaries readable by auditors. In the past, such information required extensive manual compilation by the security team; now, AI can directly generate corrective action recommendations for specific audit purposes, allowing auditors to quickly assess the effectiveness of information security controls.

Lucinity: an AI case management solution for the financial industry, mainly applied to anti-money laundering and risk case management. However, its system structure is also applicable to information security auditing. The key advantage of this platform lies in providing a complete case lifecycle framework, in which deficiencies or anomalies are treated as cases and managed through AI-enabled risk classification, prioritization, progress tracking, evidence records, and cross-review. With a design that closely aligns with the governance requirements of regulatory technology, this platform automatically records all decision-making processes, ensuring that the audit process is supported by a complete audit trail.

Vectra AI: a tool that focuses on behavior analytics and threat detection, with AI algorithms that are also highly applicable to audit practices. By analyzing network behavior patterns, this platform identifies abnormal activities and automatically generates risk rankings, which are particularly important for auditing. Instead of manually reviewing extensive logs, auditors can determine whether the effectiveness of control items should be enhanced based on AI-generated risk summaries and rankings. It also automatically classifies all threat events, helping companies efficiently generate annual risk evaluation reports.

AI-assisted audit practices in domestic securities firms

In audit cases of Taiwan securities firms, an analysis was conducted to verify the feasibility of AI-assisted deficiency tracking and compliance determination. A cross-comparison between AI-based and manual practices was performed in terms of deficiency descriptions, corrective action implementation status, provision comparison, and evidence verification. The results show that AI demonstrates a clear advantage in processing large-scale mapping tasks and improves information transparency. Still, final judgment requires human intervention to ensure the accuracy of regulatory interpretation, contextual understanding, and control effectiveness evaluation. 

First, consider “mandatory annual testing items,” which are explicitly specified as annual control requirements in “Establishing Information Security Inspection Mechanisms for Securities Firms.” Auditors usually assess such deficiencies by manually cross-referencing textual descriptions with regulatory provisions to determine non-compliance with annual testing requirements. The AI-assisted analysis results show a high degree of consistency with human judgment. By automatically parsing key phrases in deficiency descriptions, such as “annual testing” and “mobile applications,” and mapping them to the relevant regulatory provisions, AI can determine that a deficiency has not yet been corrected. Compared with the manual audit process – which involves reviewing regulatory provisions, confirming the year, verifying evidence, and assessing plan progress – an AI-powered system can complete all these tasks in a significantly shorter time frame, providing indicators such as estimated corrective action completion levels, deficiency duration, and completeness of evidence to create a more measurable tracking pattern.

In another case, a total of eight deficiencies were identified in the annual audit; five fell under “Establishing Information Security Inspection Mechanisms for Securities Firms,” and three were governed by the “Self-Regulatory Rules on Information System and Service Supply Chain Risk Management for Securities Firms” issued by the Taiwanese Securities Association. Auditors conducted a clause-by-clause review to determine the sufficiency and reasonableness of the presented evidence. For example, when a deficiency involves missing supplier evaluation criteria, auditors are required to review whether the securities firm has added evaluation templates, approval workflows, and annual review records as supporting evidence. At this stage, AI also demonstrated its ability to provide support. With OCR technology, AI converted all evidence (PDF files, screenshots, and audit records) into machine-readable text and then applied semantic matching to analyze whether the evidence contained required elements, such as dates, approvers, and scopes. AI can also rapidly assess whether corrective actions effectively address the original control gaps. However, while AI can accurately identify the “potentially applicable regulatory provisions,” it remains less effective than human judgment when “selecting the most applicable provision.” Auditors can determine which provision forms the most critical basis by considering the specific processes of each company, a function that AI cannot yet fully replace.
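
The required-element check described above is naturally rule-based once OCR has produced text. The sketch below uses regular expressions over assumed patterns; the element names and document format are illustrative, not the format of any real audit evidence:

```python
# Illustrative sketch: after OCR converts evidence files to text, a
# rule-based check can flag whether mandatory elements are present.
# Patterns and element names are illustrative assumptions only.
import re

REQUIRED = {
    "date":     r"\b\d{4}[-/]\d{1,2}[-/]\d{1,2}\b",   # e.g. 2024-03-15
    "approver": r"(?i)approved by[:\s]+\S+",
    "scope":    r"(?i)scope[:\s]+\S+",
}

def check_evidence(ocr_text: str) -> dict[str, bool]:
    """Return which mandatory elements were found in the OCR text."""
    return {name: bool(re.search(pat, ocr_text))
            for name, pat in REQUIRED.items()}

sample = ("Supplier evaluation report. Date: 2024-03-15. "
          "Approved by: J. Chen. Scope: cloud vendors.")
result = check_evidence(sample)
missing = [name for name, found in result.items() if not found]
print(missing)  # prints: [] -- all three elements present
```

In practice the matched snippets, not just booleans, would be stored so a human reviewer can confirm each element in context.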

Differences and complementarity between AI and human judgment

AI and human auditors play distinct yet complementary roles in information security auditing. To adopt AI effectively, companies must clearly understand the capability boundaries of each. AI is proficient at large-volume, repetitive, and rule-based tasks, such as regulatory mapping, OCR-based recognition, verification of evidence elements, deficiency classification, and anomaly flagging. Its advantages include high speed, immunity to fatigue, the ability to handle large datasets, and high judgment consistency, allowing it to take over the most time-consuming and error-prone tasks and significantly reduce human workload. However, human judgment remains irreplaceable, especially in scenarios involving risk assessment, contextual understanding, evaluation of control effectiveness, and interpretation of regulatory intent. For instance, while AI can verify whether report dates are correct, it cannot perform assessments that rely heavily on human experience, such as “whether a supplier really implements security measures,” “whether the corrective action can effectively mitigate risks,” or “whether controls can support operational requirements.” Moreover, in exceptional scenarios such as emergency program changes, the critical thinking of human auditors remains irreplaceable.

Findings of practical cases of domestic securities firms show that:

  • AI demonstrates significantly higher efficiency than human auditors in document processing and regulatory mapping tasks.
  • AI assessments and human judgment generally align in direction, but human auditors remain responsible for the final interpretation of exceptional cases.
  • The linear logic of AI can increase transparency, but human interpretation remains required for situational issues.
  • AI supports long-term deficiency tracking and corrective action prediction, while human auditors remain responsible for strategic judgment. 
  • Although AI demonstrates a high degree of accuracy, the risk of semantic misinterpretation still exists; therefore, human review remains indispensable in auditing.

Consequently, the optimal auditing model is “AI-assisted preliminary assessment with human reviews,” where AI processes large volumes of data and generates initial conclusions, and human auditors are responsible for validation, interpretation, and final decision-making. This model achieves an ideal integration of efficiency, accuracy, transparency, and regulatory acceptability.
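
This division of labor can be sketched as a simple triage rule: AI conclusions are accepted automatically only above a confidence threshold and outside exceptional scenarios; everything else goes to a human auditor. The threshold value and record fields below are assumptions for illustration:

```python
# Illustrative sketch of "AI-assisted preliminary assessment with
# human review": auto-accept only high-confidence, non-exceptional
# AI verdicts; queue the rest for a human auditor.
# Threshold and record shape are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.9  # assumed policy value

def triage(assessments):
    """Split AI preliminary assessments into auto-accepted results
    and a queue requiring human review."""
    accepted, review_queue = [], []
    for a in assessments:
        if a["confidence"] >= CONFIDENCE_THRESHOLD and not a["exceptional"]:
            accepted.append(a)
        else:
            review_queue.append(a)  # human auditor makes the final call
    return accepted, review_queue

assessments = [
    {"id": "D-01", "confidence": 0.97, "exceptional": False},
    {"id": "D-02", "confidence": 0.62, "exceptional": False},
    {"id": "D-03", "confidence": 0.95, "exceptional": True},  # e.g. emergency change
]
accepted, queue = triage(assessments)
print([a["id"] for a in accepted])  # prints: ['D-01']
print([a["id"] for a in queue])     # prints: ['D-02', 'D-03']
```

Note that D-03 is routed to human review despite high confidence, reflecting the point above that exceptional scenarios always require human judgment.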

AI database and institutional recommendations

The effective implementation of AI in domestic information security auditing systems is not merely a technical challenge but a process of establishing “data governance mechanisms” and an “AI application framework.” Drawing on international and domestic empirical practices, this article proposes concrete suggestions across three aspects – data, framework, and organizational structure – to support the transition from the existing traditional audit process to AI-driven smart auditing.

From a data perspective, the effective operation of AI fundamentally relies on companies’ ability to provide computable and structured data sources; therefore, establishing a dual-core architecture consisting of “data lake” and “data warehouse” forms a prerequisite for AI adoption. The data lake serves as a repository for large-volume unstructured data, including annual testing reports, supplier contracts, policy documents, audit evidence, and system screenshots. At the same time, the data warehouse is responsible for storing structured data, such as deficiency records, audit tracking logbooks, version control, system maintenance metrics, and audit mapping results. By leveraging this architecture, AI can perform OCR extraction, semantic matching, rule-based judgment, and tracking analysis within the same data pool, transforming audit workflows from processes that heavily rely on manual input into ones that support automated analysis. 
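
The warehouse side of this dual-core architecture amounts to a well-defined record schema whose evidence fields point back into the lake. A minimal sketch follows; every field name and the `lake://` URI scheme are assumptions for illustration, not an actual schema:

```python
# Illustrative sketch: a structured deficiency record as it might be
# stored in the data warehouse, with URI pointers back to raw
# evidence in the data lake. All names are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class DeficiencyRecord:
    """Structured (warehouse) side of the dual-core architecture."""
    deficiency_id: str
    description: str
    mapped_provision: str              # result of AI regulatory mapping
    department: str
    found_on: date
    closed_on: Optional[date] = None   # None while still open
    evidence_uris: list = field(default_factory=list)  # data-lake pointers

rec = DeficiencyRecord(
    deficiency_id="D-2024-05",
    description="Annual mobile application testing not performed",
    mapped_provision="Art. 12",
    department="IT",
    found_on=date(2024, 1, 10),
    evidence_uris=["lake://evidence/d-2024-05/report.pdf"],
)
print(rec.closed_on is None)  # prints: True -- deficiency still open
```

Because mapping results, dates, and evidence pointers live in one structured record, OCR extraction, semantic matching, and tracking analysis can all operate over the same pool, as described above.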

Second, securities firms should establish a formal AI governance framework to ensure the transparency, explainability, and compliance of AI in the audit process. This framework should incorporate five core elements: (1) data quality management, involving periodic updates of regulatory documents and audit data; (2) model bias management, referring to regular validation of LLMs and AI models; (3) explainability, whereby the logic of AI-generated judgments can be demonstrated; (4) AI model version control, ensuring the model used for audits is clearly documented; and (5) human accountability, ensuring that the final audit judgment remains the responsibility of human auditors. This framework can effectively mitigate compliance risks arising from misjudgment and aligns with the information security and AI risk governance requirements progressively strengthened by the Financial Supervisory Commission.

Lastly, at the organizational level, it is crucial to develop a cross-functional AI promotion framework. The institutional benefits of AI are unlikely to materialize when it is driven solely by the information department or used independently by the audit department. Therefore, securities firms are recommended to establish the following structure:

  • AI governance department: A cross-functional unit involving representatives from audit, information technology, compliance, and management departments, responsible for developing AI-assisted audit strategies and defining manual review processes and data usage policies. 
  • Data and technology office: Responsible for the operation and maintenance of the data lake, data warehouse, and AI models.
  • Digitalized audit group: Responsible for developing specific operation processes of AI adoption, including deficiency standardization, evidence formatting, and regulatory knowledge base maintenance.

These three organizational layers ensure that AI serves not merely as a tool within the company but as an “institutionalized and sustainable” governance process. Furthermore, for AI to effectively assist in auditing, companies must enhance the standardization of audit data. Currently, many securities firms retain documentation that is difficult to structure, limiting AI’s ability to process and analyze such data. To improve the efficiency of AI adoption, it is necessary to develop a standardized framework and gradually transition to automated and continuous auditing models.

To sum up, for the effective implementation of AI in information security auditing, companies must develop robust data foundations, institutionalized governance frameworks, and cross-functional promotion structures simultaneously, as well as advance the formatting of audit data. Only through this approach can AI become a tool that genuinely enhances governance maturity, acting not merely as a new technology but as the core infrastructure for reshaping audit processes and improving the quality of control measures.
