In brief

The State Bank of Vietnam (SBV) has released a draft Circular on safety, risk management, and implementation requirements applicable to artificial intelligence (AI) applications in the banking sector (“Draft Circular”). The Draft Circular introduces a unified regulatory framework for AI adoption across the banking industry, aiming to ensure operational safety, strengthen governance, and protect customers’ lawful rights and interests. Its development is a necessary and urgent step to promptly implement requirements under the Law on Artificial Intelligence in the banking sector, comply with Government directives, and enable the consistent, safe, and effective deployment of AI. Once issued, the Draft Circular will apply broadly to SBV-regulated entities, including credit institutions, foreign bank branches, intermediary payment service providers, credit information companies, the Vietnam Asset Management Company, and the Deposit Insurance of Vietnam. The Draft Circular is expected to be issued in 2026.

In depth

  1. Scope of regulation and subjects of application
  • The Draft Circular applies to credit institutions, foreign bank branches, intermediary payment service providers, credit information companies, the Vietnam Asset Management Company, and the Deposit Insurance of Vietnam.
  2. Safety requirements in the application of AI
  • The Draft Circular requires the regulated entities to ensure that all AI applications operate safely, securely, and in full compliance with SBV’s technical standards. Regulated entities must implement robust model‑management procedures, maintain continuous monitoring, log and store AI‑system operational data, and establish mechanisms to promptly detect, respond to, and remediate incidents. Internal users must be provided with clear guidance on system capabilities and limitations, and institutions must ensure timely handling of staff feedback to prevent operational risks and maintain system integrity.
  3. Risk management in the application of AI
  • The Draft Circular adopts a risk‑management regime requiring regulated entities to classify AI systems by risk level; apply appropriate levels of human oversight; conduct impact assessments for high‑risk AI; and implement end‑to‑end lifecycle controls covering design, testing, monitoring, model drift detection, updates, and safe decommissioning. Where AI is outsourced or supplied by third parties, regulated entities remain fully responsible for its safety and compliance and must maintain supply‑chain security, independent audits, and continuous oversight of vendor‑supplied models.
  4. Conditions for deploying AI applications
  • Prior to deployment, the regulated entities must complete AI risk‑classification dossiers, information‑security testing, and — where applicable — high‑risk AI impact assessments; establish operational‑safety and incident‑response plans; and define objective readiness criteria such as model accuracy, system‑performance indicators, and security‑assessment results. The regulated entities must also have adequate human resources, ensure competence of outsourced providers, implement ongoing staff training, and conduct annual capability reviews.
  5. Protection of customer rights and non‑discrimination
  • The Draft Circular requires full transparency for AI systems that interact with customers, mandating clear disclosure that the customer is engaging with AI and requiring an explanation of the key factors behind automated decisions that affect customer rights. Customers must be able to request human review of AI decisions, and the regulated entities must maintain complaint‑handling processes with mandatory human reassessment of disputed outputs.
  • The regulated entities are prohibited from deploying AI in ways that exploit vulnerable customer groups or result in unfair, biased, or discriminatory outcomes. They must monitor and prevent algorithmic bias in training data, models, and outputs, especially regarding gender, ethnicity, religion, disability, age, or socio‑economic disadvantage.
  6. Reporting regime
  • Regulated entities must report serious AI incidents to the SBV within 24 hours of detection and submit a remediation‑completion report within five working days after resolving the incident. They are also required to submit ad‑hoc reports upon request of the SBV.