On 23 December 2025, Taiwan's Legislative Yuan passed the Artificial Intelligence Basic Act ("Act"), establishing a national framework that balances AI innovation with risk-based governance. The Act designates the National Science and Technology Council (NSTC) as the central competent authority. It requires the Executive Yuan to set up a National AI Strategy Special Committee and mandates the Ministry of Digital Affairs (MODA) to develop an internationally aligned risk classification framework.

The Act codifies seven internationally prevalent principles: sustainability and well-being, human autonomy, privacy and data governance, cybersecurity and safety, transparency and explainability, fairness and non-discrimination, and accountability. It also calls for budgetary support, data openness with privacy by design/default, high-risk AI application labeling, and labor-rights safeguards.

Unlike the EU AI Act, the Act does not impose immediate operational obligations on the private sector. Detailed obligations will be issued by sector regulators, leveraging the risk classification framework to be developed by MODA.

The Act will become effective upon presidential promulgation.

Implementation timeline

After the Act comes into force:

  • Within three months: Publish impact assessments on minors, human rights, and gender — led by MODA jointly with NSTC, the Ministry of Education (MOE), and the Ministry of Health and Welfare (MOHW).
  • Within six months: Government shall review and complete risk assessments for existing government AI uses.
  • Within 12 months: Government shall establish government AI use rules or internal control mechanisms.
  • Within 24 months: Government shall review, establish, or amend the relevant laws, regulations and administrative measures to conform to the Act.

Key points of the Act

Competent Authority & Governance

  • NSTC designated as the central competent authority; local governments serve as local competent authorities.
  • Cross‑sector governance: sector‑specific regulators establish and administer domain‑specific rules.

National AI Strategy Special Committee

  • Executive Yuan to establish a Special Committee chaired by the Premier; at least annual meetings; NSTC provides secretariat.

Risk Classification & Management

  • MODA to develop an AI risk classification framework interoperable with international standards.
  • Sector regulators to establish risk-based management regulations; government authorized to restrict or prohibit harmful AI applications.

High‑Risk Applications & Evaluation Tools

  • High‑risk AI products/systems must be labeled with clear warnings/precautions; stakeholders are to be consulted in developing evaluation/verification tools.
  • For high‑risk AI applications, the government is to clarify the attribution and conditions of liability and establish mechanisms for remedies, compensation, or insurance.

Data Openness & Personal Data Protection

  • Government to build mechanisms for data openness/sharing/reuse to enhance AI data availability and quality.
  • Avoid unnecessary personal data collection/processing/use in AI R&D and applications; promote privacy by design/default.

Budget, Incentives & Innovation Support

  • Mandate to allocate budgets and provide subsidies, investments, and tax/financial incentives to foster AI R&D, applications, and infrastructure (e.g., data centers); annual performance reporting required.
  • Encourage innovation sandboxes and regulatory flexibility/exemption for AI pilots.

Labor-Rights Safeguards

  • Ensure labor rights, reduce skill gaps, increase labor participation; provide employment assistance to those impacted by AI use.

Government AI Use

  • Government must conduct risk assessment and plan mitigation before using AI to deliver services or execute tasks.

International Cooperation & Public-Private Partnership

  • Promote AI-related international cooperation and public-private partnership.

Minors Protection

  • Best‑interests principle; labeling and precautions for high‑risk applications affecting minors.

Implications for AI providers and deployers

  • Monitor MODA, NSTC and sectoral regulators for details on the AI risk classification framework (MODA has publicly targeted Q1 2026), high‑risk determinations, and evaluation tool releases.
  • Expect a wave of implementing rules and amendments across sectoral regulators within two years after the Act takes effect (e.g., in finance, health, employment, and consumer protection, and in their interplay with the PDPA).
  • Begin mapping AI systems and classifying risk; prepare documentation, testing, and oversight consistent with internationally aligned high‑risk expectations.
  • Implement privacy‑by‑design/default practices and data‑minimization controls, and plan for disclosures and labeling where applicable.
  • Establish vendor management measures and contractual allocation of responsibilities; anticipate audit/verification requirements.
  • Public sector buyers must run risk assessments before using AI; vendors supplying the public sector should be ready with model documentation, testing results, and notices/warnings for high‑risk use cases.

We will closely monitor the implementing regulations and provide timely updates. If you have any questions, please feel free to contact us.
