In brief

On 10 March 2026, the Ministry of Health (MOH) and Health Sciences Authority (HSA) published a refreshed AI in Healthcare Guidelines ("AIHGle 2.0"). Building on the previous guidelines issued in 2021, AIHGle 2.0 is intended to provide practical guidance to support the safe development, deployment and use of AI in healthcare, benefitting patients and improving trust, and complementing other guidelines in the healthcare industry such as HSA’s regulatory guidelines for software as medical devices ("SaMD Guidelines").

The key updates in AIHGle 2.0 include: strengthening accountability through clearer responsibilities for healthcare AI developers (e.g., manufacturers), deployers (i.e., healthcare organizations) and users (i.e., healthcare professionals); improving trust via guidance on transparency to facilitate informed decision-making; and updated guidance on AI deployment, such as assessing and mitigating risks.

In more detail

Through AIHGle 2.0, MOH and HSA aim to ensure patient safety and enhance trust in the use of AI in healthcare, providing a consolidated set of recommendations and best practices for healthcare AI developers (e.g., manufacturers), deployers (i.e., healthcare organizations) and users (i.e., healthcare professionals).

Although AIHGle 2.0 is broadly applicable to all AI, it targets the more complex subset of AI solutions that employ machine learning and deep learning algorithms, as these have amplified risks due to their complexity, opacity and scalability.

AIHGle 2.0 primarily targets two categories of AI use cases:

  • Clinical: Where AI supports clinical decisions or judgment and has a direct impact on patient care outcomes. Clinical decisions include diagnosis, monitoring, treatment or alleviation of a medical condition. Examples include AI software used to support doctors in identifying suspicious areas for cancer in chest X-ray images and making a diagnosis, and AI software used to support pharmacists in risk stratification of patients to determine appropriate levels of pharmaceutical care.
  • Clinical-Ops: Where AI is part of a clinical workflow but does not directly impact clinical decisions or judgment. Examples include AI software used to transcribe doctors' consultations from speech to text and summarize them into case notes for doctors' review, and AI software used to conduct medication counselling and provide standardized education on medication use.

AIHGle 2.0 recognizes and gives effect to seven core ethical principles central to AI governance: safety, fairness, transparency, explainability, robustness, security and data protection, and AI alignment to human values or goals. These principles align with medical ethics and professional codes of conduct, underpinning AIHGle’s recommendations and scoping the responsibilities of stakeholders. AIHGle 2.0 helpfully sheds light on how these principles are applicable to healthcare AI developers, deployers and users, hence providing specificity on how general AI principles and best practices are applied in the healthcare context.

There are also recommendations on how clear responsibilities should be allocated between healthcare AI developers, deployers and users across the AI lifecycle. In particular, each stakeholder's specific roles and responsibilities should be formalized and documented to ensure accountability, transparency and comprehensive risk mitigation.

AIHGle 2.0 also draws healthcare AI developers' attention to potentially applicable medical device regulatory requirements under HSA's SaMD Guidelines. It provides examples of when AI software qualifies as a medical device (AI-SaMD).

Finally, there is a section on emerging developments in the healthcare AI space, covering risk mitigation strategies that organizations can adopt, generative AI and direct-to-consumer AI applications. This shows that MOH and HSA are closely monitoring developments in the space and may continue to update the guidelines to maintain their relevance.

Key takeaways

The Singapore government and regulators' updating of the AI in healthcare guidelines illustrates that they are closely monitoring how AI is being developed, deployed and used in the healthcare industry. Through the guidelines, they have encouraged Singapore healthcare AI players to adopt industry best practices to support the responsible use of AI in healthcare.

AI deployers, developers and users should familiarize themselves with the principles set forth in AIHGle 2.0 and comply with recommended best practices. It would be prudent to proactively review your organization's AI tools, governance frameworks and internal processes against such industry best practices.

Further, MOH has stated that AIHGle 2.0 complements HSA's regulatory guidelines for software as medical devices. Organizations should therefore be mindful of how AIHGle 2.0 interacts with HSA's SaMD Guidelines, given the regulators' scrutiny in this space.

*****

© 2026 Baker & McKenzie. Wong & Leow. All rights reserved. Baker & McKenzie. Wong & Leow is incorporated with limited liability and is a member firm of Baker & McKenzie International, a global law firm with member law firms around the world. In accordance with the common terminology used in professional service organizations, reference to a "principal" means a person who is a partner, or equivalent, in such a law firm. Similarly, reference to an "office" means an office of any such law firm. This may qualify as "Attorney Advertising" requiring notice in some jurisdictions. Prior results do not guarantee a similar outcome.
