In brief

On 22 January 2026, Singapore’s Infocomm Media Development Authority (IMDA) launched the Model AI Governance Framework for Agentic AI (MGF) at the World Economic Forum 2026. This is the world’s first governance framework specifically designed for AI agents capable of autonomous planning, reasoning and action. The MGF provides guidance for all organizations that deploy agentic AI in Singapore, ensuring safety and responsibility by focusing on four core dimensions: (1) assessing and bounding the risks upfront, (2) making humans meaningfully accountable, (3) implementing technical controls and processes, and (4) enabling end-user responsibility.

Background

The MGF builds on Singapore’s earlier governance instruments for traditional and generative AI to address novel risks arising when AI systems are empowered to take autonomous action in digital or physical environments.

Unlike models that only output text, images or predictions, agentic AI can break tasks into subtasks, select tools, execute actions and adapt to real-time feedback. These expanded capabilities pose a suite of new risks, as agents may have access to sensitive personal data and the ability to modify operational systems; any error could therefore have consequences more severe than those associated with other types of AI systems. These risks include, but are not limited to, unauthorized, biased or erroneous actions; data breaches; and disruptions to connected systems.

In more detail

The MGF offers a four-dimensional approach to managing the abovementioned agentic AI risks:

1. Assess and bound risks upfront

The framework starts by assisting organizations in assessing risks upfront. Organizations are expected to conduct use-case-specific assessments that consider agentic-specific factors such as the agent's level of autonomy, its access to sensitive data and the breadth of data available to it. The framework then suggests bounding risks by design: limiting what agents can do by controlling their tool access, permissions, operational environments and the scope of actions they may take. These bounds serve as the primary defense against unintended or harmful actions.

2. Make humans meaningfully accountable

A key innovation of the framework is its emphasis on meaningful human accountability. The framework highlights that organizational structures must allocate clear responsibilities across the AI life cycle, covering developers, deployers, operators and end users.

The framework also calls for human oversight mechanisms that can effectively override, intercept or review agentic AI actions. This is especially pivotal for actions that may have material real-world impact.

3. Implement technical controls and processes

The MGF recommends controls at key stages of the implementation life cycle:

  • During design and development: Implement controls such as tool guardrails and plan reflections. Enforce least-privilege access to tools and data to limit the agent’s impact on the external environment.
  • Pre-deployment: Test overall task execution, policy compliance and tool-use accuracy at different levels and across varied data sets to cover the full spectrum of agent behavior.
  • During deployment and post-deployment: Stage progressive roll-outs, and implement real-time monitoring post-deployment.
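To make the least-privilege recommendation above concrete, the following is a minimal sketch of how a deployer might gate an agent's tool calls behind an explicit allowlist. All names here (the tool registry, the tool and action labels) are illustrative assumptions, not terminology from the MGF; the framework recommends the principle, and this shows one possible shape.

```python
# Hypothetical least-privilege tool guardrail for an AI agent.
# The agent may only call tools on the allowlist, and only with the
# actions each entry explicitly grants.
TOOL_PERMISSIONS = {
    "read_customer_record": {"read"},
    "draft_reply": {"read", "write_draft"},
    # Deliberately absent: high-impact actions such as sending messages
    # or updating records, so they fail closed and escalate to a human.
}


def invoke_tool(tool: str, action: str) -> str:
    """Gate every tool call; deny anything outside the granted scope."""
    granted = TOOL_PERMISSIONS.get(tool)
    if granted is None:
        raise PermissionError(f"tool '{tool}' is not on the allowlist")
    if action not in granted:
        raise PermissionError(f"action '{action}' is not granted for '{tool}'")
    return f"executed {action} via {tool}"  # placeholder for the real call
```

Under this design, `invoke_tool("draft_reply", "write_draft")` succeeds, while `invoke_tool("draft_reply", "send")` raises a `PermissionError` — the deny-by-default behavior that bounds the agent's impact on external systems, as the framework envisages.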

4. Enable end-user responsibility

To promote trust and enable responsible use, organizations must ensure sufficient information is provided to end users.

This includes implementing transparency measures, such as informing users of the agent’s capabilities and providing contact points to whom they can escalate any issues if the agent malfunctions.

Additionally, organizations should educate users on the proper use and oversight of agents by providing sufficient training to maintain essential human skills.

Key takeaways

Singapore’s newly released MGF provides the first comprehensive guidance for managing AI systems capable of autonomous planning, reasoning and action, with practical guardrails across risk bounding, human accountability, technical controls and end-user responsibility. The framework is currently open for public consultation, and the IMDA is inviting organizations to contribute case studies on their agentic governance experiences to shape the framework’s evolution.

Organizations deploying agentic AI should immediately begin defining clear permission boundaries, implementing meaningful human oversight and aligning their internal controls to the MGF’s life cycle-based expectations, while actively engaging in the consultation process.

Sanil Khatri, Daryl Seetoh, and Natalie Joy Huang, Local Principals, have contributed to this legal update.

* * * * *

© 2026 Baker & McKenzie. Wong & Leow. All rights reserved. Baker & McKenzie. Wong & Leow is incorporated with limited liability and is a member firm of Baker & McKenzie International, a global law firm with member law firms around the world. In accordance with the common terminology used in professional service organizations, reference to a "principal" means a person who is a partner, or equivalent, in such a law firm. Similarly, reference to an "office" means an office of any such law firm. This may qualify as "Attorney Advertising" requiring notice in some jurisdictions. Prior results do not guarantee a similar outcome.
