In brief

In early 2026, the Information and Privacy Commissioner of Ontario (IPC) released AI Scribes: Key Considerations for the Health Sector (the “Guidance”), detailed guidance addressing the responsible development, procurement, and use of AI scribe tools. AI scribes are software applications that use speech recognition and natural language processing to capture clinical conversations and generate corresponding notes. In practice, they function as automated transcription tools capable of producing structured summaries of care visits for entry into electronic records.

Although directed at health information custodians regulated under the Personal Health Information Protection Act, 2004 (PHIPA) (that is, persons or organizations identified in PHIPA that have custody or control of personal health information), the Guidance sets out governance, risk management, procurement, and oversight expectations that may signal a broader regulatory shift toward operational governance requirements rather than purely high-level principles. In particular, the Guidance aligns with the principles reflected in the federal government’s “Voluntary Code of Conduct for generative AI” and the IPC’s earlier “Principles for the Responsible Use of Artificial Intelligence”. While the Guidance does not create any obligations or duties for the private sector, it provides practical indicators of the types of controls Canadian regulators may increasingly expect in practice.

In depth

Why the private sector should care

Canada’s legal patchwork. Canada does not yet have a comprehensive AI law. The proposed Artificial Intelligence and Data Act did not proceed following the most recent prorogation of Parliament, while provinces continued to advance their own approaches. For example, Quebec’s Law 25 imposes AI-adjacent duties through enhanced privacy rules and transparency requirements for automated decisions, while Ontario has added AI-transparency requirements in employment law (job postings must disclose AI use in screening, assessment, or selection). Federal and provincial governments have issued high-level guidance for responsible use, but in the absence of a comprehensive framework, organizations must navigate a patchwork of guidance and emerging provincial rules.

Why the IPC Guidance matters now. Against this backdrop, the IPC’s publication is notable: it is one of Ontario’s first detailed, sector-specific AI governance statements and offers an operational model of “responsible AI” that regulators may look to when assessing practices.

AI governance inclusions to consider

Section three of the Guidance describes the IPC’s approach to developing an AI governance and accountability framework. It emphasizes that AI governance should be embedded within organizational structures and applied across the entire AI lifecycle, including design, procurement, deployment, monitoring, and eventual decommissioning. Below is an outline of the IPC’s expectations and an assessment of their broader applicability.

  1. Governance structure and risk management
  • Establish an AI governance committee and risk-management framework. Maintain a dedicated committee with clear accountability for AI oversight, including authority to approve deployments, pause or modify systems as needed, and supervise ongoing monitoring. While private-sector entities are not required to replicate the Guidance, Canadian AI and private-sector privacy principles consistently emphasize having strong internal accountability bodies.
  • Conduct Privacy Impact Assessments (PIAs). Complete PIAs before introducing any AI system that handles personal information and update them throughout the system lifecycle. This aligns with existing private-sector practices, where PIAs may be undertaken for projects involving personal information.
  2. Data and purpose controls
  • Apply data minimization and purpose limitations. Under PHIPA, custodians may collect only what is reasonably necessary; similar purpose-limitation principles apply under private-sector privacy laws.
  3. Policies, procedures and documentation
  • Maintain written policies, practices, and procedures. Keep clear general and AI-specific documentation identifying authorized AI systems, internally approved use cases or deployment conditions, what constitutes a privacy or data breach, incident-response steps, transparency requirements, and security safeguards proportionate to AI-related risks. This type of practice is not new; it aligns with accountability practices under private-sector privacy laws that organizations should be familiar with.
  • Confidentiality and end-user agreements. Require relevant persons to sign (and periodically renew) confidentiality, acceptable-use, and end-user agreements before accessing personal information or AI-enabled tools. This parallels private-sector privacy law expectations to use contractual or other means to ensure comparable protection when third parties process personal information.
  4. People, training and oversight
  • Provide training and awareness. Training should address bias risks, safeguards, and the human responsibility to review AI outputs. While the private sector currently faces no equivalent obligations, only guidance, this reflects global practice (such as in the EU) and signals the likely direction of travel.
  • Human oversight, reporting, and notification mechanisms. The Guidance stresses trained, well-resourced, and independent humans-in-the-loop. It also calls for clear processes to report non-compliance, errors, bias, discrimination, or other harms, with protections against reprisal, and for vendors to promptly notify custodians of unexpected system behaviour. These expectations may be more resource-intensive, particularly for organizations adopting AI in the hope of reducing operational burdens.
  • Inquiry and complaint mechanisms. Offer accessible channels for individuals to ask questions, raise concerns, or challenge outcomes influenced by AI systems. Although the Guidance does not apply to the private sector, similar inquiry and complaint mechanisms already exist under private-sector privacy laws (e.g., responses to access requests and privacy complaints), and the Guidance’s approach can serve as an example of how these may evolve as AI becomes more common.

Key takeaway

The IPC’s Guidance does not create obligations for the private sector, but it offers a practical foundation for organizations looking to implement or strengthen their AI governance and accountability frameworks ahead of potential future Canadian requirements. While not prescriptive, it reflects the kinds of practices (many of which mirror privacy practices) regulators may increasingly expect as AI governance evolves and can help organizations benchmark existing policies, identify gaps, and prioritize next steps.