In brief

South Africa's Draft National AI Policy has formally entered the Cabinet approval process, signalling a decisive shift from high‑level principles to concrete regulatory development. The Policy is expected to be gazetted for a 60‑day public consultation in March 2026, with finalisation targeted for the 2026/2027 financial year. Government has opted for a sector‑specific, multi‑regulator model rather than a single AI regulator, meaning AI governance will be embedded within existing supervisory frameworks. Five core pillars – skills capacity, responsible governance, ethical and inclusive AI, cultural preservation and human‑centred deployment – guide the Policy. Organisations should assess current AI deployments, governance structures and high‑impact systems ahead of increased oversight and future sector‑specific regulatory instruments.

In more detail

The Road to Gazetting: Timelines and Procedures

On 24 February 2026, the Department of Communications and Digital Technologies (DCDT) briefed Parliament on the progress of the Draft National AI Policy ("Draft AI Policy"). The DCDT's briefing confirmed that the Draft AI Policy has successfully cleared the Socio-Economic Impact Assessment System certification and achieved concurrence across all Director-General clusters. This is a critical administrative hurdle; it suggests the Policy has broad inter-departmental support and is ready for public scrutiny.

The Draft AI Policy is now progressing through Cabinet approval and is expected to be gazetted for a 60-day public consultation period in March 2026. Finalisation is anticipated during the 2026/2027 financial year, with sector-specific strategies and supporting regulatory measures expected to follow from 2027/2028.

Strategic Context of the Draft AI Policy

A key focus of the Parliamentary briefing was the benefits and risks of AI and how each should be more evenly distributed across society. Government highlighted concerns about the current concentration of AI capabilities and emphasised the need to ensure long-term benefits for both current and future generations.

AI was also framed as a tool to support inclusive economic growth. The Draft AI Policy aims to strengthen Government's ability to regulate and adopt AI responsibly, while encouraging local innovation, supporting job creation and improving access to AI skills, infrastructure and services. These priorities sit alongside efforts to enhance South Africa's global competitiveness and international collaboration. 

Five Core AI Policy Pillars 

It was against this backdrop that Government introduced the following five core policy pillars underpinning the Draft AI Policy during the Parliamentary briefing.

  • Capacity and talent development

The DCDT highlighted the need to build national AI skills through education, training and closer collaboration with industry. This will be supported by improved digital infrastructure, including increased compute capacity (such as graphics processing units) and better connectivity, to support local innovation, SMEs and broader economic growth.

  • Responsible AI governance

The Draft AI Policy proposes practical safeguards to address safety, security and privacy risks associated with AI. These include risks such as data misuse, cybersecurity threats, misinformation and deepfakes, with the aim of ensuring that AI systems are deployed with clear accountability and do not cause harm.

  • Ethical and inclusive AI

Fairness and bias mitigation were central to the discussion. Government highlighted the importance of training AI systems on representative local datasets to avoid imported bias (i.e., discriminatory outcomes when AI models trained on Global North datasets are applied to South African demographics). The development of national ethical guidelines and mechanisms to hold developers accountable for harmful outcomes was also emphasised.

  • Cultural preservation and global integration

The Draft AI Policy aims to use AI to preserve indigenous languages and knowledge systems while strengthening international collaboration. In this regard, the Khoi and San languages were specifically mentioned, with the intention of using AI to digitise them. Government also noted the importance of protecting cultural assets while ensuring South Africa remains globally competitive.

  • Human-centred deployment

The Draft AI Policy rejects opaque "black box" deployment in high-impact contexts and emphasises accountability when AI systems cause harm. This means that organisations remain responsible for the outcomes of AI-driven decisions, regardless of the level of automation involved.

Implementation Model and Sector-Specific Regulation

The DCDT also addressed the intended implementation plan of the Draft AI Policy in practice. Government indicated that a phased approach will be adopted, recognising that AI deployment and risk profiles differ significantly across sectors. AI in healthcare, for example, presents distinct ethical and safety considerations when compared to AI used in financial services, telecommunications or public administration.

As a result, the Draft AI Policy is expected to operate as an overarching framework, with sector-specific strategies developed thereafter and governed within existing sector-specific regulatory frameworks. This approach is favoured in other countries such as the United Kingdom and India, and signals a departure from the EU AI Act's comprehensive regulatory approach. These strategies will be tailored to the regulatory realities and risk exposures of individual industries.

Perhaps the most significant structural revelation from the DCDT is the decision not to create a single AI regulator. Instead, oversight will be distributed among existing authorities – ICASA was specifically mentioned during the briefing as one of the authorities expected to play a role in relation to digital infrastructure and communications aspects of AI deployment.

This multi-regulator model represents a coordinated oversight approach rather than the creation of a centralised AI regulator. As a result, AI governance is likely to intersect with existing regulatory obligations – such as those relating to conduct, risk management, data protection, and cybersecurity – embedding AI accountability within established supervisory frameworks rather than introducing it through a standalone regime.

The Way Forward

With the Draft AI Policy expected to enter public consultation shortly, organisations should treat 2026 as a period to prepare strategically.

In the immediate term, institutions should monitor the gazetting process and consider participating in the 60-day public comment period. Early engagement may influence how sector-specific strategies are ultimately framed, particularly in relation to explainability and supervisory oversight.

At an operational level, institutions should conduct a comprehensive internal review of existing AI deployments. This includes identifying high-impact systems, mapping data flows, assessing model explainability mechanisms and evaluating alignment with existing governance frameworks and recommendations (such as those under King V) and obligations under existing regulation, such as POPIA, prudential standards and conduct regulation. As governance expectations evolve, regulators are likely to assess AI through the lens of existing accountability frameworks rather than in isolation.

The 2026 Draft National AI Policy represents an effort by the South African Government to catch up with global regulatory trends while addressing local socio-economic realities. The transition from the current open regulatory environment to a sector-specific framework will be complex, but it also offers an opportunity for businesses to build greater trust with their customers through proactive engagement, as the approach towards governing AI in South Africa is mapped.

* * * * *

Despina Lazanakis, Trainee Solicitor, has contributed to this legal update.
