South Africa advances its national AI policy from concept to consultation phase, emphasising legal accountability and responsible governance as autonomous systems act with growing independence.

South Africa’s draft national artificial intelligence policy has moved from concept to consultation, opening a new phase in how the country may regulate the technology, and in particular how it may assign responsibility when AI systems act with increasing independence. The policy was published for public comment on 10 April after Cabinet approval on 25 March, and comments are due by 10 June, according to notices from the Department of Communications and Digital Technologies and legal briefings on the draft.

The draft is part of a broader shift away from viewing AI as little more than a decision-support tool. Lawyers writing on the policy say its significance lies in how it anticipates more autonomous systems, including so-called agentic AI, which can pursue objectives and take action without waiting for a human sign-off at each step. That matters because the legal risk is no longer limited to flawed outputs or biased recommendations; it can attach to the system’s own conduct and the consequences that follow.

In corporate terms, that places a heavy burden on boards and senior managers. The policy discussion, as analysed by legal commentators, points to existing South African company law as a constraint on any attempt to outsource accountability to software. Directors remain responsible for decisions about whether to deploy AI, what authority it should have and how it is supervised, even where the system operates at scale and speed that make conventional oversight difficult. Baker McKenzie has said organisations should already be reviewing governance structures in anticipation of tighter sector-specific controls.

The draft also sits within a wider regulatory architecture rather than proposing a standalone AI statute. According to Baker McKenzie, the policy follows a sector-specific, multi-regulator model, with oversight expected to be embedded in existing supervisory frameworks. Other legal analyses describe the policy as a starting point for responsible and inclusive AI governance, tying it to skills development, ethical deployment, cultural preservation and human-centred use.

For businesses, the most immediate challenge is practical risk allocation. Lawyers say contracts, warranties, indemnities and audit rights were often drafted on the assumption that systems remained tightly controlled by people, leaving a mismatch when autonomous tools are allowed to act on an organisation’s behalf. They also warn that South African law may attribute AI-generated messages and transactions to the deploying entity, while common-law principles such as agency, estoppel and delict could all widen exposure if weak governance makes the system’s actions appear authorised.

The consumer and privacy dimensions are equally important. Analysts note that the Protection of Personal Information Act can restrict automated decisions with legal effect, while the Consumer Protection Act may impose strict liability where harm arises in consumer-facing settings. The policy therefore arrives as both a signal of intent and a warning: companies that are already using autonomous AI may need to tighten controls, review system permissions and rethink whether their current compliance models are fit for purpose.

Source Reference Map

Inspired by headline at: [1]

Sources by paragraph:

- Paragraph 1: [2], [3]
- Paragraph 2: [2], [7]
- Paragraph 3: [2], [5]
- Paragraph 4: [2], [5], [6]
- Paragraph 5: [2], [7]
- Paragraph 6: [4], [5], [6]

Source: Noah Wire Services

Verification / Sources

Noah Fact Check Pro

The draft above was created using the information available at the time the story first emerged. We've since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.

Freshness check

Score: 8

Notes: The article references the publication of South Africa's Draft National AI Policy on 10 April 2026, with public comments due by 10 June 2026. (dcdt.gov.za) This aligns with other recent reports, such as those from Channel Africa on 11 April 2026 (channelafrica.co.za) and Adams & Adams on 15 April 2026 (adams.africa). The content appears current and not recycled from older sources. However, the article's publication date is not specified, making it difficult to assess its freshness definitively. The absence of a clear publication date raises concerns about the article's timeliness.

Quotes check

Score: 7

Notes: The article includes direct quotes from legal commentators and firms, such as Baker McKenzie and Adams & Adams. (bakermckenzie.com) A search for these quotes reveals that they are present in the cited sources. However, the article does not provide specific attributions for these quotes, which is a concern regarding transparency and source verification. The lack of clear attribution makes it challenging to verify the authenticity of the quotes.

Source reliability

Score: 6

Notes: The article is published on Bizcommunity, a platform that aggregates content from various sources. While Bizcommunity is a known platform, it often republishes content from other outlets, which can affect the originality and reliability of the information presented. The reliance on aggregated content raises questions about the independence and originality of the reporting. Additionally, the article does not provide clear citations or links to the original sources of the information, further complicating the assessment of source reliability.

Plausibility check

Score: 8

Notes: The article discusses the publication of South Africa's Draft National AI Policy, which aligns with recent developments reported by other reputable sources. (channelafrica.co.za) The claims about the policy's focus on corporate liability for agentic systems are plausible and consistent with the known objectives of the policy. However, the article's lack of specific details and direct attributions makes it difficult to fully assess the accuracy and depth of the information presented.

Overall assessment

Verdict (FAIL, OPEN, PASS): FAIL

Confidence (LOW, MEDIUM, HIGH): MEDIUM

Summary: The article discusses the publication of South Africa's Draft National AI Policy, focusing on corporate liability for agentic systems. While the topic aligns with recent developments, the article's lack of clear publication date, specific attributions, and direct links to original sources raises significant concerns about its freshness, originality, and source reliability. The reliance on aggregated content without proper citations and the absence of verifiable quotes further undermine the article's credibility. Given these issues, the article does not meet the necessary standards for publication.