What happened
South Africa's draft National Artificial Intelligence Policy was published for public comment in the Government Gazette on April 10, 2026. The notice said the draft had been approved by Cabinet on March 25, 2026, and invited public comments until June 10, 2026.
Within weeks, the draft was withdrawn. Reuters, in a report republished by Polity, said on April 28 that the policy was pulled after fictitious sources appeared in its reference list, and that the minister called AI-generated citations that had not been properly verified the most plausible explanation.
APA News reported that internal checks confirmed several citations were fictitious, and that the discovery undermined the credibility of a policy intended to guide national AI governance. MyBroadband also published examples of challenged references, including listed journals and papers that critics said did not exist.
The source-checking lesson
Fictitious citations create a specific review problem. They can look formal, complete, and plausible enough to survive a quick skim. A source-aware workflow has to slow down before publication and ask concrete questions:
- Does every cited work exist at the title, author, journal, volume, issue, and page level?
- Does the cited source actually support the claim beside it?
- Are references circular, broken, or copied from AI-generated summaries?
- Where does the system sound confident while the evidence remains weak?
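The checklist above can be encoded as a first-pass, machine-checkable gate. The sketch below is illustrative only: the `Citation` fields and flag strings are hypothetical names chosen for this example, not any real tool's schema, and the existence and claim-support checks are stand-ins for human or database verification.

```python
from dataclasses import dataclass


@dataclass
class Citation:
    """One reference as it appears in the document. Field names are
    hypothetical, chosen to mirror the checklist in the text."""
    title: str
    authors: list
    journal: str = ""
    volume: str = ""
    pages: str = ""
    verified_exists: bool = False   # set True only after a lookup or human check
    supports_claim: bool = False    # does the source actually back the adjacent claim?


def review_flags(citation: Citation) -> list:
    """Return the concrete questions this citation still fails.
    An empty list means the citation passed this first-pass gate,
    not that it is definitively genuine."""
    flags = []
    if not citation.verified_exists:
        flags.append("existence unverified (title/author/journal/volume/pages)")
    if not all([citation.journal, citation.volume, citation.pages]):
        flags.append("bibliographic record incomplete")
    if not citation.supports_claim:
        flags.append("claim-support unchecked")
    return flags
```

A workflow like this would have forced each reference in the withdrawn draft through an explicit existence check before publication, rather than relying on a skim of a plausible-looking reference list.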
How FactSentinel frames the check
FactSentinel is not a replacement for policy review, journalism, academic citation checking, or expert judgment. It is a first-pass source-trail workflow for claims that need evidence before they move forward.
For a case like this, the useful output is not a single score. It is a visible trail: the exact claim, source links, reasoning, confidence, caveats, and model agreement or disagreement, so a human reviewer can challenge the answer.
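One way to picture such a trail is as a single reviewable record. The sketch below is a minimal assumption of what that record could look like; the field names are illustrative and are not FactSentinel's actual schema.

```python
from dataclasses import dataclass
import json


@dataclass
class SourceTrail:
    """Hypothetical shape of a reviewable source trail. Every part the
    text mentions is a first-class field, so nothing is hidden behind
    a single score."""
    claim: str              # the exact claim under review
    source_links: list      # where the evidence lives
    reasoning: str          # why the verdict was reached
    confidence: float       # 0.0-1.0, stated rather than implied
    caveats: list           # known weaknesses in the evidence
    model_agreement: dict   # model name -> verdict, so disagreement stays visible

    def to_review_json(self) -> str:
        """Serialize the full trail so a reviewer can challenge each part."""
        return json.dumps(self.__dict__, indent=2)
```

The design point is that disagreement between models is preserved as data rather than averaged away, which is what lets a human reviewer push back on the answer.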