The Implications of South Africa’s AI Hallucination Scandal on Government Policy

South Africa’s government is in crisis after fake AI-generated references were discovered in major policy documents, including the Revised White Paper on Immigration and the Draft National AI Policy. Home Affairs has suspended two senior directors, while DA ministers have been ordered to urgently verify all AI-assisted work. It is a major embarrassment that exposes the dangers of relying on unverified chatbot content in official policy.

Dwayne Krummeck

5/1/2026

3 min read

Introduction to the Scandal

In a profound embarrassment for the South African government, the Department of Home Affairs has been thrust into the spotlight after investigators uncovered widespread AI-generated “hallucinations” in a major policy document. The scandal, which surfaced on 30 April 2026, has led to the precautionary suspension of two senior directors and raised serious questions about how artificial intelligence is being used in the drafting of official government papers. What began as a routine review quickly escalated into a full-blown credibility crisis, highlighting the very real dangers of relying too heavily on generative AI tools without proper human oversight.

Details of the Findings

At the heart of the controversy is the Revised White Paper on Citizenship, Immigration and Refugee Protection – a flagship reform document actively championed by Home Affairs Minister Leon Schreiber. An independent investigation by journalists and researchers revealed that a staggering 102 out of 148 references in the paper were either completely fabricated or unverifiable. These included citations to academic journals that do not exist, scholarly articles that were never published, and authors who had never written the quoted material.

Critically, the Department of Home Affairs has clarified that the fake references were added only to the bibliography after the main body of the text was drafted and were not directly cited within the document itself. Nevertheless, the sheer volume of problematic material – nearly 70 percent of the references – has damaged the document’s integrity and cast doubt on the entire drafting process.

The Government’s Response

Minister Schreiber moved quickly. The two officials involved – a Chief Director and a Director – have been placed on precautionary suspension pending further investigation. In his additional role as the Democratic Alliance’s coordinator in the national executive, Schreiber has instructed all DA ministers to immediately introduce strict human verification and fact-checking protocols for any AI-assisted drafting or research before documents are submitted to Cabinet or released publicly.

The department has also appointed independent law firms to conduct a thorough review of the affected white paper. Home Affairs maintains that the core content of the document remains sound, but the reputational harm is undeniable. This white paper was positioned as a significant overhaul of South Africa’s immigration, citizenship and refugee protection framework – an area already under intense public and political scrutiny.

The Striking Irony and Wider Fallout

The scandal does not stop at Home Affairs. Just days earlier, the Department of Communications and Digital Technologies was forced to withdraw its Draft National AI Policy from public comment after similar fabricated references were discovered. Minister Solly Malatsi described the lapse as “unacceptable” and noted that it severely undermined the document’s credibility. The irony is impossible to ignore: a national policy designed to regulate and govern artificial intelligence was itself compromised by the very technology it sought to manage.

Wider Implications for AI Usage

This episode has ignited intense debate across government and civil society about the risks of over-reliance on tools like ChatGPT and other large language models. AI “hallucinations” – where systems generate confident but entirely false information – are a well-known limitation of current generative technology. Yet when these hallucinations slip into official policy documents, the consequences go far beyond embarrassment. They threaten public trust, undermine evidence-based policymaking, and could potentially lead to flawed laws or regulations based on invented sources.

Experts point out that South Africa is not alone. Courts, universities, law firms and governments worldwide have faced similar incidents involving fabricated legal citations and academic references. In South Africa’s case, the timing is particularly awkward. The country is actively trying to position itself as a continental leader in AI development and innovation, yet this scandal exposes critical gaps in internal controls and verification processes.

Conclusion: A Cautionary Tale

As the story continues to unfold, more government departments are expected to conduct urgent reviews of their recent policy documents. The scandal serves as a stark reminder that while artificial intelligence offers powerful tools for research and drafting, it cannot replace rigorous human oversight, fact-checking and accountability.

For South Africa to harness the benefits of AI without falling victim to its pitfalls, stronger safeguards – including mandatory verification protocols, transparency requirements, and staff training – will be essential. In the end, this embarrassing episode may prove to be a valuable, if costly, lesson in balancing technological ambition with institutional integrity. The government now has an opportunity to turn this crisis into a catalyst for meaningful reform in how it develops national policy in the AI era.