Deloitte is facing heat again, this time over a healthcare report it prepared for the government of Newfoundland and Labrador that reportedly contained AI-generated factual errors.
According to a report by Canadian outlet The Independent, as cited by the Hindustan Times, the $1.6 million report, commissioned to assess the province’s healthcare services, misidentified hospitals, carried inaccurate descriptions of facilities, and included references that apparently do not exist.
This is the second time in recent months that the consulting giant has been embroiled in controversy over allegedly AI-generated content in official reports.
Observers who pored over the Canadian report said portions of its text appeared to have been generated with AI. They also pointed to an apparent absence of human validation, raising doubts about Deloitte’s quality-control practices.
These findings were meant to help set policy directions for a provincial healthcare system struggling to cope with shortages and the aftermath of the pandemic.
The controversy has reopened debate over the increasing use of generative AI in sensitive public-sector work. Globally, governments and corporations are rapidly adopting automated tools to cut costs and accelerate analysis. However, experts say Deloitte’s repeated errors show how unchecked AI use undermines credibility and public trust.
Deloitte Canada denied the allegations, insisting it “fully stands behind the recommendations” in the report. The firm conceded there were “a small number of citation corrections” but said AI was used only to support limited parts of the research process, not to write the report itself. It also argued that the corrections did not affect the report’s overall findings.
However, the criticism echoes an incident earlier this year in Australia. In August, Deloitte refunded AUD 440,000 to the Australian government after similar mistakes, including non-existent references and a fabricated court quote, were found in a welfare-system review the firm produced. The report was also flagged for suspected AI-generated text.
The pair of controversies has intensified calls for closer scrutiny of consultants working on taxpayer-funded projects, especially those that rely on AI tools.
As public-sector agencies grow more dependent on outside help, and that outside help grows increasingly dependent on AI, the room for error in these contracts shrinks and the stakes of accountability rise.
Will this incident finally prompt governments worldwide to rethink how AI is overseen in critical public-policy work?