Wednesday, Jan. 14, 2026

AI Chatbots Get Nearly Half of News Facts Wrong, Global Study Finds

OTTAWA — Artificial intelligence chatbots are getting key details about news stories wrong almost half the time, according to a new global study involving 22 public broadcasters across 18 countries, including CBC/Radio-Canada.

The report, coordinated by the European Broadcasting Union (EBU), examined more than 3,000 AI-generated responses from four major chatbots — OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity — on topics related to current events and journalism.

Researchers found that 45 per cent of the chatbot responses contained at least one significant problem, while 31 per cent had serious sourcing errors and 20 per cent included major factual inaccuracies.

The EBU warned that as AI tools increasingly replace traditional search engines, they “routinely misrepresent news content,” creating risks for both audiences and journalists who rely on them for credible information.

The findings highlight growing concerns over misinformation in the AI age — especially as large language models continue to summarize, rewrite, and distribute global news at unprecedented speed.
