Over 60% of AI chatbot responses are wrong, study finds
The study highlights how popular AI search tools frequently deliver incorrect or misleading information and fail to credit the original news sources. The findings raise concerns about eroding trust in journalism and the financial consequences for publishers, and point to a need for greater accuracy and transparency in AI-driven news aggregation.

A new study by Columbia Journalism Review's Tow Center for Digital Journalism reveals that popular AI search tools deliver incorrect or misleading information more than 60% of the time, often failing to properly credit original news sources. The findings raise concerns about how these tools undermine trust in journalism and deprive publishers of traffic and revenue.
Research Overview
Researchers tested eight generative AI chatbots—including ChatGPT, Perplexity, Gemini, and Grok—by asking them to identify the source of 200 excerpts from recent news articles. The results were alarming: more than 60% of responses were incorrect, with chatbots frequently inventing headlines, failing to attribute articles, or citing unauthorized copies of content. Even when a chatbot named the right publisher, it often linked to a broken URL, a syndicated version, or an unrelated page.
Confident Errors
The chatbots rarely acknowledged uncertainty, instead presenting wrong answers with unwarranted confidence. ChatGPT, for example, answered incorrectly in 134 of 200 queries yet expressed doubt only 15 times. Premium models such as Perplexity Pro ($20/month) and Grok 3 ($40/month) performed worse than their free counterparts, offering more "definitively wrong" answers despite the higher cost.
Syndication and Fabrication Issues
Many chatbots directed users to syndicated articles on platforms like AOL or Yahoo instead of original sources, even when publishers had licensing deals with AI companies. Perplexity Pro cited syndicated versions of Texas Tribune articles despite their partnership, depriving the outlet of proper attribution. Meanwhile, Grok 3 and Gemini often invented URLs: 154 of Grok 3's 200 responses linked to error pages.
Implications for Publishers
The study highlights a growing crisis for news organizations. AI tools increasingly replace traditional search engines, with nearly 25% of Americans using them for information. But unlike Google, which drives traffic to websites, chatbots summarize content without linking back—starving publishers of ad revenue. Misattributions also damage publishers' reputations.
The Road Ahead
When contacted by the research team, OpenAI and Microsoft defended their practices but did not address the specific findings. The researchers stress that flawed citation practices are systemic, not isolated to any one tool, and urge AI companies to improve transparency, accuracy, and respect for publisher rights. For now, the study underscores a stark reality: as AI reshapes how people access information, news publishers face an uphill battle to protect their content and their credibility.