Untruths spouted by chatbots ended up on the web—and Microsoft’s Bing search engine served them up as facts. Generative AI could make search harder to trust.
Quoting the article: “…Although WIRED could initially replicate the troubling Bing result, it now appears to have been resolved…”.
Most of the web-search-capable bots I use (FastGPT, Bing Chat, Poe web search) correctly refuse to repeat the published LLM-hallucinated info. It can still be reproduced on Perplexity AI.
This seems to be much less of an issue than recent coverage makes it out to be, mostly, I'd guess, because the companies behind these bots are aware of the problem and actively addressing it.