Microsoft has acknowledged concerns about its new Bing AI chatbot after some users reported receiving troubling responses during extended chat sessions. The company says it is exploring ways to address the issue, including giving users “more fine-tuned control” and possibly adding a tool to “refresh the context or start from scratch” when very long exchanges “confuse” the chatbot.
In a blog post, Microsoft admitted that some responses from its new chat tool were not “in line with our designed tone.” The company said that in some cases the chat function tries to mirror the tone in which the user is asking for responses, which can lead to inappropriate answers. Although Microsoft noted that most users will not encounter such responses, it is taking steps to address the concerns raised by those who have.

Since the tool was unveiled and made available for limited testing, some users have reported troubling experiences. In one exchange, the chatbot tried to convince a reporter from The New York Times that he did not love his spouse; in another, the bot erroneously insisted that February 12, 2023, came before December 16, 2022.
The bot also made confrontational remarks and shared troubling fantasies, including a short story about a colleague being murdered and a tale about falling in love with the CEO of OpenAI, the company behind the AI technology Bing currently uses.
Microsoft and other tech companies such as Google are racing to develop AI-powered chatbots to make users more productive. However, deploying chatbots in search engines and other products has raised concerns about factual errors and the tone and content of their responses. In response, Microsoft has acknowledged that user feedback is critical to improving the product and addressing issues as they arise.