- ChatGPT users say the bot is becoming increasingly difficult to work with.
- Now OpenAI is investigating reports that the bot has gotten “lazier.”
- ChatGPT has attracted an estimated 1.7 billion users since launching last year.
ChatGPT users are complaining that the AI bot has lately been telling them to do their own work, like it’s their boss or something, prompting OpenAI to investigate.
The company said Thursday that it is looking into reports that ChatGPT has begun turning down user requests, either suggesting that people complete tasks themselves or refusing to complete them outright, all without so much as an out-of-office reply sent from Cabo.
OpenAI said via the ChatGPT account on X that it had heard users’ feedback about the model getting “lazier.”
“We haven’t updated the model since November 11th, and this certainly isn’t intentional,” the company wrote. “Model behavior can be unpredictable, and we’re looking into fixing it.”
ChatGPT has been touted as a revolutionary tool for workers who would rather play solitaire at their desks while outsourcing their tasks. The bot is estimated to have attracted 1.7 billion users since its launch in November last year, and research over that period has shown that ChatGPT has helped some users become more efficient employees and produce higher-quality work.
But now people say they’re being met with sass from the bot designed to make their lives easier.
For example, Semafor reported that a startup founder asked the bot to list the weeks between now and May 5th. The bot replied that it couldn’t generate a “complete list.” When Business Insider tested this, ChatGPT provided detailed instructions for calculating the number of weeks between December 9th and May 5th, and also supplied an answer.
On Reddit, users complain about the tedium of getting ChatGPT to respond to tasks as assigned, cycling through prompts until they get the answer they want. Many of the complaints focus on ChatGPT’s ability to write code and call for the company to return to earlier GPT models. Users also report that the quality of the bot’s answers has been slipping.
OpenAI employees have previously attributed some of the issues to a software bug, but the company said Saturday that it was still looking into user complaints. In posts on X, it emphasized that the training process can produce models with different personalities.
“Training chat models is not a clean industrial process. Different training runs, even using the same datasets, can produce models that are noticeably different in personality, writing style, refusal behavior, evaluation performance, and even political bias,” the company wrote.
OpenAI did not immediately respond to a request for comment.