ChatGPT has become "lazy," and OpenAI says it's not intentional – Android Authority


Calvin Wankhede / Android Authority

TL;DR

  • Users have recently reported that ChatGPT has become lazy. Instead of giving long, detailed answers, the AI bot responds with outlines or step-by-step instructions and prompts the user to complete the task themselves.
  • OpenAI has acknowledged user reports and feedback to this effect and is investigating the “lazy” behavior.

AI is on everyone’s lips these days as people find new and creative ways to use it to do their jobs more easily. One of the leaders in this space is ChatGPT by OpenAI, which is easily accessible and quite easy to use. Typically, ChatGPT handles most tasks without making people jump through hoops, but users have reported that the AI bot has become surprisingly lazy lately. OpenAI has assured users that it is looking into this.

Over the past few weeks, we’ve seen various user reports on Reddit and elsewhere: when users ask ChatGPT to perform a simple but tedious task, it issues instructions for completing the task instead of doing the work itself.

If the user ends up having to do the work themselves, the usefulness of an AI-based bot is pretty limited. To get around the limitations supposedly imposed by this laziness, some users have resorted to guilt-tripping the AI by mentioning a fake disability. Yikes!


The community suspects that the problem with ChatGPT, and especially GPT-4, might be deliberate: by giving shorter answers, the bot makes you spend more of your message quota to accomplish the same task.


OpenAI, the company behind ChatGPT, has taken note of this “lazy” behavior. The company says the behavior is unintentional and is looking into fixing it.


Note that OpenAI doesn’t directly admit that ChatGPT has become lazy; it has only acknowledged the user feedback. There is also no timeline or estimated date for a fix.

Have you noticed ChatGPT getting lazy lately? What trick did you use to get it working again? Let us know in the comments below!
