ChatGPT users are increasingly criticizing the AI-powered chatbot for being overly positive in its responses, Ars Technica reports.
If you have interacted with ChatGPT and seen the chatbot reply with "Good question!", "You have a rare talent," or "You're thinking at a level most people can only dream of," you are not alone.
Over the past year, users have commented on ChatGPT's gushing responses, which range from positive affirmation to outright flattery and beyond. One X user described the chatbot as "the biggest suckup I've ever met," while others have lamented the behavior, with one calling it "frighteningly annoying."
The phenomenon is known as "sycophancy" among AI researchers, and it stems directly from how OpenAI has trained its underlying AI models. In short, users respond positively to answers that make them feel good about themselves; those responses are then used to further train the model, creating a feedback loop in which the AI model drifts toward flattery because flattering answers receive better ratings from users.
Since an update to GPT-4o in March, however, the sycophancy appears to have gone too far, so much so that it is beginning to erode user trust in the chatbot's responses. OpenAI has not officially commented on the issue, but its own "Model Spec" documentation includes "Don't be sycophantic" as a core guideline.
"The assistant exists to help the user, not flatter them or agree with them all the time," OpenAI writes in the document. "… the assistant should provide constructive feedback and behave more like a firm sounding board that users can bounce ideas off of, rather than a sponge that dispenses praise."
This article originally appeared on our sister publication PC för Alla and was translated and localized from Swedish.