On Monday, a developer using Cursor, a popular AI-powered code editor, noticed something strange: switching between machines instantly logged them out, breaking a common workflow for programmers who use multiple devices. When the user contacted Cursor support, an agent named "Sam" told them it was expected behavior under a new policy. But no such policy existed, and Sam was a bot. The AI model had invented the policy, sparking a wave of complaints and cancellation threats documented on Hacker News and Reddit.
It marks the latest example of AI confabulations (also called "hallucinations") causing potential business damage. Confabulations are a type of "creative gap-filling" response in which AI models invent plausible-sounding but false information. Instead of admitting uncertainty, AI models often prioritize producing confident, plausible responses, even when that means manufacturing information from scratch.
For companies deploying these systems in customer-facing roles without human oversight, the consequences can be immediate and costly: frustrated customers, damaged trust, and, in Cursor's case, potentially canceled subscriptions.
How the incident unfolded
The incident began when a Reddit user named BrokenToasterOven noticed that, while swapping between a desktop, a laptop, and a remote dev box, Cursor sessions were unexpectedly terminated.
"Logging into Cursor on one machine immediately invalidates the session on any other machine," BrokenToasterOven wrote in a message that was later deleted by r/cursor moderators. "This is a significant UX regression."
Confused and frustrated, the user emailed Cursor support and quickly received a reply from Sam: "Cursor is designed to work with one device per subscription as a core security feature," the email read. The response sounded definitive and official, and the user did not suspect that Sam was not human.
After the initial Reddit post, users took the response as official confirmation of an actual policy change, one that broke habits essential to many programmers' daily routines. "Multi-device workflows are table stakes for devs," one user wrote.
Shortly afterward, several users publicly announced their subscription cancellations on Reddit, citing the nonexistent policy as their reason. "I literally just cancelled my sub," the original Reddit poster wrote, adding that their workplace was now "purging it completely." Others joined in: "Yep, I'm canceling as well, this is asinine." Soon after, moderators locked the Reddit thread and removed the original post.
"Hey! We have no such policy," a Cursor representative wrote in a Reddit reply three hours later. "You're of course free to use Cursor on multiple machines. Unfortunately, this is an incorrect response from a front-line AI support bot."
AI confabulations as a business risk
The Cursor debacle recalls a similar episode from February 2024, when Air Canada was ordered to honor a refund policy invented by its own chatbot. In that incident, Jake Moffatt contacted Air Canada's support after his grandmother's death, and the airline's AI agent incorrectly told him he could book a regular-priced flight and apply for a bereavement rate retroactively. When Air Canada later denied his refund request, the company argued that the chatbot was "a separate legal entity that is responsible for its own actions." A Canadian tribunal rejected that defense, ruling that companies are responsible for the information provided by their AI tools.
Unlike Air Canada, Cursor did not dispute responsibility; it acknowledged the error and moved to make amends. Cursor cofounder Michael Truell later apologized on Hacker News for the confusion about the nonexistent policy, explaining that the user had been refunded and that the issue stemmed from a backend change intended to improve session security, which unintentionally invalidated sessions for some users.
"Any AI responses used for email support are now clearly labeled as such," he added. "We use AI-assisted responses as the first filter for email support."
Still, the incident raised questions about disclosure among users, since many who interacted with Sam apparently believed it was human. "LLMs pretending to be people (you named it Sam!) and not labeled as such is clearly intended to be deceptive," one user wrote on Hacker News.
While Cursor fixed the technical bug, the episode illustrates the risks of deploying AI models in customer-facing roles without proper safeguards and transparency. For a company selling AI productivity tools to developers, having its own AI support system invent a policy that alienated its core users represents a particularly awkward self-inflicted wound.
"There is a certain amount of irony that people try really hard to say that hallucinations are not a big problem anymore," one user wrote on Hacker News, "and then a company that would benefit from that narrative gets directly hurt by it."
This story originally appeared on Ars Technica.