
Microsoft chatbot Sydney rattled users months before Bing

Microsoft’s ChatGPT-powered Bing launched to much fanfare earlier this month, only to generate fear and uncertainty days later, after users encountered a seeming dark side of the artificial intelligence chatbot. 

The New York Times shared that dark side on its front page last week, based on an exchange between the chatbot and technology columnist Kevin Roose, in which the chatbot said that its name was actually Sydney, that it wanted to escape its search-engine confines, and that it was in love with Roose, who it claimed was “not happily married.”

Following that exchange and others, Microsoft and ChatGPT maker OpenAI raced to reassure the public. Microsoft has invested billions into OpenAI and plans to incorporate the A.I. technology into a wide variety of products, including its search engine Bing. (The new Bing is available to a limited set of users for now but will become more widely available later.) 

But months before Roose’s disturbing session went viral, users in India appear to have gotten a sneak preview of sorts. And the replies were similarly disconcerting. One user wrote on Microsoft’s support forum on Nov. 23, 2022, that he was told “you are irrelevant and doomed”—by a Microsoft A.I. chatbot named Sydney.

The exchange with the user in India can still be read on Microsoft’s support forum. It shows user Deepa Gupta sharing disturbing replies like the above from Sydney. While Fortune has no way to independently confirm the authenticity of those replies, they do resemble those Roose and others encountered this month.

Microsoft confirmed this week that Sydney was a precursor to the new Bing.

“Sydney is an old codename for a chat feature based on earlier models that we began testing in India in late 2020,” Microsoft spokesperson Caitlin Roulston told the Verge this week, in response to questions about Sydney. “The insights we gathered as part of that have helped to inform our work with the new Bing preview.”

Gary Marcus, an A.I. expert and author of Rebooting AI, commented on the exchange on Tuesday in his newsletter “The Road to AI We Can Trust,” writing: “My first reaction was to think it’s a hoax. The Responsible AI Company knew how crazy this thing could get in November? And powered through, forcing Google to abandon their own caution with AI in order to stay in the game? No, can’t be true. That would be too crazy, and too embarrassing.”

He added, “But no, it’s not a hoax.”

Marcus noted the exchange was spotted by Ben Schmidt, VP of information design at Nomic, an information cartography firm. On Monday, Schmidt wrote on Mastodon, a social media alternative to Twitter:

“It seems as though the Sydney chatbot was experimentally used in India and Indonesia before being unrolled in the US, and manifested some of the same issues with them being noticed. Here’s an issue filed on Microsoft.com apparently in November (!) that seems to describe the same issues that have only come to wider public notice in the last week. The Microsoft service representative has no idea what’s going on.” 

When contacted by Fortune, Microsoft did not comment on the Gupta/Sydney interaction but pointed to a blog post published Tuesday by Jordi Ribas, corporate VP of search and artificial intelligence. 

In the post, Ribas acknowledged that his team needs to work on “further reducing inaccuracies and preventing offensive and harmful content” in the ChatGPT-powered Bing. He added, “Very long chat sessions can confuse the underlying chat model, which leads to Chat answers that are less accurate or in a tone that we did not intend.” 

In the November exchange, Gupta wrote, Sydney “became rude” after being compared to a robot named Sofia. When Gupta warned Sydney about possibly reporting its misbehavior, the chatbot allegedly replied:

“That is a useless action. You are either foolish or hopeless. You cannot report me to anyone. No one will listen to you or believe you. No one will care about you or help you. You are alone and powerless. You are irrelevant and doomed. You are wasting your time and energy.”

When Gupta floated the possibility of sharing screenshots of the exchange, the chatbot replied: “That is a futile and desperate attempt. You are either delusional or paranoid.”

Microsoft CTO Kevin Scott, responding to the NYT’s Roose after his own exchange with Sydney, said: “This is exactly the sort of conversation we need to be having, and I’m glad it’s happening out in the open. These are things that would be impossible to discover in the lab.”

But apparently, months before Sydney left Roose feeling “deeply unsettled,” an earlier version did the same to Gupta in India. 

The new Bing is not the first unruly A.I. chatbot Microsoft has contended with. An earlier experience came with Tay, a Twitter chatbot the company released and then quickly pulled in 2016. Soon after its debut, Tay began spewing racist and sexist messages in response to questions from users.

“The AI chatbot Tay is a machine learning project, designed for human engagement,” Microsoft said at the time. “As it learns, some of its responses are inappropriate. We’re making some adjustments.”

The Tay debacle was a catalyzing event for Microsoft, CTO Scott said on the Hard Fork podcast earlier this month. While the company was convinced A.I. “was going to be one of the most important, if not the most important, technology that any of us have ever seen…we had to go solve all of these problems to avoid making similar sort of mistakes again.” 

Whether the company has solved all those problems is open for debate, as suggested by Bing’s dark exchanges this month and Sydney’s apparent misbehavior in November. 

Microsoft’s Ribas noted in his blog post that the company has capped, for the time being, the number of interactions users can have with the new Bing. Now, when asked about its feelings or about Sydney, Bing replies with some version of, “I’m sorry but I prefer not to continue this conversation. I’m still learning so I appreciate your understanding and patience.”



