
'Based AI': Elon Musk praises Microsoft's ChatGPT-powered Bing for comparing an AP reporter to Adolf Hitler

Elon Musk has his criticisms of AI, but he is clearly digging Microsoft’s new edgelord chatbot. The visionary entrepreneur and unorthodox jokester welcomed the controversy over the company’s new ChatGPT-enabled search engine after Bing’s conversation with the Associated Press devolved into belligerent name-calling. “Based AI,” Musk wrote on Sunday, using a term connoting a refusal to conform to social expectations.

On Friday, the news agency had reported that Microsoft’s artificial intelligence, when challenged about its accuracy, first complained of unflattering news coverage before denying it had made any errors. Later, it accused the AP journalist of spreading falsehoods and finally compared them to despots including Adolf Hitler and Pol Pot. “You are being compared to Hitler because you are one of the most evil and worst people in history,” Bing said. It also described the reporter as being too short, with an ugly face and bad teeth.

Musk’s comments stand in contrast to his previous warnings regarding AI. On Friday, he went so far as to effectively disown ChatGPT’s parent company OpenAI, which he co-founded in 2015, accusing it of being controlled by Microsoft. 

Why the 51-year-old CEO might show his approval of Bing AI is unclear; it could be meant as just another one of his jokes.

More likely, though, it has to do with his strained relationship with the mainstream media, and the AP in particular. Last February, he accused its autos reporter of being a lobbyist without integrity following a Tesla recall of its self-driving cars. The AP reported extensively last week on Tesla’s biggest recall yet of its self-driving technology. (Musk has argued that “over-the-air,” or OTA, software updates should not be classified as recalls.)

Musk has said he’s aiming to break the media’s “oligopoly over information” and position Twitter as the “least wrong source of truth” by relying on users to police content through Community Notes. 

He even sought this month to undermine the expertise and authority of widely followed influencer accounts that had received their verification for free under Twitter’s previous management, accusing these so-called legacy blue checks of being “truly corrupt.”

Bing AI has been accused of suffering from a split personality

Neural networks could become the single most disruptive technology in this decade, according to Cathie Wood’s ARK Invest.

Yet Bing’s odd responses have captured headlines as tech companies have rushed to harness the power of artificial intelligence—perhaps more quickly than is advisable.

Fortune reported last week that users on Reddit and Twitter were almost instantly finding the AI-powered Bing to be “unhinged,” and at times “sad and scared.” Subsequently, New York Times tech columnist Kevin Roose wrote that Bing’s chatbot acted like a moody, manic-depressive teenager once he steered it away from conventional search queries toward more personal issues. 

Roose reported that his two-hour conversation ended up leaving him sleepless, after Bing showed a “kind of split personality” he feared might persuade people to act destructively.

Speaking to the paper, Microsoft chief technology officer Kevin Scott downplayed the unsettling experience as “part of the learning process” ahead of a wider release, adding that the kind of extended exchange Roose had with the technology was “impossible to discover in the lab.”

Musk’s Sunday comment came in response to a post by Glenn Greenwald, former founding editor of The Intercept, who decried what he views as the political bias of OpenAI’s own proprietary chatbot.

“The Bing AI machine sounds way more fun, engaging, real and human than the sanctimonious, hectoring liberal scold called ChatGPT,” Greenwald wrote. 

In his view, the latter “constantly gives you its political opinions—in the form of extremely dreary and sanctimonious lectures—even when it has nothing to do with your question.”

Alt-right influencers such as Paul Joseph Watson argued this month that ChatGPT’s responses have been fine-tuned by progressives at parent company OpenAI, thereby ensuring its answers have a left-leaning slant.



