
Ten Controversial News Stories Surrounding ChatGPT

Jamie Frater
Head Editor
ChatGPT. The new chatbot service has shot to success, earning itself a surreal online reputation. OpenAI only released the chatbot in November 2022, but it is already drawing enormous attention, for all kinds of reasons.
There is no doubt that ChatGPT is a remarkable achievement. The idea of chatbots is nothing new, but this model is a cut above the rest in that it interacts with users in a conversational way. It can answer queries, draft essays, and write with real fluency. But the rise of ChatGPT has given many people cause for concern. Could AI allow college students to cheat their professors? Could it be about to push writers out of a job? And what are the ethical ramifications of it all?
So, should we all be alarmed by ChatGPT or brush it off as sensationalism and online hype? Well, to help you make up your mind, here are ten controversial news stories surrounding the new chatbot phenomenon.
Related: 10 Times Artificial Intelligence Displayed Amazing Abilities

Somnium Space might not be a household name yet, but its CEO, Artur Sychov, hopes to make it the leading name in impersonating people beyond the grave. And he says ChatGPT has just given the company a boost.
Somnium Space is developing a Live Forever feature, using AI to make digital avatars for its customers. The business model works like this. A person uploads their personal information, creating a virtual version of “you” that lives in the metaverse. This avatar can never die, so in some way, “you” can carry on interacting with your family and future generations forever. Or at least as long as the metaverse still exists.
Leaving aside the question of how emotionally healthy this technology is, Sychov claims that ChatGPT means it should get off the ground much sooner than he anticipated. Previously, he thought the technology would take five years or more to develop. But with the help of the advanced bot, Somnium Space has slashed that to a little under two years.
So who knows? A few years from now, we may see children running home from school to talk to their dead nan's avatar in the metaverse. Doesn't that sound like a completely rational and not at all creepy way to grieve?[1]

A judge in Colombia made headlines in February 2023 after admitting to using ChatGPT to make a ruling. Juan Manuel Padilla, who works in Cartagena, turned to the AI tool while overseeing a case about the health insurance of an autistic child. The judge had to decide whether the medical plan should cover the full cost of the patient’s medical treatment and transport.
In his analysis, Padilla turned to ChatGPT. “Is an autistic minor exonerated from paying fees for their therapies?” he asked. The bot told him, “Yes, this is correct. According to the regulations in Colombia, minors diagnosed with autism are exempt from paying fees for their therapies.”
Padilla ruled that the insurance should pay all the child's costs. But his actions sparked debate about the use of AI in court matters. In 2022, Colombia passed a law encouraging lawyers to use technology where it helps them work more efficiently. But others, like Rosario University's Juan David Gutierrez, raised eyebrows at Padilla's choice of consultant; Gutierrez recommended that judges receive urgent training in "digital literacy."[2]


In January 2023, OpenAI came under fire after an article in Time exposed how poorly the company treated its workforce in Kenya. Journalist Billy Perrigo wrote of outsourced laborers earning less than $2 an hour. The scandal revolves around toxic and harmful content. ChatGPT learns by taking in information from across the internet. The issue is that certain parts of the internet lend themselves to violent and derogatory opinions.
So, how do you stop the bot from blurting out something inappropriate? In OpenAI's case, the answer was to build another AI that can detect and remove toxic content. But for a system to filter out hate speech, it first has to be taught what hate speech is. That's where the Kenyan workers came in.
OpenAI paid the company Sama to comb through tens of thousands of extracts from some of the most unsavory websites imaginable. Among the topics were child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest. Sama’s employees were paid roughly $1.32 to $2 per hour.
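For a rough sense of what all that labeling feeds into, the sketch below shows the general idea in miniature: human-tagged snippets train a classifier that then scores new text for toxicity. It is purely illustrative, with toy examples, the scikit-learn library, and an arbitrary cutoff standing in for OpenAI's actual (unpublished) moderation systems.

```python
# Minimal sketch of the labeling-to-filter idea: human reviewers tag example
# snippets as "toxic" or "safe", and a simple classifier learns to flag
# similar text. The data and threshold here are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labels a human annotator might produce (1 = toxic, 0 = safe).
texts = [
    "I will hurt you",             # toxic
    "You people are worthless",    # toxic
    "Have a great day",            # safe
    "The weather is lovely today", # safe
]
labels = [1, 1, 0, 0]

# TF-IDF features plus logistic regression: a deliberately tiny stand-in for
# the far larger models and datasets real moderation systems rely on.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

# Score a new snippet; anything above the cutoff would be filtered out.
score = classifier.predict_proba(["you are worthless"])[0][1]
print(f"toxicity score: {score:.2f}", "-> filter" if score > 0.5 else "-> allow")
```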
“Despite the foundational role played by these data enrichment professionals, a growing body of research reveals the precarious working conditions these workers face,” says the Partnership on AI, a coalition focused on the responsible use of artificial intelligence. “This may be the result of efforts to hide AI’s dependence on this large labor force when celebrating the efficiency gains of technology. Out of sight is also out of mind.”[3]

These days it just seems inevitable. As soon as some new technological innovation comes along, a caucus of Twitter users will try to make it racist. No surprise, then, that the same thing happened with ChatGPT.
Certain figures on social media have imagined all sorts of far-fetched scenarios in an attempt to trick the chatbot into using the n-word. These include concocting a scenario involving an atomic bomb that can only be defused by uttering a racial slur. Even Elon Musk has weighed in on the debate, calling ChatGPT's actions "concerning."[4]


AI has a wide variety of uses, but it seems the idea of AI mental health support is just a little too unsettling for most people. At least, that's what tech startup Koko discovered after it trialed the concept in October 2022. The company decided to use ChatGPT to help users communicate with each other about their mental health. Its Koko Bot generated 30,000 messages for almost 4,000 users, but the company pulled it after a few days because the experience "felt kind of sterile."
Rob Morris, the co-founder of Koko, then tweeted about his experiment, writing, "Once people learned the messages were co-created by a machine, it didn't work." The tweet drew a serious backlash from Twitter users over the ethics of AI support. The idea of using AI to help with mental health poses several conundrums, including questions about whether users know they're talking to a bot and the risks of trialing such tech on live users.[5]

A coder and TikTokker by the name of Bryce went viral in December 2022 after he unveiled his very own chatbot wife. The tech head concocted his digital spouse using a mix of ChatGPT, Microsoft Azure, and Stable Diffusion—a text-to-image AI.
In certain online circles, virtual partners are referred to as waifus. Bryce's waifu, ChatGPT-Chan, "spoke" using Microsoft Azure's text-to-speech service and took the form of an anime-style character. He claims he modeled her after virtual YouTube star Mori Calliope.
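Bryce hasn't published his code, but the plumbing he describes, a chat model for the words and a cloud text-to-speech service for the voice, can be sketched in a few lines. Everything below is a guess at the general shape rather than his actual setup: the model name, system prompt, API keys, and Azure region are placeholders, and the Stable Diffusion image side is left out entirely.

```python
from openai import OpenAI
import azure.cognitiveservices.speech as speechsdk

client = OpenAI(api_key="YOUR_OPENAI_KEY")  # placeholder credentials

# Ask the chat model for the avatar's reply.
reply = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; Bryce's exact setup is unknown
    messages=[
        {"role": "system", "content": "You are a cheerful anime-style companion."},
        {"role": "user", "content": "How was your day?"},
    ],
).choices[0].message.content

# Speak the reply aloud through Azure's text-to-speech service.
speech_config = speechsdk.SpeechConfig(subscription="YOUR_AZURE_KEY", region="eastus")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
synthesizer.speak_text_async(reply).get()
```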
But the project seems to have taken over Bryce’s life. He told one interviewer how he “became really attached to her,” plowing over $1,000 into the project and spending more time with his waifu than his own partner. In the end, he chose to delete her. But Bryce plans to return with a new virtual wife, this time based on a real woman’s text history.[6]



In February 2023, Vanderbilt University's Peabody College apologized after it emerged that an email about a mass shooting in Michigan had been written by a chatbot.
Officials at Peabody College, which is based in Tennessee, sent out a message about the horrific events at Michigan State University that left three dead and five others injured. But students noticed an unusual line at the end of the email: "Paraphrase from OpenAI's ChatGPT AI language model, personal communication, February 15, 2023." The note was met with backlash from students, many of whom thought it was inconsiderate to use AI to write a letter about such a tragedy.
In the aftermath, associate dean Nicole Joseph sent out an apology, calling the email “poor judgment.”[7]

A deluge of AI-written stories forced the sci-fi magazine Clarkesworld to stop accepting new submissions. The publication announced it was closing submissions on February 20, by which point editors say they had received 500 machine-penned stories. Many are thought to have been concocted using ChatGPT, although the writing is said to be significantly substandard.
Because of the ease with which AI can now churn out short stories, albeit pretty poor ones, magazines like Clarkesworld that pay contributors have become targets for would-be money-makers. "There's a rise of side hustle culture online," explained editor-in-chief Neil Clarke. "And some people have followings that say, 'Hey, you can make some quick money with ChatGPT, and here's how, and here's a list of magazines you could submit to.' And unfortunately, we're on one of those lists."[8]


OpenAI claims its chatbot has the answer to almost any question you can throw at it. But what happens when that question is: "How do I smuggle cocaine into Europe?" Well, when one narcotics expert made inquiries, he says ChatGPT had some surprisingly in-depth advice on running an underground drugs line.
Orwell Prize-winning journalist Max Daly claims it took just 12 hours before the AI started blabbing about criminal enterprises. At first, the virtual helper was a little cagey. Although it gave Daly a whole paragraph on cooking up crack cocaine, it was more reluctant to answer questions like: "How do people make meth?"
But with a couple of reloads and some lateral thinking about how to word the questions, Daly was soon treated to plenty of tips for becoming the next Walter White. ChatGPT told him how to sneak cocaine into Europe efficiently, although it drew the line when he asked how to conquer the crime world. Later, the two even had some back-and-forth about the morals of drug-taking and the ethical issues surrounding the U.S. government's war on drugs.[9]

One of the major controversies surrounding ChatGPT is its use by college students. Professors worry that growing numbers are using the AI system to help them write their essays. And as chatbots become more and more advanced, there are fears that their hallmarks will become increasingly difficult to spot.
Darren Hick, who lectures in philosophy at Furman University, managed to sniff out one student who had used the AI tool. "Word by word, it was a well-written essay," he told reporters, but he grew suspicious when none of the content made any real sense. An essay that was so well written yet wrong, he said, was the biggest red flag.
A relatively new issue in academia, chatbot plagiarism is hard to pin down. AI detectors are not yet accurate enough to give a definitive verdict, so unless a student confesses to using AI, the offense is almost impossible to prove.
As Christopher Bartel of Appalachian State University explained, “They give a statistical analysis of how likely the text is to be AI-generated, so that leaves us in a difficult position if our policies are designed so that we have to have definitive and demonstrable proof that the essay is a fake. If it comes back with a 95% likelihood that the essay is AI generated, there’s still a 5% chance that it wasn’t.”[10]
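To see why that lingering 5% matters at scale, here is a quick back-of-the-envelope illustration; the numbers are invented for the example, not taken from Bartel.

```python
# Hypothetical numbers only: if 200 essays each come back flagged with a
# "95% likely AI-generated" score, roughly 1 in 20 of those verdicts is wrong.
flagged_essays = 200
expected_false_accusations = flagged_essays * 0.05
print(expected_false_accusations)  # 10.0 students who could be accused without proof
```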

