
The scary (and weird) world of artificial intelligence

I’m not the only one freaked out by artificial intelligence.
The AI that scares me most is chatbots: those alarmingly competent computer programs that employ machine learning, “large language models” and prediction to respond to human questions in incredibly lifelike ways, as if they themselves were human writers.
Which they’re not. Right? They’re not.
At least not yet.
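(To demystify that “prediction” a little: under the hood, a chatbot does one deceptively simple thing over and over, namely guess which word is likely to come next. The toy Python sketch below illustrates only that idea; the probability table and the generate function are invented for illustration. Real chatbots like Bard and ChatGPT learn billions of such statistics with neural networks rather than storing a small lookup table, so treat this strictly as a cartoon of the mechanism.)

```python
import random

# Toy "language model": for each word, the odds of what comes next.
# These numbers are invented for illustration; a real LLM learns
# billions of such statistics from text using a neural network.
NEXT_WORD_PROBS = {
    "for": {"sale.": 0.8, "you.": 0.2},
    "sale.": {"baby": 1.0},
    "baby": {"shoes.": 0.7, "carriage.": 0.3},
}

def generate(word: str, max_words: int = 4) -> str:
    """Repeatedly pick a likely next word -- the core trick behind chatbots."""
    words = [word]
    for _ in range(max_words - 1):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:  # no statistics for this word: stop
            break
        nxt = random.choices(list(options), weights=list(options.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(generate("for"))  # most often: "for sale. baby shoes."
```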
Viewers of the CBS news show “60 Minutes” saw the normally unflappable journalist Scott Pelley’s head practically explode after Google’s chatbot Bard summarized the meaning of the famous six-word short story attributed to Ernest Hemingway: “For sale. Baby shoes. Never worn.”
Within seconds, Bard had created an essay about “a man whose wife couldn’t conceive and a stranger grieving after a miscarriage and longing for closure.” Then Bard turned the essay into a pretty good poem.
Gulp.
David Brooks opined in a recent New York Times column that he, too, worries that AI will rise up in some dystopian future, escape its metal hardware and become a vastly superior robotic overlord to the feckless humans who created it.

Brooks used to think that AI like chatbots could never match human thinking. Their neural networks may be vast and gobsmackingly fast compared with the human brain, but they lack human depth and wisdom: They can’t feel or create.
Then he learned that Douglas Hofstadter, a leading AI expert, had changed his mind and is now quaking in his boots about what the future holds, especially when it comes to chatbots, which, he told Brooks, “render humanity a very small phenomenon compared to something else that is far more intelligent and will become incomprehensible to us, as incomprehensible to us as we are to cockroaches.”
Double gulp.
Hofstadter said chatbots are doing things he never imagined possible, such as exhibiting the ability to synthesize information and draw parallels and conclusions in much the way human brains do.
They seem to be developing a kind of consciousness, a sort of quasi-aliveness. And who the hell knows where that will eventually lead, he said.
I recently got a taste of chatbot weirdness myself. 
My husband and I belong to a nonfiction book club, and this month the tome we’re reading is “AI 2041: Ten Visions for Our Future,” by AI expert Kai-Fu Lee and science fiction writer Chen Qiufan. It combines speculative stories with analysis to explore the world of AI and what it might mean for our human future.
As a lark, one club member decided to ask the chatbot ChatGPT to write an essay about our group, providing only a smattering of details: that we’re all 65 or older; that our number includes a physicist, a doctor, a journalist, a teacher and a lawyer; and that we meet on Sundays.
In response, the chatbot glommed onto the physics theme (I suppose because a physicist was listed first) and gave our book club the name The Quantum Chronicles. (We may get T-shirts made.)
It decided we were reading “The Quantum Paradox,” a real book by a physicist that “explores the intersection of quantum mechanics and human consciousness, raising questions about the fabric of reality itself.”
The essay went on for several paragraphs to explain what each member both contributed to and got from the club, using banal, generic language apparently drawn from stereotypical traits linked to the nature of our respective professions.
But then something strange happened.
At one of our meetings, the clock on the wall “glitched” and we were all transported back to 1935, “the golden age of physics,” where we hobnobbed with the physicists “Einstein, Bohr, and Schrödinger” and learned more about the quantum world. But soon we came to our senses and realized our presence could cause a rip in the space-time continuum, so we hightailed it back to 2023.
So why did the chatbot “decide” to take a brief detour into science fiction? Who knows?
Another member did the same experiment, but asked ChatGPT to write in the style of Mark Twain. 
It followed the same format, using the traits and bents of mind common to our various professions to color our respective contributions and responses to the book under discussion (naturally, “Adventures of Huckleberry Finn”), but it also gave us all dandy, Twain-esque names.
What’s interesting is that both versions concluded with the observation that the book club caused us all to become close friends.
Which is exactly what’s happened. Perhaps the chatbot surveyed the gazillion stories out there about book clubs and concluded all members invariably become good friends.
But I know from personal experience that’s not always true.
Which brings us to perhaps the scariest element of AI: All those techie head honchos in charge of it don’t seem to completely know what’s going on.
They speak of the “black box” problem, in which the machines start teaching themselves things and making decisions through processes that leave the humans at the helm totally in the dark.
Meaning, these powerful new tools are being unleashed upon society without the necessary guardrails in place — and without the folks in charge even knowing what the guardrails should be.
Sometimes AI’s mistakes — or “hallucinations,” as they’re called — are humorous.
When “60 Minutes’” Pelley asked Bard to explain inflation, it responded instantly with a report that included a list of five books for suggested reading, none of which exist. Bard made them up. 
Recently, my husband asked both Bard and ChatGPT to write biographies about me.
Both got the basics right. They wrote that I’m a retired American journalist who worked for over 30 years as a reporter, columnist and feature writer. They stated accurately that I began my career at the now-defunct San Antonio Light in 1983, then moved to the Houston Chronicle in 1989, where I worked for eight years. 
Bard correctly stated that in 2001, I returned to San Antonio to work for the San Antonio Express-News. It said that over my career I focused on human interest stories that gave a “voice to the voiceless” and covered a wide range of topics, from crime and poverty to education and social services. 
ChatGPT noted I was also a columnist for the Express-News, “where [I] wrote about parenting, relationships, and other personal topics.” It said nice things about my craft.
But then Bard went off the rails, claiming I was the author of two books, “The Lost Daughters of Texas” and “The Girl in the Green Dress.” If I wrote these books, I must have done so in my sleep!
Bard had me winning the Pulitzer Prize for Feature Writing in 1998. (That’s news to me, and I think I would’ve remembered.) It honored me with an induction into the San Antonio Women’s Hall of Fame. (Where the hell’s my plaque?)
ChatGPT furthered the lies, bestowing upon me a bachelor’s degree in journalism from the University of Texas at Austin (nope) and career stints working at the now-defunct Dallas Times Herald and The Boston Globe (nope and nope).
Aside from its tendency to make mistakes (which seem baked into the way these systems generate plausible-sounding text by prediction, rather than by looking facts up), the true danger of AI is its potential impact on the real world.
I’m not talking just about the risk of student plagiarism, but about the fear AI will inexorably replace human workers, something that’s already happening at call centers and other places. (Its proponents argue that AI will create more jobs than it eliminates.)
This scenario isn’t limited to work that’s repetitive or assembly-line-related; it extends to careers that involve human ingenuity and creativity: the realm of so-called “knowledge workers” like writers, artists, architects, and, yes, even software engineers.
Already the possibility lurks that AI could one day replace journalists and even book authors. (At this point, I would like to state unequivocally that this column is 100% human-produced, with no help from a chatbot.) In response, 9,000 writers recently wrote an open letter chastising tech companies for how chatbots exploit copyright-protected work without consent, credit or compensation. 
AI is expected to one day replace human doctors (able to diagnose disease thousands of times faster) and medical researchers (able to create vaccines in the blink of an eye), and those are both potentially good things.
But what about the role humanity plays in healing — the therapeutic connection between a human doctor and a human patient? Can a robot do that? What about the link between a teacher and student? Can a chatbot on a laptop really inspire a young person to achieve?
Scariest of all is the way AI and chatbots can be used for malevolent purposes by humans who want to do harm.
I’m talking about deepfakes (videos in which a person’s face or voice is digitally altered) and the spread of false news, especially going into the next presidential election, when a big chunk of the American electorate is primed to believe any conspiracy theory or poppycock story spewed into the public domain.
A recent report showed that current AI safety measures aren’t up to the task when it comes to stopping the spread of hate speech, disinformation and other toxic material.
In an age of Colbertian “truthiness,” where people cling to their own preferred reality no matter what the real-world facts say, chatbots and their ilk are tantamount to loaded guns of the cyber variety.
The book my club is reading predicts two possible futures for AI (apart from a complete robot takeover, of course). In one, AI, in partnership with humanity, ushers in a utopian era of “plenitude,” where everything is free, no one has to work and people gain meaning by doing compassionate acts. In the other, humans twist the gains of AI toward their own selfish ends, furthering inequality.
Given humanity’s record thus far, which scenario do you find most plausible?

Melissa Fletcher Stoeltje has worked in Texas newspaper journalism for more than three decades, at the San Antonio Light, the Houston Chronicle and the San Antonio Express-News. She holds bachelor’s…

