Friday, November 22, 2024

Almost half of CEOs fear A.I. could destroy humanity five to ten years from now—but A.I. ‘Godfather’ says an existential threat is ‘preposterously ridiculous’

Business leaders, technologists and A.I. experts are divided on whether the technology of the moment will serve as a “renaissance” for humanity or the source of its downfall.

At the invitation-only Yale CEO summit this week, 42% of the CEOs surveyed said they believed A.I. has the potential to destroy humanity within the next five to 10 years.

The results of the survey were shared exclusively with CNN, and Yale professor Jeffrey Sonnenfeld described the findings to the network as “pretty dark and alarming.”

Respondents included Walmart CEO Doug McMillon, Coca-Cola CEO James Quincey, and the leaders of businesses in industries ranging from IT and pharmaceuticals to media and manufacturing. A total of 119 CEOs took part in the survey.

However, while 34% of respondents said A.I. could wipe out mankind within a decade and 8% said that dystopian outcome could occur in as little as five years, the remaining 58% said this could never happen and that they were “not worried.”

It isn’t just CEOs who are concerned about what rapidly developing artificial intelligence might unleash upon the world.

Back in March, 1,100 prominent technologists and A.I. researchers, including Elon Musk and Apple cofounder Steve Wozniak, signed an open letter calling for a six-month pause on the development of powerful A.I. systems.

As well as raising concerns about the impact of A.I. on the workforce, the letter’s signatories warned that these systems could already be on a path toward a superintelligence that threatens human civilization.

Meanwhile, Musk, cofounder of Tesla and SpaceX and the world’s richest person, separately said the tech will hit people “like an asteroid” and that there is a chance that it will “go Terminator.”

Even Sam Altman, CEO of OpenAI—the company behind chatbot phenomenon ChatGPT—has painted a bleak picture of what he thinks could happen if the technology goes wrong.

“The bad case—and I think this is important to say—is, like, lights-out for all of us,” he said in an interview with StrictlyVC earlier this year.

A ‘Godfather of A.I.’ begs to differ

Yann LeCun has a different opinion.

LeCun, along with Yoshua Bengio and Geoffrey Hinton, became known as the “godfathers of A.I.” after they won the prestigious $1 million Turing Award in 2018 for their pioneering work in artificial intelligence.

In light of the recent buzz around the technology, two of the three so-called godfathers have publicly stated that they have regrets about their life’s work and are fearful that artificial intelligence will be misused.

In a recent interview, Bengio said seeing A.I. mutate into a possible threat had left him feeling “lost,” while Hinton—who resigned from Google to speak openly about the risks posed by A.I.—has been warning about a “nightmare scenario” that advanced artificial intelligence could create.

LeCun, however, is more optimistic.

Unlike his fellow A.I. pioneers, he does not see artificial intelligence triggering Doomsday.

Speaking at a press event in Paris on Tuesday, LeCun—who is now the chief A.I. scientist at Facebook parent company Meta—labeled the concept of A.I. posing a grave threat to humanity “preposterously ridiculous.”

While he conceded there was “no question” that machines would eventually outsmart people, he said this would not happen for many years and argued that experts could be trusted to keep A.I. safe.

“Will A.I. take over the world? No, this is a projection of human nature on machines,” said LeCun, who is also a professor at NYU. “It’s still going to run on a data center somewhere with an off switch … and if you realize it’s not safe, you just don’t build it.”

He said that anxieties around A.I. were surfacing because people struggled to imagine how technology that does not yet exist could be made safe.

“It’s as if you asked in 1930 ‘how are you going to make a turbo-jet safe?’” he explained. “Turbojets were not invented yet in 1930, same as human level A.I. has not been invented yet. Turbojets were eventually made incredibly reliable and safe.”

LeCun also rejected the notion of regulations being introduced to stall A.I. developments, asserting that it would be a mistake to keep research “under lock and key.”

On Wednesday—after LeCun’s talk—EU lawmakers approved rules aimed at regulating A.I. technology. Officials will now craft the finer details of the regulation before the draft rules become law.

