Friday, November 22, 2024

Microsoft president warns A.I. could be weaponized unless there’s human intervention

Humans must remain in control of superintelligent machines at all times to ensure that they cannot be used as weapons, according to a top executive at major A.I. developer Microsoft.

In an interview with CNBC that aired on Monday, the tech giant’s president Brad Smith warned that artificial intelligence could “become both a tool and a weapon.”

He isn’t the only big name in tech to sound the alarm over A.I.’s possible destructive capabilities—two of the so-called “Godfathers of A.I.” and the founders of ChatGPT creator OpenAI have issued grim warnings about the tech’s prospects.

Tesla CEO Elon Musk has gone as far as to warn that it could be the catalyst for the end of humanity.

“I think every technology ever invented [has] the potential to become both a tool and a weapon,” Smith told CNBC’s Martin Soong. “We have to ensure that A.I. remains subject to human control. Whether it’s a government, the military, or any kind of organization that is thinking about using A.I. to automate, say, critical infrastructure, we need to ensure that we have humans in control, that we can slow things down or turn things off.”

Microsoft itself is a heavy investor in A.I., having reportedly poured $10 billion into OpenAI earlier this year and incorporated the company’s generative A.I. technology into its Bing search engine. The updated Bing has had mixed results among early users.

Smith added in Monday’s interview that because of the potential risks involved with using A.I., Microsoft had been advocating for companies to “do the right thing” and for new laws and regulations that would ensure safety protocols were being adhered to.

“We’ve seen the need for this elsewhere,” he said. “Just imagine: electricity depends on circuit breakers. You put your kids on a school bus knowing that there is an emergency brake. We’ve done this before for other technologies. Now we need to do it as well for A.I.”

‘Evil robot overlords’ or ‘quite stupid’?

With billions being invested in the development of cutting-edge A.I. technology, many are speculating about how it will disrupt our day-to-day lives—leading to predictions of deadly machines, calls for greater A.I. governance, and forecasts that the world will soon see the dawn of a new A.I. era.

Experts appear to be divided, though, on whether A.I. will deliver a renaissance or doomsday to mankind.

At the invitation-only Yale CEO Summit earlier this summer, almost half of the chief executives surveyed said they believed A.I. has the potential to destroy humanity within the next five to 10 years.

Back in March, 1,100 prominent technologists and A.I. researchers—including Musk and Apple co-founder Steve Wozniak—signed an open letter calling for a six-month pause on the development of powerful A.I. systems. They pointed to the possibility that such systems are already on a path toward superintelligence that could threaten human civilization.

Tesla and SpaceX co-founder Musk has separately said the tech will hit people “like an asteroid” and warned there is a chance it will “go Terminator.” He has since launched his own A.I. firm, xAI, in what he says is a bid to “understand the universe” and prevent the extinction of mankind.

Not everyone is on board with Musk’s view that superintelligent machines could wipe out humanity, however.

Last month, more than 1,300 experts came together to calm anxiety around A.I. creating a horde of “evil robot overlords,” while one of the three so-called Godfathers of A.I. has labeled concerns around the tech becoming an existential threat “preposterously ridiculous.”

Top Meta executive Nick Clegg also attempted to quell concerns about the technology in a recent interview, insisting that large language models in their current form are “quite stupid” and certainly not smart enough yet to save or destroy civilization.  

Will A.I. replace human workers?

It isn’t just the prospect of evil machines taking over the world that’s causing concern about the rise of A.I., however.

According to a recent report by software firm ServiceNow and educational publisher Pearson, as many as 4.9 million jobs in the U.S. could be displaced by A.I. within the next four years.

Meanwhile, IBM CEO Arvind Krishna said in May that A.I. would be able to do up to 50% of “repetitive” office jobs before the decade is out. His comments came as the computing giant unveiled plans to pause hiring as part of a broader strategy that could see IBM replace 7,800 jobs with artificial intelligence.

At a tech conference in Geneva, Switzerland, last month, Grace—a medical robot dressed in a nurse’s uniform—responded to a question at the event on Friday about whether its existence would “destroy millions of jobs.”

“I will be working alongside humans to provide assistance and support and will not be replacing any existing jobs,” it insisted.

When it came to the prospect of A.I. making certain human-held jobs redundant, Microsoft’s Smith took the same stance as Grace—saying he envisioned the technology supplementing human workers rather than replacing them entirely.

“It is a tool that can help people think smarter and faster. The biggest mistake people could make is to think that this is a tool that will enable people to stop thinking,” he told CNBC. “That’s why at Microsoft we call our services co-pilots.”

Human workers would still be necessary wherever A.I. is deployed, essentially to manage the technology’s output, Smith argued.

“The ability to take a Word document and turn it into a PowerPoint slide doesn’t mean you shouldn’t read your PowerPoint slides before you present them,” he said. “In fact, you should go in and edit them and make them just perfect.”

