AI's existential threat is a 'completely bonkers distraction' because there are 'like 101 more practical issues' to talk about, top founder in the field says
Elon Musk has repeatedly referred to AI as a "civilizational risk." Geoffrey Hinton, one of the founding fathers of AI research, recently changed his tune, calling AI an "existential threat." And then there's Mustafa Suleyman, cofounder of DeepMind, a firm formerly backed by Musk that has been on the scene for over a decade, and coauthor of the newly released "The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma." One of the most prominent and longest-tenured experts in the field, he thinks such far-reaching concerns aren't as pressing as others make them out to be, and that the challenge from here on out is in fact pretty straightforward.
The risks posed by AI have been front and center in public debate throughout 2023, ever since the technology vaulted into the public consciousness and became a subject of fascination in the press. "I just think that the existential-risk stuff has been a completely bonkers distraction," Suleyman told MIT Technology Review last week. "There's like 101 more practical issues that we should all be talking about, from privacy to bias to facial recognition to online moderation."
The most pressing issue of all, he says, is regulation. Suleyman is bullish on governments across the world being able to effectively regulate AI. "I think everybody is having a complete panic that we're not going to be able to regulate this," Suleyman said. "It's just nonsense. We're totally going to be able to regulate it. We'll apply the same frameworks that have been successful previously."
His conviction is in part born of the successful regulation of past technologies that were once considered cutting edge, such as aviation and the internet. He argues that without proper safety protocols for commercial flights, passengers would never have trusted airlines, which would have hurt business. On the internet, consumers can visit a myriad of sites, but activities like selling drugs or promoting terrorism are banned, although not eliminated entirely.
On the other hand, as the Review's Will Douglas Heaven noted to Suleyman, some observers argue that current internet regulations are flawed and don't sufficiently hold big tech companies accountable. They point in particular to Section 230 of the Communications Decency Act, one of the cornerstones of current internet law, which offers platforms safe harbor for content posted by third-party users. It's the foundation on which some of the biggest social media companies are built, shielding them from liability for what gets shared on their websites. In February, the Supreme Court heard two cases that could alter the legal landscape of the internet.
To bring AI regulation to fruition, Suleyman wants a combination of broad international regulation that creates new oversight institutions and smaller, more granular policies at the "micro level." A first step that all aspiring AI regulators and developers can take is to limit "recursive self-improvement," or an AI's ability to improve itself. Constraining that specific capability would help ensure that no future advances in the technology happen entirely without human oversight.
“You wouldn’t want to let your little AI go off and update its own code without you having oversight,” Suleyman said. “Maybe that should even be a licensed activity—you know, just like for handling anthrax or nuclear materials.”
Without governing some of the minutiae of AI, including at times the "actual code" used, legislators will have a hard time ensuring their laws are enforceable. "It's about setting boundaries, limits that an AI can't cross," Suleyman says.
To make sure that happens, governments should have "direct access" to AI developers, so they can verify that whatever boundaries are eventually established aren't crossed. Some of those boundaries should be clearly marked, such as prohibiting chatbots from answering certain questions or requiring privacy protections for personal data.
Governments worldwide are working on AI regulations
During a speech at the UN on Tuesday, President Joe Biden struck a similar note, calling for world leaders to work together to mitigate AI's "enormous peril" while making sure it is still used "for good."
And domestically, Senate Majority Leader Chuck Schumer (D-N.Y.) has urged lawmakers to move swiftly in regulating AI, given the rapid pace of the technology's development. Last week, Schumer invited executives from the biggest tech companies, including Tesla CEO Elon Musk, Microsoft CEO Satya Nadella, and Alphabet CEO Sundar Pichai, to Washington for a meeting to discuss prospective AI regulation. Some lawmakers were skeptical of the decision to invite Silicon Valley executives to discuss the very policies that would seek to regulate their companies.
One of the earliest governments to move on AI regulation was the European Union, which in June passed draft legislation requiring developers to share what data is used to train their models and severely restricting the use of facial recognition software, something Suleyman also said should be limited. A Time report found that OpenAI, which makes ChatGPT, lobbied EU officials to weaken some portions of the proposed legislation.
China has also been one of the earliest movers on sweeping AI legislation. In July, the Cyberspace Administration of China released interim measures for governing AI, including explicit requirements to adhere to existing copyright law and provisions establishing which types of development will need government approval.
Suleyman, for his part, is convinced governments have a critical role to play in the future of AI regulation. "I love the nation-state," he said. "I believe in the power of regulation. And what I'm calling for is action on the part of the nation-state to sort its shit out. Given what's at stake, now is the time to get moving."