
Scientists inspired the right guardrails for nuclear energy, the internet, and DNA research. Let them do the same for AI

In July 1957, 22 prominent scientists gathered quietly at a private lodge in Pugwash, a small town in Canada’s Nova Scotia province. They had answered a call to action by Albert Einstein, inviting scientists to shape guardrails that would contain the danger of nuclear weapons. The Pugwash Conference earned a Nobel Peace Prize, and more importantly, it laid the foundations for the nuclear non-proliferation treaties, which helped save the world from the risk of annihilation.

Today, governments and businesses are frantically searching for ways to limit the many feared perils of AI, especially those posed by Artificial General Intelligence (AGI), the next phase of AI’s evolution. AGI is expected to perform a wide range of cognitive tasks with an efficiency and accuracy far superior to current AI systems. This next stage of AI, often described by Silicon Valley enthusiasts as “God-like,” is expected to surpass human intelligence by a substantial margin. It is rumored that an internal report on the risks of AGI may be what ignited the recent board drama at OpenAI, the maker of ChatGPT. But while the race to build AGI is still in progress, we can be certain that whoever controls it will have enormous sway over society and the economy, potentially exerting influence on the lives of humans everywhere.

In the past year, numerous and uncoordinated efforts by governments and businesses to contain AI have sprung up across the world, in the U.S., China, the EU, and the U.K. Businesses have been “pleading” with governments to regulate their AI creations, while knowing full well that governments will never succeed in regulating effectively at the speed of AI’s evolution. The EU recently completed a multi-year effort to deliver the AI Act. However, the rapid shifts in generative AI capabilities mean that by the time it takes effect in 2025, the new AI Act may already be outdated.

Governments are not equipped to outgallop fast-moving technologies with effective rules and policies, especially in the early, hyperfast stages of development. Moreover, AI technologies have a transnational, borderless reach, which limits the ability of national and regional rule systems to govern them. As for businesses, they are locked in intense competition to dominate and profit from these technologies. In such a race, fueled by billions of dollars in investment, safety guardrails are inevitably a low priority for most businesses.

Ironically, governments and businesses are in fact the two stakeholders most in need of guardrails to prevent them from misusing AI in surveillance, warfare, and other endeavors to influence or control the public.

Who can be trusted with shaping AI guardrails?

A careful analysis of how prior technologies and scientific innovations were tamed in the 20th century offers a clear answer to this dilemma: guardrails were designed by the scientists who knew their own creations and understood (better than most) how they might evolve.

At Pugwash, influential scientists came together to develop strategies to mitigate the risks of nuclear weapons, significantly contributing to the formulation of arms control agreements and fostering international dialogue during the tense Cold War era.

In February 1975, at the Asilomar Conference in California, it was again scientists who met and established critical guidelines for the safe and ethical conduct of recombinant DNA research, thereby preventing potential biohazards. The Asilomar guidelines not only paved the way for responsible scientific inquiry but also informed regulatory policies worldwide. More recently, it was again the scientists and inventors of the internet, led by Vint Cerf, who convened and shaped the framework of guardrails and protocols that allowed the internet to thrive globally.

All these successful precedents are proof that we need businesses and governments to first make space and let AI scientists shape a framework of guardrails that contains the risks without limiting the many benefits of AI. Businesses can then implement such a framework voluntarily, and only when necessary should governments step in to enforce implementation by enacting policies and laws based on the scientists’ framework. This proven approach worked well for nuclear technology, DNA, and the internet. It should be the blueprint for building safer AI.

A “Pugwash Conference for AI scientists” is therefore urgently needed. The conference should include no more than two dozen scientists, in the mold of Geoffrey Hinton, who chose to quit Google in order to speak his mind on AI’s promise and perils.

As at Pugwash, the scientists should be chosen from all the key countries where advanced AI technologies are being developed, in order to at least strive for a global consensus. Most importantly, the selection of participants at this seminal AI conference must reassure the public that the conferees are shielded from special interests, geopolitical pressures, and profit-centric motives.

While hundreds of government leaders and business bosses will cozy up to discuss AI at multiple annual international events, thoughtful and independent AI scientists must urgently get together to make AI good for all.

Fadi Chehadé is chairman, cofounder, and managing partner of Ethos Capital. He founded several software companies and was a fellow at Harvard and Oxford. From 2012 to 2016 he led ICANN, the technical institution that sets the global rules and policies for the internet’s key resources.


The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

