Why picking citizens at random could be the best way to govern the A.I. revolution

Testifying before Congress last month about the risks of artificial intelligence, Sam Altman, the CEO of OpenAI, the company behind the massively popular large language model-powered chatbot ChatGPT, and Gary Marcus, a psychology professor at NYU known for his criticism of A.I. utopianism, agreed on one point: Both called for the creation of a government agency comparable to the FDA to regulate A.I. Marcus also suggested that scientific experts be given early access to new A.I. prototypes so they can test them before they are released to the public.

Strikingly, however, neither of them mentioned the public: the billions of ordinary citizens around the world whom the A.I. revolution, in all its uncertainty, is sure to affect. Don’t they also deserve to be included in decisions about the future of this technology?

We believe a global, democratic approach–not an exclusively technocratic one–is the only adequate answer to what is a global political and ethical challenge. Sam Altman himself stated in an earlier interview that in his “dream scenario,” a global deliberation involving all humans would be used to figure out how to govern A.I.

There are already proofs of concept for the various elements that a global, large-scale deliberative process would require in practice. By drawing on these diverse and complementary examples, we can turn this dream into a reality.

Deliberations based on random selection have grown in popularity at the local and national levels, with close to 600 cases documented by the OECD over the last 20 years. Their appeal lies in capturing a unique array of voices and lived experiences, thereby generating policy recommendations that better track the preferences of the larger population and are more likely to be accepted. Famous examples include the 2012 and 2016 Irish citizens’ assemblies on marriage equality and abortion, which led to successful referendums and constitutional change, as well as the 2019 and 2022 French citizens’ conventions on climate justice and end-of-life issues.

Taiwan has successfully experimented with mass consultations through digital platforms like Pol.is, which employs machine learning to identify points of consensus among vast numbers of participants. This digitally enabled participation has helped aggregate public opinion on hundreds of polarizing issues in Taiwan–such as how to regulate Uber–and has involved half of its 23.5 million people. Digital participation can also augment smaller-scale forms of citizen deliberation, such as those taking place in person or based on random selection.

Deliberations among randomly selected participants have been tried at the global level, too. On the sidelines of the 26th U.N. Climate Change Conference of the Parties (COP26) in 2021, a pilot for a global climate assembly was convened online. Meta, the parent company of Facebook, also recently ran so-called community forums among 6,000 randomly selected users, divided into smaller groups, to discuss cyberbullying regulations in the metaverse.

Finally, there is a rich worldwide tradition of participatory policymaking, from Swiss Landsgemeinde and New England town meetings to Indian Gram Sabhas, East African Barazas, and Brazilian participatory budgeting processes–all giving voice to those willing to participate.

It’s time for these elements to be combined and deployed on the topic of A.I. governance at a scale commensurate with the issue at hand. A good precedent is the European Commission’s 2021-22 Conference on the Future of Europe, which combined an innovative multilingual digital platform, four panels of 200 randomly selected European citizens each, and several national citizen panels, allowing 5 million website visitors and 700,000 event participants to debate Europe’s challenges and priorities.

When it comes to A.I. regulation, one could imagine a global assembly running alongside other localized assemblies. The global body would build a consensus on the boundaries of A.I. and delineate a universal bill of rights that should inform its continued development. The localized assemblies would then craft specific policies for particular political and regulatory contexts.

These assemblies could be augmented with large-scale crowdsourcing technologies that tap into the collective intelligence of the broader population and track public opinion. They should use A.I., too, to help citizens better understand the promise and challenges of the technology while increasing the efficiency, transparency, and quality of deliberations. In fact, a new program just launched by OpenAI will award “grants to fund experiments in setting up a democratic process for deciding what rules A.I. systems should follow, within the bounds defined by the law.” This is a welcome invitation for creative thinking on inclusive and representative processes to determine the future development of this technology.

There are important questions left to resolve, such as who will design, govern, and organize this endeavor. We envisage the authority to commission and initiate the process coming from the U.N., which is the most likely source of legitimacy in the global arena. But the design, governance, and organization of these deliberations should be left to distinct groups of stakeholders, including leaders in A.I. and participatory democracy.

Crucially, we also suggest bringing into the governance body former members of citizens’ assemblies and similar processes in a way that balances Global North-South representation. This foundation would pave the way for the mutual accountability and civic leadership indispensable to the success and legitimacy of such a global initiative.

Establishing a new model for global governance is not only key to reining in A.I.–it will also set an important precedent for how to manage other 21st-century issues such as climate justice.

Creating A.I.-specific government agencies may be part of the solution. But we should also start planning for more ambitious, global, and democratic solutions. By drawing on the collaboration, creativity, innovation, resilience, and vision of the global citizenry, we can chart a sustainable and deliberative course for A.I. governance and free the future–together.

Hélène Landemore is a professor of political science at Yale University, a fellow at the Institute for Ethics in AI at the University of Oxford, an advisor to the Democratic Inputs to AI program at OpenAI, and an advisor to the non-profit organization DemocracyNext. She served on the Governance Committee of the most recent French Citizens’ Convention and is currently undertaking work supported by Schmidt Futures, a philanthropic initiative founded by Eric and Wendy Schmidt, through the AI2050 program.

Andrew Sorota is a researcher at Schmidt Futures working on projects at the intersection of democracy and technology.

Audrey Tang is Taiwan’s Minister of Digital Affairs and chairs the National Institute of Cyber Security.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
