Over 150 top execs fear Europe will create a ‘critical productivity gap’ with the U.S. if the EU overregulates A.I.

European business leaders are worried that the EU will overregulate A.I. and leave Europe trailing the U.S. in future productivity. A group of over 150 executives including the CEOs of Renault and Siemens, the executive director of Heineken, and the chief A.I. scientist at Meta, signed an open letter to the European Parliament on Friday, requesting that it pull back from its proposed restrictions.

“Such regulation could lead to highly innovative companies moving their activities abroad, investors withdrawing their capital from the development of European foundation models and European A.I. in general,” the letter read. “The result would be a critical productivity gap between the two sides of the Atlantic.”

Heineken, Renault, and Meta did not respond to Fortune’s request for comment.

The letter takes aim at a proposed law that’s been in the works for two years and was green-lit on June 14. The final version of the law may be passed later this year. Called the A.I. Act, it would be the strictest regulation of generative A.I. in the world, dividing the use of A.I. into various risk categories. For uses of the technology labeled “high risk,” companies would have to pass multiple rounds of tests for their activities to be approved, akin to pharmaceutical companies conducting clinical trials for new drugs. High-risk uses of A.I. would include applications in the energy sector or the legal system, as well as applications with the potential to harm or disadvantage people, such as screening job applications or distributing government benefits.

“The first problem is that it’s the first of its kind worldwide,” said Georg Ringe, cofounder of the Hamburg Network for A.I. and Law. “We’re not as fast as U.S. firms on the technology side, but we are already the first ones moving ahead with regulation, and that makes it even more difficult to catch up.”

Some uses of A.I. would be completely banned under the proposed framework, including facial recognition A.I. and the scraping of biometric data (unique data that can identify individuals) from social media. The A.I. Act also emphasizes transparency and would require companies to disclose much more data on their technology than they currently do, including the copyrighted material used to train their systems. The executives who signed the letter think these regulations could curtail progress and fumble a huge opportunity for Europe to get ahead in the A.I. productivity race.

“We are convinced that our future significantly depends on Europe becoming part of the technological avant-garde, especially in such an important field as [generative] artificial intelligence,” the letter read. “For this reason, we appeal to the European decision-makers to revise the latest version of the A.I. Act and agree on a proportionate, forward-looking legislation which will contribute to European competitiveness while protecting our society.”

The addition of bureaucracy, safeguards, and transparency requirements may drive A.I. business away from Europe, but Ringe says the big players likely don’t have much to worry about. It will be startups that bear the brunt of the restrictions.

“The Googles of this world, they have an armada of lawyers,” said Ringe, who is also a corporate law professor at the University of Hamburg. “For them, it’s not going to be a super big problem, but the problem will be for startups and small new firms. They may think twice about either doing anything, or doing it in Europe.”

The EU’s regulations are the furthest along in the world, with policymakers in the U.S. and China still in the drafting phase. In Washington, senators are being briefed on A.I. this summer and will consider legislation in the coming months. Beijing released a set of draft rules for A.I. in April, which have not yet been approved and would require strict adherence to the Chinese Communist Party’s censorship rules.

Being the first to pass A.I. legislation could prove an advantage, according to Ringe. In the past, the EU has been a global pioneer in technology regulation, namely with the General Data Protection Regulation implemented in 2018. The law gave Europeans increased rights and control over their data and inspired similar models in other countries, eventually becoming something of a global standard.

“To be the first mover globally can also be an advantage sometimes, because you set the pace and hope that other parts of the world will follow suit,” Ringe said.

The letter to the EU compared A.I. to the invention of the internet and silicon chips, saying that the nations that develop the most powerful large language models (the category of A.I. model behind chatbots such as GPT-4 and Bard) will have a competitive advantage on the world stage.

“Under the version recently adopted by the European Parliament, foundation models, regardless of their use cases, would be heavily regulated, and companies developing and implementing such systems would face disproportionate compliance costs and disproportionate liability risks,” the letter continued.

The letter proposed that the EU stick to developing “broad principles” with an expert regulatory body, rather than a strict set of restrictions. It also said the regulatory body should stay agile and constantly adapt to the pace of A.I. advancement and emerging risks, and stay “in dialogue with the economy,” underscoring the execs’ anxiety over being sidelined in the business world.

In the future, A.I. will replace search engines and will provide everyone with a personal assistant, changing not just the economy but also culture, according to the letter.

“In our assessment, the draft legislation would jeopardize Europe’s competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing,” the letter read. 

Ringe is not very optimistic that the letter will lead to any changes in the A.I. Act, saying that policymakers in Brussels are determined to push it through. In his opinion, the law leans too heavily toward regulation without paying enough attention to cultivating Europe’s burgeoning A.I. industry.

“We need to strike a balance between facilitation and creating a level playing field, but also promoting and encouraging innovation in this field,” the professor said. “My personal view is that the A.I. Act is rather geared towards addressing potential problems and perceiving risks everywhere, and erring on the side of caution.”
