
Consumer group calls on EU to urgently investigate ‘the risks of generative AI’

European regulators are at a crossroads over how AI will be regulated — and ultimately used commercially and non-commercially — in the region, and today the EU’s largest consumer group, the BEUC, weighed in with its own position: stop dragging your feet, and “launch urgent investigations into the risks of generative AI” now, it said.

“Generative AI such as ChatGPT has opened up all kinds of possibilities for consumers, but there are serious concerns about how these systems might deceive, manipulate and harm people. They can also be used to spread disinformation, perpetuate existing biases which amplify discrimination, or be used for fraud,” said Ursula Pachl, Deputy Director General of BEUC, in a statement. “We call on safety, data and consumer protection authorities to start investigations now and not wait idly for all kinds of consumer harm to have happened before they take action. These laws apply to all products and services, be they AI-powered or not, and authorities must enforce them.”

The BEUC, which represents consumer organizations in 13 countries in the EU, issued the call to coincide with a report published today by one of its members, the Norwegian Consumer Council (Forbrukerrådet).

That Norwegian report is unequivocal in its position: AI poses consumer harms (the title of the report says it all: “Ghost in the Machine: addressing the consumer harms of generative AI”) and raises numerous problematic issues.

While some technologists have been ringing alarm bells around AI as an instrument of human extinction, the debate in Europe has focused more squarely on the impacts of AI in areas like equitable service access, disinformation, and competition.

The report highlights, for example, how “certain AI developers including Big Tech companies” have closed off systems from external scrutiny, making it difficult to see how data is collected or how algorithms work; the fact that some systems produce incorrect information as blithely as they do correct results, with users often none the wiser about which is which; AI that is built to mislead or manipulate users; bias stemming from the information fed into a particular AI model; and security, specifically how AI could be weaponized to scam people or breach systems.

Although the release of OpenAI’s ChatGPT has definitely placed AI, and the potential of its reach, into the public consciousness, the EU’s focus on the impact of AI is not new. It started debating issues of “risk” back in 2020, although those initial efforts were cast as groundwork to increase “trust” in the technology.

By 2021, it was speaking more specifically of “high risk” AI applications, and some 300 organizations banded together to advocate for banning some forms of AI entirely.

Sentiments have become more pointedly critical over time as the EU works through its region-wide laws. In the last week, the EU’s competition chief, Margrethe Vestager, spoke specifically of how AI poses risks of bias when applied in critical areas like financial services, such as mortgages and other loan applications.

Her comments came just after the EU approved its official AI Law, which provisionally divides AI applications into categories like unacceptable, high and limited risk, using a wide range of parameters to determine which category an application falls into.

The AI Law, when implemented, will be the world’s first attempt to codify some kind of understanding and legal enforcement around how AI is used commercially and non-commercially.

The next step in the process is for the EU to engage with individual member states to hammer out the law’s final form, specifically to identify what (and who) would fit into its categories, and what will not. The open question is how readily the different countries will agree with one another. The EU has said it wants to finalize the process by the end of this year.

“It is crucial that the EU makes this law as watertight as possible to protect consumers,” said Pachl in her statement. “All AI systems, including generative AI, need public scrutiny, and public authorities must reassert control over them. Lawmakers must require that the output from any generative AI system is safe, fair and transparent for consumers.”

The BEUC is known for weighing in at critical moments, and for making influential calls that reflect the direction regulators ultimately take. It was an early voice, for example, against Google in the long-running antitrust investigations into the search and mobile giant, raising concerns years before actions were taken against the company. That example, though, underscores something else: the debate over AI and its impacts, and the role regulation might play in it, will likely be a long one.

