Friday, November 22, 2024
Technology

Don’t rush generative AI apps to market without tackling privacy risks, says UK watchdog

The UK’s data protection watchdog has fired its most explicit warning shot yet at generative AI developers — saying it expects them to address privacy risks before bringing their products to market.

In a blog post trailing remarks that the Information Commissioner’s Office’s (ICO) executive director of regulatory risk, Stephen Almond, will make at a conference later today, the watchdog warned against developers rushing to adopt the powerful AI technology without proper due diligence on privacy and data protection risks.

“We will be checking whether businesses have tackled privacy risks before introducing generative AI — and taking action where there is risk of harm to people through poor use of their data. There can be no excuse for ignoring risks to people’s rights and freedoms before rollout,” Almond is slated to warn.

He will also instruct businesses operating in the UK market that they will need to show the ICO “how they’ve addressed the risks that occur in their context — even if the underlying technology is the same”.

This means the ICO will consider the context in which generative AI technology is applied, with likely greater compliance expectations for, say, health apps built on a generative AI API than for retail-focused apps. (tl;dr: this sort of due diligence ain’t rocket science, but don’t expect to hide behind a ‘we’re just using OpenAI’s API so we didn’t think we needed to consider privacy when we added an AI chatbot to enhance our sexual health clinic finder app’ type of line…)
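
To make that concrete, here’s a minimal sketch of the kind of context-aware mitigation a developer might bolt on before prompts leave a sensitive app for a third-party model. This is our illustration, not anything the ICO prescribes: the regex patterns, the redact() helper and the model name are all assumptions.

```python
# A minimal sketch, *not* ICO guidance: strip obvious personal
# identifiers before a prompt leaves the app for a third-party model.
# The regex patterns, placeholder model name and redact() helper are
# illustrative assumptions, not a production PII solution.
import re

from openai import OpenAI

REDACTIONS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_phone": re.compile(r"(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
    "nhs_number": re.compile(r"\b\d{3}\s?\d{3}\s?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with typed placeholders."""
    for label, pattern in REDACTIONS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_model(user_message: str) -> str:
    """Send a redacted prompt to a hosted model and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": redact(user_message)}],
    )
    return response.choices[0].message.content
```

Redaction alone wouldn’t satisfy the regulator, of course; it’s one mitigation alongside the lawful-basis, transparency and impact-assessment work flagged further down.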

“Businesses are right to see the opportunity that generative AI offers, whether to create better services for customers or to cut the costs of their services. But they must not be blind to the privacy risks,” Almond will also say, urging developers to: “Spend time at the outset to understand how AI is using personal information, mitigate any risks you become aware of, and then roll out your AI approach with confidence that it won’t upset customers or regulators.”

For a sense of what this type of risk could cost if improperly managed: the UK’s data protection legislation bakes in fines for infringements of up to £17.5 million or 4% of total annual worldwide turnover in the preceding financial year, whichever is higher.
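
For concreteness, the “whichever is higher” rule reduces to a one-liner; a quick sketch, with the turnover figure invented purely for illustration:

```python
# Upper tier of UK GDPR fines: the higher of a fixed cap or 4% of
# total annual worldwide turnover. The example turnover is made up.
FIXED_CAP_GBP = 17_500_000
TURNOVER_RATE = 0.04

def max_fine_gbp(annual_worldwide_turnover_gbp: float) -> float:
    return max(FIXED_CAP_GBP, TURNOVER_RATE * annual_worldwide_turnover_gbp)

print(f"£{max_fine_gbp(2_000_000_000):,.0f}")  # £2bn turnover -> £80,000,000
```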

A patchwork of AI rules

As an established regulatory body, the ICO has been tasked with developing privacy and data protection guidance for the use of AI, under the approach the government set out in its recent AI white paper.

The government has said it favors regulating AI through a set of “flexible” principles and context-specific guidance produced by sector-focused and cross-cutting watchdogs, such as the competition authority, the financial conduct authority, Ofcom and indeed the ICO, rather than introducing a dedicated legislative framework like the one on the table across the English Channel in the European Union.

This means a patchwork of expectations will emerge as UK watchdogs develop and flesh out guidance in the coming weeks and months. (The UK’s Competition and Markets Authority announced a review of generative AI last month, while earlier this month Ofcom offered some thoughts on what generative AI might mean for the comms sector, including detail on how it’s monitoring developments with a view to assessing potential harms.)

Shortly after the UK white paper was published the ICO’s Almond published a list of eight questions he said generative AI developers (and users) should be asking themselves — including core issues like what their legal basis for processing personal data is; how they will meet transparency obligations; and whether they’ve prepared a data protection impact assessment.
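
One way a team might operationalize those questions is as a hard gate in the release process; a toy sketch, where the checklist items paraphrase Almond’s list and the fail-closed gating logic is our own assumption:

```python
# Toy pre-launch gate: refuse to ship while any checklist item is
# unresolved. Items paraphrase the ICO questions; logic is illustrative.
from dataclasses import dataclass

@dataclass
class PreLaunchChecklist:
    lawful_basis_identified: bool = False        # legal basis for processing
    transparency_notice_published: bool = False  # transparency obligations
    dpia_completed: bool = False                 # data protection impact assessment

    def outstanding(self) -> list[str]:
        return [name for name, done in vars(self).items() if not done]

checklist = PreLaunchChecklist(lawful_basis_identified=True)
if checklist.outstanding():
    raise SystemExit(f"Do not ship: unresolved items {checklist.outstanding()}")
```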

Today’s warning is more explicit. The ICO is plainly stating that it expects businesses not just to take note of its recommendations but to act on them. Any that ignore the guidance and rush apps to market will be generating more regulatory risk for themselves, including the potential for substantial fines.

It also builds on a tech-specific warning the watchdog issued last fall, when it singled out so-called “emotion analysis” AIs as too risky for anything other than purely trivial use-cases (such as kids’ party games), warning that this type of “immature” biometrics technology carries greater risks of discrimination than potential opportunities.

“We are yet to see any emotion AI technology develop in a way that satisfies data protection requirements, and have more general questions about proportionality, fairness and transparency in this area,” the ICO wrote then.

While the UK government has signaled it doesn’t believe dedicated legislation or an exclusively AI-focused oversight body is needed to regulate the technology, it has, more recently, been talking up the need for AI developers to center safety. And earlier this month prime minister Rishi Sunak announced a plan to host a global summit on AI safety this fall, seemingly to focus on fostering research efforts. The idea quickly won buy-in from a number of AI giants.
