
Zoom knots itself a legal tangle over use of customer data for training AI models

Three years ago Zoom settled with the FTC over allegations of deceptive marketing around security, having been accused of overstating the strength of the encryption it offered. Now the videoconferencing platform could be headed for a similar tangle in Europe in relation to its privacy small print.

The sequence of the recent terms & conditions controversy goes like this: A clause added to Zoom’s legalese back in March 2023 grabbed attention on Monday, after a post on Hacker News claimed it allowed the company to use customer data to train AI models “with no opt out”. Cue outrage on social media.

On closer inspection, though, some pundits suggested the “no opt out” applied only to “service generated data” (telemetry data, product usage data, diagnostics data, etc.) rather than to everything Zoom’s customers are doing and saying on the platform.

Still, people were mad. Meetings are, after all, painful enough already without the prospect of some of your “inputs” being repurposed to feed AI models that might even — in our fast-accelerating AI-generated future — end up making your job redundant.

The relevant clauses from Zoom’s T&Cs are 10.2 through 10.4 (screengrabbed below). Note the bolded last line emphasizing the consent claim related to processing “audio, video or chat customer content” for AI model training — which comes after a wall of text where users entering into the contractual agreement with Zoom commit to grant it expansive rights for all other types of usage data (and other, non-AI training purposes too):

Zoom T&Cs clauses for using data for AI model training

Screengrab: Natasha Lomas/TechCrunch

Setting aside the obvious reputational risks sparked by righteous customer anger, certain privacy-related legal requirements apply to Zoom in the European Union where regional data protection laws are in force. So there are legal risks at play for Zoom, too.

The relevant laws here are the General Data Protection Regulation (GDPR), which applies when personal data is processed and gives people a suite of rights over what’s done with their information; and the ePrivacy Directive, an older piece of pan-EU legislation which deals with privacy in electronic comms.

Previously, ePrivacy focused on traditional telecoms services, but the law was updated at the end of 2020, via the European Electronic Communications Code, to extend confidentiality duties to so-called over-the-top services such as Zoom. So Article 5 of the Directive — which prohibits “listening, tapping, storage or other kinds of interception or surveillance of communications and the related traffic data by persons other than users, without the consent of the users concerned” — looks highly relevant here.

Consent claim

Rewinding a little, Zoom responded to the ballooning controversy over its T&Cs by pushing out an update — including the bolded consent note in the screengrab above — which it also claimed, in an accompanying blog post, “confirm[s] that we will not use audio, video, or chat customer content to train our artificial intelligence models without your consent”.

Its blog post is written in the usual meandering corpspeak — peppered with claims of commitment to transparency but without Zoom clearly addressing customer concerns about its data use. Instead its crisis PR response wafts in enough self-serving side-chatter and product jargon to haze the view. The upshot is a post obtuse enough to leave a general reader still scratching their head over what’s actually going on. Which is called ‘shooting yourself in the foot’ when you’re facing a backlash triggered by apparently contradictory statements in your communications. It can also imply a company has something to hide.

Zoom wasn’t any clearer when TechCrunch contacted it with questions about its data-for-AI processing in an EU law context: It failed to provide straight answers to queries about the legal basis it’s relying on for processing to train AI models on regional users’ data — or even, initially, to confirm whether access to the generative AI features it offers, such as an automated meeting summary tool, is dependent on the user consenting to their data being used as AI training fodder.

At this point its spokesperson just reiterated its line that: “Per the updated blog and clarified in the ToS — We’ve further updated the terms of service (in section 10.4) to clarify/confirm that we will not use audio, video, or chat Customer Content to train our artificial intelligence models without customer consent.” [emphasis its]

Zoom’s blog post, which is attributed to chief product officer Smita Hashim, goes on to discuss some examples of how it apparently gathers “consent”: Depicting a series of menus it may show to account owners or administrators; and a pop-up it says is displayed to meeting participants when the aforementioned (AI-powered) Meeting Summary feature is enabled by an admin.

In the case of the first group (admins/account holders) Hashim’s post literally states that they “provide consent”. This wording, coupled with what’s written in the next section — vis-a-vis meeting participants receiving “notice” of what the admins have enabled/agreed to — implies Zoom is treating the process of obtaining consent as something that can be delegated to an admin on behalf of a group of people. Hence the rest of the group (i.e. meeting participants) just getting “notice” of the admin’s decision to turn on AI-powered meeting summaries and give it the green light to train AIs on their inputs.

However the law on consent in the EU — if, indeed, that’s the legal basis Zoom is relying upon for this processing — doesn’t work like that. The GDPR requires a per-individual ask if you’re claiming consent as your legal basis to process personal data.

As noted above, ePrivacy also explicitly requires that electronic comms be kept confidential unless the user consents to interception (or unless there’s some national security reason for the surveillance — but Zoom training generative AI features doesn’t seem likely to qualify for that).

Back to Zoom’s blog post: It refers to the pop-up shown to meeting participants as “notice” or “notification” that its generative AI services are in use, with the company offering a brief explainer that: “We inform you and your meeting participants when Zoom’s generative AI services are in use. Here’s an example [below graphic] of how we provide in-meeting notification.”

Zoom Meeting Summary pop-up

Image credits: Zoom

Yet in its response to the data-for-AI controversy Zoom has repeatedly claimed it does not process customer content to train its AIs without their consent. So is this pop-up just a “notification” that its AI-powered feature has been enabled or a bona fide ask where Zoom claims it obtains consent from customers to this data-sharing? Frankly its description is not at all clear.

For the record, the text displayed on the notice pop-up reads* — and do note the use of the past tense in the title (which suggests data sharing is already happening):

Meeting Summary has been enabled.

The account owner may allow Zoom to access and use your inputs and AI-generated content for the purpose of providing the feature and for Zoom IQ product improvement, including model training. The data will only be used by Zoom and not by third parties for product improvement. Learn more

We’ll send the meeting summary to invitees after the meeting ends (based on the settings configured for the meeting). Anyone who receives the meeting summary may save and share it with apps and others.

AI-generated content may be inaccurate or misleading. Always check for accuracy.

Two options are presented to meeting participants who see this notice. One is a button labelled “Got it!” (which is highlighted in bright blue so apparently pre-selected); the other is a button labelled “Leave meeting” (displayed in grey, so not the default selection). There is also a hyperlink in the embedded text where users can click to “learn more” (but, presumably, won’t be presented with additional options vis-a-vis its processing of their inputs).
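To make the shape of that choice concrete, here’s a minimal sketch of the decision logic the pop-up appears to present — hypothetical illustrative code on our part, not anything from Zoom, with invented function names:

```typescript
// Hypothetical sketch (not Zoom's code) of the pop-up's decision logic.
// Note there is no path that keeps a participant in the meeting while
// withholding their inputs from "product improvement, including model training".
type NoticeResponse = "got_it" | "leave_meeting";

function continueMeeting(): void {
  console.log("Participant stays; their inputs remain in scope for AI training.");
}

function leaveMeeting(): void {
  console.log("Participant disconnects from the meeting.");
}

function handleNoticeResponse(response: NoticeResponse): void {
  if (response === "got_it") {
    continueMeeting(); // the pre-highlighted, bright-blue default
  } else {
    leaveMeeting(); // the only way to say 'no' to the data use
  }
}

// A genuine consent ask would need a decline path that does not eject the
// user, e.g.: type ConsentResponse = "accept" | "decline" — both staying in-call.
handleNoticeResponse("got_it");
```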

Free choice vs free to leave…

Fans of European Union data protection law will be familiar with the requirement that for consent to be a valid legal basis for processing people’s data it must meet a certain standard — namely: It must be clearly informed; freely given; and purpose limited (specific, not bundled). Nor can it be nudged with self-serving pre-selections.

These folks might also point out that Zoom’s notice to meeting participants about its generative AI feature being activated does not provide them with a free choice to deny consent for their data to become AI training fodder. (Indeed, judging by the tense used, it’s already processing their info for that purpose by the time they see this notice.)

This much is obvious since the meeting participant must either agree to their data being used by Zoom for uses including AI training or quit the meeting altogether. There are no other choices available. And it goes without saying that telling your users the equivalent of ‘hey, you’re free to leave’ does not sum to a free choice over what you’re doing with their data. (See, for example, the CJEU’s recent ruling against Meta/Facebook’s forced consent.)

Zoom is not even offering its users the ability to pay it to avoid this non-essential data-mining — which is a route some regional news publishers have taken by offering consent-to-tracking paywalls (where the choice offered to readers is either to pay for access to the journalism or agree to tracking to get free access). Although even that approach looks questionable, from a GDPR fairness point of view (and remains under legal challenge).

But the key point here is that if consent is the legal basis claimed to process personal data in the EU there must actually be a free choice available.

And a choice to be in the meeting or not in the meeting is not that. (Add to that, mere meeting participants — i.e. not admins/account holders — are unlikely to be the most senior people in the virtual room, and withdrawing from a meeting you didn’t initiate/arrange on data ethics grounds may not feel like an available option to that many employees. There’s likely a power imbalance between the meeting admin/organizer and the participants, just as there is between Zoom, the platform providing a communications service, and Zoom’s users needing to use its platform to communicate.)

As if that wasn’t enough, Zoom is very obviously bundling its processing of data for providing the generative AI feature with other non-essential purposes — such as product improvement and model training. That looks like a straight-up contravention of the GDPR purpose limitation principle, which would also apply in order for consent to be valid.
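To pull those strands together, the consent standard being applied here can be sketched as a simple checklist — the field names below are our own shorthand for illustration, not any regulator’s formal test:

```typescript
// Illustrative shorthand for the EU-law consent standard discussed above.
// Field names are invented for this sketch; it is not a formal legal test.
interface ConsentRecord {
  informed: boolean;       // purpose clearly explained up front
  freelyGiven: boolean;    // a real alternative exists besides quitting
  specific: boolean;       // not bundled with unrelated purposes
  affirmativeAct: boolean; // actively opted in; no pre-selected defaults
}

function looksLikeValidEuConsent(c: ConsentRecord): boolean {
  return c.informed && c.freelyGiven && c.specific && c.affirmativeAct;
}

// On the analysis above, the meeting-participant "notice" would score
// poorly on every axis:
const participantNotice: ConsentRecord = {
  informed: false,       // a "notification", not a purpose-by-purpose ask
  freelyGiven: false,    // the only alternative is leaving the meeting
  specific: false,       // feature access bundled with model training
  affirmativeAct: false, // "Got it!" is a pre-highlighted acknowledgement
};
console.log(looksLikeValidEuConsent(participantNotice)); // false
```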

But all of these analyses are only relevant if Zoom is actually relying on consent as its legal basis for the processing, as its PR response to the controversy seems to claim — or, at least, it does in relation to processing customer content for training AI models.

Of course we asked Zoom to confirm its legal basis for the AI training processing in the EU but the company avoided giving us a straight answer. Funny that!

Pressed to justify its claim to be obtaining consent for such processing against EU law consent standards, a spokesman for the company sent us the following (irrelevant and/or misleading) bullet-points [again, emphasis its]:

  • Zoom generative AI features are default off and separately enabled by customers. Here’s the press release from June 5 with more details
  • Customers control whether to enable these AI features for their accounts and can opt out of providing their content to Zoom for model training at the time of enablement
  • Customers can change the account’s data sharing selection at any time
  • Additionally, for Zoom IQ Meeting Summary, meeting participants are given notice via a pop-up when Meeting Summary is turned on. They can then choose to leave the meeting at any time. The meeting host can start or stop a summary at any time. More details are available here

So Zoom’s defence of the consent it claims to offer is literally that it gives users the choice to not use its service. (It should really ask how well that kind of argument went for Meta in front of Europe’s top court.)

Even the admin/account-holder consent flow Zoom does serve up is problematic. Its blog post doesn’t even explicitly describe this as a consent flow — it just couches it as an example of “our UI through which a customer admin opts in to one of our new generative AI features”, linguistically bundling opting into its generative AI with consent to share data with it for AI training etc.

In the screengrab Zoom includes in the blog post (which we’ve embedded below) the generative AI Meeting Summary feature is stated in annotated text as being off by default — apparently requiring the admin/account holder to actively enable it. There is also, seemingly, an explicit choice associated with the data sharing that is presented to the admin. (Note the tiny blue check box in the second menu.)

However — if consent is the claimed legal basis — another problem is that this data-sharing box is pre-checked by default, thereby requiring the admin to take the active step of unchecking it in order for data not to be shared. So, in other words, Zoom could be accused of deploying a dark pattern to try and force consent from admins.
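For illustration, here’s what that defaults problem looks like in (hypothetical) code — the settings model below is ours, not Zoom’s, but it captures the difference between a pre-checked box and the affirmative opt-in EU consent requires:

```typescript
// Hypothetical settings model (not Zoom's code).
interface MeetingSummarySettings {
  featureEnabled: boolean;            // the generative AI summary feature itself
  shareDataForModelTraining: boolean; // the contested data-sharing checkbox
}

// Dark-pattern default: data sharing is on unless the admin spots the tiny
// checkbox and actively unchecks it.
const preCheckedDefaults: MeetingSummarySettings = {
  featureEnabled: true,
  shareDataForModelTraining: true,
};

// Opt-in default: consent requires an affirmative act, so sharing starts
// off and only flips on if the admin actively chooses it.
const optInDefaults: MeetingSummarySettings = {
  featureEnabled: true,
  shareDataForModelTraining: false,
};

console.log(preCheckedDefaults, optInDefaults);
```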

Under EU law, there is also an onus to clearly inform users of the purpose you’re asking them to consent to.

But, in this case, if the meeting admin doesn’t carefully read Zoom’s small print — where it specifies the data sharing feature can be unchecked if they don’t want these inputs to be used by it for purposes such as training AI models — they might ‘agree’ by accident (i.e. by failing to uncheck the box). Especially as a busy admin might just assume they need to have this “data sharing” box checked to be able to share the meeting summary with other participants, as they will probably want to.

So even the quality of the ‘choice’ Zoom is presenting to meeting admins looks problematic against EU standards for consent-based processing to fly.

Add to that, Zoom’s illustration of the UI admins get to see includes a further small print qualification — where the company warns in fantastically tiny writing that “product screens subject to change”. So, er, who knows what other language and/or design it may have deployed to ensure it gets mostly affirmative responses to its data-sharing asks and maximizes the user inputs it can harvest for AI training.

Zoom UI for admins to enable generative AI features

Image credits: Zoom

But hold your horses! Zoom isn’t actually relying on consent as its legal basis to data-mine users for AI, according to Simon McGarr, a solicitor with Dublin-based law firm McGarr Solicitors. He suggests all the consent theatre described above is essentially a “red herring” in EU law terms — because Zoom is relying on a different legal basis for the AI data mining: Performance of a contract.

“Consent is irrelevant and a red herring as it is relying on contract as the legal basis for processing,” he told TechCrunch when we asked for his views on the legal basis question and Zoom’s approach more generally.

US legalese meets EU law

In McGarr’s analysis, Zoom is applying a US drafting approach to its legalese — one which does not take account of Europe’s (distinct) framework for data protection.

“Zoom is approaching this in terms of ownership of personal data,” he argues. “There’s non personal data and personal data but they’re not distinguishing between those two. Instead they’re distinguishing between content data (“customer content data”) and what they call telemetry data. That is metadata. Therefore they’re approaching this with a framework that isn’t compatible with EU law. And this is what has led them to make assertions in respect of ownership of data — you can’t own personal data. You can only be either the controller or the processor. Because the person continues to have rights as the data subject.

“The claim that they can do what they like with metadata runs contrary to Article 4 of the GDPR which defines what is personal data — and specifically runs contrary to the decision in the Digital Rights Ireland case and a whole string of subsequent cases confirming that metadata can be, and frequently is, personal data — and sometimes sensitive personal data, because it can reveal relationships [e.g. trade union membership, legal counsel, a journalist’s sources etc].”

McGarr asserts that Zoom does need consent for this type of processing to be lawful in the EU — both for metadata and customer content data used to train AI models — and that it can’t actually rely on performance of a contract for what is obviously non-essential processing.

But it also needs consent to be opt-in, not opt-out. So, basically, no pre-checked boxes that only an admin can uncheck, and no mere vague “notice” sent to other users that essentially forces them to consent after the fact or quit — which is not a free and unbundled choice under EU law.

“It’s a US kind of approach,” he adds of Zoom’s modus operandi. “It’s the notice approach — where you tell people things, and then you say, well, I gave them notice of X. But, you know, that isn’t how EU law works.”

Add to that, processing sensitive personal data — which Zoom is likely to be doing, even vis-a-vis “service generated data” — requires an even higher bar of explicit consent. Yet — from an EU law perspective — all the company has offered so far in response to the T&Cs controversy is obfuscation and irrelevant excuses.

Pressed for a response on legal basis, and asked directly if it’s relying on performance of a contract for the processing, a Zoom spokesman declined to provide us with an answer — saying only: “We’ve logged your questions and will let you know if we get anything else to share.”

The company’s spokesman also did not respond to questions asking it to clarify how it defines customer “inputs” for the data-sharing choice that (only) admins get — so it’s still not entirely clear whether “inputs” refers exclusively to customer comms content. But that does appear to be the implication from the bolded claim in its contract not to use “audio, video or chat Customer Content to train our artificial intelligence models without your consent” (note, there’s no bolded mention of Zoom not using customer metadata for AI model training).

If Zoom is excluding “service generated data” (aka metadata) from even its opt-out ‘consent’, it seems to believe it can help itself to these signals without applying even this legally meaningless theatre of consent. Yet, as McGarr points out, “service generated data” doesn’t get a carve out from EU law; it can be, and often is, classed as personal data. So, actually, Zoom does need consent (i.e. opt-in, informed, specific and freely given consent) to process users’ metadata too.

And let’s not forget ePrivacy has fewer available legal bases than the GDPR — and explicitly requires consent for interception. Hence legal experts’ conviction that Zoom can only rely on (opt-in) consent as its legal basis to use people’s data for training AIs.

A recent intervention by the Italian data protection authority over OpenAI’s generative AI chatbot service, ChatGPT, appears to have arrived at a similar view on the use of data for AI model training — since the authority stipulated that OpenAI can’t rely on performance of a contract to process personal data for that purpose. It said the AI giant would have to choose between consent and legitimate interests for processing people’s data for model training. OpenAI later resumed service in Italy after switching to a claim of legitimate interests — a basis which requires it to offer users a way to opt out of the processing (which it had added).

For AI chatbots, the legal basis for model training question remains under investigation by EU regulators.

But, in Zoom’s case, the key difference is that for comms services it’s not just the GDPR but ePrivacy that applies — and the latter doesn’t allow legitimate interests to be relied upon for tracking.

Zooming to catch up

Given the relative novelty of generative AI services, not to mention the huge hype around data-driven automation features, Zoom may be hoping its own data-mining for AI will fly quietly under international regulators’ radar. Or it may just be focused elsewhere.

There’s no doubt the company is feeling under pressure competitively — after the surging global demand for virtual meetings seen in recent years fell off a cliff once we passed the peak of COVID-19 and rushed back to in-person handshakes.

Add to that, the rise of generative AI giants like OpenAI is clearly dialling up competition for productivity tools by massively scaling access to new layers of AI capabilities. And Zoom has only relatively recently made its own play to join the generative AI race, announcing it would dial up investment back in February — after posting its first fourth-quarter net loss since 2018 (and shortly after announcing a 15% headcount reduction).

There’s also already no shortage of competition for videoconferencing — with tech giants like Google and Microsoft offering their own comms tool suites with videochatting baked in. Plus even more rivalry is accelerating down the pipes as startups tap up generative AI APIs to layer extra features on vanilla tools like videoconferencing — which is driving further commodification of the core platform component.

All of which is to say that Zoom is likely feeling the heat. And probably in a greater rush to train up its own AI models so it can race to compete than it is to send its expanded data sharing T&Cs for international legal review.

European privacy regulators also don’t necessarily move that quickly in response to emerging techs. So Zoom may feel it can take the risk.

However there’s a regulatory curve ball in that Zoom does not appear to have a main establishment in any EU Member State.

It does have a local EMEA office in the Netherlands — but the Dutch DPA told us it is not the lead supervisory authority for Zoom. Nor does the Irish DPA appear to be (despite Zoom claiming a Dublin-based Article 27 representative).

“As far as we are aware, Zoom does not have a lead supervisory authority in the European Economic Area,” a spokesman for the Dutch DPA told TechCrunch. “According to their privacy statement the controller is Zoom Video Communications, Inc, which is based in the United States. Although Zoom does have an office in the Netherlands, it seems that the office does not have decision-making authority and therefore the Dutch DPA is not lead supervisory authority.”

If that’s correct, and decision-making in relation to EU users’ data takes place exclusively over the pond (inside Zoom’s US entity), any data protection authority in the EU is potentially competent to interrogate its compliance with the GDPR — rather than local complaints and concerns having to be routed through a single lead authority. Which maximizes the regulatory risk, since any EU DPA could make an intervention if it believes user data is being put at risk.

Add to that, ePrivacy does not contain a one-stop-shop mechanism to streamline regulatory oversight as the GDPR does — so it’s already the case that any authority could probe Zoom’s compliance with that directive.

The GDPR allows for fines that can reach up to 4% of global annual turnover. ePrivacy, meanwhile, lets authorities set appropriately dissuasive fines (which, in the French CNIL’s case, has led to several hefty multi-million dollar penalties being handed to tech giants in relation to cookie tracking infringements in recent years).
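For a sense of scale, the arithmetic of that GDPR ceiling is simple — sketched below with an assumed turnover figure, purely for illustration:

```typescript
// GDPR Article 83(5): fines of up to 4% of total worldwide annual turnover
// of the preceding financial year.
function maxGdprFineUsd(globalAnnualTurnoverUsd: number): number {
  return globalAnnualTurnoverUsd * 0.04;
}

// Assuming, for illustration only, a company with $4.4B in annual turnover:
console.log(maxGdprFineUsd(4_400_000_000)); // 176000000 -> a ceiling of $176M
```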

So a public backlash by users angry at sweeping data-for-AI T&Cs may cause Zoom more of a headache than it thinks.

*NB: The quality of the graphic on Zoom’s blog was poor, with text appearing substantially pixellated, making it hard to pick out the words without cross-checking them elsewhere (which we did)

