
Get a clue, says panel about buzzy AI tech: It’s being ‘deployed as surveillance’

Earlier today at a Bloomberg conference in San Francisco, some of the biggest names in AI turned up, including, briefly, Sam Altman of OpenAI, who just ended his two-month world tour, and Stability AI founder Emad Mostaque. Still, one of the most compelling conversations happened later in the afternoon, in a panel discussion about AI ethics.

The panel featured Meredith Whittaker, the president of the secure messaging app Signal; Credo AI co-founder and CEO Navrina Singh; and Alex Hanna, director of research at the Distributed AI Research Institute. The three had a unified message for the audience: don’t get so distracted by the promise and threats associated with the future of AI. The technology is not magic, it’s not fully automated and, per Whittaker, it’s already intrusive beyond anything most Americans seemingly comprehend.

Hanna, for example, pointed to the many people around the world who help train today’s large language models, suggesting that these individuals get short shrift in some of the breathless coverage of generative AI, partly because the work is unglamorous and partly because it doesn’t fit the current narrative about AI.

Said Hanna: “We know from reporting . . . that there is an army of workers who are doing annotation behind the scenes to even make this stuff work to any degree — workers who work with Amazon Mechanical Turk, people who work with [the training data company] Sama — in Venezuela, Kenya, the U.S., actually all over the world . . . They are actually doing the labeling, whereas Sam [Altman] and Emad [Mostaque] and all these other people who are going to say these things are magic — no. There’s humans. . . . These things need to appear as autonomous and it has this veneer, but there’s so much human labor underneath it.”

The comments made separately by Whittaker — who previously worked at Google, co-founded NYU’s AI Now Institute and advised the Federal Trade Commission — were even more pointed (and, judging by the audience’s enthusiastic reaction, more impactful). Her message was that, enchanted as the world may be now by chatbots like ChatGPT and Bard, the technology underpinning them is dangerous, especially as power grows more concentrated in the hands of those at the top of the advanced AI pyramid.

Said Whittaker, “I would say maybe some of the people in this audience are the users of AI, but the majority of the population is the subject of AI . . . This is not a matter of individual choice. Most of the ways that AI interpolates our life [and] makes determinations that shape our access to resources [and] opportunity are made behind the scenes in ways we probably don’t even know.”

Whittaker gave the example of someone who walks into a bank and asks for a loan. That person can be denied and have “no idea that there’s a system in [the] back probably powered by some Microsoft API that determined, based on scraped social media, that I wasn’t creditworthy. I’m never going to know [because] there’s no mechanism for me to know this.” There are ways to change this, she continued, but overcoming the current power hierarchy to do so is next to impossible: “I’ve been at the table for, like, 15 years, 20 years. I’ve been at the table. Being at the table with no power is nothing.”

Certainly, a lot of powerless people might agree with Whittaker, including current and former OpenAI and Google employees who’ve reportedly been leery at times of their companies’ approach to launching AI products.

Indeed, Bloomberg moderator Sarah Frier asked the panel how concerned employees can speak up without fear of losing their jobs, to which Singh — whose startup helps companies with AI governance — answered: “I think a lot of that depends upon the leadership and the company values, to be honest. . . . We’ve seen instance after instance in the past year of responsible AI teams being let go.”

In the meantime, there’s much more that everyday people don’t understand about what’s happening, Whittaker suggested, calling AI “a surveillance technology.” Facing the crowd, she elaborated, noting that AI “requires surveillance in the form of these massive datasets that entrench and expand the need for more and more data, and more and more intimate collection. The solution to everything is more data, more knowledge pooled in the hands of these companies. But these systems are also deployed as surveillance devices. And I think it’s really important to recognize that it doesn’t matter whether an output from an AI system is produced through some probabilistic statistical guesstimate, or whether it’s data from a cell tower that’s triangulating my location. That data becomes data about me. It doesn’t need to be correct. It doesn’t need to be reflective of who I am or where I am. But it has power over my life that is significant, and that power is being put in the hands of these companies.”

Indeed, she added, the “Venn diagram of AI concerns and privacy concerns is a circle.”

Whittaker obviously has her own agenda, up to a point. As she said herself at the event, “there is a world where Signal and other legitimate privacy-preserving technologies persevere” because people grow less and less comfortable with this concentration of power.

But if there isn’t enough pushback, and soon — as progress in AI accelerates, so do its societal impacts — we’ll continue heading down a “hype-filled road toward AI,” she said, “where that power is entrenched and naturalized under the guise of intelligence and we are surveilled to the point [of having] very, very little agency over our individual and collective lives.”

This “concern is existential,” she said, “and it’s much bigger than the AI framing that is often given.”

We found the discussion captivating; if you’d like to watch the whole thing, Bloomberg has since posted the full video.


