Friday, November 22, 2024

Women In AI: Irene Solaiman, head of global policy at Hugging Face

To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Irene Solaiman began her career in AI as a researcher and public policy manager at OpenAI, where she led a new approach to the release of GPT-2, a predecessor to ChatGPT. After serving as an AI policy manager at Zillow for nearly a year, she joined Hugging Face as the head of global policy. Her responsibilities there range from building and leading company AI policy globally to conducting socio-technical research.

Solaiman also advises the Institute of Electrical and Electronics Engineers (IEEE), the professional association for electronics engineering, on AI issues, and is a recognized AI expert at the intergovernmental Organization for Economic Co-operation and Development (OECD).

Briefly, how did you get your start in AI? What attracted you to the field?

A thoroughly nonlinear career path is commonplace in AI. My budding interest started in the same way many teenagers with awkward social skills find their passions: through sci-fi media. I originally studied human rights policy and then took computer science courses, as I viewed AI as a means of working on human rights and building a better future. Being able to do technical research and lead policy in a field with so many unanswered questions and untaken paths keeps my work exciting.

What work are you most proud of (in the AI field)?

I’m most proud of when my expertise resonates with people across the AI field, especially my writing on release considerations in the complex landscape of AI system releases and openness. Seeing my paper on an AI Release Gradient frame technical deployment, prompt discussions among scientists, and be used in government reports is affirming, and a good sign I’m working in the right direction! Personally, some of the work I’m most motivated by is on cultural value alignment, which is dedicated to ensuring that systems work best for the cultures in which they’re deployed. With my incredible co-author and now dear friend, Christy Dennison, working on a Process for Adapting Language Models to Society was a whole-of-heart (and many debugging hours) project that has shaped safety and alignment work today.

How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?

I’ve found, and am still finding, my people — from working with incredible company leadership who care deeply about the same issues that I prioritize to great research co-authors with whom I can start every working session with a mini therapy session. Affinity groups are hugely helpful in building community and sharing tips. Intersectionality is important to highlight here; my communities of Muslim and BIPOC researchers are continually inspiring.

What advice would you give to women seeking to enter the AI field?

Have a support group whose success is your success. In youth terms, I believe this is a “girl’s girl.” The same women and allies I entered this field with are my favorite coffee dates and late-night panicked calls ahead of a deadline. One of the best pieces of career advice I’ve read was from Arvind Narayanan on the platform formerly known as Twitter, establishing the “Liam Neeson Principle”: not being the smartest of them all, but having a particular set of skills.

What are some of the most pressing issues facing AI as it evolves?

The most pressing issues themselves evolve, so the meta answer is: international coordination for safer systems for all peoples. Peoples who use and are affected by systems, even in the same country, have varying preferences and ideas of what is safest for themselves. And the issues that arise will depend not only on how AI evolves, but also on the environments into which systems are deployed; safety priorities and our definitions of capability differ regionally, such as a higher threat of cyberattacks to critical infrastructure in more digitized economies.

What are some issues AI users should be aware of?

Technical solutions rarely, if ever, address risks and harms holistically. While there are steps users can take to increase their AI literacy, it’s important to invest in a multitude of safeguards for risks as they evolve. For example, I’m excited about more research into watermarking as a technical tool, and we also need coordinated policymaker guidance on generated content distribution, especially on social media platforms.
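For readers unfamiliar with what a text watermark can look like in practice, here is a minimal sketch in Python of one published approach: a “green list” detector in the style of Kirchenbauer et al. (2023). This is illustrative only, not a method described in the interview; the function names and the 0.5 green-list fraction are assumptions made for the example.

import hashlib
import math

GREEN_FRACTION = 0.5  # assumed share of the vocabulary marked "green" at each step

def is_green(prev_token: str, token: str) -> bool:
    # Pseudo-randomly assign `token` to the green list, seeded by the previous token.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def detection_z_score(tokens: list[str]) -> float:
    # z-score of the observed green-token count against the rate expected
    # from unwatermarked text (a binomial null hypothesis).
    n = len(tokens) - 1
    greens = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    expected = GREEN_FRACTION * n
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (greens - expected) / std

# A watermarking generator would bias sampling toward green tokens;
# a detector then flags text whose z-score is improbably high.
sample = "this sentence stands in for model output".split()
print(round(detection_z_score(sample), 2))

Even a detector like this is only one safeguard among many, which is the point above: technical tools need to sit alongside policy measures such as guidance on distributing generated content.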

What is the best way to responsibly build AI?

With the peoples affected, and by constantly re-evaluating our methods for assessing and implementing safety techniques. Both beneficial applications and potential harms constantly evolve and require iterative feedback. The means by which we improve AI safety should be collectively examined as a field. The most popular evaluations for models in 2024 are much more robust than those I was running in 2019. Today, I’m much more bullish about technical evaluations than I am about red-teaming. I find human evaluations extremely high-utility, but as more evidence arises of the mental burden and disparate costs of human feedback, I’m increasingly bullish about standardizing evaluations.

How can investors better push for responsible AI?

They already are! I’m glad to see many investors and venture capital firms actively engaging in safety and policy conversations, including via open letters and Congressional testimony. I’m eager to hear more of investors’ expertise on what stimulates small businesses across sectors, especially as we’re seeing more AI use from fields outside the core tech industries.
