Women in AI: Kristine Gloria of the Aspen Institute tells women to enter the field and ‘follow your curiosity’

To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Kristine Gloria leads the Aspen Institute’s Emergent and Intelligent Technologies Initiative — the Aspen Institute being the Washington, D.C.-headquartered think tank focused on values-based leadership and policy expertise. Gloria holds a PhD in cognitive science and a Master’s in media studies, and her past work includes research at MIT’s Internet Policy Research Initiative, the San Francisco-based Startup Policy Lab and the Center for Society, Technology and Policy at UC Berkeley.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

To be frank, I definitely didn’t start my career in pursuit of being in AI. First, I was really interested in understanding the intersection of technology and public policy. At the time, I was working on my Master’s in media studies, exploring ideas around remix culture and intellectual property. I was living and working in D.C. as an Archer Fellow for the New America Foundation. One day, I distinctly remember sitting in a room filled with public policymakers and politicians who were throwing around terms that didn’t quite fit their actual technical definitions. It was shortly after this meeting that I realized that in order to move the needle on public policy, I needed the credentials.

I went back to school, earning my doctorate in cognitive science with a concentration in semantic technologies and online consumer privacy. I was very fortunate to have found a mentor, an advisor and a lab that encouraged a cross-disciplinary understanding of how technology is designed and built. So, I sharpened my technical skills alongside developing a more critical viewpoint on the many ways tech intersects with our lives.

In my role as the director of AI at the Aspen Institute, I then had the privilege to ideate, engage and collaborate with some of the leading thinkers in AI. And I always found myself gravitating towards those who took the time to deeply question if and how AI would impact our day-to-day lives.

Over the years, I’ve led various AI initiatives and one of the most meaningful is just getting started. Now, as a founding team member and director of strategic partnerships and innovation at a new nonprofit, Young Futures, I’m excited to weave in this type of thinking to achieve our mission of making the digital world an easier place to grow up. Specifically, as generative AI becomes table stakes and as new technologies come online, it’s both urgent and critical that we help preteens, teens and their support units navigate this vast digital wilderness together.

What work are you most proud of (in the AI field)?

I’m most proud of two initiatives. First is my work related to surfacing the tensions, pitfalls and effects of AI on marginalized communities. Published in 2021, “Power and Progress in Algorithmic Bias” articulates months of stakeholder engagement and research around this issue. In the report, we posit one of my all-time favorite questions: “How can we (data and algorithmic operators) recast our own models to forecast for a different future, one that centers around the needs of the most vulnerable?” Safiya Noble is the original author of that question, and it’s a constant consideration throughout my work.

The second is more recent: my time as head of data at Blue Fever, a company on a mission to improve youth well-being in a judgment-free and inclusive online space. Specifically, I led the design and development of Blue, the first AI emotional support companion. I learned a lot in this process. Most saliently, I gained a profound new appreciation for the impact a virtual companion can have on someone who’s struggling or who may not have the support systems in place. Blue was designed and built to bring its “big-sibling energy” to help guide users to reflect on their mental and emotional needs.

How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?

Unfortunately, the challenges are real and still very current. I’ve experienced my fair share of disbelief in my skills and experience from all types of colleagues in the space. But for every one of those negative experiences, I can point to an example of a male colleague being my fiercest cheerleader. It’s a tough environment, and I hold on to these examples to help me manage. I also think that so much has changed in this space even in the last five years. The necessary skill sets and professional experiences that qualify as part of “AI” are not strictly computer science-focused anymore.

What advice would you give to women seeking to enter the AI field?

Enter in and follow your curiosity. This space is in constant motion, and the most interesting (and likely most productive) pursuit is to continuously be critically optimistic about the field itself.

What are some of the most pressing issues facing AI as it evolves?

I actually think some of the most pressing issues facing AI are the same issues we’ve not quite gotten right since the web was first introduced. These are issues around agency, autonomy, privacy, fairness, equity and so on. These are core to how we situate ourselves amongst the machines. Yes, AI can make it vastly more complicated — but so can socio-political shifts.

What are some issues AI users should be aware of?

AI users should be aware of how these systems complicate or enhance their own agency and autonomy. In addition, as the discourse grows around how technology, and particularly AI, may impact our well-being, it’s important to remember that there are tried-and-true tools to manage more negative outcomes.

What is the best way to responsibly build AI?

A responsible build of AI is more than just the code. A truly responsible build takes into account the design, governance, policies and business model. Each drives the others, and we will continue to fall short if we only strive to address one part of the build.

How can investors better push for responsible AI?

One specific practice, which I admire Mozilla Ventures for requiring in its diligence, is an AI model card. Developed by Timnit Gebru and others, model cards enable teams, including funders, to evaluate the risks and safety issues of the AI models used in a system.

Also, related to the above, investors should holistically evaluate a system’s capacity and ability to be built responsibly. For example, if you have trust and safety features in the build or a model card published, but your revenue model exploits vulnerable population data, then there’s a misalignment with your intent as an investor. I do think you can build responsibly and still be profitable.

Lastly, I would love to see more collaborative funding opportunities among investors. In the realm of well-being and mental health, the solutions will be varied and vast, as no person is the same and no one solution can solve for all. Collective action among investors who are interested in solving the problem would be a welcome addition.
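For readers new to the practice: a model card is a short, structured disclosure that travels with a model, stating what the model is, what it is intended for, how it was evaluated and where it should not be used. As an illustration only (not something from the interview), here is a minimal Python sketch that assembles and renders such a card. The section names loosely follow Mitchell, Gebru et al.’s “Model Cards for Model Reporting” (2019); the example classifier and all field values are hypothetical.

```python
# Minimal, illustrative model card sketch. Section names loosely follow
# Mitchell, Gebru et al., "Model Cards for Model Reporting" (2019);
# the example model and every field value below are hypothetical.

MODEL_CARD = {
    "Model Details": "Hypothetical sentiment classifier v0.3, a fine-tuned transformer.",
    "Intended Use": "Surfacing posts for human review in a youth well-being app; "
                    "not for automated moderation decisions.",
    "Training Data": "Public English-language forum posts; no private user data.",
    "Evaluation Data": "Held-out posts stratified by age group and dialect.",
    "Metrics": "Per-subgroup F1; report the worst-performing subgroup, not just the mean.",
    "Ethical Considerations": "Risk of over-flagging minority dialects; data from "
                              "vulnerable populations must not feed the revenue model.",
    "Caveats and Recommendations": "Re-evaluate after any fine-tuning; keep a human in the loop.",
}

def render_model_card(card: dict[str, str]) -> str:
    """Render the card as Markdown so it can ship with the model or a diligence packet."""
    lines = ["# Model Card"]
    for section, body in card.items():
        lines.append(f"\n## {section}\n{body}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(render_model_card(MODEL_CARD))
```

Even a card this small gives a funder concrete claims to interrogate during diligence, which is the use Gloria describes above.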

