OpenAI partners with Common Sense Media to collaborate on AI guidelines
OpenAI hopes to win the trust of parents — and policymakers — by partnering with organizations that work to minimize tech and media harms to kids, preteens and teens.
Case in point, OpenAI today announced a partnership with Common Sense Media, the nonprofit organization that reviews and ranks the suitability of various media and tech for kids, to collaborate on AI guidelines and education materials for parents, educators and young adults.
As a part of the partnership, OpenAI will work with Common Sense Media to curate “family-friendly” GPTs — chatbot apps powered by OpenAI’s GenAI models — in the GPT Store, OpenAI’s GPT marketplace, based on Common Sense’s rating and evaluation standards, OpenAI CEO Sam Altman says.
“AI offers incredible benefits for families and teens, and our partnership with Common Sense will further strengthen our safety work, ensuring that families and teens can use our tools with confidence,” Altman added in a canned statement.
The launch of the partnership comes after OpenAI said that it would participate in Common Sense’s new framework, launched in September, for ratings and reviews designed to assess the safety, transparency, ethical use and impact of AI products. Common Sense’s framework aims to produce a “nutrition label” for AI-powered apps, according to Common Sense co-founder and CEO James Steyer, shedding light on the contexts in which the apps are used and highlighting areas of potential opportunity and harm against a set of “common sense” tenets.
In a press release, Steyer pointed out that today’s parents remain generally less knowledgeable about GenAI tools, such as OpenAI’s viral AI-powered chatbot ChatGPT, than younger generations. An Impact Research poll commissioned by Common Sense Media late last year found that 58% of students aged 12 to 18 have used ChatGPT, compared to 30% of parents of school-aged children.
“Together, Common Sense and OpenAI will work to make sure that AI has a positive impact on all teens and families,” Steyer said in an emailed statement. “Our guides and curation will be designed to educate families and educators about safe, responsible use of [OpenAI tools like] ChatGPT, so that we can collectively avoid any unintended consequences of this emerging technology.”
OpenAI is under pressure from regulators to show that its GenAI-powered apps, including ChatGPT, are an overall boon for society, not a detriment to it. Just last summer, the U.S. Federal Trade Commission opened an investigation into OpenAI over whether ChatGPT harmed consumers through its collection of data and its publication of false statements about individuals. European data protection authorities have also expressed concern over OpenAI’s handling of private information.
OpenAI’s tools, like all GenAI tools, tend to confidently make things up and get basic facts wrong. And they’re biased — a reflection of the data that was used to train them.
Kids and teens, whether aware of the tools’ limitations or not, are increasingly turning to them for help not only with schoolwork but also with personal issues. According to a poll from the Center for Democracy and Technology, 29% of kids report having used ChatGPT to deal with anxiety or mental health issues, 22% for issues with friends and 16% for family conflicts.