Cinder’s content moderation software is custom-built for trust and safety teams

A startup from former Meta employees aims to streamline the content moderation process for companies grappling with some of the internet’s most complex, dangerous challenges.

Cinder, which likes to describe itself as an “operating system for trust and safety,” is led by a team with years of combined experience in security and content policy work. That team includes CEO and co-founder Glen Wise, a former red team engineer at Meta; Philip Brennan and Declan Cummings, who worked on threat intelligence at Meta; and Brian Fishman, the former director of Facebook’s counterterrorism and dangerous organizations team.

Cinder is backed by venture firm Accel, which led a $4 million seed round in May and a $10 million Series A this month, with participation from Y Combinator. The company was founded in January of this year and has raised a total of $14 million to date.

Speaking with TechCrunch, Wise likened Cinder to business software platforms like Salesforce and Zendesk that can pull scattered sets of data together into a single user interface. But with Cinder, instead of sales or customer service teams tracking and sifting data, content moderators and other members of a company’s trust and safety teams can centralize a sensitive workflow into one purpose-built tool.

Wise says that companies without robust trust and safety workflows in place today awkwardly rely on systems built for entirely different use cases, like bug tracking.

“I started talking to a lot of heads of trust and safety [at] a lot of other large internet companies, honestly a lot outside of social media as well, and they were struggling with operationally how to solve a lot of this,” Wise told TechCrunch. “They would hire really smart people but they’d be stuck in spreadsheets, they’d be stuck in SQL statements and be stuck in this kind of world of the past, because they had no tools custom built for this.”

If a content moderator using one of those systems wants to review a social media post, for example, they’d have to leave that environment and follow a URL to view the content in question before coming back into the bug tracking tool to view any relevant context and provide their input.

“Then like an hour and a half later, they can actually make a decision,” Wise said. “And so we want to do all of that out of the box.”

Wise says that so far Cinder mostly appeals to large companies that have established trust and safety operations or ones in the process of figuring out what those workflows should look like.
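
Cinder hasn’t published its data model, but the workflow Wise describes, where the content, its context and the eventual decision all live in one record so a reviewer never leaves the tool, might be sketched minimally like this in Python. Every name below is hypothetical rather than Cinder’s actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ReviewItem:
    """One moderation case: content, context, and decision in a single record."""
    item_id: str
    content_url: str                      # link to the post or account under review
    content_snapshot: str                 # cached copy, shown inline to the reviewer
    reporter_notes: str                   # why the item was flagged
    policy_candidates: list[str] = field(default_factory=list)  # policies possibly violated
    decision: str | None = None           # e.g. "remove", "allow", "escalate"
    decided_at: datetime | None = None

    def decide(self, action: str) -> None:
        """Record the outcome in place, with no round trip to another tool."""
        self.decision = action
        self.decided_at = datetime.now(timezone.utc)
```

Everything a reviewer needs to act sits on the record itself, which is the contrast Wise draws with bug trackers that force a detour to the live URL before any decision can be logged.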

For social networks and other companies hosting user-generated content, the threats that trust and safety teams face are as complex as they are varied. Companies building out trust and safety must weave together expertise on everything from targeted harassment and hate groups to child sexual abuse material (CSAM) and state-sponsored influence operations.

Making moderation decisions in those areas can be as simple as flagging a single text post that uses a racial slur — a decision that might only take a few seconds — or as nuanced as linking together hundreds of accounts operating a covert government-sponsored influence campaign over the course of many months.

While some content is outright illegal, with official detection and reporting workflows to reflect that, most enforcement decisions fall to private companies to sort out. And as we’ve seen time and time again in recent years, the rules that define what content is allowed online and what content isn’t are living documents that respond, double back and evolve in real time.

Cinder aims to centralize those policies and the necessary context, enabling straightforward decision making at scale while still facilitating “dynamic investigations,” the kind of work that a disinformation campaign or a coordinated effort to undermine an election might require. The platform doesn’t do any detection itself, and it doesn’t set the rules; that’s all up to the companies that license its software. (For now, Cinder isn’t naming any of its clients.)
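
Cinder hasn’t said how customers express their rules, but one plausible reading of “the companies set the rules” is policies stored as customer-supplied data that the platform merely applies. A hypothetical sketch, with invented policy names and actions:

```python
# Customer-supplied policy table: the platform would apply rules like
# these but never define them. All names and actions here are invented.
POLICIES = {
    "hate_speech": {"action": "remove", "requires_human_review": False},
    "influence_operation": {"action": "escalate", "requires_human_review": True},
}

SEVERITY = {"allow": 0, "remove": 1, "escalate": 2}

def route(labels: list[str]) -> str:
    """Return the strictest action implied by labels attached upstream.
    Detection is out of scope: the labels come from the customer's own systems."""
    action = "allow"
    for label in labels:
        policy = POLICIES.get(label)
        if policy and SEVERITY[policy["action"]] > SEVERITY[action]:
            action = policy["action"]
    return action
```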

Wise also notes that because Cinder was designed for the content review process, the company has been able to accommodate the human element of some of the web’s most emotionally demanding work, building for those needs “from the ground up.” That includes emerging industry-standard practices like blurring and grayscaling or prompting moderators to take a break after viewing something particularly challenging. Those interventions and others can mitigate what might otherwise be lasting harm to content reviewers.
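
Cinder hasn’t detailed how it implements these interventions, but grayscaling and blurring are straightforward image transforms. A minimal sketch of the idea using the Pillow library, where the function name and defaults are assumptions rather than anything Cinder has described:

```python
from PIL import Image, ImageFilter

def soften_for_review(path: str, blur_radius: int = 12) -> Image.Image:
    """Grayscale and blur an image before it reaches a reviewer's queue.
    A hypothetical wellness intervention: the reviewer could opt back in
    to full detail only when needed to make a call."""
    img = Image.open(path)
    return img.convert("L").filter(ImageFilter.GaussianBlur(radius=blur_radius))
```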

“I think a lot of people understand how hard of a job it is to be a moderator and to look at a lot of this content,” Wise told TechCrunch. “What’s been really rewarding about this is that companies are really trying to invest in safety measures and are looking to a third party to help provide them.”

Beyond moderator safety, Wise says Cinder was built with security and privacy considered at every level, which comes naturally to a team with a strong security pedigree. He describes a “robust” permissioning system within the software that can easily obscure sensitive identifying information and other data, granting different levels of access to predefined groups.
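
Wise doesn’t describe the mechanism, but field-level redaction tied to predefined roles is one common way to build that kind of permissioning. A hypothetical sketch, with invented roles and field names:

```python
# Hypothetical role-to-field grants: each group sees only what it needs.
ROLE_VISIBLE_FIELDS = {
    "frontline_reviewer": {"content_snapshot", "policy_candidates"},
    "investigator": {"content_snapshot", "policy_candidates", "account_id", "ip_address"},
}

def redact(record: dict, role: str) -> dict:
    """Mask every field the given role has not been granted."""
    allowed = ROLE_VISIBLE_FIELDS.get(role, set())
    return {k: (v if k in allowed else "[REDACTED]") for k, v in record.items()}

# A frontline reviewer sees the content but not who posted it:
# redact({"content_snapshot": "...", "account_id": "u123"}, "frontline_reviewer")
# -> {"content_snapshot": "...", "account_id": "[REDACTED]"}
```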

“We have a few customers on board and using the product and they’re really happy about it,” Wise said. “People never really had the tools built for this… and people are really excited to see that, okay, there’s a company of people who’ve done this before, who can help build for the exact use case, which has been really great.”
