Friday, November 22, 2024

Prolific raises $32M to train and stress-test AI models using its network of 120K people

AI, when it works well, can feel like magic, but all too often AI-based systems don’t work as they should: if the data used to train models is not deep, wide and reliable enough, any kind of curveball can send that AI in the wrong direction. A London startup called Prolific has built a system it believes can help head off that issue by tapping a network of 120,000 human participants to inform and stress-test AI models. And in a sign of demand for its services, Prolific has now raised £25 million ($32 million) in new funding to expand its operations.

The round was co-led by Partech and Oxford Science Enterprises (OSE).

Prolific was founded in 2014 and already counts organizations like Google, Stanford University, the University of Oxford, King’s College London and the European Commission among its customers, which use its network of participants to test new products, train AI systems in areas like eye tracking and determine whether their human-facing AI applications are working as their creators want them to. Up to now, it’s been revenue from users like these that has helped Prolific grow. In fact, the only money Prolific had raised prior to this round was a seed round of $1.4 million it got after going through YC. (Yes, it was profitable; it no longer is, now that it’s taking VC money and investing in growth.)

“We’ve seen incredible traction recently, and have a huge opportunity in front of us so are taking on this new funding to supercharge our efforts and expand our product, and range of participants, much faster than we could have without it,” Phelim Bradley, the founder and CEO, said over email to TechCrunch.

The company was initially conceived not out of a specific need in the world of AI, but out of a general problem researchers face when assembling panels for any kind of study, something Bradley identified in his own academic work (his background, before Prolific, was in computational biology and physics).

In short, it’s a challenge to find comprehensive cross sections of people to respond to questions, and nearly impossible to do so in a timely manner. The recourse for many researchers is to work with third parties to source participants, but this has its own drawbacks, including an inability to verify individuals and select cross sections to ensure representative samples.

In AI, these same issues are especially acute: false or misleading training data can make or break how AI systems work, particularly given how widely AI is being applied (or, perhaps more accurately, how widely people hope to one day apply it).

The solution Bradley identified was fairly straightforward in concept, if not in execution: build a better way of sourcing panelists.

In the early days, he said, the company proactively approached people, going to events and other venues to find them. “We generally did things that ‘didn’t scale,’” Bradley said. “But after we reached a critical mass, we’ve had most participants discover us through word-of-mouth as a result of the positive user experience.” Paying those freelance participants a minimum of $6-$8 per hour, but typically more, Prolific says it has paid out some $100 million to them to date.

He said that Prolific works to keep its pool of 120,000 users demographically balanced. It has also built tools, including more than 300 filters based on census data and other sources, so that customers can better tune the samples they are looking for.
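To illustrate the kind of filtering and balancing described above, here is a minimal, hypothetical sketch of quota-style participant selection against census-derived demographic targets. This is not Prolific’s actual API: the `Participant` record, the `sample_balanced` helper, the filter predicates and the target shares are all assumptions made purely for illustration.

```python
import random
from dataclasses import dataclass

# Hypothetical participant record; the real data model is not described in the article.
@dataclass
class Participant:
    id: str
    age: int
    country: str
    gender: str

def sample_balanced(pool, target_shares, group_key, n, seed=0):
    """Draw up to n participants whose group proportions roughly match
    census-derived target shares (e.g. {"F": 0.5, "M": 0.5})."""
    rng = random.Random(seed)
    by_group = {}
    for p in pool:
        by_group.setdefault(group_key(p), []).append(p)
    selected = []
    for group, share in target_shares.items():
        candidates = by_group.get(group, [])
        k = min(len(candidates), round(share * n))  # cap quota by availability
        selected.extend(rng.sample(candidates, k))
    return selected

# Example: apply a simple filter (UK adults), then balance the panel on gender.
pool = [
    Participant("p1", 34, "UK", "F"),
    Participant("p2", 29, "UK", "M"),
    Participant("p3", 41, "US", "F"),
    Participant("p4", 52, "UK", "F"),
]
uk_adults = [p for p in pool if p.country == "UK" and p.age >= 18]
panel = sample_balanced(uk_adults, {"F": 0.5, "M": 0.5}, lambda p: p.gender, n=2)
print([p.id for p in panel])
```

In practice, a platform like the one described would layer many such filters (the article mentions more than 300) and combine them with participant verification, which the toy example above does not attempt to model.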

Ironically, the company is not itself using AI to solve a critical problem in the world of AI. “We’re currently focusing on providing HI (Human Intelligence) to help improve AI,” he said.

And while there are a lot of synergies between what Prolific has built to address a need in the AI market and wider needs in the world of research, there are no plans to expand its network to work beyond AI applications, Bradley said.

That said, it seems like a no-brainer that companies like Amazon (which provides Mechanical Turk to customers needing human testers), Nielsen and YouGov, not to mention big players in language model building like OpenAI, might try to move into this space. For now, companies like Attest and Scale AI are possibly its closest competitors.

“Prolific has built an incredibly powerful online platform for research,” said Omri Benayoun, general partner at Partech, in a statement. “Its roots in academia means that it has applied the highest standards to quality, while its technical expertise brings innovation that sets it apart from anything else out there. Where others rely on manual recruitment methods, Prolific has built a research infrastructure covering everything from the recruiting and vetting of participants to integration of research tools. Prolific is poised to conquer global leadership in academia and is also perfectly placed to aid the development of AI.”
