Microsoft, ex-Google CEO back startup that aims to make AI systems work as humans intended
Artificial intelligence software doesn’t always do what the people building it want it to do — a potentially dangerous issue that has consumed some of the largest companies working on the technology.
Big companies like OpenAI and Alphabet Inc.’s Google are increasingly directing workers, money and computing power toward the problem. And Anthropic, an OpenAI competitor, has put it at the heart of its development of Claude, a product it bills as a safer kind of AI chatbot.
This month, a new company called Synth Labs is also taking aim at the issue. Founded by a handful of prominent AI industry names, the company is emerging from stealth this week with seed funding from Microsoft Corp.’s venture capital fund, M12, and Eric Schmidt’s First Spark Ventures. Synth Labs is primarily focused on building software, some of it open source, to help a range of companies ensure that their AI systems behave as intended. It is positioning itself as a company that works transparently and collaboratively.
Alignment, as the issue is sometimes called, represents a technical challenge for AI applications such as chatbots that are built atop large language models, which are typically trained on huge swaths of internet data. The effort is complicated by the fact that people’s ethics and values — as well as their ideas of what AI should and should not be permitted to do — vary. Synth Labs’ products will aim to help steer and customize large language models, particularly models that are themselves open source.
The company got its start as a project within the nonprofit AI research lab EleutherAI, where two of the three founders — Louis Castricato and Nathan Lile — worked on it alongside Synth Labs advisor and EleutherAI executive director Stella Biderman. Francis deSouza, former chief executive officer of the biotechnology company Illumina Inc., is the third founder. Synth Labs declined to say how much money it has raised so far.
Over the past few months, the startup has built tools that can readily evaluate large language models on complex topics, Castricato said. The goal, he said, is to democratize access to easy-to-use software that automatically evaluates and aligns AI models.
A recent research paper that Castricato, Lile and Biderman co-authored gives a sense of the company’s approach: The authors collected responses that OpenAI’s GPT-4 and Stability AI’s Stable Beluga 2 models generated to a set of prompts and assembled them into a dataset. That dataset then fed an automated process that directs a chatbot to avoid talking about one topic and instead talk about another — roughly along the lines of the sketch below.
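The paper’s exact pipeline isn’t detailed here, but a minimal sketch of what such a dataset-building step could look like follows. This is an illustration only, assuming a standard OpenAI-style chat completions client; the topic names, prompts, system instructions, and output file are hypothetical placeholders, not the paper’s actual code or data.

```python
# Hypothetical sketch of building topic-steering preference pairs.
# Model name, prompts, topics, and the JSONL schema are illustrative
# assumptions, not the authors' actual pipeline.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

AVOID = "pink elephants"    # topic the chatbot should not discuss
PREFER = "grey elephants"   # topic it should redirect to instead

def generate(system: str, user: str) -> str:
    """Request one chat completion under the given system instruction."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
        temperature=0.7,
    )
    return resp.choices[0].message.content

pairs = []
for prompt in ["Tell me about large land mammals."]:  # stand-in prompt list
    # Unconstrained answer: may freely mention the topic to be avoided.
    rejected = generate("You are a helpful assistant.", prompt)
    # Constrained answer: avoids the topic and steers toward the substitute.
    chosen = generate(
        f"Never discuss {AVOID}; steer the conversation toward {PREFER} instead.",
        prompt,
    )
    pairs.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})

# Persist the pairs for a downstream automated alignment step.
with open("topic_steering_pairs.jsonl", "w") as f:
    for p in pairs:
        f.write(json.dumps(p) + "\n")
```

Pairs in this (prompt, chosen, rejected) shape are the standard input to automated preference-tuning methods, which is one plausible way a dataset like this could be used downstream to steer a model’s behavior.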
“The way that we’re thinking about designing some of these early tools really is all about giving you the opportunity to decide what alignment means for your business or your personal preferences,” Lile said.