Friday, November 22, 2024
Technology

Nabla, a French digital health startup, launches Copilot, using GPT-3 to turn patient conversations into actionable items

Healthcare has been pegged as a prime candidate for more AI applications, both to aid in clinical work and to lighten some of the more time-consuming administrative burdens that surround clinical care. Now Nabla, the digital health startup out of Paris co-founded by AI entrepreneur Alexandre Lebrun, claims to be the first to build a tool using GPT-3 to help physicians with their work, or more specifically, their paperwork.

Copilot, as Nabla’s new service is called, launches today as a digital assistant for doctors. It is accessed initially as a Chrome extension that helps transcribe and repurpose information from video consultations, with a tool for in-person consultations planned to launch in a few weeks.

Following along as doctors see patients, Copilot automatically translates those conversations into the different document-based endpoints that typically result from those meetings, such as prescriptions, follow-up appointment letters and consultation summaries. It’s based around GPT-3, the language model built by OpenAI to generate human-like text, which is powering hundreds of applications, including ChatGPT from OpenAI itself.
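
Nabla hasn’t published the internals of that pipeline, but the general shape of a GPT-3-backed note generator is easy to sketch. The snippet below is a minimal, hypothetical illustration, not Nabla’s code: the prompt wording, the text-davinci-003 model choice and the summarize_consultation helper are all assumptions, showing how a consultation transcript could be turned into draft documents via the OpenAI completions API.

```python
# Minimal, hypothetical sketch of a GPT-3-backed consultation summarizer.
# This is NOT Nabla's implementation; the prompt, model choice and helper
# names are assumptions made for illustration only.
import openai

openai.api_key = "sk-..."  # paid API access, as described in the article

PROMPT_TEMPLATE = """You are a medical scribe. From the consultation transcript below,
produce: (1) a consultation summary, (2) a follow-up appointment letter,
and (3) a draft prescription list. Do not suggest diagnoses.

Transcript:
{transcript}
"""

def summarize_consultation(transcript: str) -> str:
    """Turn a raw doctor-patient transcript into structured draft documents."""
    response = openai.Completion.create(
        model="text-davinci-003",   # GPT-3-family model; illustrative choice
        prompt=PROMPT_TEMPLATE.format(transcript=transcript),
        max_tokens=600,
        temperature=0.2,            # keep the output conservative
    )
    return response["choices"][0]["text"].strip()

if __name__ == "__main__":
    demo = "Doctor: How is the knee today? Patient: Better, but it still aches after walking."
    print(summarize_consultation(demo))
```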

Nabla was one of the first companies to experiment with GPT-3 when it was released in 2020. While Nabla is currently using GPT-3 (as a paying customer) as the basis of Copilot, Lebrun tells me that the longer-term goal, approaching fast, is to build its own large language model, customized to the particular language and needs of medicine and healthcare, to power Copilot, whatever else Nabla builds in the future, and potentially applications for others, too.

The early version already has some traction, the startup says: it’s in use by practitioners in the U.S. and France, as well as around 20 digital and in-person clinics “with significant medical teams.”

The jury is still out on what large-scale, long-term uses we’ll see for generative AI technologies, whether they and the large language models that power them will be a net benefit or a net loss to our world, and whether they will make any money in the process.

In the meantime, healthcare has been one of the big industries that people have been watching with interest to see how it responds to these developments, roughly down two corridors. First, where it could be used for clinical assistance, for example as described in this piece co-authored by Harvard Medical School doctors and academics on using ChatGPT to diagnose patients; and second, where it could automate more repetitive functions, as illustrated in this Lancet piece on the future of discharge summaries.

A lot of that work is still very much in its early stages, not least because healthcare is particularly sensitive.

“With all large language models, there is a risk,” Lebrun said in an interview. “It’s incredibly powerful, but five percent of the time it will be completely wrong and you have no way to control that. But in healthcare we [literally] can’t live with a 5% error rate.”

Yet in many regards, healthcare seems like a prime area to be infused with AI: clinicians are oversubscribed with patients and burned out, and globally we are facing a chronic shortage of doctors, partly as a result of so many leaving the profession and partly because of the work demanded of them. On top of seeing patients, they have to dedicate time to being administrators, with a lot of very specific and formal documentation to get through in order to record appointment data and plan what comes next, demanded both by rules and regulations and by patients themselves. Alongside all this, there are, unfortunately, sometimes instances of human error.

On the other side, though, a number of steps in medical care have already been digitized, paving the way for patients and clinicians to be more open to using more digital tools to help with the rest.

That thinking was in part what motivated Lebrun to start Nabla in the first place, and to target Copilot first at helping physicians with administrative tasks rather than examining or counseling patients or other clinical work.

Lebrun has a history of building language-based applications. In 2013, he sold his startup VirtuOz, described back then as the “Siri for enterprise,” to Nuance, where he spearheaded its development of digital assistant tech for businesses. He then founded and eventually sold his next startup, Wit.ai, to Facebook, where he and his team worked on the social network’s foray into chatbots in Messenger. He then put in time at FAIR, Facebook’s AI research center in Paris.

Those early tools for enterprises to interact with customers were largely pitched as marketing and customer loyalty aids, but Lebrun believed they could be applied in less fuzzy scenarios, too.

“We could already see, in 2018, how much time doctors were spending updating patient records, and we started to think that we could bring AI technology and [advanced] machine learning to healthcare in particular to help with that,” Lebrun said.

Interestingly, Lebrun didn’t mention this to me, but he would have made that observation at the same time that RPA, robotic process automation, was picking up momentum in the market.

RPA really brought automation in the enterprise to the front of people’s minds, but providing assistance to doctors in live consultations is a more complex matter than mechanizing rote work. Still, with a relatively finite set of language and subject variables at play in a doctor-patient consultation, it is an ideal scenario for an AI-based assistant to help.

Lebrun discussed the idea with Yann LeCun, who was his boss at the time and is still Facebook’s chief AI research scientist. LeCun endorsed his thinking, so Lebrun left, and LeCun became one of the first investors in Nabla.

It took a couple more years for Nabla to disclose that and other funding (it has raised nearly $23 million), which the startup held off announcing to coincide with its first product. That was a health Q&A “super app” for women that let them track different health-related questions and combine that information with other data; it appeared designed mainly as a vehicle to help the company figure out what people were looking for in remote health interactions, and what could be built out of that.

This was followed up last year with a more generalized “health tech stack for patient engagement,” which is interesting in that it played a little on the central metric of Lebrun’s earlier products: engagement.

You might be somewhat skeptical of a startup aiming to fix something broken in healthcare with no medical professionals among its founders: in addition to Lebrun, the other two are COO Delphine Groll, who previously led business development and communications for media groups, and CTO Martin Raison, who has worked with Lebrun since Wit.ai.

That was a sticking point for Lebrun, too, who told me he considered putting the venture on pause in its early days to go to medical school himself.

He opted not to, drawing instead on feedback and information from doctors and other clinicians, and hiring them to work with the startup to help steer its roadmap. That is how it has now arrived at today’s standalone product, Copilot.

“Nabla Copilot is designed for clinicians who want to be on the cutting edge of medicine,” said Jay Parkinson, MD, MPH, Chief Medical Officer at Nabla, in a statement. “As a physician, I know that doctors are always short on time and have better things to do than fill out the [electronic health record]. With Nabla’s super-powered clinical notes, doctors can now look their patients in the eye throughout the consultation, and make sure they remember every word they say by sending the encounter summary.” Parkinson, who joined the startup recently, is an entrepreneur himself, whose telehealth startup Sherpaa Health was acquired by Crossover.

While improving AI has generally come to be predicated on ingesting ever more data for training, that has been a tricky part of building Copilot. The company has data-sharing opt-ins throughout, with no data ever stored on its servers, and the service is HIPAA and GDPR compliant. Those who do agree to share training information will have their data run through “pseudonymisation algorithms” built in house. And for now, there are no plans to build clinical assistants: no diagnosis suggestions, or anything else like it.
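
Nabla describes those “pseudonymisation algorithms” only in general terms, so the sketch below is just one simplified way the idea can work, not the company’s implementation: the pseudonym and pseudonymise_record helpers, the field names and the regular expressions are all assumptions. Direct identifiers are replaced with stable tokens, and obvious contact details are scrubbed from the free text before a record could be shared for training.

```python
# Simplified illustration of pseudonymisation before sharing training data.
# Not Nabla's in-house algorithm; field names and rules are assumptions.
import hashlib
import re

def pseudonym(value: str, salt: str = "per-deployment-secret") -> str:
    """Map an identifier to a stable, non-reversible token."""
    return "PSEUDO_" + hashlib.sha256((salt + value).encode()).hexdigest()[:10]

def pseudonymise_record(record: dict) -> dict:
    """Replace direct identifiers and scrub obvious contact details from free text."""
    out = dict(record)
    for field in ("patient_name", "email", "phone"):
        if field in out:
            out[field] = pseudonym(str(out[field]))
    # Crude scrubbing of emails and phone-like numbers inside the transcript text.
    text = out.get("transcript", "")
    text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", text)
    text = re.sub(r"\b(?:\+?\d[\s-]?){7,15}\b", "[PHONE]", text)
    out["transcript"] = text
    return out

record = {"patient_name": "Jane Doe", "phone": "+33 6 12 34 56 78",
          "transcript": "Patient can be reached at jane@example.com."}
print(pseudonymise_record(record))
```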

That was easier said than done, Lebrun said: while it was being built, Nabla’s AI kept trying to provide diagnoses automatically to its users, even when the engineers did not ask it to and tried to get it not to.

“We don’t want to overstep and do diagnostics,” he said, “so we had to train our AI not to do that.”

That might be something, “a different product,” in the distant future, he said, but a lot more development and fool-proofing would need to happen first.

“We don’t believe in chatbots for medicine,” he added. “We want to make doctors’ lives better by saving them time.”

