
AI could be the drunk uncle in health care—or fix our broken systems

Artificial intelligence (AI) may have its skeptics in health care systems around the world, but we can’t afford to ignore technologies that could alleviate the mounting pressures on overstretched infrastructure.

From automating administrative tasks and assisting with clinical decisions to reducing wait times and interpreting scans, AI offers a path forward that allows physicians to spend more time with their patients while maintaining high standards of care.

To fix our broken health care systems, we can’t rely on the status quo. Progress requires stepping outside the norm—and building trust in AI as a vital tool to overcome these challenges.

AI’s promise

With ever-increasing demands on their time, health care professionals are at a breaking point. Doctors now handle over 130,000 consultations over the course of their careers and spend nearly 34% of their time on administrative tasks. And as populations grow, demand will only rise, contributing to a predicted global shortfall of 10 million health care workers by 2030.

We need more health care professionals—or health care professionals with more time for patients. That’s where AI can help, by enhancing rather than replacing human capabilities, shouldering some of the routine tasks, and giving health care workers more time for the profoundly human aspects of their roles: building relationships and interacting with patients.

But it isn’t all about automating administrative tasks. By offering insights from vast medical knowledge and guiding health care professionals toward the best course of action, these tools can reduce errors and make health care smarter. And by promoting a shift toward a more proactive, preventive model of care, AI has the potential to reduce strain on health care systems.

How things went astray for AI in health care

There’s more than one explanation. But a key factor is the margin for error that comes with some of the most popular AI tools, particularly black-box large language models (LLMs) such as GPT-4.

Their introduction has generated plenty of hype. Developers have been quick to capitalize on free access to vast amounts of data, and tech-savvy doctors have been just as quick to lean on the models’ seemingly limitless insights.

While the benefits of automating burdensome tasks with AI are clear, it’s important to tread carefully. Inevitably, some of these tools are regressing toward the mean: play around with them long enough and you begin to notice the flaws. It’s like a drunk uncle at a dinner party. He might speak with confidence and seem to know what he’s talking about, but after a while the cracks appear and you realize most of what he’s saying is nonsense. Do you trust what he says the next time he comes around? Of course not.

LLMs are only as good as the data they’re trained on, and the issues stem from the vast amounts of unvetted, publicly available internet data many of them use. In health care, this creates an inherent risk: an AI tool might offer a clinical recommendation based on credible research, but it might just as easily offer one based on dubious advice from a casual blog. These inconsistencies have made health care professionals wary of AI, fearing that inaccurate information could harm patient care and lead to serious repercussions.
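To make the provenance problem concrete, here is a minimal sketch of the kind of filter a retrieval-based clinical assistant could apply before a model ever sees a document. The Document class, the allowlist of domains, and the surrounding workflow are illustrative assumptions for this sketch, not any vendor’s real API.

    # Hypothetical sketch: only documents from vetted clinical sources
    # reach the model; a casual blog post never makes it into the prompt.
    from dataclasses import dataclass
    from urllib.parse import urlparse

    # Illustrative allowlist; a real deployment would maintain a governed list.
    VETTED_DOMAINS = {
        "pubmed.ncbi.nlm.nih.gov",
        "cochranelibrary.com",
        "nice.org.uk",
    }

    @dataclass
    class Document:
        url: str
        text: str

    def vetted_only(candidates: list[Document]) -> list[Document]:
        """Drop any retrieved document whose source domain is not vetted."""
        return [d for d in candidates if urlparse(d.url).netloc in VETTED_DOMAINS]

    docs = [
        Document("https://pubmed.ncbi.nlm.nih.gov/12345/", "peer-reviewed trial"),
        Document("https://random-wellness-blog.example.com/cures", "dubious advice"),
    ]
    print([d.url for d in vetted_only(docs)])  # only the PubMed document survives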

Added to this, the regulatory environment around health care AI has been patchy, particularly in the U.S., where the framework has only recently started catching up with European standards. That gap created a window in which some vendors could navigate around regulations, sourcing information from third parties and pointing the finger elsewhere when concerns about data quality and accountability arose.

Without strong regulatory frameworks, it’s difficult for health care professionals to feel confident that AI tools will adhere to the highest standards of data integrity and patient safety.

How we can fix it

To be provocative: the way to rebuild trust in health care AI is to be, quite frankly, more boring. Health care professionals are trained to rely on research, evidence, and proven methods, not magic. For AI to gain their trust, it needs to be transparent, thoroughly tested, and grounded in science.

This means AI providers being upfront about how our tools are developed, tested, and validated: sharing research, publishing papers, and being transparent about our processes and the hoops we have jumped through to build these tools, rather than selling them as some kind of silver bullet. It also means having the right people in place: expert engineers and researchers capable of understanding the extremely complex, continually evolving LLMs we are working with, people who can ask the right questions and set models up with the right guardrails to ensure we’re not putting the drunk-uncle version of AI into production.
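As one illustration, a production guardrail might refuse to release a generated recommendation unless it cites at least one vetted reference and the model’s self-reported confidence clears a threshold. Every name here (the citation list, the confidence score, the 0.8 cutoff) is a hypothetical assumption for the sketch, not a description of any shipping system.

    # Hypothetical guardrail: release a model's recommendation only if it
    # cites a vetted source and clears a confidence threshold; otherwise
    # fall back to a safe refusal instead of a confident-sounding guess.
    from urllib.parse import urlparse

    VETTED_DOMAINS = {"pubmed.ncbi.nlm.nih.gov", "nice.org.uk"}
    REFUSAL = "No validated recommendation available; please consult a clinician."

    def release_or_refuse(answer: str, citations: list[str],
                          confidence: float, threshold: float = 0.8) -> str:
        cites_vetted = any(urlparse(c).netloc in VETTED_DOMAINS for c in citations)
        return answer if (confidence >= threshold and cites_vetted) else REFUSAL

    print(release_or_refuse(
        "Consider therapy X, per the cited trial.",
        citations=["https://pubmed.ncbi.nlm.nih.gov/67890/"],
        confidence=0.92,
    ))  # released: vetted citation and confidence above the threshold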

We also need to mandate that health care AI tools are trained only on robust health care data, rather than on the unfiltered mass of internet content. As in any field, feeding models industry-specific data can only improve the accuracy and quality of the information they record, process, and use to generate recommendations. These improvements are not only essential for patient safety; they will also yield insights that could sharpen our future ability to detect disease and personalize treatment plans, improving patient outcomes.
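In practice, that discipline starts with a curation pass over the training corpus. The sketch below keeps only records whose provenance is on an approved list and which carry a clinician-review flag; the field names and source categories are assumptions made purely for illustration.

    # Hypothetical curation pass: keep only training records with approved
    # clinical provenance and clinician sign-off; discard everything else.
    APPROVED_SOURCES = {"peer_reviewed_journal", "clinical_registry", "deidentified_ehr"}

    def curate(records: list[dict]) -> list[dict]:
        """Keep only records with approved provenance and clinician review."""
        return [
            rec for rec in records
            if rec.get("provenance") in APPROVED_SOURCES and rec.get("clinician_reviewed")
        ]

    corpus = [
        {"provenance": "clinical_registry", "clinician_reviewed": True, "text": "..."},
        {"provenance": "web_scrape", "clinician_reviewed": False, "text": "..."},
    ]
    print(len(curate(corpus)))  # 1: the scraped record is dropped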

A solid regulatory framework will help underpin efforts to improve data quality, and markets are at last beginning to wake up to its importance. For health care organizations looking to invest in AI data-processing tools, vendor adherence to standards such as ISAE 3000, SOC 2 Type 2, and C5 should be non-negotiable, reflecting respect for and commitment to data integrity.

And we can’t afford to be complacent. Being the most innovative also means being the most responsible. As AI continues to evolve, our community will need to engage actively with regulation to keep pace and safeguard against the potential overreach of generative AI technologies.

If we can get all of this right, the benefits of restoring trust in AI for health care are immense.

Ultimately, by addressing the trust gap in AI, we can unlock its potential to transform health care, making it more efficient, effective, and patient-centered.


The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

