An AI Model Built With Healthcare in Mind
Healthcare has offered some of the most promising opportunities to leverage AI, from streamlining workflows to automating routine tasks to assisting in clinical decisions. At Leidos, we were using large language models to support our health customers even before ChatGPT popularized them in the fall of 2022. Interest in healthcare AI has only climbed since the relatively recent availability of generative AI (genAI) models that can process and create text, speech, images, videos, and more.
But the excitement has also been tempered by the daunting challenges of applying AI to the complex, low-margin-for-error world of healthcare. Questions have arisen about security, bias, and the level of trust that can be placed in AI’s output.
Srini Iyer, senior vice president and chief technology officer for the Leidos Health & Civil Sector, and Ken Su, Google’s senior outbound product manager in the CloudAI for Healthcare Group, recently sat down to discuss these opportunities and challenges for healthcare AI. The two groups have been collaborating on a leading-edge healthcare AI application, enlisting a key strategy: building the application around a domain-specific genAI model specially developed for healthcare rather than a general model.
Domain-specific vs. foundational
Iyer described how Leidos wanted to focus on improving an existing Leidos application. That application, currently based on conventional AI tools, has significantly shortened the process of obtaining approval for disability benefits, from months to days.
“But we wanted to compress the approval processing time even further using genAI,” said Iyer. With this aim in mind, Leidos engaged Su’s team as collaborators. They decided to build the new application around Google’s Medical Pathways Language Model 2, or Med-PaLM 2—a healthcare-specific domain model.
Both the collaboration and choice of model proved to be successful. “In a span of about six weeks, we were able to go from a concept to a demo version of a full-stack genAI application,” Iyer said.
A healthcare-specific genAI platform offers several advantages over a general, or foundational, model, explained Su. Because it is trained from the beginning on vast amounts of medical data, it can respond to medical questions with greater accuracy and relevance compared to foundational models, and it is more scalable and less costly.
“Foundational models are great,” said Su, “but they don’t quite meet the needs of healthcare applications. We’re excited about where a domain-specific model can take us.”
Iyer offered a small but telling example: “A generic AI model might assume that anything labeled positive is good, but if you tested positive in a medical exam, there are reasons to be concerned. At the same time, nobody wants to be negative in the financial domain,” he explained.
However, a healthcare-specific model would know that a positive result in a healthcare setting can signal an urgent problem. “Understanding the different meanings of words and other data in healthcare is one reason it’s critical to use a domain-specific model,” Iyer added.
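As a toy illustration of the point (not any production model—the domains, labels, and mappings below are purely hypothetical), the same label can flip polarity depending on domain:

```python
# Illustrative only: the same label means different things in different domains.
DOMAIN_POLARITY = {
    ("medical", "positive"): "concerning",    # e.g., a positive test result
    ("medical", "negative"): "reassuring",
    ("finance", "positive"): "favorable",     # e.g., positive earnings
    ("finance", "negative"): "unfavorable",
}

def interpret(domain: str, label: str) -> str:
    """Resolve a label's meaning in its domain, defaulting to 'unknown'."""
    return DOMAIN_POLARITY.get((domain, label), "unknown")
```

A domain-specific model learns these distinctions from its training data rather than from an explicit lookup table, but the failure mode a generic model risks is the same: applying one domain’s polarity to another.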
Baking in security and reliability
It’s well known that genAI models can “hallucinate,” that is, produce plausible-sounding but incorrect answers. Healthcare only raises the bar on the need to eliminate such errors, said Su. “Eighty-five percent accuracy is pretty good in most industries,” he explained. “That won’t cut it in healthcare.”
To build more trustworthiness into the Leidos application, the collaborators enabled its AI model with “retrieval-augmented generation,” or RAG, which allows the model to ground its responses in the latest available data without having to retrain the model.
“If a patient has diabetes, you’d want to make sure that anything the application does for that patient takes into account the latest diabetes guidelines,” said Su.
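In rough outline, RAG retrieves the most relevant reference documents for a query and folds them into the model’s prompt, so answers reflect current guidelines without retraining. The sketch below uses naive keyword overlap for retrieval and a hypothetical prompt format; it is illustrative only, not the Leidos/Google implementation:

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Ground the model's answer in the retrieved guideline text."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Hypothetical guideline snippets standing in for a real document store.
guidelines = [
    "2024 diabetes guideline: target HbA1c below 7% for most adults.",
    "Hypertension guideline: blood pressure goal under 130/80 mmHg.",
]

prompt = build_prompt("What HbA1c target applies to a diabetes patient?", guidelines)
```

When the guideline store is updated, the next query retrieves the new text automatically—no model retraining required, which is the core appeal of RAG in fast-moving clinical domains.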
Iyer added that further protection comes from ensuring that a human remains in the loop in anything the AI application does. “That was critical for us,” he said. “You can't just let the model run on autopilot. We have a lot of people who go through the model’s output to verify and validate that it’s right.”
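One common way to keep a human in the loop is a confidence gate: low-confidence model outputs are routed to a review queue before any decision is finalized. The sketch below is a minimal, hypothetical version of that pattern—the threshold, field names, and scores are illustrative assumptions, not details of the Leidos application:

```python
# Assumed cutoff for illustration; healthcare workflows demand high accuracy.
REVIEW_THRESHOLD = 0.95

def route(model_outputs: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split model outputs into auto-accepted and human-review queues."""
    auto, review = [], []
    for item in model_outputs:
        if item["confidence"] >= REVIEW_THRESHOLD:
            auto.append(item)
        else:
            review.append(item)  # a human verifies and validates these
    return auto, review

outputs = [
    {"claim_id": 1, "confidence": 0.99},
    {"claim_id": 2, "confidence": 0.80},
]
auto, review = route(outputs)
```

In practice, teams like the one Iyer describes may review far more than just the low-confidence outputs, but a gate of this kind ensures the model never runs on autopilot.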
Human oversight of AI is part of the Framework for AI Resilience and Security, or FAIRS, developed by Leidos. FAIRS focuses on safe and ethical AI deployment and helps ensure that AI systems are resilient and secure. “We built this whole project on that framework,” said Iyer. “The security and reliability are baked in.”
The collaborators were also mindful of the potential problem of bias—that an AI model may treat different types of people differently if it has been trained on data that reflects existing or past biases.
“There used to be a lot of bias in the notes that clinicians would put in patient records, in terms of different patient demographics,” explained Su. “The model is trained on some of that data, so we have to make sure that doesn’t get into the model.”
Scaling for future demand
Leidos is now looking to scale up the application to handle a demanding production environment. “We process thousands of reports every week and look at millions of pages of documents,” said Iyer.
He added that although the project is only a demo now, it was built with scaling in mind—including the ability to process not just text, but also images, video, and audio.
Iyer is also already thinking about other applications in healthcare that can benefit from a domain-specific genAI model.
“The entire process of claims management is an opportunity to use AI to reduce waste and abuse,” he said. “I’d also like to see if genAI can address the issue of data interoperability and eventually provide clinical decision support. There are just so many use cases for AI in healthcare; we can leverage the technology in numerous ways.”