Computer scientists are finding new ways to use artificial intelligence (AI) to assist in healthcare settings. At Dartmouth’s and Dartmouth-Hitchcock’s Norris Cotton Cancer Center (NCCC), a biomedical informatics team led by Saeed Hassanpour, PhD, working in collaboration with the Department of Pathology and Laboratory Medicine, has developed several powerful deep learning models. These include tools that can classify lung cancer subtypes, predict the likelihood that a high-risk type of breast lesion is actually cancerous, and locate cancerous esophageal tissue without time-consuming manual effort. While these models have proven their accuracy through careful validation, they had not been put to the test in an actual clinical setting by practicing physicians with real-time patient data.
“AI may be reliably accurate, but will be useless in the clinic if the tools are too complex to understand by anyone who is not a computer scientist,” says Hassanpour. “The tools must be easy to use by physicians who are busy taking care of patients, without adding time or interrupting their regular clinical workflows.”
Ease of use in the clinic was a challenge Hassanpour’s team eagerly took on. In 2020, the team trained a deep neural network to classify colorectal polyps removed during screening colonoscopy into four major types based on their risk of progressing to cancer. The algorithm’s output tells physicians whether a patient is at low, medium, or high risk of the polyps developing into colorectal cancer, helping physicians and patients decide on the best course of action.
The model not only produced results demonstrating accuracy and sensitivity at the level of practicing pathologists, but also withstood evaluation on broad datasets spanning multiple institutions across the U.S. “This study was one of the first to show a deep neural network that is generalizable to data from multiple medical centers,” says Hassanpour, who is also a member of NCCC’s Cancer Population Sciences Research Program. “A challenge in the field of deep learning for medical image analysis is collecting widespread data. Here, we had access to 238 slides from 24 institutions across 13 U.S. states, which let us show that the AI models that we train are broadly generalizable.”
While the team was encouraged by the success of the 2020 study, those results were based on retrospective data and generated offline, a necessary first step in testing. Hassanpour’s team was now ready to bring this promising technology from bench to bedside by putting it in the hands of practicing pathologists in a live clinical setting.
Putting it to the test
Hassanpour’s team stood up a clinical trial involving 15 pathologists from Dartmouth-Hitchcock Medical Center (DHMC) and Cheshire Medical Center in Keene, New Hampshire. The trial compared the performance of the deep learning model, delivered as part of an AI-augmented digital system, against pathologists’ standard use of a microscope for the same polyp classification task.
Before using the AI-augmented digital system, pathologists watched a five-minute training video, read a brief summary of how the model works and how its results are generated, and practiced using a set of 10 sample slides to become familiar with the system.
Dartmouth-Hitchcock pathologist Arief A. Suriawinata, MD, served as chief pathologist collaborator on this project. “The AI-augmented digital system developed by Saeed’s team was able to assist pathologists in making classification of colon polyps a breeze,” says Suriawinata.
Results are in
Results of the trial showed that the AI-augmented digital system significantly improved accuracy compared to the microscope method. The average evaluation time across all pathologists also dropped consistently when they used the digital system. In contrast, reading time with the microscope, a tool with which pathologists have many years of experience, did not change significantly.
Overall, the average System Usability Scale Score for the digital system indicated that the usability was "good," which Hassanpour says is encouraging considering the system's short training and use period. Pathologists also stated that the digital system was "easy to use and navigate," "intuitive to use," and that it "pans in and out quickly and smoothly."
Notably, half of the participating pathologists reported that they would use a version of this tool in clinical practice. Twelve of the 15 commented that the experience either positively changed or reinforced their positive opinions about the role of AI in clinical practice.
“In the future, we will develop AI-augmented digital pathology systems and use them to be more efficient, accurate and standardized,” says Suriawinata, who notes these systems will potentially help many pathologists in diagnosing patient samples.
Hassanpour’s team is now working with a leading digital pathology startup to bring the technology to market for routine clinical use. The system shows promise for preventing cancer through improved surveillance recommendations and accuracy, cutting costs, reducing stress for patients, and ultimately lowering overall colorectal cancer deaths.