April 13, 2021 at 05:42PM

With an impending healthcare worker shortage and a relatively high rate of burnout among physicians and nurses, there’s a significant buzz around artificial intelligence (AI) and its ability to take some tedious tasks off clinicians’ plates, such as maintaining electronic health records (EHR). Furthermore, AI-based medical devices will be key to enabling greater care-at-home options for patients going forward. A recent Stanford study, however, shines a light on some issues with U.S. Food and Drug Administration (FDA) approval of AI-based devices and calls for caution moving forward.

The study, published in the journal Nature Medicine earlier this month, examined all medical AI devices approved by the FDA between January 2015 and December 2020. Of the 130 devices cleared during this period, only four had undergone prospective studies; the other 126 were evaluated retrospectively only. A retrospective study looks back at data that were collected before the evaluation, while a prospective study follows the deployment of the device (or drug, or technique) in the real world, measuring efficacy and watching for potential issues. Prospective studies, the researchers argue, are especially important for evaluating AI tools that aid clinical decision-making, because behavior in a live clinical setting can differ from performance in more predictable environments such as test labs.

“For most devices, the test data for the retrospective studies were collected from clinical sites before evaluation, and the endpoints measured did not involve a side-by-side comparison of clinicians’ performances with and without AI,” write the researchers. “More prospective studies are needed for full characterization of the impact of the AI decision tool on clinical practice, which is important, because human–computer interaction can deviate substantially from a model’s intended use.”

The Stanford study also revealed that 93 of the devices approved by the FDA did not include information about multi-site testing. Of the 41 that did report this information, four devices were tested at only one site and eight at only two. According to the researchers, this suggests that a substantial proportion of approved devices may have been evaluated at only a small number of sites. That matters because, without testing across geographically and demographically diverse populations, a device can perform well in evaluation yet still encode bias against patient groups it never saw. Furthermore, the researchers found that 45 percent of the approved devices did not report a sample size, and among the 71 that did, the median was only 300.
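The headline figures above hang together arithmetically, which is easy to verify. A minimal sketch (Python is used purely for illustration; the inputs are the article's summary statistics, not the study's raw data):

```python
# Counts as reported in the article's summary of the Stanford study.
total_devices = 130      # FDA-approved AI devices, Jan 2015 - Dec 2020
prospective = 4          # devices evaluated with prospective studies
with_sample_size = 71    # devices that reported a sample size

# Derived figures quoted in the article.
retrospective_only = total_devices - prospective        # the "other 126"
without_sample_size = total_devices - with_sample_size  # 59 devices

print(f"retrospective-only: {retrospective_only} "
      f"({retrospective_only / total_devices:.1%} of approvals)")
print(f"missing sample size: {without_sample_size / total_devices:.0%}")
```

The last line reproduces the article's "45 percent" figure (59 of 130 devices), confirming the two ways the statistic is stated are consistent.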

“The number of approvals for AI devices has increased rapidly in the past 5 years, with over 75% of approvals coming in the past 2 years and over 50% coming in the past year,” the researchers wrote. “However, the proportion of approvals with multi-site evaluation and reported sample size has remained stagnant during the same period of time.”

FDA Action

For its part, the FDA is aware of the issues with its current approval process. In April 2019, the agency put out a discussion paper proposing a new way to evaluate AI and machine-learning software. Following feedback from stakeholders in the software-as-a-medical-device (SaMD) sector, it released its first Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan in January of this year. The plan takes direct aim at the lack of prospective data in AI medical device evaluations and signals that the agency intends to take a more active role in ensuring that proper studies are conducted going forward.

“This action plan outlines the FDA’s next steps towards furthering oversight for AI/ML-based SaMD,” said Bakul Patel, director of the Digital Health Center of Excellence in the Center for Devices and Radiological Health (CDRH). “The plan outlines a holistic approach based on total product lifecycle oversight to further the enormous potential that these technologies have to improve patient care while delivering safe and effective software functionality that improves the quality of care that patients receive. To stay current and address patient safety and improve access to these promising technologies, we anticipate that this action plan will continue to evolve over time.”

 
