It's Time To Prescribe Frameworks For AI-Driven Health Care
In this Law360 article, partners Kate Hardey and Robert Kantrowitz and associate Micah Desaire discuss new opportunities and the legal and regulatory challenges emerging as health care providers begin to adopt AI in clinical settings.
The health care industry stands to gain significantly from the surge of innovation and interest in artificial intelligence and machine learning.
AI and ML have the potential to transform all aspects of patient care and clinical decision making. AI tools such as predictive analytics, natural language processing and ML algorithms can enhance diagnostic accuracy, streamline drug and device development, improve health outcomes, personalize care delivery, and assist with patient monitoring.
AI and ML may soon become essential aspects of clinical decision making. In a recent American Hospital Association-affiliated survey, 48% of hospital CEOs and strategy leaders expressed confidence that health systems will be positioned to utilize AI in clinical decision making by 2028.[1]
As health care providers begin to adopt AI in clinical settings, new legal and regulatory challenges are emerging. Key considerations for AI and ML tools include the U.S. Food and Drug Administration's classification and regulation of AI and ML products, as well as issues related to fraud, waste and abuse, and the assignment of liability in malpractice claims.
A clear legal and regulatory framework is essential to balance innovation with maintaining patient safety and other ethical standards.
The Promise of AI in Clinical Decision Making
Many are optimistic that AI will enhance health care delivery, improving everything from managing clinical paperwork to enhancing providers' ability to diagnose and treat illnesses.
AI has already shown it can support health care providers with administrative functions, such as natural language processing for clinical documentation and generative AI for patient outreach, as well as clinical functions, such as predictive analytics for patient outcomes, image analysis, and ML algorithms used in diagnostics and treatment planning.
The increased use of AI in clinical decision making may require a reexamination of providers' responsibilities and competencies in relation to the use of this technology.
A research letter published in July found that clinical notes written by ChatGPT were largely indistinguishable from those written by medical residents.[2] The University of Texas is now offering a dual degree program, wherein students can simultaneously earn their doctor of medicine and a master of science in AI.[3]
State laws defining the practice of medicine may never sanction an AI application functioning independently of a human health care provider as a matter of public policy, for the simple reason that AI does not possess human characteristics like good moral character, honesty, conscience and experience, all of which are prerequisites for practicing medicine.
Nonetheless, providers may soon be expected to integrate AI tools into their clinical decision making processes as a complement or "second opinion" alongside their human expertise.
Stakeholders must pay attention to existing and developing regulatory frameworks around the use of AI, particularly as it relates to clinical decision making.
Currently, there is no singular regulatory framework for monitoring the safety and efficacy of AI applications used by providers in the health care space; however, the FDA has taken the lead in early regulatory efforts regarding the use and development of AI and ML.
FDA Regulations and Product Development Considerations
The FDA is continuously evaluating its approach to, and understanding of, the use of AI and ML.[4]
In 2021, more than 100 drug and biologic applications submitted to the FDA included AI or ML components.[5] Over the last several years, the FDA has similarly cleared hundreds of AI- and ML-enabled medical devices used in radiology, neurology, ophthalmology and cardiology settings.[6]
While AI and ML are rapidly transforming the drug and device development landscape, these innovations must meet appropriate standards for safety and security without compromising a product's ability to satisfy applicable FDA requirements for the drug or device.
To keep pace with the regulatory oversight challenges presented by AI and ML, the FDA developed the Digital Health Center of Excellence to engage stakeholders in the development of policy and regulatory oversight of digital health products, medical device software, clinical decision support software, device cybersecurity and much more.
There are several FDA guidance documents for the use of AI and ML in drug and device development.
Using AI and ML in the Development and Manufacturing of Drug and Biological Products
In the spring, the FDA published two discussion papers for public comment related to various uses of AI and ML in drug development and manufacturing processes, such as monitoring equipment performance metrics to address manufacturing deviations and using AI and ML to support product quality testing.[7]
In 2021, the FDA issued draft guidance addressing the use of digital health technology[8] for remote data acquisition through mobile phones or smart watches in clinical investigations. In the future, the FDA may also address how AI can be used to design and test the potential outcome of a clinical trial protocol.
Marketing Submission Recommendations for a Predetermined Change Control Plan for AI/ML-Enabled Device Software Functions
In April, the FDA issued this draft guidance to assist manufacturers of AI/ML-enabled device software, including software as a medical device, or SaMD, in evaluating the effect of various device modifications.
Importantly, the FDA recognizes that part of SaMD's advantage is improving performance through iterative modifications. Predetermined change control plans may greatly benefit companies by allowing manufacturers to obtain premarket authorization for pre-specified automatic and manual modifications that may be made to a SaMD without resubmitting the device for FDA review.
Medical Device Cybersecurity
All devices that meet the definition of a cyber device must meet certain cybersecurity requirements.[9] In September, the FDA published final guidance addressing quality system considerations and premarket submission content for medical device cybersecurity.[10]
This guidance applies to devices, as well as drug-device and biologic-device combination products, with cybersecurity considerations, such as devices that contain software, firmware or programmable logic. The guidance document outlines important considerations for software validation, risk analysis, security objectives integrated into device design (e.g., threat modeling) and interoperability.
Clinical Decision Support Software
In September 2022, the FDA published final guidance on the regulation of clinical decision support software. This guidance sets forth the specific criteria that must be met for clinical decision support software to be excluded from device regulation.
There are now four specific criteria that must be met to qualify as "non-device" clinical decision support software. Significantly, the FDA's final guidance substantially narrows the types of software that qualify as non-device software, meaning many of these products are subject to FDA regulation as medical devices.
In addition to their use in drug and device development, AI tools can be applied to products already on the market, for example, by recommending certain medications and treatment protocols. Beyond guidance and regulation at the federal level, as the use of AI and ML continues to revolutionize treatment options and treatment decisions, additional federal and state laws will likely be enacted to further address consumer safety and protection.
Potential for Increasing Fraud, Waste and Abuse
As AI becomes more prevalent in clinical settings, AI developers and health care providers must consider both the advantages of such technologies and the potential for increasing fraud, waste and abuse in health care delivery.
There is hope that AI can assist with accurate diagnoses and help minimize overtreatment by accurately predicting health conditions or identifying early-stage diseases, e.g., identifying precancerous lesions.[11] For example, AI can identify patterns in large datasets that humans have yet to appreciate, which can help providers focus their attention on the variables most relevant to diagnosing patients.
However, there is also the potential for AI to assign importance to variables with no clinical significance. If AI is programmed on faulty assumptions or flawed data, it may replicate flawed medical advice.[12] False positives or faulty programming of AI systems may lead to unnecessary treatments or procedures and increase waste of resources in health care.
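To illustrate the risk, the following minimal sketch, written in Python with synthetic data and the open-source scikit-learn library (the data and feature names are hypothetical and purely illustrative), shows how a model trained on flawed data can assign substantial importance to a variable with no clinical significance, such as the site where a patient happened to be imaged:

```python
# Illustrative sketch only: synthetic data showing how a model can latch onto
# a variable with no clinical meaning when the training data are flawed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 5000

# Hypothetical features: one genuine clinical signal, one pure noise column,
# and a non-clinical "site ID" that correlates with the outcome only because
# sicker patients were imaged at site 1 in this flawed training set.
clinical_signal = rng.normal(size=n)
noise = rng.normal(size=n)
disease = (clinical_signal + 0.5 * rng.normal(size=n)) > 0
site_id = np.where(disease, rng.binomial(1, 0.9, n), rng.binomial(1, 0.1, n))

X = np.column_stack([clinical_signal, noise, site_id])
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, disease)

# The model assigns substantial importance to "site_id" even though it carries
# no clinical meaning, a pattern that would not generalize to other sites.
for name, importance in zip(["clinical_signal", "noise", "site_id"],
                            model.feature_importances_):
    print(f"{name}: {importance:.2f}")
```

Detecting this kind of spurious signal generally requires careful validation across sites and patient populations before a tool is relied on in clinical decision making.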
AI presents similar challenges with respect to health care providers' billing and coding practices. On the one hand, payors and governments may use AI to help identify fraudulent billing, e.g., using ML to help spot upcoding or billing for services not rendered. On the other hand, ML algorithms may be programmed to suggest improper codes or billing practices to health care providers to generate more revenue.
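As a simplified sketch of the detection side, the following Python example, which assumes a hypothetical per-provider claims summary and the open-source scikit-learn library, shows how a payor might apply unsupervised anomaly detection to surface billing patterns that merit human review:

```python
# Illustrative sketch only: flagging providers whose billing patterns deviate
# from their peers, using hypothetical per-provider claims features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Hypothetical features per provider: share of visits billed at the highest
# complexity level, average billed units per visit, and annual claim volume.
typical = np.column_stack([
    rng.normal(0.15, 0.05, 500),   # high-complexity share
    rng.normal(1.2, 0.2, 500),     # units per visit
    rng.normal(800, 150, 500),     # annual claim volume
])
outliers = np.array([[0.85, 2.5, 1200], [0.70, 3.0, 400]])  # unusual profiles
providers = np.vstack([typical, outliers])

# An unsupervised anomaly detector labels likely outliers (-1); those flags
# warrant human review, not an automatic conclusion of fraud.
detector = IsolationForest(contamination=0.01, random_state=0).fit(providers)
flags = detector.predict(providers)
print("Providers flagged for review:", np.where(flags == -1)[0])
```

A flag from a model like this is a starting point for an audit rather than proof of misconduct; the same caution applies when providers evaluate codes suggested by AI tools.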
The potential for algorithm-driven upcoding may increase the risk of overpayment obligations for providers, in addition to exposure under state and federal fraud and abuse laws. While AI offers potential for fraud detection, there is a concomitant possibility of increased fraudulent billing practices.
AI developers and providers will need to carefully evaluate how to mitigate risks of abuses that may invite regulatory scrutiny or compromise care. These steps can include updating internal policies, procedures and monitoring efforts to account for this new AI landscape.
Malpractice and Liability
AI-assisted clinical decision making also presents legal challenges regarding malpractice and liability.
Traditional legal frameworks may need to evolve to handle the intricacies and nuances of AI-assisted clinical decision making. Proponents of AI in clinical decision making hope that its use will reduce medical errors and costly malpractice litigation. When errors do arise, however, new questions emerge regarding how to respond.
Patients, providers and policymakers should prepare to navigate the complex legal landscape surrounding malpractice and liability issues.
A primary legal issue is apportioning liability between health care providers and the developers of AI tools. As the use of AI in clinical decision making increases, it may become harder to establish responsibility for negligence or misconduct and to differentiate between improper oversight and errors embedded in AI tools and ML algorithms.
Key questions include: Who is responsible for errors when AI is used in clinical decision making? Will the aggrieved party look to the provider, the AI developer, the vendor or some combination thereof?
Additionally, proprietary AI tools and ML algorithms may be difficult to scrutinize and may not offer insight into the decisions they make, adding to the difficulty in assessing blame and legal liability.
Emerging legislation and jurisprudence will likely define the future parameters of malpractice liability as the use of AI in clinical decision making becomes the norm.
Conclusion
Further collaboration between state and federal regulatory bodies and interested stakeholders is necessary to evaluate and implement the regulatory proposals surrounding AI in health care.
The critical issue is balancing AI's benefits and innovations in health care while ensuring patient safety and provider accountability. AI developers and health care providers alike must monitor the regulatory landscape surrounding AI development and use and implement the appropriate controls to remain compliant with evolving regulations.
Moreover, health care providers must begin to consider implementing policies and procedures regarding AI in clinical decision making.
Providers should put proper checks and balances in place so that the provider's professional judgment has the final say in clinical decision making. As the use of AI becomes the standard, so will the expectation of robust AI compliance programs, including policies, procedures and personnel training.
_________
[1] See How AI Is Improving Diagnostics, Decision-Making and Care, American Hospital Association (May 09, 2023), https://www.aha.org/aha-center-health-innovation-market-scan/2023-05-09-how-ai-improving-diagnostics-decision-making-and-care.
[2] See Nayak, Ashwin, et al., Comparison of History of Present Illness Summaries Generated by a Chatbot and Senior Internal Medicine Residents, JAMA Intern Med. 183.9 (2023): 1026–1027, https://jamanetwork.com/journals/jamainternalmedicine/article-abstract/2806981.
[3] See Fish, Christi, Nation's First Dual Degree in Medicine and AI Aims to Prepare the Next Generation of Health Care Providers (Sept. 14, 2023), https://www.utsa.edu/today/2023/09/story/UTSA-UT-Health-first-dual-degree-in-medicine-and-AI.html.
[4] See Focus Area: Artificial Intelligence, FDA (Sept. 06, 2022), https://www.fda.gov/science-research/focus-areas-regulatory-science-report/focus-area-artificial-intelligence.
[5] See Artificial Intelligence and Machine Learning (AI/ML) for Drug Development, FDA (May 16, 2023). The FDA defines AI and ML as "a branch of computer science, statistics, and engineering that uses algorithms or models to perform tasks and exhibit behaviors such as learning, making decisions, and making predictions. ML is considered a subset of AI that allows models to be developed by training algorithms through analysis of data, without models being explicitly programmed."
[6] See Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices, FDA (Oct. 5, 2022), https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices.
[7] See Using Artificial Intelligence & Machine Learning in the Development of Drug and Biological Products, available at https://www.fda.gov/media/167973/download; Artificial Intelligence in Drug Manufacturing, available at https://www.fda.gov/media/165743/download. See also 88 Fed. Reg. 66460 (Sept. 27, 2023). The FDA reopened the comment period for the discussion paper Artificial Intelligence in Drug Manufacturing. Comments are due Nov. 27, 2023.
[8] The draft guidance defines digital health technology as "a system that uses computing platforms, connectivity, software, and/or sensors, for healthcare and related uses." See Digital Health Technologies for Remote Data Acquisition in Clinical Investigations, at 1 (December 2021).
[9] A "cyber device" is a device that (1) includes software validated, installed, or authorized by the sponsor as a device or in a device, (2) has the ability to connect to the internet, and (3) contains any such technological characteristics validated, installed, or authorized by the sponsor that could be vulnerable to the cybersecurity threats. 21 U.S.C. § 360n-2 (2023). The cyber device requirements apply to 510k, premarket approval applications (PMA), Product Development Protocol (PDP), De Novo applications and Humanitarian Device Exemptions (HDE), Investigational Device Exemptions (IDE), Biologics License Applications (BLA), and Investigational New Drug (IND) applications submitted after Mar. 29, 2023 and to any device changes the require FDA premarket review.
[10] Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions, available at https://www.fda.gov/media/119933/download.
[11] See Farina, Eduardo, et al. "An overview of artificial intelligence in oncology." Future Science OA 8.4 (2022): FSO787, and Shaffer, Kitt. "Can machine learning be used to generate a model to improve management of high-risk breast lesions?" Radiology 286.3 (2018): 819–821.
[12] See Aschwanden, Christine, Artificial Intelligence Makes Bad Medicine Even Worse, Wired (Jan. 10, 2020), https://www.wired.com/story/artificial-intelligence-makes-bad-medicine-even-worse/.