Considering The Future Of AI Regulation In The Health Sector
In this article for Law360, lawyers Robert Kantrowitz, Ruan Meintjes and Kayla McCallum discuss the Texas Responsible AI Governance Act, set to be considered in the state's 2025 legislative session.
In response to the rapid advancement of artificial intelligence systems in recent years, and concerns about their potential misuse and other risks absent appropriate guardrails, lawmakers at the federal and state levels have shifted their attention to establishing a regulatory framework to promote the responsible use of AI.
This is particularly true of the healthcare sector, where lawmakers have focused on transparency, bias and discrimination, safety, and consumer and individual protections.
To date, the greatest movement in establishing statutory guardrails on the use and development of AI has come from the states. States like Utah, Colorado and California have led the way in establishing frameworks to regulate the use of AI in healthcare, as other states look to adopt similar approaches.
More specifically, in the 2025 legislative session, Texas is set to consider the Texas Responsible AI Governance Act. In its current form, while adopting some key aspects of other AI laws, TRAIGA includes features that would make it the broadest comprehensive AI bill of its type to date.
Approaches at the Federal Level
As Congress has not passed comprehensive legislation tackling the use of AI in healthcare, federal guidance has been limited to executive branch action, such as presidential executive orders and guidance and rulemaking from federal agencies like the U.S. Department of Health and Human Services.
For example, former President Joe Biden's Executive Order No. 14110 directed HHS, among other agencies, to establish initiatives focusing on the safety, privacy, and responsible development and deployment of AI in the healthcare sector.[1]
Agency action generally reflected this order, such as rules on transparency and risk management from the Office of the Assistant Secretary for Technology Policy/Office of the National Coordinator for Health Information Technology,[2] and the U.S. Food and Drug Administration's recent recommendations on information needed throughout a product's life cycle for regulation of safety and efficacy.[3]
HHS aimed to put a finer point on the executive order's goals and issued its AI strategic plan, which focuses on:
- Catalyzing health AI innovation and adoption;
- Promoting trustworthy AI development, and ethical and responsible use;
- Democratizing AI technologies and resources; and
- Cultivating AI-powered workforces and organization cultures.[4]
The Trump administration appears to be making a shift in AI policy, as the Biden order was rescinded on Jan. 20.
President Donald Trump directed "departments and agencies to rescind all policies, directives, regulations, orders, and other actions taken under the Biden AI order that are inconsistent with enhancing America's leadership in AI."[5]
Most recently, Vice President JD Vance, while attending the Artificial Intelligence Action Summit in Paris, remarked, "[w]e believe that excessive regulation of the AI sector could kill a transformative industry just as it's taking off."[6]
Though some action by HHS under Biden may remain in effect or see some enforcement, the current administration has shown a greater focus on innovation incentives and investment in AI than on managing AI's risks. Thus, federal oversight of AI in healthcare remains in flux, and the regulatory landscape is likely to remain fluid.[7]
As we await clear guidance from the new administration, healthcare companies that develop or deploy AI should look to state governments that have begun to fill this gap by introducing and, in some cases, enacting laws intended to regulate the development and deployment of AI, including in the healthcare industry.
State Action on AI in Healthcare
In 2024 alone, state lawmakers introduced nearly 700 AI-related bills.[8] States that have enacted AI laws affecting the healthcare space have taken varied approaches to regulation, including in the level of scrutiny applied to such systems.
Three states with recent, notable activity in this space are California, Utah and Colorado.
In 2024, California passed more than a dozen AI laws focused on consumer protection. Some of these laws are industry-specific, while others address general AI issues like deepfakes.
In California, A.B. 3030[9] and S.B. 1120[10] received particular attention due to their impact on healthcare.
A.B. 3030 required healthcare entities that use AI to generate written or verbal patient communications pertaining to clinical information to include a disclaimer and clear instructions on how a patient can contact a provider or appropriate person.
S.B. 1120 established requirements for healthcare service plans or disability insurers that use AI for the purpose of utilization review or management.
Both bills became effective on Jan. 1.
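For healthcare entities deploying patient-facing generative tools, A.B. 3030's disclosure requirement lends itself to a concrete illustration. The following is a minimal sketch only, assuming a hypothetical wrapper function and disclaimer text of our own invention; the statute's precise wording, prominence and placement requirements should be confirmed against the law itself.

```python
# Minimal illustrative sketch of an A.B. 3030-style disclosure wrapper.
# The function name, disclaimer text and contact instructions are
# hypothetical; the statute's actual wording and placement rules govern.

AI_DISCLAIMER = (
    "This message was generated using artificial intelligence. "
    "To reach a human health care provider, please call the clinic "
    "or use the contact option in your patient portal."
)

def wrap_clinical_message(ai_generated_text: str) -> str:
    """Append an AI disclaimer and contact instructions to an
    AI-generated written communication containing clinical information."""
    return f"{ai_generated_text}\n\n---\n{AI_DISCLAIMER}"

if __name__ == "__main__":
    draft = "Your recent lab results are within normal ranges."
    print(wrap_clinical_message(draft))
```

In practice, the required prominence of the disclaimer and the form of the contact instructions would depend on the communication channel and the statute's text.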
In contrast to California's multibill approach, both Utah and Colorado have passed more comprehensive AI laws. The Utah Artificial Intelligence Policy Act, signed into law on March 13, 2024, requires anyone who "uses, prompts or otherwise causes generative [AI] to interact with a person," if asked, to clearly and conspicuously disclose to that person that the individual is interacting with AI as opposed to a human.[11]
More specifically, it requires persons who provide services of a regulated occupation, such as physicians, to prominently disclose when a person is interacting with AI in the provision of such services.[12] Notably, the Utah AI Act includes an "Artificial Intelligence Learning Laboratory Program" that is intended to encourage AI innovation while protecting consumers.
After the passage of the Utah AI Act, Colorado passed the Consumer Protections in Interactions with Artificial Intelligence Systems Act in May 2024.
The Colorado AI Act, set to go into effect on Feb. 1, 2026, adopts a risk-based approach by imposing separate obligations on developers and deployers of "high-risk AI systems," defined as "any artificial intelligence system that, when deployed, makes, or is a substantial factor in making, a consequential decision," which includes, among other enumerated items, healthcare services.[13]
The overarching goal of the law is to impose a duty of reasonable care on developers and deployers of AI to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination.
As state laws regulating AI have gained momentum, and guidance from the federal government remains in flux, states without an AI framework have begun to look to these early movers for guidance while adding provisions they believe will best promote the responsible use of AI within their borders. Texas is a recent example.
TRAIGA and the Near Future of AI Laws in the Healthcare Space
This past October, Texas Rep. Giovanni Capriglione, R-District 98, released a draft of H.B. 1709, also known as TRAIGA, which would "establish a comprehensive framework for the ethical development, deployment, and oversight of artificial intelligence (AI) technologies within Texas."[14]
Modeled after other state AI laws, albeit broader in some respects, TRAIGA, in its current form, imposes obligations on developers, distributors and deployers of high-risk AI systems, i.e., "artificial intelligence system[s] that, when deployed, make[] or [are] a contributing factor in making a consequential decision," which means "a decision that has a material legal, or similarly significant, effect on [a] consumer's access to, cost of, or terms of … a health-care service."[15]
It also broadly bans specific AI systems that pose an unacceptable risk, including those that manipulate human behavior, engage in social scoring, capture biometric identifiers, infer or interpret sensitive personal attributes or emotions, utilize personal attributes for harm, or produce unlawful visual material or deepfake videos in violation of the Texas Penal Code.
Notably, except for small businesses, TRAIGA applies to any person who "(1) conducts business, promotes, or advertises in [Texas] or produces a product or service consumed by [Texas] residents or (2) engages in the development, distribution, or deployment of a high-risk intelligence system in [Texas]."[16]
In general, the bill imposes a duty of reasonable care on developers, distributors and deployers "to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination" arising from the use of such systems.
Specifically, developers are required to consistently evaluate and investigate their high-risk AI systems; properly report to and inform applicable parties, e.g., of changes to AI models; take corrective action; keep detailed records; and, in certain cases, cease operation of unlawful systems.
Distributors must take action to address noncompliance such as withdrawing unlawful AI systems and, if applicable, informing the developers and deployers of such action.
Deployers of AI systems must conduct impact assessments after changes to the AI system and annual reviews to assess algorithmic discrimination; assign human oversight for consequential decisions concerning high-risk AI; suspend operation of noncompliant AI systems; and document noncompliant modifications or discrimination, as applicable.
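To make these deployer obligations concrete, below is a minimal sketch of the kind of impact-assessment record a deployer might maintain under a TRAIGA-style regime. The structure, field names and sample values are our own assumptions for illustration, not terms drawn from the bill.

```python
# Hypothetical impact-assessment record for a deployer under a
# TRAIGA-style regime; field names and sample values are illustrative
# assumptions, not terms drawn from the bill.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ImpactAssessment:
    system_name: str
    assessment_date: date
    trigger: str                        # e.g., "annual review" or "model update"
    consequential_decisions: list[str]  # decisions the system informs
    discrimination_findings: str        # summary of algorithmic-discrimination review
    human_oversight_owner: str          # person assigned to oversee consequential decisions
    remediation_steps: list[str] = field(default_factory=list)

assessment = ImpactAssessment(
    system_name="prior-auth-triage-model",
    assessment_date=date(2025, 6, 1),
    trigger="model update",
    consequential_decisions=["utilization review recommendations"],
    discrimination_findings="No disparate impact detected in tested cohorts.",
    human_oversight_owner="Chief Compliance Officer",
)
print(assessment)
```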
TRAIGA features a sandbox program exception, intended to foster innovation and attract developers and deployers of AI. The program exempts from the bill's purview the development of an AI system used exclusively for research, training, testing or other predeployment activities.
Enforcement under the bill includes the ability of the attorney general to seek injunctive relief and recover civil penalties, subject to a 30-day cure period, and a private right of action related to any AI systems that present an unacceptable risk, as banned under the bill.
TRAIGA faces an uncertain fate in Texas. While the sandbox feature may burnish the bill's innovation-friendly credentials, the bill is otherwise expansive. Legislators on both sides of the aisle have made AI and large technology companies frequent targets of attention. TRAIGA's passage may also be affected by federal activity.
With bills like TRAIGA as a backdrop, U.S. AI regulation could develop in several ways.
First, U.S. AI companies and models face increased competition from abroad, e.g., DeepSeek, which might encourage sufficient bipartisan consensus to design, pass and enact a base level of AI regulation to support the U.S. industry's competitiveness.
Even a base level of comprehensive federal law and regulation may conflict with some of the restrictive provisions in laws like TRAIGA and the Colorado AI Act. Such conflict could raise preemption concerns and relegate state lawmakers to more sector- or application-specific laws.
Second, in the absence of federal action, states are likely to proceed with enacting their own comprehensive laws over the next several years and offer competing models.
Finally, both the federal government and state legislatures can continue to retool existing legal frameworks to address AI.
At the federal level, given the fall of the Chevron doctrine, it may be difficult for the executive branch to expand AI regulation under the existing statutory framework, but Congress may be able to make incremental amendments to laws addressing data privacy, such as the Health Insurance Portability and Accountability Act. At the state level, states can amend consumer protection and data privacy laws to address AI.
Until federal legislators reach material consensus, all of the above paths are likely to remain viable, and TRAIGA remains a real possibility. The private sector will likely need to remain nimble and consider preemptively developing internal risk management frameworks.
Practical Takeaways
The regulatory future for healthcare AI will continue to evolve. If, as Vance signaled, the federal government prioritizes innovation over risk mitigation during the Trump administration's term, states are likely to propose a smorgasbord of laws to fill any voids at the federal level.
State action will likely lead to varying approaches. For example, states like California could lean deeper into AI safety with stricter guardrails, whereas states like Texas may develop frameworks that offer a higher degree of flexibility for developers and deployers.
And, adding further complication and uncertainty, if state solutions conflict too sharply with the federal government's innovation initiatives, cases raising questions such as preemption may find their way to the courts.
Given the bevy of regulatory possibilities, developers and deployers in the healthcare AI space will require an agile compliance system. From a practical standpoint, that means (1) maintaining constant awareness of changing requirements, and (2) maintaining a feedback loop so that company management can understand where and how their healthcare AI products are being tested and used. Below are a few specific considerations:
- Track: Consider designating individuals within an organization to track AI laws, regulations, guidance and cues from peer businesses that may affect AI development or deployment in a meaningful way; a simple tracking register is sketched after this list.
- Plan: Consider developing AI policies and action plans that specifically address AI risk and AI risk management. Where a developer or deployer conducts business in jurisdictions with varying degrees of AI regulation, depending on organizational needs, consider a compliance program that conforms to the strictest jurisdiction or varies by applicable state or sector.
- Engage: Consider developing a plan to engage with various stakeholders, including legislators and regulators from various levels of government.
- Manage: Varying regulations and untested technology present material risk; consider accounting for these eventualities in business and compliance plans. Management can be an active process of continuously monitoring for notable risks and developing remediation plans.
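As a purely illustrative aid to the "Track" item above, the following sketch shows one way a compliance team might structure a jurisdiction-by-jurisdiction register of AI laws. Everything here, from the field names to the status labels, is an assumption for demonstration purposes, not a prescribed format or legal advice.

```python
# Illustrative compliance register; structure, entries and status labels
# are assumptions for demonstration purposes, not legal advice.
from dataclasses import dataclass

@dataclass
class AIRuleEntry:
    jurisdiction: str
    law: str
    status: str           # e.g., "effective", "enacted", "proposed"
    effective_date: str   # ISO date, or "" if not yet set
    obligations: list[str]

REGISTER = [
    AIRuleEntry("California", "A.B. 3030", "effective", "2025-01-01",
                ["disclaimer on AI-generated clinical communications"]),
    AIRuleEntry("Colorado", "Colorado AI Act", "enacted", "2026-02-01",
                ["reasonable care against algorithmic discrimination"]),
    AIRuleEntry("Texas", "TRAIGA (H.B. 1709)", "proposed", "",
                ["duty of reasonable care", "impact assessments"]),
]

def upcoming_obligations(register: list[AIRuleEntry]) -> list[str]:
    """List obligations from laws enacted but not yet effective."""
    return [
        f"{entry.jurisdiction} ({entry.law}): {obligation}"
        for entry in register
        if entry.status == "enacted"
        for obligation in entry.obligations
    ]

print(upcoming_obligations(REGISTER))
```

A register like this can also feed the feedback loop described above, surfacing obligations before their effective dates arrive.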
Robert Kantrowitz is a partner, and Ruan Meintjes and Kayla McCallum are associates, at Kirkland & Ellis LLP.
The opinions expressed are those of the author(s) and do not necessarily reflect the views of their employer, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.
[1] Exec. Order No. 14110, 88 Fed. Reg. 75191 (Oct. 30, 2023).
[2] Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing, 89 Fed. Reg. 1192 (Jan. 9, 2024).
[3] U.S. Food & Drug Administration, FDA Issues Comprehensive Draft Guidance for Developers of Artificial Intelligence-Enabled Medical Devices (Jan. 2025), https://www.fda.gov/news-events/press-announcements/fda-issues-comprehensive-draft-guidance-developers-artificial-intelligence-enabled-medical-devices.
[4] U.S. Dept. of Health and Human Services, U.S. Department of Health and Human Services: Strategic Plan for the Use of Artificial Intelligence in Health, Human Services, and Public Health (Jan. 2025), https://www.healthit.gov/sites/default/files/202501/HHS%20AI%20Strategic%20Plan_Overview_FINAL_508.pdf.
[5] The White House, Fact Sheet: President Donald J. Trump Takes Action to Enhance America's AI Leadership (January 23, 2025), https://www.whitehouse.gov/fact-sheets/2025/01/fact-sheet-president-donald-j-trump-takes-action-to-enhance-americas-ai-leadership/.
[6] Olesya Dmitracova, Excessive regulation could "kill" AI industry, JD Vance tells government leaders at Paris summit, CNN (Feb. 11, 2025), https://www.cnn.com/2025/02/11/tech/jd-vance-ai-regulation-paris-intl/index.html.
[7] Exec. Order No. 14148, 90 Fed. Reg. 8237 (Jan. 28, 2025).
[8] Business Software Alliance, 2025 State AI Wave Building After 700 Bills in 2024 (Oct. 22, 2024), https://www.bsa.org/news-events/news/2025-state-ai-wave-building-after-700-bills-in-2024.
[9] Cal. Health & Safety Code § 1339.75.
[10] CA LEGIS 879 (2024), 2024 Cal. Legis. Serv. Ch. 879 (S.B. 1120).
[11] Utah Code Ann. § 13-2-12(3).
[12] Id. at § 13-2-12(4)(a).
[13] Colo. Rev. Stat. Ann. § 6-1-1701(9)(a).
[14] Sarah Al-Shaikh, Texas lawmaker files bill to regulate artificial intelligence, KXAN (Dec. 26, 2024), https://www.kxan.com/news/local/austin/texas-lawmaker-files-bill-to-regulate-artificial-intelligence/; see also Texas Legislature Online, Bill: HB 1709 Text, https://capitol.texas.gov/BillLookup/Text.aspx?LegSess=89R&Bill=HB1709 (last visited February 6, 2025).
[15] H.B. 1709, 89 Reg. Sess. (Tex. 2024), https://capitol.texas.gov/BillLookup/History.aspx?LegSess=89R&Bill=HB1709.
[16] Id.