For claims professionals, 2026 marks a pivotal year for the use of artificial intelligence (AI). AI in claims administration has shifted from an efficiency tool to a highly scrutinized regulatory topic. The legislative arena is a complex blend of uniform guidance from the National Association of Insurance Commissioners (NAIC), prescriptive state laws and federal initiatives that encourage research. The core principle governing the adoption of AI in claims administration is that AI must act as a support tool, not the sole decision-maker.
The NAIC: Principles-Based Uniformity
The NAIC has been instrumental in guiding a uniform approach to AI regulation. Adopted by more than 24 states, the NAIC’s December 2023 Model Bulletin on the Use of Artificial Intelligence Systems by Insurers sets expectations for carriers. It requires AI-driven decisions to comply with existing state insurance laws against unfair trade practices and discrimination.
Key requirements from the bulletin have become best practices for claims departments and include:
- A Written Artificial Intelligence System (AIS) Program: Insurers must have a documented program for responsible AI use that includes governance, risk management, and internal audit functions.
- Data Quality and Bias Testing: The program should incorporate rigorous verification and testing methods to identify errors, bias, and potential unfair discrimination in AI models.
- Vendor Oversight: Insurers remain fully accountable for AI tools acquired from third parties and must conduct due diligence, including securing audit rights.

This principles-based approach provides a common regulatory baseline, helping to mitigate the challenges that arise from nonuniform state requirements.
State-Specific Mandates: “Human-in-the-Loop” Laws
While the NAIC sets general expectations, several states have enacted, or are advancing, laws that mandate human intervention for high-impact decisions:
- Florida's HB 527: This bill would be highly relevant for workers' compensation carriers, insurers, and Health Maintenance Organizations (HMOs). It explicitly prohibits using an algorithm or AI system as the sole basis for denying or reducing a claim payment; a human professional must independently analyze the facts and certify that AI was not the lone decision-maker. The bill was unanimously approved by the House Insurance & Banking Subcommittee on December 9, 2025.
- Arizona's HB 2175: Effective July 1, 2026, and specific to health insurance, this law requires a licensed medical director to personally review and sign any health insurance denial, preventing sole reliance on AI for medical necessity determinations. This measure aims to keep health care decisions in the hands of professionals accountable to state medical standards.
- California’s AB 3030 and AB 489: Effective January 1, 2025 and January 1, 2026 respectively, these two laws create a framework for transparency and truth in licensing when AI interacts with patients. AB 3030 (the “Transparency Bill”) requires health care providers to include a clear disclaimer whenever AI generates clinical information for a patient. AB 489 (the “Title Protection Bill”) makes it illegal for AI to deceptively pose as a licensed professional. Together, these laws aim to protect the “truth in licensing” that patients rely on when making critical decisions.
- Colorado’s SB 24-205: Effective June 30, 2026, Colorado is stepping up its oversight of AI in health care with a new law focused on preventing “algorithmic discrimination.” This is no longer just a matter of ethics; the bill moves the industry from self-regulation to a mandatory oversight model. It specifically targets high-risk AI: systems with a major impact on a person’s life, such as those used to determine health insurance eligibility, medical diagnoses, or treatment plans. By requiring developers and users to manage these risks proactively, Colorado aims to ensure that AI remains a tool for progress rather than a source of hidden bias.
- Illinois’ HB 1806: Already in effect as of August 1, 2025, this law draws a hard line: AI cannot act as a therapist. In Illinois, only human professionals may provide or advertise mental health therapy. While therapists can use AI to help with behind-the-scenes work such as transcribing sessions, they must first obtain written, revocable consent from the patient. Essentially, the law ensures that while AI can help with paperwork, a human must always be the one providing the care.
- Texas’ HB 149 and SB 1188: Effective September 1, 2025, health care practitioners must disclose AI use for diagnostic purposes and personally review all AI-generated recommendations (SB 1188). Broad consumer disclosure requirements for AI patient interactions under HB 149 follow on January 1, 2026. Together, these laws ensure AI acts as a support tool rather than a replacement for human judgment. Additionally, SB 1188 mandates that by January 1, 2026, all electronic health records be physically maintained within the United States to protect patient privacy.
Federal Focus: Encouraging Innovation
At the federal level, the approach remains supportive of innovation: less about direct regulation of claims handling and more about research and development. The “Healthcare Enhancement And Learning Through Harnessing Artificial Intelligence Act,” or HEALTH AI Act (H.R. 5045), introduced in August 2025, funds research into using generative AI to streamline claims and reduce administrative burdens, counterbalancing the more restrictive state-level consumer protections.
How Can Claims Professionals Navigate AI Successfully?
In 2026, AI regulatory compliance will involve balancing the NAIC’s uniform principles with legally enforceable, state-mandated “human-in-the-loop” requirements. Claims processes will need to be efficient while complying with converging federal and state mandates.
The Enlyte Regulatory Compliance and Governmental Affairs Team actively monitors AI trends to ensure our clients have the necessary information to remain compliant. To learn more about what the Enlyte government affairs team is working on, and stay up to date on this and other regulatory issues, sign up to receive our monthly Compliance Connection Newsletter.