The Expanding Role of AI in the U.S. Insurance Industry: Operations, Risks, and Regulatory Requirements Explained

This whitepaper examines the integration of Artificial Intelligence (AI) across the U.S. insurance provider landscape. It frames AI not as a transient trend, but as a fundamental shift in the industry’s administrative and operational DNA.

1. The History of Insurance Administration (ADM)

In the insurance context, Administration (ADM) refers to the “back-office” infrastructure—managing policies, collecting payments (Premiums), and record-keeping.

The Manual Era (1750s – 1960s)

  • For over 200 years, insurance was a physical business. Actuaries (math experts who calculate risk) used handwritten ledgers and “Life Tables.” Data was “trapped” on paper, making large-scale analysis impossible.

The Digital & Relational Era (1970s – 2010s)

  • Computers and relational databases replaced paper. The industry adopted EDI (Electronic Data Interchange) to send files digitally. However, systems were “siloed”—the billing computer rarely talked to the claims computer in real-time, and rules were rigid, human-coded “if-then” logic.

The Intelligent Era (2020 – Present)

  • Administration has shifted to software that “learns.” AI now “reads” incoming emails and extracts data from ACORD Forms (industry-standard templates) automatically. The goal is “Predictive Administration”: identifying risks or customer needs before they manifest.

2. How AI is Redefining the Industry

AI transforms the three main stages of the insurance lifecycle from slow, manual tasks into fast, data-driven processes.

Stage A: Underwriting (The “Deciding” Stage)

Underwriting is the process of evaluating an applicant’s risk to decide if they should be insured and at what price (Premium).

The Shift: From static annual snapshots to Continuous Underwriting.

  1. The Old Way: Underwriters relied on static data, such as your age, zip code, and credit score. The result was a “snapshot” in time.
  2. The AI Way: AI enables Continuous Underwriting, analyzing “living data” to build a more accurate, up-to-date picture.

Real-World Example: Commercial Fleet Insurance. 

  • In 2024, companies like Liberty Mutual [1] and Zego [2] began using AI-powered Telematics. Instead of charging a trucking company a flat annual rate, AI analyzes real-time data from the trucks’ sensors (speed, braking habits, and hours driven).
  • If the AI sees that drivers are following safety protocols, it can automatically lower the premium for the following month. This rewards safe behavior and reduces the insurer’s risk (a simplified version of this adjustment logic is sketched below).
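To make the mechanics concrete, here is a minimal sketch of a usage-based monthly premium adjustment. The 0–100 safety score, the linear mapping, and the ±20% cap are illustrative assumptions, not any carrier’s actual rating formula:

```python
# Hypothetical sketch of usage-based premium adjustment from telematics.
# The 0-100 safety score, linear mapping, and +/-20% cap are illustrative
# assumptions, not any carrier's actual rating formula.

def monthly_premium(base_rate: float, safety_score: float) -> float:
    """Adjust a base monthly rate using a 0-100 fleet safety score."""
    # Map score 0..100 onto an adjustment factor of +0.20 .. -0.20:
    # a perfect score earns the maximum discount, a poor one a surcharge.
    adjustment = 0.20 - 0.40 * (safety_score / 100.0)
    adjustment = max(-0.20, min(0.20, adjustment))  # cap at +/-20%
    return round(base_rate * (1.0 + adjustment), 2)

# A fleet whose hard-braking events trimmed its score to 85 pays less
# than the flat rate the following month.
print(monthly_premium(base_rate=1200.00, safety_score=85.0))  # 1032.0
```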

Stage B: Claims (The “Payout” Stage)

A Claim is a formal request by a policyholder for the insurance company to pay for a loss.

The Shift: From manual inspections to Touchless Claims via Computer Vision (AI that “sees”).

  1. The Old Way: After a car accident, you would wait days for a human “Adjuster” to visit, look at the car, and write a manual estimate.
  2. The AI Way: Computer Vision allows for “Touchless Claims.”

Real-World Example: Auto Glass or Minor Fender-Benders. 

  • Leading carriers like Progressive [3] and Aviva [4] now use AI tools that allow a customer to upload three photos of a cracked windshield or a dented bumper via a mobile app.
  • The AI compares these photos against a database of millions of similar accidents to estimate the repair cost instantly. In many cases, the AI approves the claim and sends a digital payment to the customer’s bank account in under 10 minutes (a simplified decision gate is sketched below).
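The final approve-or-escalate step often reduces to a threshold check. This is a hypothetical sketch; the DamageEstimate fields, the $1,500 auto-pay cap, and the 0.90 confidence floor are assumptions for illustration, not any carrier’s production logic:

```python
# Hypothetical sketch of a "touchless claims" decision gate. The field
# names, $1,500 auto-pay cap, and 0.90 confidence floor are illustrative
# assumptions, not any carrier's production logic.

from dataclasses import dataclass

@dataclass
class DamageEstimate:
    repair_cost: float  # computer-vision estimate of repair cost (USD)
    confidence: float   # model confidence in the estimate, 0.0-1.0

def route_claim(est: DamageEstimate,
                auto_pay_cap: float = 1500.0,
                min_confidence: float = 0.90) -> str:
    """Auto-approve small, high-confidence estimates; escalate the rest."""
    if est.confidence >= min_confidence and est.repair_cost <= auto_pay_cap:
        return "auto_pay"        # issue digital payment immediately
    return "human_adjuster"      # large or uncertain loss: escalate

print(route_claim(DamageEstimate(repair_cost=480.0, confidence=0.96)))  # auto_pay
```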

Stage C: Fraud Management (The “Gatekeeper” Stage)

Fraud Management is the process of identifying and stopping dishonest claims (like faking an accident to get money).

The Shift: From reactive “spot checks” to Predictive Anomaly Detection.

  1. The Old Way: Special Investigation Units (SIUs) did “spot checks” or relied on tips. Much of the fraud went unnoticed because it was too small or too complex for humans to catch.
  2. The AI Way: Predictive Anomaly Detection. AI scans 100% of claims—not just a sample—to find patterns that humans can’t see.

Real-World Example: As of 2025, fraudsters are using “Generative AI” to create fake photos of house fires or car wrecks that never happened.

  • To fight this, insurers use Forensic AI. When a photo is submitted, the AI doesn’t just look at the image; it looks at the Metadata (the hidden digital “DNA” of the file). It can detect if the lighting on a “dent” doesn’t match the position of the sun at that time of day, or if the pixels have been “re-arranged” by an AI tool.
  • Companies like Shift Technology have helped insurers reduce fraudulent payouts by up to 30% using these “AI vs. AI” techniques (a first-pass metadata screen is sketched below).
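As a flavor of the metadata side of this, here is a minimal first-pass screen using Pillow to read EXIF tags. Production forensic AI goes much deeper (pixel- and lighting-level analysis); the tags checked and the editor blocklist here are illustrative assumptions:

```python
# Hypothetical first-pass metadata screen for a submitted claim photo,
# using Pillow to read EXIF tags. Real forensic AI goes much deeper
# (pixel- and lighting-level analysis); the tags checked and the editor
# blocklist here are illustrative assumptions.

from PIL import Image
from PIL.ExifTags import TAGS

SUSPECT_EDITORS = ("photoshop", "gimp")  # assumed blocklist

def screen_photo(path: str) -> list[str]:
    """Return red flags found in the photo's EXIF metadata."""
    flags = []
    exif = Image.open(path).getexif()
    if not exif:
        flags.append("exif_stripped")  # all metadata removed: suspicious
    named = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    software = str(named.get("Software", "")).lower()
    if any(editor in software for editor in SUSPECT_EDITORS):
        flags.append(f"edited_with:{software}")
    if "DateTime" not in named:
        flags.append("missing_capture_time")  # cannot corroborate loss date
    return flags  # a non-empty list routes the claim to a human investigator
```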

3. Risks & Mitigation: The 4/5 Rule

The 4/5 Rule (The 80% Rule): Regulators use the 4/5 Rule as a litmus test for fairness in claims and underwriting.

The Math

  • If the group with the highest “success rate” (e.g., claim approvals) has a rate of 10%, every other group must have a rate of at least 8% (which is 4/5 of 10%).

The Application

  • If a fraud-detection tool flags 20% of Group A but only 10% of Group B for investigation, the ratio is 0.5. Since 0.5 is less than 0.8 (4/5), the AI is flagged for suspected bias.

The “Flag” Example: Imagine an AI fraud-detection tool that clears 100% of claims from Group A for “fast-track payment” but only 70% of claims from Group B. The ratio is 0.7. Since 0.7 is less than 0.8 (4/5), the AI is considered to have a “suspected bias.”

  • The company must then prove the AI isn’t discriminating based on race or gender, but is instead relying on a valid business reason (the check itself is sketched below).
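In code, the 4/5 check is a one-line comparison over group-level outcome rates. This minimal sketch uses the fast-track example above; a real equity audit would also test for statistical significance and run on hold-out data:

```python
# A minimal sketch of the 4/5 (80%) rule check over group-level
# favorable-outcome rates. A real equity audit would also test for
# statistical significance and run on hold-out data.

def four_fifths_check(rates: dict[str, float]) -> dict[str, bool]:
    """Return True for groups whose rate is at least 4/5 of the best rate."""
    best = max(rates.values())
    return {group: (rate / best) >= 0.8 for group, rate in rates.items()}

# Fast-track approval rates from the example above:
print(four_fifths_check({"Group A": 1.00, "Group B": 0.70}))
# {'Group A': True, 'Group B': False} -> ratio 0.7 < 0.8, suspected bias
```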

Expanded Risks & Mitigations

Key risks and mitigation strategies:

  • Algorithmic Bias: AI may use “proxies” (like zip codes) that correlate with protected classes, causing discrimination. Mitigation: Equity Audits, i.e., regular quantitative testing of models using “Hold-out Datasets” to check for disparate impact.
  • Model Drift: A model trained on 2019 data may fail to accurately assess 2026 inflation or health trends. Mitigation: Continuous Monitoring, i.e., real-time dashboards that alert engineers when model accuracy deviates from a set baseline.
  • The Black Box Effect: Complex AI (Deep Learning) can be hard to explain to a regulator or a denied customer. Mitigation: Explainable AI (XAI), utilizing “Local Interpretable Model-agnostic Explanations” (LIME) to provide specific reasons for individual outcomes.
  • Digital Forgery: Criminals use AI to create “perfect” fake evidence (photos/medical bills). Mitigation: Forensic Metadata Analysis, deploying secondary AI systems specifically designed to find digital tampering in submitted claims.
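The “Continuous Monitoring” mitigation for Model Drift often starts as a simple baseline comparison. In this minimal sketch, the baseline accuracy and the 3-point tolerance are illustrative assumptions:

```python
# A minimal sketch of a drift alert, assuming accuracy is recomputed on
# a rolling window of recent, ground-truthed decisions. The baseline
# and the 3-point tolerance are illustrative assumptions.

BASELINE_ACCURACY = 0.92  # accuracy recorded at model sign-off
TOLERANCE = 0.03          # alert if accuracy drops more than 3 points

def has_drifted(recent_accuracy: float) -> bool:
    """Return True when accuracy has deviated past the set baseline."""
    return (BASELINE_ACCURACY - recent_accuracy) > TOLERANCE

if has_drifted(recent_accuracy=0.87):
    print("ALERT: model accuracy deviated from baseline; trigger review")
```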

4. AI Governance: The AIS Program

Governance is the set of “safety rails” an insurance company puts around its technology. In the U.S., the NAIC (National Association of Insurance Commissioners)—the group that sets the standards for state regulators—requires insurers to maintain a formal, written AIS Program (Artificial Intelligence Systems Program).

Think of the AIS Program as the Operations Manual that proves to the government that the company’s AI is safe, fair, and legal.

Core AIS Program Requirements: To satisfy regulators, an insurance company’s AIS Program should include the high-level pillars below:

Governance & Oversight

  • Senior Accountability: A requirement that the Board of Directors or a specific executive committee “owns” the AI strategy.
  • Defined Roles: Clear documentation of who is responsible for a model from the day it is built to the day it is retired.
  • Human-in-the-Loop (HITL): A mandatory rule stating that high-stakes decisions (e.g., coverage denials or premium hikes) must be reviewed by a licensed human professional.

Risk Management & Controls

  • Risk Triage: A process to rank AI tools by “danger.” An AI that suggests marketing colors is low risk; an AI that decides to deny a surgery claim is high risk (a minimal triage sketch follows this list).
  • Bias & Fairness Testing: Standardized rules for running the 4/5 Rule (the 80% test) to ensure no protected group is being unfairly treated.
  • The “Kill Switch”: A protocol for how the company will immediately shut down an AI if it begins producing “drifted” or incorrect results.
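Tiering plus the HITL rule can be expressed as a small policy check. The tier names and example systems below are assumptions for illustration; the NAIC does not prescribe a specific schema:

```python
# Hypothetical sketch of AIS risk triage plus the HITL rule. The tier
# names and example systems are assumptions for illustration; the NAIC
# does not prescribe a specific schema.

from enum import Enum

class RiskTier(Enum):
    LOW = 1     # e.g., an AI that suggests marketing colors
    MEDIUM = 2  # e.g., an AI that routes claims to the right queue
    HIGH = 3    # e.g., an AI that can deny a claim or raise a premium

def requires_human_review(tier: RiskTier) -> bool:
    """High-stakes decisions must stay Human-in-the-Loop (HITL)."""
    return tier is RiskTier.HIGH

assert requires_human_review(RiskTier.HIGH)     # coverage denial: reviewed
assert not requires_human_review(RiskTier.LOW)  # marketing colors: automated
```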

Data & Model Lifecycle Management

  • Data Lineage: Tracking where the training data came from (e.g., verifying it respects HIPAA and CCPA privacy rules).
  • Continuous Monitoring: A schedule for regular “health checks” to ensure the AI’s accuracy hasn’t dropped over time.

Third-Party & Vendor Oversight

  • Vendor Audits: If the insurer buys an AI tool from a third party, the insurer is still legally liable. The program must include a plan to audit the vendor’s “secret sauce” for bias.
  • Contractual Responsibility: Legal language ensuring the vendor provides enough information so the insurer can explain an AI decision to a customer or a regulator.

Documentation & Auditability

  • The Model Inventory: A “master list” of every AI system in use across Underwriting, Claims, and Fraud Management (a sample inventory record is sketched after this list).
  • Traceability: A digital “paper trail” showing every change made to the AI, who authorized it, and why.
  • Data Integrity: Protocols ensuring that data used for training is accurate, complete, and legally obtained.
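Here is a minimal sketch of what one Model Inventory entry might look like. The field names are illustrative assumptions; regulators require the substance (owner, purpose, data lineage, change history), not this exact schema:

```python
# A minimal sketch of one Model Inventory entry. The field names are
# illustrative assumptions; regulators require the substance (owner,
# purpose, data lineage, change history), not this exact schema.

from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    model_id: str
    business_use: str  # Underwriting, Claims, or Fraud Management
    risk_tier: str     # output of the risk-triage process
    owner: str         # accountable executive or committee
    data_sources: list[str] = field(default_factory=list)  # data lineage
    change_log: list[str] = field(default_factory=list)    # traceability trail

record = ModelRecord(
    model_id="claims-cv-007",
    business_use="Claims",
    risk_tier="HIGH",
    owner="Chief Claims Officer",
    data_sources=["historical_claims_2018_2024 (privacy-reviewed)"],
)
record.change_log.append("2025-03-01: retrained on Q4 data; approved by model risk committee")
```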

5. Privacy Compliance: Federal & State Landscapes

Privacy laws dictate how insurers handle the “fuel” for AI: personal data.

Privacy is the “permission slip” for AI. In the U.S., insurers must navigate a dual-layer system: federal laws set a baseline, while state laws create a complex, often confusing, patchwork of rules and exemptions. And while many people think of “General Data Privacy,” insurance companies operate under specific federal carve-outs and state-level “Model Laws.”

Federal Privacy Frameworks (The Floor)

Two primary federal laws govern the majority of data used in insurance AI models. These laws are considered the “floor” of privacy protection.

GLBA (Gramm-Leach-Bliley Act):

What it is: A 1999 law requiring financial institutions (including insurers) to explain how they share consumer data and to protect sensitive information.

  • Scope: Covers NPI (Non-public Personal Information)—financial data like credit scores, payment history, and Social Security numbers.
  • AI Requirement: AI models using financial data must ensure that “Opt-Out” notices have been provided to customers before their data is shared with non-affiliated third-party AI vendors.

HIPAA (Health Insurance Portability and Accountability Act):

What it is: A law protecting PHI (Protected Health Information).

  • Scope: Covers PHI (Protected Health Information) and primarily affects health and life insurers. It dictates that AI cannot “see” your medical data unless the environment is strictly secured and the use is for “Treatment, Payment, or Operations.”
  • AI Requirement: For health or life insurers, AI can only process PHI if the vendor is a “Business Associate” who has signed a contract (BAA) promising to protect that data as strictly as the insurer does.

State Comprehensive Privacy Laws & Exemptions

Since 2020, states like California (CCPA/CPRA), Virginia (VCDPA), and Colorado (CPA) have passed sweeping new privacy laws. However, most include specific “carve-outs” for the insurance industry because insurers are already heavily regulated by the state insurance departments.

The Entity-Level Exemption (The “Clean Break”):

States: Virginia, Utah, Tennessee, and Iowa.

The Rule: If your company is already regulated by the GLBA (which most insurers are), the entire company is exempt from the state’s new comprehensive privacy law. This is the most favorable environment for insurers.

The Data-Level Exemption (The “Patchwork”):

States: California (CCPA), Colorado, and Connecticut.

The Rule: The company is not exempt, but the specific data governed by GLBA or HIPAA is. However, “other” data—like website browsing habits or marketing profiles—remains subject to the state’s privacy law.

The AI Risk: This creates a “split” in compliance. If your AI uses a customer’s credit score (GLBA data), it’s exempt from CCPA. But if that same AI uses the customer’s “website browsing history” or “marketing preferences,” that data is not exempt. The insurer must follow CCPA rules for that specific part of the AI’s “brain.”
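One way to manage this split is field-level “tagging,” so the AI pipeline knows which inputs are GLBA-exempt and which remain subject to CCPA. This is a hypothetical sketch; the field names and tag assignments are illustrative assumptions:

```python
# Hypothetical sketch of field-level "tagging" so an AI pipeline knows
# which inputs are GLBA-exempt and which remain subject to CCPA. The
# field names and tag assignments are illustrative assumptions.

FIELD_TAGS = {
    "credit_score": "GLBA_NPI",      # data-level exempt from CCPA
    "payment_history": "GLBA_NPI",
    "browsing_history": "CCPA",      # not exempt: consumer rights apply
    "marketing_preferences": "CCPA",
}

def ccpa_governed_fields(features: list[str]) -> list[str]:
    """Return the model inputs that still carry CCPA obligations."""
    return [f for f in features if FIELD_TAGS.get(f) == "CCPA"]

print(ccpa_governed_fields(["credit_score", "browsing_history"]))
# ['browsing_history']
```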

The NAIC Insurance Data Security Model Law (#668)

Alongside the general privacy laws above, many insurers look to the NAIC Model Law.

The Requirement: It requires a written “Information Security Program” specifically for insurance.

The AI Intersection: This law requires insurers to monitor Third-Party Service Providers. If an AI vendor has a data breach, the insurer—not the vendor—is responsible for notifying the State Insurance Commissioner within 72 hours.

Why Privacy Compliance Matters for AI

  • Data Minimization: Privacy laws require companies to only collect the data they need. If an AI model for home insurance asks for your heart rate data from a smartwatch, it might violate privacy laws because that data isn’t “reasonably necessary” for home insurance.
  • The Right to Explanation: Under many state laws (like in Colorado), if an AI makes a decision about you, you have the right to ask “Why?” If the company’s AI is a “Black Box” (meaning even the company doesn’t know how it works), they are in violation of these privacy rights.
  • Third-Party Sharing: AI often requires “Cloud Computing” (sending data to another company’s servers to be processed). Privacy laws require strict Business Associate Agreements (BAAs) or contracts to ensure the AI vendor doesn’t keep or sell that data.

Law-by-law summary (data covered, insurance exemption type, and impact on AI deployment):

  • CCPA (CA): Covers general personal info. Exemption: Data-Level (exempts PHI and NPI but covers website/marketing data). AI impact: Must “tag” data to know what is exempt (NPI) vs. what is not.
  • VCDPA (VA): Covers general personal info. Exemption: Entity-Level (exempts any business regulated by GLBA). AI impact: Total exemption for GLBA-regulated insurers; simpler AI scaling.
  • HIPAA (Fed): Covers health data (PHI). Exemption: Exempts certain “Employment Records” held by the insurer. AI impact: AI must be “HIPAA-Compliant” (no data sharing without a BAA).
  • GLBA (Fed): Covers financial data (NPI). Exemption: None; this is the primary rule for most insurers. AI impact: AI must allow for customer “Opt-Out” of data sharing.
  • NAIC Model Law: Insurance-specific. Not an exemption, but a “Safe Harbor”: if you follow it, you usually satisfy state regulators. AI impact: Deployment must include vendor risk management protocols.

Need more guidance on navigating AI in the U.S. insurance industry? Our experts at Myna are here to help! Contact us for clear, practical direction on AI strategy, risk management, and compliance readiness as you adapt to this lasting operational shift.


References

  1. https://business.libertymutual.com/insights/4-ways-telematics-can-drive-safety-for-construction-businesses/
  2. https://www.the-digital-insurer.com/wp-content/uploads/securepdfs/2024/09/Zego-InsurTech-Analysis-VM.pdf
  3. https://www.progressive.com/claims/faq/guided-photo/
  4. https://www.snapsheetclaims.com/post/mobile-app-resolves-avivas-auto-claims-20-40-faster-in-trial-tests