Artificial intelligence is racing into the insurance world faster than most policymakers can catch their breath. Claims departments are experimenting with tools that can read adjuster notes, predict settlement ranges, and even recommend denials. Without disclosure to their policyholders, some insurers are already deploying systems that act like silent adjusters, quietly shaping outcomes.

This is not science fiction. It is happening now, and we have already seen the harm when claims decisions are influenced by machines trained to maximize profits without meaningful human judgment. Florida is now stepping into that arena with House Bill 527, filed by Representative Hillary Cassel, a legislator who has consistently shown that she understands the importance of fairness for policyholders and the need for accountability in claims practices.

Cassel’s bill strikes at the heart of the problem. It does not ban insurers from using artificial intelligence; it simply demands something that should be obvious in a system as consequential as insurance claims. It requires that any decision to deny a claim, in whole or in part, be made by a qualified human being. AI can whisper in the adjuster’s ear, but the machine cannot push the “no” button on its own.

Cassel’s bill reinforces a basic principle that has eroded as technology advances. The insurance contract is a promise made between people, and the judgment required to interpret that promise cannot be delegated entirely to a predictive model.

Her legislation requires adjusters to independently verify the facts, review the accuracy of the AI output, and confirm that the policy truly does not provide coverage. It forces insurers to document who made the decision, when it was made, and why. This is not bureaucracy; it is accountability. Policyholders deserve to know that when an insurer tells them their claim is denied, the decision was the product of a thoughtful review rather than a rubber-stamped prediction generated by a vendor’s proprietary software.

Other states are nibbling at the edges of the issue. Many have adopted the NAIC’s model bulletin on artificial intelligence, which calls for transparency, explainability, and strong oversight over automated systems used in underwriting and claims. That bulletin is important, and regulators around the country are taking it seriously. But bulletins are not statutes. They do not give policyholders the same clarity or the same enforcement power that a well-written law provides.

So far, my research has found no other state that has taken the step Florida is considering. Making it unequivocally illegal to deny or underpay a claim solely because an algorithm or artificial intelligence recommended it is novel. Florida may be the first state to draw a bright line around one of the most dangerous uses of AI in the insurance industry. That is something worth paying attention to.

There is a broader lesson here for anyone watching the future of claims handling. Technology will keep improving, and insurers will keep looking for ways to use it to cut costs and make faster decisions. Some of that can genuinely benefit policyholders. But when the machines become substitutes for critical thinking and empathy, the claims process breaks down.

We already know what happens when algorithms quietly shape denial patterns in other sectors. Medicare Advantage faced national backlash for allowing predictive AI tools to override the judgment of treating physicians. There is no reason to think property insurance would be immune from similar abuses if left unchecked. Hillary Cassel’s bill is a reminder that we still have the power to put reasonable guardrails in place before the damage becomes systemic.

Those of us who fight for policyholders should welcome this type of legislation. This bill tells powerful insurers there are limits to how far they can outsource the human element. Good faith in property insurance claims handling requires more than efficiency. It requires someone willing to stand behind the decision, explain it, and own it. In a world where machines are becoming more capable by the day, that principle is worth defending.

For those interested in the topic of artificial intelligence in claims handling, I suggest reading When Artificial Intelligence Becomes Wrongful Intelligence in Claims Handling and Artificial Intelligence, Insurance, and Accountability.

Thought For The Day

“Technology is a useful servant but a dangerous master.”
– Christian Lous Lange