Cyber Insurance

How a $5 Million Cyber Claim Became a $500K Payout — And How to Make Sure It Doesn't Happen to You

April 28, 2026 · 7 min read · AISafeIQ

Imagine you've been paying for a $5 million cyber insurance policy for three years. You've been diligent — you have the coverage, you've kept the premiums current, you've done the work. Then something goes wrong.

An employee's use of an AI tool exposes sensitive client data. Regulators get involved. You have notification costs, legal fees, a forensic investigation, and a client relationship that may not survive. You file the claim. Your broker helps you pull together the documentation. You're expecting your carrier to step in and absorb most of this.

The claim comes back: $500,000.

Not $5 million. Not half. Ten cents on the dollar.

This scenario is not hypothetical. It reflects a real and growing pattern in cyber insurance: the AI governance sublimit.


How Sublimits Work

A sublimit is a cap within a broader policy that limits coverage for a specific category of loss — even when the total policy limit is much higher.

You've seen sublimits before, even if you didn't call them that. Many cyber policies have sublimits for specific events: ransomware payments capped at a fraction of the total limit, social engineering fraud covered only up to a certain amount, business interruption claims subject to waiting periods or percentage-of-limit caps.

Sublimits are how carriers manage concentrated risk within a line of coverage. They're legal, enforceable, and buried in policy language that most policyholders never read until they have to.

AI governance sublimits are the new version of this pattern. As carriers have watched AI-related incidents appear in claims data, they've responded by adding policy language that restricts coverage for AI-related losses when the insured cannot demonstrate certain baseline governance controls. Specifically:

  • A documented AI use policy
  • Evidence of employee training on AI risks
  • Documentation of both

When an AI-related incident occurs and the carrier audits the claim file, if those three elements aren't on record, the claim may be subject to a sublimit — or, in some policy structures, a coverage exclusion.

The claim isn't denied. It's just capped at a fraction of the coverage you paid for.
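To make the arithmetic concrete, here is a minimal sketch of the payout logic in Python. It is illustrative only: the function, the two-branch logic, and the $3.2M loss figure are assumptions invented for this example, not a model of any actual policy. Real contracts layer in deductibles, coinsurance, and exclusion language that this ignores.

```python
# Illustrative only: a deliberately simplified model of how an AI
# governance sublimit caps a payout. Real policies add deductibles,
# coinsurance, and exclusion language that this sketch ignores.

def estimated_payout(covered_loss: float,
                     policy_limit: float,
                     ai_related: bool,
                     governance_documented: bool,
                     ai_sublimit: float) -> float:
    """Carrier's maximum payout under this simplified model."""
    if ai_related and not governance_documented:
        # No AI use policy, training records, or documentation on file:
        # the AI governance sublimit caps the claim.
        return min(covered_loss, ai_sublimit)
    # Otherwise the claim is capped only by the headline policy limit.
    return min(covered_loss, policy_limit)

# The article's scenario, with a hypothetical $3.2M covered loss
# against a $5M policy carrying a $500K AI governance sublimit:
print(estimated_payout(3_200_000, 5_000_000, True, False, 500_000))  # 500000
print(estimated_payout(3_200_000, 5_000_000, True, True, 500_000))   # 3200000
```

The point of the sketch is the asymmetry: the same loss and the same premium produce a payout difference of more than 6x, depending entirely on whether the governance documentation exists.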


Walking Through the Scenario

Here's how the mechanics play out:

Month 1: An employee in your operations team starts using an AI summarization tool to process meeting recordings. Nobody told them not to. There's no company policy on AI tools. It's a productivity habit they picked up on their own.

Month 4: Some of those recordings included client calls where confidential client data was discussed — pricing, strategic plans, financial information. The AI tool stores recordings and transcripts in its cloud infrastructure. That vendor is later involved in a data breach. Your client data is in the exposed dataset.

Month 5: You become aware of the breach through your vendor's notification. Your attorney tells you that you have notification obligations in multiple jurisdictions. You also have contractual exposure with the affected clients — your agreements almost certainly required you to protect their data, and this incident may put you in violation of those terms.

Month 6: The forensic investigation concludes. Notification letters go out. Two clients terminate their agreements. You file a claim against your $5M cyber policy.

Month 7: The carrier's claims team reviews the file. They ask for your AI governance documentation: AI use policy, employee training records, anything that demonstrates you had controls in place.

You have nothing. No policy. No training records. No documented controls of any kind.

The carrier applies the AI governance sublimit written into your policy's terms. Your $5M policy pays $500,000. The remaining loss is yours to absorb out of pocket.


Why This Is Getting More Common, Not Less

This pattern is accelerating for a straightforward reason: carriers are responding to what they're seeing in their own data.

AI-related incidents are appearing in claims. They share certain characteristics — unsanctioned employee tool use, no documented policies, no training that would have made employees aware of the risk. Carriers have seen enough of these to understand the pattern and respond to it the way they always respond to concentrated risk: by pricing it, limiting it, or excluding it.

According to Aon's 2026 survey, more than 90% of insurers now consider AI-related incidents a material risk in their cyber portfolios. The policy language responding to that risk is already in the market — and renewal applications are increasingly including explicit questions about AI governance controls.

The businesses that answer "yes" to those questions — with documentation to back it up — get different underwriting treatment than the businesses that answer "no" or "we're working on it."


What Would Have Prevented It

The scenario above had a clear prevention path. It didn't require an AI compliance consultant, a six-month project, or a significant budget.

Step 1: Establish a documented AI use policy. This tells employees which tools they can use, what data is off-limits, and what their responsibilities are. In our scenario, a policy that prohibited uploading client meeting recordings to external AI tools — with an explanation of why — would have stopped the exposure before it started.

Step 2: Train employees on the policy. Not a company-wide email. Actual training that walks employees through the real behaviors that create risk, with specific examples and organizational rules they can apply. In our scenario, an employee who understood that client conversations contained confidential data that couldn't be shared with third-party AI tools would have made a different choice.

Step 3: Document both. Completion certificates showing that each employee completed the training. A copy of the policy. An organized evidence package that you can hand your carrier if they ask.

Those three steps — policy, training, documentation — are what carriers are looking for. When they're in place and on record, claims adjusters have something to work with. When they're not, the sublimit language applies.


The Other Thing It Prevents

It's worth noting that sublimits are the financial consequence. The operational and reputational consequences often dwarf the dollar loss.

In our scenario, two clients terminated their agreements. That's revenue, likely recurring, that disappears permanently. The relationship cost — the trust that took years to build and was gone in a quarter — doesn't show up in the insurance math, but it's real.

Documented AI governance doesn't just change how your carrier treats a claim. It changes your exposure in the first place. Employees who have been trained don't make the same choices as employees who haven't. Policies that are on paper get followed in ways that informal norms don't. The incidents that generate claims don't happen as often when the underlying behaviors are managed.

This is the same logic behind security awareness training. The ROI isn't just better claim outcomes — it's fewer incidents.


What to Do Before Your Next Renewal

If you have a cyber insurance renewal in the next 12 months, here is the practical sequence:

  1. Find out what your policy says. Pull your current policy documents and look for any language referencing AI, emerging technology, or governance sublimits. If you can't find it, ask your broker to review the exclusions and sublimits section specifically for AI-related carve-outs. Know what you have.
  2. Get your AI use policy in place. If you don't have a documented AI use policy, create one before renewal. Your carrier may ask for it at application. At minimum, having it on file puts you in a different category than businesses that have nothing.
  3. Train your employees. A policy without training is a policy that hasn't been communicated. Carriers want to see both because a policy alone doesn't change behavior — training does.
  4. Organize your documentation. Your policy document, your training completion records, and your employee acknowledgments should be in one place, ready to produce. Don't wait until a claim to figure out where everything is.

Start with the free AI Use Policy template →

Or if you need the full package — policy, training, completion certificates, and an Insurance Proof Pack designed specifically for this conversation with your carrier — AISafeIQ covers all of it →

The $5 million scenario is preventable. The prevention costs less than one month of your premium.


AISafeIQ provides AI use policies, employee training, completion certificates, and Insurance Proof Packs for businesses that need to demonstrate AI governance to cyber insurance carriers.

Note: Specific policy terms, sublimit structures, and claims outcomes vary by carrier and policy. Consult your broker for guidance on how AI governance provisions apply to your specific coverage.

Ready to get covered?

Get the Insurance Proof Pack

AI Use Policy + Employee Training + Completion Certificates + Insurance Proof Pack. Everything you need in under 10 minutes.
