In January 2026, Verisk released three new endorsements (CG 40 47, CG 40 48, CG 35 08) that let insurers exclude AI-related claims from commercial general liability policies. Nine insurance groups have already filed to adopt them. WR Berkley went further with an "absolute AI exclusion" covering directors and officers, errors and omissions, and fiduciary liability.
Here's what that means for you: if your business uses AI tools and something goes wrong, your insurance may not cover it. Not because of a technicality. Because your insurer explicitly carved out AI from your policy.
The fix isn't complicated. You need a documented AI governance policy. This post gives you one.
Your Team Is Already Using AI
Before you decide whether your business needs an AI policy, consider this: the decision to use AI has already been made. Your team made it for you.
A WalkMe survey of 1,000 AI-using workers found that 78% use AI tools their employer hasn't approved. They're drafting emails in ChatGPT, summarizing meetings with Otter.ai, generating reports with Claude. They're doing it because it saves them time, and they're not telling you because they're not sure if they're allowed to.
It gets worse. A Software AG study of 6,000 knowledge workers found that 46% would keep using AI tools even if their employer banned them outright. Bans don't work. Samsung, JPMorgan, and Apple all tried. All three reversed course and built approved alternatives instead.
And this isn't just a junior employee problem. The same WalkMe research found that 53.4% of C-suite leaders conceal their own AI habits. The people running the company are hiding their usage, too.
This is what researchers call "AI shame." People use it, they lie about it, and in the process, your customer data, financial records, and confidential files end up in tools you've never evaluated.
A policy isn't about control. It's about making the thing that's already happening safe.
What Happens Without One
Two real cases. Both involve businesses the size of yours.
The chatbot wiretap explosion
You add a chatbot to your website. The vendor's AI processes customer conversations. A plaintiff's attorney argues that recording those conversations without consent violates California's wiretap law. You get sued.
This isn't hypothetical. According to Baker Botts' analysis of 284 AI litigation matters, chatbot wiretap lawsuits grew from 2 in 2021 to 30 in 2025. Defendants include dental practices, insurance agencies, and restaurant franchises. In Taylor v. ConverseNow (3:25-cv-00990), a Domino's franchise got sued because its AI ordering system recorded customer calls. Damages under California's wiretap statute: $5,000 per violation, no cap.
The AI notetaker that recorded everyone
In Brewer v. Otter.ai (5:25-cv-06911, N.D. Cal.), an employee invited an AI notetaker to a client call. The tool recorded everyone without consent and used the transcripts for model training. Your employee invited it. Your business wears the exposure.
The pattern is clear. The businesses that deploy AI tools, not just the vendors that build them, are the ones getting sued. If you don't have a policy governing which AI tools your team can use and how, you're one employee decision away from a lawsuit.
The 5-Section Template
Every enterprise AI governance framework we reviewed (NIST AI RMF, ISO/IEC 42001, the EU AI Act's compliance requirements) runs 50+ pages. That's overkill for a 15-person business. We distilled them down to five sections that cover what actually matters.
Section 1: Approved Tools
The single most effective way to prevent shadow AI is to give your team approved alternatives. Create a three-tier system:
| Tier | Rule | What It Means | Examples |
|---|---|---|---|
| Approved | Free to use for work tasks | Vetted, contracts signed, data handling reviewed | ChatGPT Team, Claude Pro, Grammarly Business |
| Restricted | Allowed with limits | No client data, no confidential info, personal accounts only | Free-tier ChatGPT, Google Gemini, Perplexity |
| Prohibited | Do not use | Failed security review, trains on inputs, no data controls | Unvetted browser extensions, random "AI" apps, tools with no privacy policy |
Your team will find AI tools whether you provide them or not; remember the failed bans at Samsung, JPMorgan, and Apple. Give people approved options and they'll use them.
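If you want the tier list in machine-readable form (for an intranet page, a Slack lookup bot, or onboarding docs), a minimal sketch in Python might look like the following. The entries mirror the table above; the registry keys and the `check_tool` helper are our own invention, not part of any standard.

```python
from enum import Enum

class Tier(Enum):
    APPROVED = "approved"      # vetted, contracts signed, data handling reviewed
    RESTRICTED = "restricted"  # allowed with limits: no client or confidential data
    PROHIBITED = "prohibited"  # failed review, trains on inputs, or no data controls

# Registry mirroring the three-tier table above; extend it as you vet new tools.
TOOL_REGISTRY: dict[str, Tier] = {
    "chatgpt-team": Tier.APPROVED,
    "claude-pro": Tier.APPROVED,
    "grammarly-business": Tier.APPROVED,
    "chatgpt-free": Tier.RESTRICTED,
    "gemini": Tier.RESTRICTED,
    "perplexity": Tier.RESTRICTED,
}

def check_tool(name: str) -> Tier:
    """Default-deny: anything not explicitly vetted is treated as prohibited."""
    return TOOL_REGISTRY.get(name.strip().lower(), Tier.PROHIBITED)

print(check_tool("Claude-Pro"))           # Tier.APPROVED
print(check_tool("random-ai-extension"))  # Tier.PROHIBITED (never vetted)
```

The design choice that matters is the default: an unknown tool is prohibited until someone vets it, which is exactly how the Prohibited tier in the table works.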
Section 2: Data Rules
Vague rules like "don't put sensitive data in AI tools" force bad judgment calls. Be specific. Use a classification matrix:
| Data Type | AI Approved? | Example |
|---|---|---|
| Public marketing copy | Yes | Blog drafts, social media posts, website text |
| Internal operations | Yes, Approved tools only | Meeting notes (your own), project summaries, process docs |
| Client names + project details (together) | No | "Draft an email to John Smith about his pending lawsuit" |
| Financial records, SSNs, account numbers | Never | Tax returns, payroll data, bank details |
| Health information (ePHI) | Never without a signed BAA | Patient records, insurance claims, treatment notes |
The key word is "together." A client's first name alone isn't sensitive. Their name combined with their case details, medical records, or financial information is. Teach your team to recognize the difference.
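If your team routes prompts through any shared tooling, you can automate the "Never" rows before text leaves the building. A minimal sketch, assuming simple regex matching is acceptable: it catches formats like SSNs and long account numbers, not context, so treat it as a guardrail rather than a guarantee.

```python
import re

# Patterns for the "Never" rows in the matrix above. Regexes match
# formats, not meaning: they will miss plenty and flag some false
# positives, so pair them with the training in Section 3.
NEVER_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card/account number": re.compile(r"\b(?:\d[ -]?){12,19}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the 'Never' categories that appear to be present in a prompt."""
    return [label for label, pattern in NEVER_PATTERNS.items()
            if pattern.search(text)]

hits = screen_prompt("Send the payroll file for acct 4111 1111 1111 1111")
if hits:
    print("Blocked before submission:", ", ".join(hits))  # card/account number
```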
Want the full editable template with all five sections, industry add-ons, and a vendor evaluation checklist? Download the AI Policy Template → (free, no email required)
Section 3: Employee Guidelines
One rule matters more than all the others: every AI-generated deliverable gets reviewed by a human before it leaves the building.
That means before it goes to a client, gets posted publicly, or gets filed anywhere official. AI tools hallucinate. They invent case citations (Mata v. Avianca). They fabricate statistics. They confidently produce wrong answers.
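If you ever need to prove the review happened (to a client, a regulator, or your insurer), the lightest-weight mechanism is a sign-off record per deliverable. A minimal sketch; the field names are ours, not from any compliance standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ReviewRecord:
    """One row per AI-assisted deliverable that leaves the building."""
    deliverable: str     # file name or link
    ai_tool: str         # which approved tool produced the draft
    reviewer: str        # the human whose name goes on the work
    changes_made: bool   # did the review actually catch something?
    reviewed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

review_log: list[ReviewRecord] = []
review_log.append(ReviewRecord(
    deliverable="q3-client-memo.docx",
    ai_tool="claude-pro",
    reviewer="Dana R.",
    changes_made=True,   # the reviewer fixed a fabricated statistic
))
```

A spreadsheet works just as well; what matters is that "reviewed by a human" is a record, not a memory.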
Your guidelines should also cover:
- Disclosure: When do you tell clients or customers that AI was involved? (Multiple states now require it.)
- Attribution: AI-assisted work is still your work. Your name on it means you verified it.
- Training: Every employee who uses AI tools completes a 30-minute onboarding. No exceptions.
Section 4: Vendor Checklist
Before approving any AI tool, ask these five questions:
| Question | Why It Matters |
|---|---|
| Does the vendor train models on our data? | If yes, your client info becomes part of their product |
| Where is our data stored? What country? | Determines which privacy laws apply |
| Does the vendor have a SOC 2 Type II report? | Baseline proof they take security seriously |
| Will they sign a DPA (or BAA for healthcare)? | Legal requirement for regulated data |
| Can we delete all our data when we leave? | If not, your data lives there forever |
If a vendor can't answer these clearly, that's your answer. Move on.
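To keep evaluations consistent across vendors, you can encode the five questions as pass/fail flags. A minimal sketch, using a hypothetical vendor "ExampleAI"; the class and field names are our own shorthand for the checklist above:

```python
from dataclasses import dataclass

@dataclass
class VendorReview:
    """The five checklist questions above, as pass/fail flags."""
    name: str
    trains_on_our_data: bool   # disqualifying if True
    storage_country_known: bool
    has_soc2_type2: bool
    signs_dpa_or_baa: bool
    deletes_data_on_exit: bool

    def approved(self) -> bool:
        # One unclear or failing answer is the answer: move on.
        return (not self.trains_on_our_data
                and self.storage_country_known
                and self.has_soc2_type2
                and self.signs_dpa_or_baa
                and self.deletes_data_on_exit)

candidate = VendorReview("ExampleAI", trains_on_our_data=False,
                         storage_country_known=True, has_soc2_type2=True,
                         signs_dpa_or_baa=True, deletes_data_on_exit=False)
print(candidate.approved())  # False: your data would live there forever
```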
Section 5: Incident Response
When something goes wrong (data entered into the wrong tool, AI output sent to a client without review, a vendor breach), you need a phone tree, not a flowchart.
Keep it simple:
- Who to call: One person. Name, phone number, email. Not "the IT department."
- What to report: What tool, what data, what happened.
- Timeline: Report within 24 hours. No exceptions.
- The rule: No consequences for reporting. Consequences for not reporting.
The goal is speed. The faster you know about an incident, the faster you can contain it. If people are afraid of getting punished, they'll hide mistakes. And hidden mistakes become lawsuits.
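A fixed report shape also helps: the person on call gets the same three facts every time. A minimal sketch of the report from the list above (the contact details are placeholders):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Placeholder contact: one named person, directly reachable, per the phone tree.
INCIDENT_CONTACT = "Dana R. <dana@example.com>, 555-0142"

@dataclass
class AIIncident:
    tool: str            # what tool was involved
    data: str            # what data went where it shouldn't
    what_happened: str   # plain-language description

    def report(self) -> str:
        ts = datetime.now(timezone.utc).isoformat(timespec="seconds")
        return (f"[{ts}] AI INCIDENT, notify {INCIDENT_CONTACT}\n"
                f"  Tool: {self.tool}\n"
                f"  Data: {self.data}\n"
                f"  What happened: {self.what_happened}")

print(AIIncident(
    tool="free-tier chatbot",
    data="client project notes",
    what_happened="pasted notes into an unapproved tool; reported same day",
).report())
```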
Customize It for Your Industry
The five-section template works for any small business. But three industries need extra attention.
Law firms
ABA Formal Opinion 512 (July 2024) changed the rules. Lawyers must now demonstrate competence with AI tools under Rule 1.1. You need specific client consent for AI use (boilerplate engagement letter language is not adequate). Every AI-generated citation must be independently verified. And more than half of solo and small firms still have no AI policy. The template includes a legal ethics add-on for Section 3.
Healthcare
If an AI tool touches electronic protected health information (ePHI), HIPAA applies. Full stop. You need a Business Associate Agreement from every AI vendor that handles patient data. And starting January 1, 2026, California AB 489 prohibits AI from implying it holds a healthcare license. If your practice uses an AI chatbot for patient inquiries, review its language. The template includes a HIPAA compliance checklist for Section 4.
Financial services and insurance
The NAIC model bulletin, now adopted by 24 states, requires a documented AI program for any insurer or financial services firm using AI in consumer-facing decisions. FINRA's 2026 report expects documented governance including prompt and output logs for AI-assisted client communications. The template includes a regulatory compliance module for Section 2.
Roll It Out This Week
You don't need a month. You need a week.
Monday: Send a 3-question survey to your team. Frame it as amnesty, not an audit. No consequences for honest answers. Ask: (1) What AI tools do you use for work? (2) What tasks do you use them for? (3) What data do you put into them? You'll be surprised by the answers.
Tuesday-Wednesday: Customize the template. Use the survey results to fill in your Approved Tools list, identify your biggest data risks, and write guidelines that match how your team actually works. Don't write policy for a business you wish you had. Write it for the one you're running.
Thursday: Hold a 30-minute team meeting. Walk through the policy. Explain the why behind each section. Show the insurance exclusion trend. Show the wiretap lawsuits. People follow rules they understand.
Friday: Everyone signs. Set a 90-day review date. AI tools change fast. New ones launch every week. A policy that doesn't get reviewed becomes a policy nobody follows. Put the review on the calendar now.
Get the AI Policy Template → Free, editable, built for businesses with 5-50 employees. Includes all five sections, industry add-ons for legal/healthcare/financial services, and a vendor evaluation checklist.
What's Next
You've got two options:
- Do it yourself. Download the template, customize it using the steps above, and roll it out this week. Everything you need is in the document.
- Get it built for your industry. If you want a policy tailored to your specific tools, your state's regulations, and your team's workflow, that's what we do. We'll build it, train your team on it, and set up your review cadence.
Already did our 15-Minute AI Audit? Your D (Data Safety) score tells you how urgent this is. If you scored a 2, start here.
Next up: The 10-Point AI Security Checklist for Small Businesses, covering the technical controls that make your policy enforceable.
