Right now, one in three things your employees type into AI tools is sensitive. Client names, financial records, source code, health information. Cyberhaven tracked 7 million workers and found that 34.8% of all data put into AI tools qualifies as sensitive, up from 10.7% just two years ago.
Meanwhile, in a survey of 662 small business owners, 81% reported suffering a security breach in the past year. AI-powered attacks were the root cause in more than 40% of those incidents.
Most AI security guides assume you have a security team. You don't. You have 10, maybe 30 people, and they're already wearing three hats each.
We built this checklist because we've spent years in cybersecurity watching businesses make the same preventable mistakes. These are the 10 things that actually matter, how long each one takes, and what happens if you skip them.
The 10-Point AI Security Checklist
1. Know What AI Tools Your Team Is Using
78% of AI-using employees use tools their employer hasn't approved. And it's not just the junior staff. The same research found that 53.4% of C-suite leaders hide their AI habits, too.
You can't secure what you can't see.
What to do (30 minutes):
- Send a 3-question survey (Google Forms or a plain email works) to your team: What AI tools do you use? What tasks? What data goes in? Frame it as amnesty, not an audit.
- Check credit card statements and expense reports for AI subscriptions. Don't forget browser extensions that use AI.
- Create an approved tools list with three tiers: approved, restricted, and prohibited. (We covered how to build one in our AI policy template.)
- Make sure every AI account has a unique password and two-factor authentication enabled. If your team shares a single ChatGPT or Claude login, stop. Shared accounts mean no audit trail, and one person can see everyone else's conversation history.
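The expense-report sweep can be automated with a short script. This is a sketch, not a finished tool: the `expenses.csv` filename, its `date`/`description` columns, and the vendor list are all assumptions you'd adapt to your own accounting export.

```python
import csv

# Illustrative vendor list -- extend it with whatever tools your survey turns up.
AI_VENDORS = ["openai", "chatgpt", "anthropic", "claude", "gemini",
              "midjourney", "perplexity", "copilot", "jasper"]

def find_ai_charges(expense_csv_path):
    """Flag expense rows whose description mentions a known AI vendor.

    Assumes a CSV export with 'date' and 'description' columns."""
    hits = []
    with open(expense_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            desc = row.get("description", "").lower()
            matched = [v for v in AI_VENDORS if v in desc]
            if matched:
                hits.append((row.get("date", ""), row.get("description", ""), matched))
    return hits
```

Run it against last quarter's card export and compare the results to the survey answers; the gap between the two is your shadow AI.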
2. Classify Your Data Before It Touches AI
In 2023, Samsung engineers pasted proprietary source code and chip designs into ChatGPT on three separate occasions. That data was submitted while OpenAI was training on user inputs by default. Once ingested, it cannot be retrieved.
The fix is simple: decide in advance what can and can't go into AI tools.
What to do (15 minutes):
| Level | Label | Examples | AI Rule |
|---|---|---|---|
| 1 | Public | Marketing copy, published pricing, blog drafts | Any AI tool |
| 2 | Internal | Meeting notes (no client names), process docs | Approved tools only, training off |
| 3 | Confidential | Client details, financials, vendor contracts | Business-tier AI tools only. Anonymize first. |
| 4 | Restricted | SSNs, passwords, health records, credit cards | Never enters any AI tool. No exceptions. |
Print this table. Walk your team through five real examples from your own business. That's it.
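If anyone on your team is comfortable with a script, the Level 4 rule can be backed by a quick pre-paste check. The patterns below are illustrative only, not a complete scanner: real PII detection needs far broader coverage (names, addresses, medical terms) and a human in the loop.

```python
import re

# Illustrative Level 4 (Restricted) patterns -- a starting point, not a guarantee.
RESTRICTED_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "password": re.compile(r"(?i)\bpassword\s*[:=]"),
}

def check_before_pasting(text):
    """Return the Level 4 pattern labels found in text.

    An empty result does NOT mean the text is safe -- it only means
    none of these specific patterns matched."""
    return [label for label, pattern in RESTRICTED_PATTERNS.items()
            if pattern.search(text)]
```

Example: `check_before_pasting("Client SSN is 123-45-6789")` flags the text, while an ordinary marketing draft passes through. Treat any flag as a hard stop under the Level 4 rule.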
3. Turn Off Training on Every AI Tool
As of October 2025, every major AI vendor trains on your conversations by default on free and consumer-paid plans. ChatGPT and Gemini have done this for years. Microsoft Copilot, too. Anthropic was the last holdout, but switched Claude to training-on-by-default in October 2025.
That means there are no privacy-forward exceptions left among the major tools. But turning it off takes 30 seconds per tool.
Where to find the toggle:
- ChatGPT: Settings > Data Controls > "Improve the model for everyone" > Off
- Claude: Settings > Privacy > "Help improve Claude" > Off
- Gemini: Settings (gear icon) > Activity > "Keep activity" toggle > Turn Off
- Copilot: Profile > Settings > Privacy > "Training on conversation activity" > Off
Data you already submitted while training was on may have been used. You can't retroactively remove it. Turn this off today.
Free tier vs. business tier: If your team handles regulated data (health, legal, financial), free-tier opt-outs aren't enough. Business and enterprise plans don't train on your data by default. No vendor offers a HIPAA Business Associate Agreement on a free or consumer plan.
| What You Get | Free Tier | Business/Enterprise Tier |
|---|---|---|
| Uses your conversations for training | Yes (all vendors) | No |
| One login for your whole company (SSO) | No | Yes |
| Can see who used AI and when (audit logs) | No | Yes |
| HIPAA compliance available | No | Yes |
| Admin can manage team access | No | Yes |
4. Secure Your API Keys
Skip this item if your team only uses AI through web browsers (ChatGPT, Claude, etc.). This applies if you or a developer have built custom AI integrations, automations, or chatbots that use API keys.
In March 2025, an xAI developer accidentally pushed a .env file to a public GitHub repository. The leaked key provided access to over 60 private AI models, including models trained on SpaceX and Tesla data. It stayed live for two months before anyone noticed.
What to do (20 minutes):
- Never hardcode API keys in your code. Store them in environment variables.
- Set spending limits on every AI platform. One stolen key led to an $82,000 Gemini bill in 48 hours.
- Use a password manager (1Password, Bitwarden) to store API keys.
- If you hired a developer to build AI tools for your business, ask them: "Where are the API keys stored? Are they in a password manager or environment variables, not in the code itself?"
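The environment-variable advice looks like this in practice. A minimal sketch: the `OPENAI_API_KEY` name is just an example, and the point is that a missing variable fails loudly instead of tempting anyone to paste a key into the source.

```python
import os
import sys

def get_api_key(env_var="OPENAI_API_KEY"):
    """Read an API key from the environment instead of hardcoding it.

    Fails loudly if the variable is missing, so a misconfigured deploy
    can't quietly fall back to a key embedded in the code."""
    key = os.environ.get(env_var)
    if not key:
        sys.exit(f"Missing {env_var}. Set it in your shell or a .env file "
                 "that is listed in .gitignore -- never commit keys.")
    return key
```

If you use a `.env` file, confirm it appears in `.gitignore` before the first commit; the xAI leak above started with exactly that file landing in a public repo.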
5. Vet Your AI Vendors
In August 2025, attackers exploited stolen OAuth tokens from Salesloft's Drift AI chatbot to steal data from over 700 organizations, including Cloudflare, Google, and Palo Alto Networks. An AI chatbot became the entry point.
Before you sign up for any AI tool, ask five questions:
- Does this vendor train on my data? (Check their terms, not their marketing page.)
- Where is my data stored? (Country matters for compliance.)
- Does the vendor have SOC 2 Type II certification? (This is the baseline.)
- Can I get a Data Processing Agreement or BAA? (Required for health and legal data.)
- What happens to my data if I cancel? (Deletion timeline should be in writing.)
To find answers, search for "training" or "data use" in the vendor's terms of service. Look for a Data Processing Addendum, usually linked from their privacy policy. If a vendor can't answer these clearly, they're not ready for your business data.
6. Train Every Employee on AI Security
38% of employees share sensitive work data with AI tools without telling their employer. Among Gen Z, that number is 46%.
This isn't malicious. It's a knowledge gap. People don't know what "sensitive data" looks like in practice.
What to do (1 hour, once):
- Walk through the data classification table from Item 2 using real examples from your business.
- Show the Samsung story. Show the ChatGPT conversations that got indexed by Google in July 2025, where thousands of private conversations became searchable. These stories stick.
- Use a simple rule: if you wouldn't email it to a stranger, don't paste it into an AI tool.
- Repeat this annually. New tools and new risks show up constantly.
7. Defend Against AI-Powered Phishing and Deepfakes
In January 2024, a finance employee at engineering firm Arup joined a video call where every other participant was a deepfake: a fake CFO and fake colleagues, with synchronized voices and facial movements. The employee authorized a $25.6 million wire transfer to fraudsters.
KnowBe4's 2025 research found that roughly 83% of phishing emails now show signs of AI involvement. And AI voice clones can be created from just a few seconds of sample audio.
What to do (15 minutes):
- Set a policy: no wire transfers or sensitive actions authorized by video call, email, or voice alone. Require a callback to a known number.
- For transfers above a dollar threshold (you pick the number), require two people to approve.
- Tell your team: if a request feels urgent and unusual, that's exactly when to slow down and verify through a separate channel.
8. Check Your Insurance Policy
In January 2026, Verisk released new endorsements that let insurers exclude AI-related claims from general liability coverage. Nine insurance groups have already filed to adopt them. WR Berkley went further with an "absolute AI exclusion" that eliminates D&O, E&O, and fiduciary liability coverage for anything involving AI.
If your business uses AI and something goes wrong, your policy might not cover it.
What to do (30 minutes):
- Pull your current CGL, E&O, and cyber policies. Search for "artificial intelligence," "generative AI," or "machine learning" in the exclusions section. If you can't find your policies, call your broker and ask them to email you the current exclusion endorsements.
- Ask your broker directly: "Do any of my policies contain AI exclusions? If so, what's not covered?"
- Document your AI security practices. Insurers are increasingly tying premiums to governance maturity. This checklist is a start.
We covered the insurance angle in depth in our AI policy post.
9. Know Your State's AI Rules
State AI laws are moving fast. Three things to know right now:
Texas (effective January 1, 2026): The Responsible AI Governance Act has real penalties, but also a real upside: businesses that follow the NIST AI Risk Management Framework get an affirmative defense. Follow the framework, get legal protection.
Colorado (effective June 30, 2026): The Colorado AI Act requires impact assessments if you use AI for employment, lending, insurance, or healthcare decisions. Businesses with fewer than 50 employees who don't build their own AI models are exempt from most of it.
California (effective January 1, 2027): New CCPA regulations will require privacy risk assessments when AI is used in significant decisions, and give consumers the right to opt out.
Even if you're not in these states, they set the direction for where regulation is headed. You don't need to become an expert. You need to know which rules apply to your state and build that into your quarterly review.
10. Schedule a Quarterly Review
AI tools don't sit still. OpenAI updated ChatGPT's data policies three times in 2025 alone. Anthropic flipped its training default. Google renamed its features. Your checklist is only as good as its last review.
What to check each quarter (2-3 hours):
- Has any AI vendor changed their data or training policy?
- Has your team started using any new AI tools since the last review?
- Have any employees left who had access to AI accounts? Revoke their access.
- Are there new regulations in your state? (Check the NCSL AI legislation tracker.)
- Update your approved tools list and data classification if anything changed.
Put it on the calendar now. If it's not scheduled, it won't happen.
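The "any new tools since last review?" check is a set difference, and a few lines of script make it repeatable. A sketch, assuming you keep the approved list and the latest survey answers as simple lists of tool names:

```python
# Hypothetical approved list from Item 1 -- replace with your own.
APPROVED = {"chatgpt team", "claude team", "microsoft copilot"}

def review_tools(survey_responses):
    """Return tools reported in the survey that aren't on the approved list.

    Anything returned here needs a vendor vetting pass (Item 5) before
    it moves to the approved tier."""
    in_use = {tool.strip().lower() for tool in survey_responses}
    return sorted(in_use - APPROVED)
```

Each quarter, feed in the fresh survey answers; an empty result means no new shadow AI since the last review.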
What If the Data Already Went In?

If you discover that an employee has already pasted client data, health records, or confidential information into a free-tier AI tool:

1. Turn off training immediately on that account.
2. Contact the vendor and request data deletion.
3. Document exactly what was shared and when.
4. Notify affected parties if required by your industry or state breach notification laws.

The sooner you act, the better.
How Did You Do?
Go through the 10 items above. Count how many your business has fully completed.
The Microsoft Data Security Index found that only 47% of organizations are implementing AI-specific security controls. If you've completed 7 or more items, you're ahead of most. If you're at 3 or fewer, start with Items 1, 2, and 3 today. They're free and take under an hour combined.
What's Next
You've got two paths:
- Do it yourself. Work through this checklist over the next week. Most items are free and take under an hour. Start with Items 1-3 today.
- Get a security review. If you want a professional assessment of your AI setup, that's what we do. We'll audit your tools, identify the gaps, and build a security plan tailored to your business.
Haven't created your AI policy yet? Start with our AI policy template for small businesses. The policy makes this checklist enforceable.
Already did our 15-Minute AI Audit? Your D (Data Safety) score tells you how many of these items you're missing.
