A practical guide for small businesses trying to sort out which AI tools are safe, which are worth paying for, and what to do about the employees who are already using AI on their personal accounts.
The Conversation You’re Already Having
If you run a small or mid-sized business in Colorado right now, you are probably having some version of the same conversation every few weeks. A staff member mentions they’ve been using ChatGPT to draft emails. Someone else asks if they can upload a client document to Claude. Your compliance officer forwards you an article about AI data leaks. Microsoft keeps pushing Copilot into your inbox. And somewhere in the background you are wondering whether you’re behind, whether you’re exposed, and whether the whole thing is going to settle down soon enough that you can safely ignore it for another quarter.
It isn’t going to settle down. And ignoring it is the most expensive option on the table.
We’ve had this exact conversation with enough clients in the last six months that it felt worth writing down, because the decisions in front of you are simpler than the vendor marketing makes them seem — but they do require you to understand a few things that most of the coverage doesn’t explain well. This is our attempt to lay it out plainly.
Why “Just Use Copilot” Isn’t a Good Enough Answer Anymore
For the last year or so, the standard IT answer to “what AI tool should we use” has been some version of “stick with Microsoft Copilot because it’s integrated with your Microsoft 365, it adheres to Microsoft’s security standards, and it’s the safe choice.” That answer was defensible when the AI landscape was genuinely new. It isn’t anymore.
Copilot is good at what it does. If you want AI to help draft an email in Outlook, summarize a Teams meeting, clean up a PowerPoint deck, or write an Excel formula, Copilot handles those tasks well because it lives inside the app you’re already using. It is built for in-app productivity, and that is a real and valuable thing.
What Copilot is not particularly good at is the kind of work most people actually want AI for — research, analysis, reasoning through a complex problem, summarizing a long document, helping you think through a decision, coding support, or any task that requires the AI to actually be smart rather than just fast. In our experience, most users who sit down with Copilot for the first time walk away underwhelmed, because the use cases Copilot is best at are not the use cases they had in mind.
What happens next is predictable. Your staff, having been underwhelmed by the sanctioned tool, opens up a personal ChatGPT or Claude account on their phone or home laptop, and quietly starts pasting in the work they actually need help with — client memos, draft contracts, financial models, internal communications. They’re not trying to cause a problem. They’re trying to do their jobs. But they are routing company data through tools you have no visibility into and no contract with. The industry calls this “shadow AI,” and it is already happening in almost every business we work with, regardless of what the official IT policy says.
This is the actual AI risk in your environment. Not the tools on your approved list — the tools your people are using when nobody’s looking. And the answer isn’t to crack down harder. The answer is to give them a sanctioned alternative good enough that they don’t need to go around you.
The Two Tools Worth Knowing About
There are a lot of AI products on the market, but for most small businesses the practical landscape comes down to two: Microsoft Copilot and Claude, which is made by a company called Anthropic. These do different jobs, and the right answer for most clients is both, not one or the other.
Microsoft Copilot is the in-app productivity tool — the assistant that lives inside Word, Excel, Outlook, Teams, and PowerPoint. If your use case is “help me draft this email” or “summarize this meeting I missed” or “pull together a deck from this document,” Copilot is the right tool, because it’s already inside the app where the work is happening. You don’t have to copy and paste anything anywhere, and it can reference your company’s own documents and communications when you turn on the paid version.
Claude is the thinking tool. It lives in its own window — a chat interface on the web, a desktop app, a mobile app — and it’s designed for the work that doesn’t happen inside a specific Microsoft app. Research. Analysis. Working through a complicated decision. Reviewing a long document. Drafting something from scratch that requires actual judgment. Coding help. Claude tends to be noticeably better than Copilot at these tasks, and that’s why employees keep gravitating toward it (or toward ChatGPT, which is broadly similar) even when their company is paying for Copilot.
Thinking of these as competitors misses the point. They're more like a word processor and a spreadsheet: you can force either one to do the other's job, but you'll get dramatically better results if you use the right tool for the right thing. Most businesses we work with end up with Copilot for the in-app work and Claude for everything else, governed under a single policy.
The Data Safety Conversation You Actually Need to Have
When clients ask us “is it safe to put our data in this thing,” they are usually stacking three different questions on top of each other. Untangling them makes the rest of the conversation much easier.
Question 1: Will the AI vendor use your data to train their models?
This is the thing most of the alarming headlines are about. On the paid business tiers of Copilot, Claude, and ChatGPT, the answer is no — and it’s no contractually, not just as a setting that someone might accidentally toggle. On the free and consumer tiers, the answer ranges from “only if you opt in” to “yes by default unless you opt out.”
This is the single most important thing to understand, because it’s the line between safe and risky usage, and it runs through the middle of every one of these products. A personal Claude account used with work data is categorically different from a company Claude Team or Enterprise account used with the same data. Same interface, same model, completely different data policy behind it.
Question 2: Can other people inside your company see what you typed in?
This is where people’s intuitions tend to be off. These tools don’t maintain a shared organizational brain that absorbs everyone’s prompts and surfaces them to the next person who logs in. Each user’s conversations are private to them by default.
Where things get more nuanced is with shared context. Claude's "Projects" can be shared across a team, and Copilot surfaces documents from SharePoint; both respect whatever permissions already exist in the underlying system. Which means that if your SharePoint permissions are a mess, Copilot will surface things it shouldn't. Cleaning that up is part of any competent AI rollout, and it's work most businesses should be doing anyway.
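If you want a quick sense of how messy things are, the check is scriptable. Here's a minimal sketch using the Microsoft Graph API that flags files in a document library carrying an organization-wide sharing link. It assumes an Azure AD app registration with at least Sites.Read.All and a token in an environment variable, and it only looks at the top level of one library; a real audit would recurse into folders and page through results.

```python
# A minimal sketch, not a full audit: assumes an app registration with
# Sites.Read.All and a valid access token in the GRAPH_TOKEN env var.
# Checks only the top level of one document library; a real audit would
# recurse into folders and follow @odata.nextLink pagination.
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}

def flag_org_wide_sharing(drive_id: str) -> None:
    """Print files in a drive that carry an organization-wide sharing link."""
    items = requests.get(
        f"{GRAPH}/drives/{drive_id}/root/children", headers=HEADERS
    ).json().get("value", [])
    for item in items:
        perms = requests.get(
            f"{GRAPH}/drives/{drive_id}/items/{item['id']}/permissions",
            headers=HEADERS,
        ).json().get("value", [])
        for perm in perms:
            # link.scope == "organization" means anyone in the tenant can
            # open it, which means Copilot can surface it to anyone too.
            if perm.get("link", {}).get("scope") == "organization":
                print(f"Org-wide link: {item['name']}")
```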
Question 3: Can you audit what happened after the fact?
If your industry has regulators who might want to review AI-assisted communications — financial services, healthcare, law, anyone subject to records-retention rules — this matters enormously.
On the enterprise tiers of these products, you get audit logs and programmatic access for your security team or outside auditors to review. On the lower tiers, you don’t. For regulated businesses, this alone is usually enough to justify the enterprise tier, regardless of what anyone tells you about the baseline version being “good enough.”
What You Can Actually Put Into These Tools
The practical question most employees are walking around with is “am I allowed to paste this into Claude?” And the honest answer, on a properly licensed business tier, is: more than you probably think.
Safe for business-tier AI tools:
Internal memos, draft contracts, financial models, project notes, business strategy, HR policies, technical documentation, code, marketing copy — all of it is fine to work with in a tool covered by a standard data processing agreement, which is what the business tiers provide. This is the same posture you’d take with your cloud file storage, your CRM, or your email provider. The AI tool is a vendor processing your data on your behalf, not a counterparty you’re handing your data to.
Things to be careful with:
The things to be careful with are the same things you’d be careful with anywhere else. Personally identifiable information like Social Security numbers, dates of birth, or account numbers should generally be minimized — not because the AI can’t handle it, but because minimizing sensitive PII exposure across every system is just good hygiene.
Payment card data shouldn't be in there at all. Protected health information requires a Business Associate Agreement before it goes anywhere near an AI tool, and typically only the enterprise tiers offer one. And if you have an NDA with a client or a counterparty that specifically restricts the use of cloud processors, check that first.
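To make "minimize PII" something a busy employee can actually do rather than just nod at, a pre-paste scrubber helps. Here's a minimal sketch; the patterns are illustrative starting points, and for real detection you'd want a purpose-built tool (Microsoft Purview DLP, or an open-source library like Presidio) rather than a handful of regexes.

```python
# A minimal sketch of a pre-paste scrubber. The patterns are illustrative,
# not a complete PII detector.
import re

PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),    # crude card-number match
    "DOB": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),  # catches any m/d/yyyy date
}

def scrub(text: str) -> str:
    """Replace common PII patterns with labeled placeholders before pasting."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(scrub("Tenant SSN 123-45-6789, card 4111 1111 1111 1111."))
# -> Tenant SSN [SSN REDACTED], card [CARD REDACTED].
```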
The goal is to give your team a clear policy they can actually follow in the moment, not a blanket “don’t put anything in” rule that they’ll quietly ignore the first time they need to get something done.
How We Approach AI Rollouts at Castle Rock Sky
When we help a client stand up a sanctioned AI deployment, we tend to break the work into four phases, and the most important one is the first.
Phase 1: AI Readiness Assessment (1-2 weeks)
We start with a short readiness assessment in which we:
- Review your data landscape and classification
- Discover which AI tools your staff is actually already using (you will be surprised)
- Write you an AI Acceptable Use Policy
- Classify your data so your team knows what can go where
- Recommend an approved tool list
This is billable work that stands on its own merit — most businesses should have this policy regardless of what AI tools they end up deploying — and it’s the foundation everything else gets built on.
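To give a feel for the output, the "what can go where" piece usually reduces to a small lookup table your team can check in the moment. Here's an illustrative sketch; the tier names and tool lists are placeholders your policy would define, not a standard.

```python
# An illustrative classification-to-tool mapping; tier names and tool
# lists are placeholders, not a standard.
ALLOWED_TOOLS = {
    "public":       {"copilot", "claude_enterprise", "personal_ai_accounts"},
    "internal":     {"copilot", "claude_enterprise"},
    "confidential": {"copilot", "claude_enterprise"},  # business/enterprise tiers only
    "restricted":   set(),  # PHI, card data: nothing without a BAA and explicit signoff
}

def is_allowed(classification: str, tool: str) -> bool:
    """Answer the in-the-moment question: can this data go in this tool?"""
    return tool in ALLOWED_TOOLS.get(classification, set())
```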
Phase 2: Tool Deployment
From there, if a broader deployment makes sense, we roll out whichever combination of tools fits your situation. For most of our clients, that means Copilot for the in-app productivity work and Claude Enterprise for the thinking work, with:
- Single sign-on tied to your existing Microsoft 365 identity system
- Audit logs routed to your security stack
- A Project structure inside Claude that mirrors how your teams actually work
- Admin training and end-user kickoff sessions
Phase 3: Custom Integrations (when needed)
For clients with specific line-of-business systems — property management software, industry CRMs, document repositories, accounting platforms — we can build custom integrations that let Claude reach into those systems while still respecting the permissions you already have in place. Most MSPs in our market don’t do this kind of work. It’s one of the reasons our more sophisticated clients end up with us.
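For what it's worth, the plumbing for this has standardized quickly: Anthropic's Model Context Protocol (MCP) is the usual way to let Claude call into an external system. Here's a minimal sketch of an MCP server exposing one read-only lookup; the tool and the property-management lookup are hypothetical stand-ins for whatever your line-of-business system actually offers.

```python
# A minimal sketch using the official MCP Python SDK (pip install mcp).
# The tool and the property-management lookup are hypothetical placeholders.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("property-records")

@mcp.tool()
def lookup_unit(unit_id: str) -> str:
    """Return a read-only summary for one unit from the property system."""
    # A real integration would call your line-of-business API here, using
    # credentials scoped to what the requesting user may already see.
    return f"Unit {unit_id}: placeholder, wire up your system's API here"

if __name__ == "__main__":
    mcp.run()  # defaults to stdio transport for local use
```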
Phase 4: Ongoing Governance
Because the AI landscape changes faster than any acceptable use policy can keep up with it, we add a modest monthly line item for ongoing governance:
- Reviewing audit logs for policy violations or unusual patterns (see the sketch after this list)
- Updating the policy as regulations evolve
- Surfacing new features you should enable or restrict
- Providing a quarterly executive view of what’s happening
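As a sketch of what that log review looks like in practice, here's a minimal pass over audit events exported to JSON. The field names are illustrative rather than a specific vendor's schema, since Purview and the AI vendors' admin consoles each export their own shape.

```python
# A minimal sketch over audit events exported to JSON (from Microsoft
# Purview or an AI vendor's admin console). Field names are illustrative,
# not a specific vendor's schema.
import json
from collections import Counter

def summarize(path: str) -> None:
    """Print a quick usage summary: heaviest users and after-hours activity."""
    with open(path) as f:
        events = json.load(f)
    by_user = Counter(e["user"] for e in events)
    # Assumes ISO-8601 local timestamps like "2025-06-03T19:42:00"
    after_hours = [e for e in events if not 8 <= int(e["timestamp"][11:13]) < 18]
    print("Top users:", by_user.most_common(5))
    print(f"{len(after_hours)} of {len(events)} events fell outside 8am-6pm")
```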
This keeps you compliant and keeps us accountable as your trusted advisor rather than just the people who installed the tool.
What to Do Next
If you haven’t yet, ask your team what AI tools they’re actually using. Not what they’re allowed to use — what they’re actually using. The gap between those two answers is your starting point.
If shadow AI is already happening in your organization (and it probably is), you have three options:
- Ignore it — and accept the risk that sensitive data is flowing through consumer-tier AI tools with no visibility, no contracts, and no recourse
- Ban it — and watch your team find creative workarounds while you lose the productivity gains AI actually offers
- Govern it — give your team sanctioned tools good enough that they don’t need to go around you, with policies clear enough that they know what’s allowed
The third option is the only one that actually works.
Get Started With an AI Readiness Assessment
If you’d like us to help close that gap, reach out. The AI readiness assessment is a good first conversation, and it’s scoped small enough that it doesn’t require a big commitment upfront. We’d rather spend two weeks helping you understand what you’re working with than sell you a tool you don’t need.
And in our experience, once you have that clarity, the rest of the decisions tend to make themselves.