Microsoft Copilot just got a new brain. Meet Claude, the AI model from Anthropic, now available in Copilot’s Researcher agent and Copilot Studio. It’s smart, it’s fast, and it’s great at deep reasoning. But before you flip the switch, let’s talk security with today’s #TechTipTuesday.
🚨 What’s New?
Anthropic’s models are now part of Copilot’s multi-model lineup. That means you can pick the best AI for the job—whether it’s summarizing complex docs, automating workflows, or building enterprise-grade agents. Flexibility? Yes. But it comes with a tradeoff.
🔐 What You Need to Know
Enabling Anthropic models in Copilot means your prompts and data are processed outside Microsoft's managed environment. That's right: once Claude is in play, that traffic is governed by Anthropic's terms, not Microsoft's. Here's what changes:
- No Microsoft audit or compliance guarantees
- No data residency commitments
- No Customer Copyright Commitment
In short: you lose the cozy security blanket Microsoft usually wraps around your data. So if your org is big on governance, compliance, or legal protections, this is a red flag waving in neon.
🛠️ Admins, This One’s on You
Turning on Anthropic isn’t automatic—it requires admin approval in the Microsoft 365 Admin Center. You’ll need to:
- Opt in at the tenant level (pictured)
- Configure usage per environment in the Power Platform admin center
- Monitor data movement and model selection per agent (see the sketch below for one way to keep an eye on Copilot activity)
And yes, agents fall back to OpenAI models if Anthropic gets disabled or becomes unavailable. But still: know what you're signing up for.
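On that monitoring point, here's a minimal Python sketch for pulling recent Copilot-related events out of the unified audit log via the Office 365 Management Activity API. It assumes an Entra ID app registration with the ActivityFeed.Read permission and an active Audit.General subscription; the operation filter and field names are best-effort assumptions, so inspect a sample record from your own tenant and adjust.

```python
# Sketch: list recent Copilot interaction events from the Microsoft 365 unified
# audit log via the Office 365 Management Activity API, to see which agents are
# generating traffic. Assumes an Entra ID app with ActivityFeed.Read and an
# existing Audit.General subscription.
import datetime as dt
import msal
import requests

TENANT_ID = "<your-tenant-id>"        # placeholder
CLIENT_ID = "<your-app-client-id>"    # placeholder
CLIENT_SECRET = "<your-app-secret>"   # placeholder (store in a vault, not in code)

BASE = f"https://manage.office.com/api/v1.0/{TENANT_ID}/activity/feed"


def get_token() -> str:
    """Client-credentials token for the Management Activity API."""
    app = msal.ConfidentialClientApplication(
        CLIENT_ID,
        authority=f"https://login.microsoftonline.com/{TENANT_ID}",
        client_credential=CLIENT_SECRET,
    )
    result = app.acquire_token_for_client(scopes=["https://manage.office.com/.default"])
    return result["access_token"]


def copilot_events(hours_back: int = 24) -> list[dict]:
    """Return audit records whose operation looks Copilot-related."""
    headers = {"Authorization": f"Bearer {get_token()}"}
    end = dt.datetime.now(dt.timezone.utc)
    start = end - dt.timedelta(hours=hours_back)
    params = {
        "contentType": "Audit.General",
        "startTime": start.strftime("%Y-%m-%dT%H:%M:%SZ"),
        "endTime": end.strftime("%Y-%m-%dT%H:%M:%SZ"),
    }
    blobs = requests.get(f"{BASE}/subscriptions/content", headers=headers, params=params)
    blobs.raise_for_status()
    records = []
    for blob in blobs.json():
        content = requests.get(blob["contentUri"], headers=headers)
        content.raise_for_status()
        # The filter and field names below are assumptions; check a sample record.
        records.extend(r for r in content.json() if "Copilot" in r.get("Operation", ""))
    return records


if __name__ == "__main__":
    for rec in copilot_events():
        print(rec.get("CreationTime"), rec.get("UserId"), rec.get("Operation"))
```

This won't tell you which model served a given response (lean on Copilot Studio's analytics and the Power Platform admin center for that), but it gives you a tenant-level pulse on who's using Copilot agents and how often.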
