Claude arriving in Copilot Chat has created a predictable wave of excitement, and an equally predictable wave of confusion. Part 1 covered the compliance and data‑residency questions you must answer before enabling Claude. This part goes deeper: what Claude actually does differently, how it compares to GPT, and why organisations keep misunderstanding what “multiple models in Copilot” really means.
This is the architectural conversation most organisations haven’t had yet.
Claude vs GPT
Strengths, weaknesses, and why capability still doesn’t override governance.
Claude and GPT are both strong models, but they’re strong in different ways, and those differences matter when you’re choosing which model is allowed to process your data.
Claude seems to excel at:
- long‑form reasoning
- structured analysis
- summarising complex documents
- staying consistent over long outputs
- coding, debugging, and refactoring
- understanding large codebases and multi‑file context
GPT seems to excel at:
- broad general knowledge
- creativity and ideation
- conversational flexibility
- generating examples, drafts, and exploratory content
Both are impressive. Neither is “the best at everything”. And none of this matters if the model isn’t allowed to touch your data in the first place.
Capability is interesting. Governance is mandatory.
What organisations consistently misunderstand
After working on a ton of Copilot readiness projects, I see the same misconceptions appear again and again:
- “Copilot is one model.”
  It isn’t. It’s now a platform with multiple models, some native, some external.
- “GPT and Claude are the same type of model.”
  They’re not. GPT in Copilot Chat is Microsoft‑hosted. Claude is not.
- “If it’s in Copilot Chat, it must be safe.”
  Only if you define “safe” as “we didn’t check”.
- “EU Data Boundary applies to everything.”
  It doesn’t. External models don’t inherit Microsoft’s compliance posture.
- “Sensitivity labels will protect us.”
  They help, but they don’t control where the model runs.
- “We’ll enable it and figure out governance later.”
  Later is when the audit happens.
What you actually need to decide
Before enabling Claude, or any external model, organisations need to define:
- Which models are allowed by default
- Which data categories each model may process
- Who approves external model usage
- How prompts and responses are logged and monitored
- How the DPIA is updated to reflect external processing
- How users are informed about model boundaries
This is not a setting you click and forget. It’s a governance decision that changes your data flow and your compliance story.
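To make those decisions concrete, here’s a minimal sketch of what they look like when written down as an explicit policy rather than tribal knowledge. Everything in it is hypothetical: the `ModelPolicy` shape, the category names, and the model identifiers map to no real Microsoft API. The point is simply that every field answers one of the questions above, explicitly, in writing.

```typescript
// Hypothetical policy sketch: none of these types map to a real
// Microsoft 365 or Copilot admin API. Every field answers one of
// the governance questions listed above.

type DataCategory = "public" | "internal" | "confidential" | "regulated";

interface ModelPolicy {
  model: string;                      // e.g. "microsoft-hosted-gpt", "claude"
  hostedInsideM365Boundary: boolean;  // does Microsoft remain the processor?
  allowedByDefault: boolean;          // available without an approval step?
  allowedDataCategories: DataCategory[];
  approver: string;                   // who signs off on exceptions
  promptLoggingRequired: boolean;     // can you answer the audit question?
  dpiaUpdated: boolean;               // external processing reflected in the DPIA
  userGuidancePublished: boolean;     // do users know where their prompt goes?
}

const policies: ModelPolicy[] = [
  {
    model: "microsoft-hosted-gpt",
    hostedInsideM365Boundary: true,
    allowedByDefault: true,
    allowedDataCategories: ["public", "internal", "confidential", "regulated"],
    approver: "n/a (default model)",
    promptLoggingRequired: true,
    dpiaUpdated: true,
    userGuidancePublished: true,
  },
  {
    model: "claude",
    hostedInsideM365Boundary: false,   // second vendor, second data path
    allowedByDefault: false,
    allowedDataCategories: ["public", "internal"],
    approver: "data-protection-officer",
    promptLoggingRequired: true,
    dpiaUpdated: false,                // if this is false, you are not ready
    userGuidancePublished: false,
  },
];

// A model is only enableable once every governance box is ticked.
const readyToEnable = (p: ModelPolicy): boolean =>
  p.dpiaUpdated && p.userGuidancePublished && p.promptLoggingRequired;

for (const p of policies) {
  console.log(`${p.model}: ${readyToEnable(p) ? "ready" : "NOT ready"} to enable`);
}
```

If you can’t fill in every field for a model, you haven’t made the decision yet. You’ve just clicked the toggle.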

What this really means for your data
This is the part too many organisations skip, or worse, assume someone else has already handled.
When you enable an external model like Claude inside Copilot Chat, you’re not just “adding another option”. You’re creating a new data exit point. That means:
- Your prompts may leave the Microsoft 365 boundary.
  Microsoft‑hosted GPT stays inside the M365 substrate. Claude does not. If you allow Claude, you’re explicitly allowing data to be processed by a second vendor.
- Your compliance posture changes the moment you enable it.
  Everything you’ve built around Purview, sensitivity labels, DLP, retention, and insider risk assumes Microsoft is the processor. External models break that assumption unless you redesign your governance.
- Your residency guarantees no longer apply universally.
  The EU Data Boundary is not a magic blanket. It applies to Microsoft‑hosted models. External models follow their own residency rules, not yours.
- Your audit trail becomes more complex.
  You now need to track which model processed what, when, and why (see the sketch below). If you can’t answer that in an audit, you don’t have control; you have hope.
- Your users won’t know the difference unless you tell them.
  And if they don’t know, they’ll paste whatever they want into whichever model looks friendliest. That’s how “data on the loose” happens: not through malice, but through UX.
This is why model choice is not a feature preference. It’s a data‑flow decision with real governance consequences.
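What does “track which model processed what, when, and why” actually mean in practice? Here’s a minimal sketch of an audit record, assuming (hypothetically) that you export Copilot interaction events into your own log store. None of these field names come from a real Copilot or Purview schema; the point is that every external‑model event must map to a model, a boundary, and a written justification.

```typescript
// Hypothetical audit record: field names are illustrative, not taken
// from any real Copilot or Purview export schema.
interface ModelAuditRecord {
  timestamp: string;           // when the prompt was processed
  userId: string;              // who sent it
  model: string;               // which model handled it
  insideM365Boundary: boolean; // did the data stay with Microsoft?
  dataCategory: string;        // classification of the content involved
  justification?: string;      // why an external model was allowed, if it was
}

// The audit question, as a query: every external-model event must carry
// a justification. Anything this returns is "hope", not "control".
function unexplainedExternalUse(log: ModelAuditRecord[]): ModelAuditRecord[] {
  return log.filter((r) => !r.insideM365Boundary && !r.justification);
}

// Example: one compliant event, one that will hurt in an audit.
const log: ModelAuditRecord[] = [
  {
    timestamp: "2025-06-02T09:14:00Z",
    userId: "alice@contoso.com",
    model: "claude",
    insideM365Boundary: false,
    dataCategory: "internal",
    justification: "code refactoring, approved by DPO 2025-05-20",
  },
  {
    timestamp: "2025-06-02T09:31:00Z",
    userId: "bob@contoso.com",
    model: "claude",
    insideM365Boundary: false,
    dataCategory: "confidential", // this should never have happened
  },
];

console.log(unexplainedExternalUse(log)); // -> Bob's event, and a hard conversation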
A practical model‑selection matrix
Here’s a simple way to think about model choice inside Copilot Chat:
| Scenario | Recommended Model | Why |
|---|---|---|
| Sensitive or regulated data | Microsoft‑hosted GPT | Stays inside the Microsoft 365 boundary |
| Coding, debugging, refactoring | Claude | Stronger reasoning and code handling |
| Creative ideation | GPT | More flexible and generative |
| Long‑form analysis | Claude | Better consistency and structure |
| Anything requiring strict residency | Microsoft‑hosted GPT | Boundary guarantees apply |
This isn’t about which model is “better”. It’s about which model is appropriate.
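If you want that matrix to be more than a slide, it has to run as a rule, not a suggestion. A minimal sketch, reusing the hypothetical model names from earlier: governance eliminates models first, and only then does capability get a vote.

```typescript
// Hypothetical routing sketch: governance first, capability second.
type Scenario =
  | "sensitive-or-regulated"
  | "coding"
  | "creative-ideation"
  | "long-form-analysis"
  | "strict-residency";

function selectModel(scenario: Scenario): string {
  // Governance gate: these scenarios must stay inside the Microsoft 365
  // boundary, regardless of which model would do a "better" job.
  if (scenario === "sensitive-or-regulated" || scenario === "strict-residency") {
    return "microsoft-hosted-gpt";
  }
  // Only now is capability allowed to decide.
  const capabilityPreference: Record<string, string> = {
    "coding": "claude",
    "long-form-analysis": "claude",
    "creative-ideation": "gpt",
  };
  return capabilityPreference[scenario];
}

console.log(selectModel("coding"));                 // "claude"
console.log(selectModel("sensitive-or-regulated")); // always "microsoft-hosted-gpt"
```

Note the ordering: the governance gate runs first and cannot be overridden by a capability argument. That is the whole point of the matrix.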
Practical interpretation
If you choose Microsoft-native models
- You stay within the Microsoft 365 boundary
- Your compliance story holds
- Your Purview strategy works
- Your security team breathes normally
If you enable Claude
- You introduce a new data processing path
- You now depend on two vendors
- Your EU boundary story becomes… flexible
- Your governance needs to grow up fast
Claude is powerful.
GPT is powerful.
But neither is “safe by default”.

Final thoughts
Claude in Copilot Chat is a fantastic capability, if you understand what you’re signing up for. If you don’t, it’s not a feature. It’s a liability.
The real risk isn’t Claude. It’s the little assumption that “well, it’s in Copilot, so it must be safe”. That’s how data wanders off. And let’s be honest: the same thing happens when users go hunting for “better” models on the internet, like ChatGPT, Gemini, or whatever shiny thing they find along the way. If you haven’t considered that behaviour, you’re already behind.
Data doesn’t go on the loose because someone is reckless. It goes on the loose because people don’t understand the consequences of choosing the wrong model, or where their prompt actually ends up.
If you enable external models without boundaries, approvals, data categories, or user guidance, you’re not innovating. You’re gambling. And when the audit lands, “but everyone else enabled it too” will not be the winning argument you think it is.
The organisations that get multi‑model Copilot right aren’t the ones chasing shiny capabilities. They’re the ones who treat model selection as governance, not enthusiasm.
Your governance is the safety. If you don’t build it intentionally, you’ll meet the consequences later, usually when someone asks a very simple question you suddenly can’t answer.