Every new technology wave brings chaos, but AI has turned it into an Olympic sport.
We’ve got organisations unleashing Copilot before they’ve classified a single file, leaders waving “digital transformation” decks like incense at an innovation altar, and users pasting sensitive data into random chatbots “just to test it.”
Somewhere between the boardroom and the Teams chat, the adults left the room.
I’ve been talking about this for months – at MVP-Dagen, at customer events, at internal sessions – and yet it still feels like déjà vu every single time. The message doesn’t change: you can’t unleash AI responsibly if you don’t have control of your data.
And yet, here we are. New city, new stage, same wild optimism, same missing guardrails.

The problem: AI without adults
Let’s be honest, the tech isn’t the problem. AI is astonishingly capable, helpful, and terrifyingly literal. It’ll cheerfully summarise, extract, and share whatever it’s given, without a single moral pause. That’s not evil; that’s design.
The problem is the people deploying it like a new toy.
No data classification. No DLP. No clarity on who’s responsible for what.
Then comes the shock:
“Why did Copilot show that confidential file?”
Because you told it to index everything.
“Why did this prompt reveal payroll data?”
Because nobody set access boundaries.

We keep treating AI as if it’s a colleague with common sense. It’s not. It’s a very obedient intern, one who does exactly what you ask, even when what you ask is incredibly stupid.
Digital maturity ≠ a fancy licence
Buying Copilot doesn’t make you mature.
Having E5 doesn’t make you secure.
And sticking “AI strategy” in a PowerPoint doesn’t mean you have one.
Digital maturity means understanding how data moves, who owns decisions, and why context matters. It’s boring. It’s documentation, metadata, Purview policies, and training sessions where people sigh loudly.
And it’s also the difference between “AI as an assistant” and “AI as a data leak with a personality.”
The mature organisations aren’t the ones building the flashiest agents. They’re the ones that quietly fix their foundations before they press Play.
Leadership: the missing grown-ups
Let’s talk about leadership.
Because somewhere along the way, “innovation” became everyone’s job, except the part where it requires responsibility.
Too many leaders have hit the big Play button on Copilot without setting rules of engagement. No clarity on what’s allowed, what’s confidential, or what happens when it inevitably goes wrong.

Here’s the reality: governance isn’t a blocker, it’s an enabler. It’s the seatbelt that lets you drive faster without dying. And every time leadership skips it, they’re not empowering innovation, they’re gambling with reputation.
At MVP-Dagen I joked that “AI doesn’t break rules, it just follows the bad examples you set.” People laughed. But it wasn’t really a joke.
The responsibility gap
We’ve built AI into everything, from PowerPoint to Outlook, but we haven’t built responsibility into everyone.
AI doesn’t make decisions. People do.
Yet somehow, when the output goes wrong, it’s always “the AI’s fault.”
As I’ve said time and time again: AI isn’t the problem. YOU ARE!
The ethical and operational guardrails must come from humans, the ones who understand context, confidentiality, and consequence. The ones who see beyond the demo and think about data lineage, consent, and long-term risk.
The best part? Microsoft has already given us the tools. Purview, DLP, Insider Risk, Conditional Access – the digital seatbelts are there. We just need to fasten them before we crash the car.
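To make the seatbelt metaphor concrete, here’s a minimal sketch of fastening one of them: creating a report-only Conditional Access policy through Microsoft Graph. Treat everything in it as an assumption-laden illustration – the policy name is made up, and I’m assuming you already have a token from an app registration with the Policy.ReadWrite.ConditionalAccess permission.

```python
# Minimal sketch, not production code: create a *report-only* Conditional Access
# policy via Microsoft Graph. Assumes you already hold an access token for an app
# with the Policy.ReadWrite.ConditionalAccess permission (auth flow omitted).
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
token = "<access-token-from-your-auth-flow>"  # placeholder

policy = {
    "displayName": "Report-only: require MFA for all users",  # hypothetical name
    "state": "enabledForReportingButNotEnforced",  # watch the impact before enforcing
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
        "clientAppTypes": ["all"],
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

resp = requests.post(
    f"{GRAPH}/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {token}"},
    json=policy,
)
resp.raise_for_status()
print("Created policy:", resp.json()["id"])
```

Report-only mode is the point here: you get to watch the sign-in logs and see who would have been blocked before you flip the state to “enabled”. Seatbelt first, speed second.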
What comes next
I’ve said it before: governance isn’t glamorous, but it’s the backbone of every responsible AI journey.
So instead of just shouting into the void (again), I’m turning this into a series: a proper, grown-up walkthrough of what it actually takes to make Microsoft 365 secure, structured, and Copilot-ready.
Over the next few weeks, some good friends and I will break down the practical side of all this, without the corporate fluff.
We’ll dig into things like:
- Microsoft Purview essentials – labels, DLP, insider risk, and data boundaries that actually make sense.
- SharePoint clean-up and metadata sanity – because Copilot can’t read your mind through a folder called “New folder (2)” (there’s a small taste in the sketch after this list).
- Conditional Access and “Secure by Default” setups – why AI needs them, and how to implement them without breaking your users.
- Copilot readiness – what “responsible adoption” really means before you press Play.
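As a little appetiser for the SharePoint clean-up post, here’s a minimal sketch of hunting down that folder-naming debris with Microsoft Graph. Again, hedge everything: the site ID is a placeholder, I’m assuming a token with Sites.Read.All, and pagination is left out for brevity.

```python
# Minimal sketch: find "New folder" debris in a site's default document library
# via Microsoft Graph. Assumes a token with Sites.Read.All; site ID is a placeholder.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
token = "<access-token>"    # placeholder
site_id = "<your-site-id>"  # placeholder, e.g. from GET /sites?search=...

# driveItem search across the site's default document library
url = f"{GRAPH}/sites/{site_id}/drive/root/search(q='New folder')"
resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()

for item in resp.json().get("value", []):
    if "folder" in item:  # folders only, not files that merely mention the phrase
        print(f"{item['name']} -> {item['webUrl']}")
# Note: a real clean-up script would follow @odata.nextLink for paging.
```

A list like that is where the metadata conversation starts: Copilot grounds its answers in names, labels, and metadata, and “New folder (2)” gives it absolutely nothing to work with.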
This isn’t just about security – it’s about building digital maturity with a bit of humour and a lot of honesty.
So yes, there’s more coming. The grown-ups are staying in the room, and we’re finally cleaning the place up.
Earlier posts on the Purview subject:
- Get Copilot ready(-ish) – with Microsoft Purview – Agder in the cloud
- Get Copilot ready(-ish) – with labels – Agder in the cloud
- Get Copilot ready(-ish) – with retention policies – Agder in the cloud
- Get Copilot ready(-ish) – Sensitive info types – Agder in the cloud
So what does being a grown-up look like?
It looks like saying “no” to shortcuts.
It looks like labelling data before feeding it to Copilot.
It looks like teaching people that prompts are powerful, and context isn’t free.
It looks like asking the uncomfortable questions, the ones that stop a cool idea from becoming a compliance nightmare.

AI doesn’t need more pilots. It needs principles.
It doesn’t need enthusiasm. It needs ethics.
And it certainly doesn’t need more chaos disguised as “agility.”
So yes, let’s embrace AI. Let’s innovate, automate, and let Copilot do the heavy lifting.
But for everyone’s sake, let’s also make sure the grown-ups stay in the room.