This article is about Read.ai, but the same issues apply to the following apps:
❌ Otter.ai
❌ Fireflies.ai
❌ Fathom
❌ Avoma
❌ Grain
❌ MeetGeek
I’ve always been skeptical of third-party tools. Not because I hate innovation, but because I’ve seen too many “productivity enhancers” turn into security incidents. My default setting is simple: block first, ask questions later.
So, when a colleague mentioned that a client had discovered Read.ai was doing a little more than advertised (casually joining meetings, recording conversations, and emailing summaries to unsuspecting participants), I wasn’t shocked. But I was curious enough to dig deeper.
What I found was a case study in viral Shadow IT.
What is Read.ai?
On the surface, Read.ai is just another AI meeting assistant. It joins Zoom, Teams, or Meet, transcribes the conversation, and sends out a summary.
The problem isn’t the transcription; it’s the distribution method. Once a single user in your organization signs up (usually via a deceptively innocent “Sign in with Microsoft” prompt), the tool gains calendar access via OAuth. It then starts auto-joining meetings, even those the user isn’t attending.
Here is the clever (and uncomfortable) part:
- Read.ai joins a meeting and records it.
- It emails a summary to everyone on the invite list, external or internal.
- A curious recipient clicks the link to view the notes.
- To see them, they are asked to sign in with Microsoft 365.
- Boom. A new Read.ai account is created, calendar permissions are granted, and the bot now has access to their meetings too.
It spreads like a digital cold. One click, and it propagates across the tenant. There is no big “Warning: You are installing a bot” banner, just a standard OAuth permission screen that most users click through without reading.
Why this is a problem
- Informed Consent: It records meetings where not everyone has agreed to be recorded. In the GDPR world, that is a massive liability.
- Data Leakage: It automatically sends summaries to external clients and vendors, as long as they were invited to the meeting. That internal moan about the project before the clients joined? Your client just got a transcript of it.
- Data Residency: Unless you have a specific enterprise agreement, that data is likely sitting on a server in the US, not your localized tenant.
- Trust: It undermines the expectation of privacy in internal team huddles.
And even if you spot it quickly and block it, the damage does not necessarily stop there.
Read.ai’s likely defence
To be fair, Read.ai would argue they operate within standard SaaS norms. The bot appears as a visible participant in the meeting (it doesn’t hide), and users technically grant permission via OAuth. They view themselves as a neutral productivity tool, with responsibility for privacy sitting with the user who invited the bot.
That may be a valid legal defence, but it’s a terrible operational reality. Intent doesn’t remove responsibility.
Cleaning up is not a quick fix
Blocking the app in Microsoft 365 stops the bleeding, but it doesn’t heal the wound. Removing the app from Entra ID does not delete the data Read.ai already harvested.
If a user wants their data gone, they have to:
- Log in to Read.ai (which they might not realize they have an account for).
- Manually delete the account.
- Request data erasure.
If this were only an internal cleanup job, it would be annoying but containable. But here, you are forced to contact your customers and partners, apologize, and ask them to do the same cleanup work. Every single person who got hit by this has to manually remove themselves from the service. It is both embarrassing and annoying for a huge number of people.
We are also seeing reports that these deletion requests aren’t always handled quickly. You are effectively relying on a third-party vendor, one you never signed a contract with, to honor a deletion request for data they shouldn’t have had in the first place.
How to block Read.ai
If you are responsible for keeping your Microsoft 365 environment even vaguely under control, this is not something you want to leave to individual user judgment. The good news is that you already have the tools to deal with it.
Here is a practical walkthrough using the Microsoft 365 security stack to find, contain, and remove Read.ai.
1. Microsoft Purview: Audit and Monitor
Start by using Microsoft Purview to identify where Read.ai has already wormed its way in; a Graph-based sketch follows the steps below.
- Use Audit Search to look for Read.ai activity: calendar access, meeting joins, and data sharing.
- Set up Alerts for suspicious third-party app activity, especially anything poking around in calendars or joining meetings uninvited.
- Bonus: If you’re using Microsoft Defender for Cloud Apps, check the OAuth apps tab to see exactly who authorized it.
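If you’d rather script the hunt, here’s a minimal sketch using the Microsoft Graph sign-in logs. Everything in it is an assumption to verify: you need an access token with AuditLog.Read.All (acquiring one, e.g. via MSAL, is left out), and the app display name to filter on may differ in your tenant.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token-with-AuditLog.Read.All>"  # assumed: acquired elsewhere
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# Filter the Entra sign-in logs by app display name.
# "Read AI" is an assumption; check the exact name in your tenant.
url = f"{GRAPH}/auditLogs/signIns?$filter=appDisplayName eq 'Read AI'"

while url:
    resp = requests.get(url, headers=HEADERS)
    resp.raise_for_status()
    page = resp.json()
    for entry in page.get("value", []):
        print(entry["createdDateTime"], entry["userPrincipalName"])
    url = page.get("@odata.nextLink")  # follow paging until exhausted
```

Every hit is a user who has authenticated to the app, i.e. someone you will later have to ask to delete their account.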
2. Microsoft Defender for Cloud Apps: Revoke Access
Once you’ve identified Read.ai as the culprit, it’s time to cut the cord; a sketch for revoking the underlying OAuth grants follows the steps below.
- Head to Defender for Cloud Apps → Cloud Discovery.
- Search for “Read AI” under Discovered Apps.
- Mark it as “Unsanctioned” to block access across your environment.
- Use Conditional Access App Control to prevent users from connecting to Read.ai from managed devices.
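The cord-cutting can also be scripted. The sketch below is a hedged companion to the portal steps: it looks up the Read.ai service principal in Microsoft Graph, lists who granted it delegated permissions, and revokes those grants. It assumes a token with Directory.Read.All and DelegatedPermissionGrant.ReadWrite.All, and that the display name matches what your tenant shows.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": "Bearer <token>"}  # needs Directory.Read.All and
                                               # DelegatedPermissionGrant.ReadWrite.All

# Find the service principal. "Read AI" is an assumption; adjust to your tenant.
sps = requests.get(
    f"{GRAPH}/servicePrincipals?$filter=displayName eq 'Read AI'",
    headers=HEADERS,
).json()["value"]

for sp in sps:
    # Every delegated permission grant issued to this app.
    grants = requests.get(
        f"{GRAPH}/oauth2PermissionGrants?$filter=clientId eq '{sp['id']}'",
        headers=HEADERS,
    ).json()["value"]
    for grant in grants:
        print("user:", grant.get("principalId"), "| scopes:", grant.get("scope"))
        # Revoke the grant so the app can no longer act on that user's behalf.
        requests.delete(f"{GRAPH}/oauth2PermissionGrants/{grant['id']}",
                        headers=HEADERS)
```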
3. Microsoft Intune: Lock Down Devices
Let’s not forget about device-level control. With Intune, you can:
- Use Administrative Templates (or the Settings Catalog) to enforce browser policies that block unauthorized extensions in Edge and Chrome (see the sketch after this list).
- Use App Protection Policies to prevent data from being shared with untrusted apps.
- Enforce compliance policies that restrict calendar access to only sanctioned applications.
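For the browser-extension piece, one hedged option is a custom OMA-URI profile that feeds Edge’s ExtensionInstallBlocklist policy. The sketch below creates such a profile via Graph. Treat it as a sketch only: the extension ID is a placeholder (verify that Read.ai actually ships a store extension before deploying), the token needs DeviceManagementConfiguration.ReadWrite.All, and you still have to assign the profile to a device group afterwards.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": "Bearer <token>"}  # needs DeviceManagementConfiguration.ReadWrite.All

# Placeholder - look up the real store ID of the extension you want to block.
EXTENSION_ID = "<extension-store-id>"

profile = {
    "@odata.type": "#microsoft.graph.windows10CustomConfiguration",
    "displayName": "Block Read.ai browser extension (Edge)",
    "omaSettings": [{
        "@odata.type": "#microsoft.graph.omaSettingString",
        "displayName": "Edge ExtensionInstallBlocklist",
        # ADMX-backed Edge policy delivered through the Policy CSP.
        "omaUri": ("./Device/Vendor/MSFT/Policy/Config/"
                   "Edge~Policy~microsoft_edge~Extensions/ExtensionInstallBlocklist"),
        # ADMX list values are index/value pairs separated by &#xF000;.
        "value": (f'<enabled/><data id="ExtensionInstallBlocklistDesc" '
                  f'value="1&#xF000;{EXTENSION_ID}"/>'),
    }],
}

resp = requests.post(f"{GRAPH}/deviceManagement/deviceConfigurations",
                     headers=HEADERS, json=profile)
resp.raise_for_status()
print("Created profile:", resp.json()["id"])
```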
4. Entra ID: Block the App at the Source
If you’re using Entra ID (and you should be), you can:
- Go to Enterprise Applications.
- Find “Read AI” or “Read Meeting Navigator.”
- Delete the application. This removes its consent grants and refresh tokens; access tokens already issued only live until they expire (a Graph sketch follows this list).
- Review your “User Consent Settings.” If users are allowed to consent to apps accessing company data on their own, turn that off. Require Admin Consent for OAuth apps.
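The portal steps above can also be done through Microsoft Graph. A minimal sketch, assuming a token with Application.ReadWrite.All; the two display names are the ones mentioned above, so adjust if your tenant shows something else.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": "Bearer <token-with-Application.ReadWrite.All>"}

# Look up the enterprise application (service principal) under both
# names it has been seen using, then delete whatever turns up.
for name in ("Read AI", "Read Meeting Navigator"):
    sps = requests.get(
        f"{GRAPH}/servicePrincipals?$filter=displayName eq '{name}'",
        headers=HEADERS,
    ).json()["value"]
    for sp in sps:
        # Deleting the service principal removes its consent grants, so the
        # app can no longer obtain new tokens for your tenant.
        requests.delete(f"{GRAPH}/servicePrincipals/{sp['id']}", headers=HEADERS)
        print(f"Deleted: {sp['displayName']} ({sp['id']})")
```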
Why users should not be allowed to consent to apps on their own
This entire situation exists because a single user was allowed to consent to a third-party application with calendar access. OAuth consent is powerful, too powerful to be treated casually.
When users are allowed to approve apps themselves, you lose visibility and control instantly. Most users do not read permission scopes. They see a Microsoft sign-in screen, assume it is safe, and click accept.
In the case of Read.ai, that one click leads to automatic meeting recording, data transfer outside the tenant, and viral spread through meeting invites. This is not a user failure. It is a design and governance failure.
The fix is straightforward:
- Disable user consent for third-party apps (a sketch follows this list)
- Require admin approval for all OAuth applications
- Review and approve apps deliberately, not retroactively
- Treat calendar access as high risk, because it is
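Disabling user consent is a one-call change through Microsoft Graph. A minimal sketch, assuming a token with Policy.ReadWrite.Authorization:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": "Bearer <token-with-Policy.ReadWrite.Authorization>"}

# An empty list of grant policies means users can consent to nothing;
# every new OAuth app then requires admin approval.
body = {"defaultUserRolePermissions": {"permissionGrantPoliciesAssigned": []}}

resp = requests.patch(f"{GRAPH}/policies/authorizationPolicy",
                      headers=HEADERS, json=body)
resp.raise_for_status()  # 204 No Content on success
```

Pair this with Entra’s admin consent request workflow, so users still have a sanctioned way to ask for the tools they genuinely need.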
If a tool is genuinely useful, it can survive a security review. If it relies on frictionless viral onboarding to succeed, that tells you everything you need to know.
Final Thoughts
Read.ai is the poster child for why IT governance exists. If you’re in the EU or anywhere else with data protection laws, letting a bot record meetings without clear consent is not just awkward, it could be illegal.
So, if you’re tired of digital gatecrashers, take back control. Use the tools you already have in Microsoft 365 to audit, block, and boot Read.ai before it turns your meetings into a privacy nightmare.
And if you need meeting transcripts, use Microsoft Copilot; at least that’s the devil you know.
Blocking the app is necessary. Cleaning up user accounts is essential. But the real fix is locking down User Consent in Entra ID so the next viral tool doesn’t walk right through the front door.