You're Being Watched (and you don't even know it)
The hidden risk of AI notetakers in your meetings
Imagine this.
You’re on a call with a few trusted friends or coworkers. Maybe it’s a prayer update, or a planning meeting for an upcoming strategy session. You’re using Microsoft Teams, or maybe Zoom, or Google Meet. You know almost everyone on the call. You’re comfortable. You’re transparent. You speak freely.
And then… there it is.
At the bottom of the participant list is a name like “Otter Assistant” or “Meeting Notetaker.” Or worse—just a first name that you don’t recognize. You brush it off. Probably a new person you don’t know. No big deal.
Except it is a big deal.
Because that “person” in your meeting isn’t a person at all. It’s an AI tool. One that’s been invited—often without your knowledge or consent—to record everything you say, transcribe it, and send it somewhere. But where?
That’s the question we all need to start asking. It’s super convenient to have the notes sent to everyone who was in the meeting. It’s awesome to have deliverables and to-do items routed to the right person… but what’s the cost?
The Rise of the AI Eavesdropper
AI notetakers have grown in popularity over the past few years. My first experience with one was several years ago. I absolutely loved it! Otter.ai, Fireflies, Fathom, Read.ai, and others have taken the industry by storm. These tools promise to make meetings more productive by recording audio, creating transcripts, tagging action items, and even summarizing decisions.
For secular companies working toward quarterly goals, that sounds helpful. For global mission workers, nonprofit leaders, and anyone trying to be more productive while saving money (and headcount), it’s a great help. But have you ever considered how this time- and money-saving tool could actually be a huge threat?
Why?
Because when you allow an AI tool into a meeting, you’re not just inviting a helpful assistant—you may be exporting sensitive data to a server you don’t control, owned by a company you don’t know or trust, in a country with laws you don’t understand. That country may even be run by a hostile government.
The biggest threat is that sometimes you don’t even know the tool is there.
What Could Go Wrong?
Here’s what’s at stake:
You don’t know where the data is going. Some AI notetaking tools send transcripts to servers in countries with aggressive data harvesting laws. You may be feeding sensitive conversations directly into a pipeline of surveillance. Whether it is mission-sensitive data for faith-based groups or intellectual property for secular companies, that data is likely being mined.
You may violate local laws. In many countries, recording a conversation without the explicit consent of all participants is illegal. Having an AI note-taker silently listening can land you—and your partners—in legal trouble.
You may expose field workers. In the missions world, names, places, and strategies must be carefully protected. A careless meeting invite could compromise an entire team—or get someone denied reentry at a border checkpoint.
You undermine trust. Recording people without telling them erodes trust. Trust is hard to build and easy to break.
Common Sense Disclosure and Due Diligence
This is more than a tech or an “AI” issue. It’s a trust issue. A safety issue. A stewardship issue.
Here’s what I’m advocating:
Default to transparency. Before every meeting, state clearly: “This meeting is not being recorded” or “We’re using an AI notetaker today—everyone okay with that?” Ask. Don’t assume.
Ban unknown participants. If someone joins the meeting and you don’t recognize the name, ask. “Hi, can you introduce yourself?” If it’s an AI bot, remove it unless everyone has agreed to its presence.
Train your team. Most people don’t understand the risks of these tools. Start talking about it. Make it part of your onboarding, your digital security training, and your leadership discussions.
Choose trusted tools. If you use AI assistants, vet them carefully. Where is the data stored? Who has access? Can you self-host it? If the answers are unclear, find a better option. Build on tools inside trusted apps you are already using. If you have an Information Security team, have them vet tools.
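If your meeting platform gives you a participant roster (most exports or APIs do), even a crude name check can catch the obvious bots before you say anything sensitive. Here is a minimal sketch in Python—the pattern list is a small illustrative sample, not an exhaustive catalog, and the roster format is an assumption about whatever your platform exposes:

```python
# Sketch: flag meeting participants whose display names look like
# AI-notetaker bots. The pattern list is illustrative, NOT exhaustive;
# the roster is assumed to come from your platform's participant export.

KNOWN_BOT_PATTERNS = [
    "otter",           # e.g. "Otter Assistant"
    "fireflies",
    "fathom",
    "read.ai",
    "notetaker",
    "meeting notes",
]

def flag_suspect_participants(participants: list[str]) -> list[str]:
    """Return display names that match a known bot-name pattern."""
    suspects = []
    for name in participants:
        lowered = name.lower()
        if any(pattern in lowered for pattern in KNOWN_BOT_PATTERNS):
            suspects.append(name)
    return suspects

roster = ["Jane Smith", "Otter Assistant", "Meeting Notetaker", "Luis G."]
print(flag_suspect_participants(roster))
# prints ['Otter Assistant', 'Meeting Notetaker']
```

A match doesn’t prove anything by itself—it just tells you to pause the meeting and ask who invited that participant. And a bot renamed to look like a person will slip right past a check like this, which is exactly why the human habit of asking unknown names to introduce themselves still matters.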
Mission Security in the Age of AI
For global workers, the threat isn’t just theoretical. We know that governments are listening. What you say on a call may come back to you the next time you cross a border. AI isn’t just writing summaries. It’s feeding data into systems that may work against you. This is why it is imperative to only use trusted AI tools.
We spend a lot of time and effort on physical security, but digital security is just as important.
You may think your AI assistant is just there to help take notes.
But in today’s world, it might also be helping someone else take names.
If you’ve had a situation where an AI tool joined a call without your knowledge—or if you’ve found a secure alternative—I'd love to hear from you.