Michael Avdeev · Insights · 4 min read
You Pasted Company Data Into ChatGPT This Week. Here’s What Happened To It.
A customer list. A spreadsheet. An internal doc you needed summarized. Maybe a support ticket with someone’s name and email in it.
It takes five seconds and feels like a private conversation. It isn't.
The Default You Never Changed
If you’re on a free or Plus ChatGPT account, OpenAI can use your inputs to train future models. That’s the default. You have to manually opt out in settings, and most people never do.
Your conversations are retained indefinitely unless you delete them. Even then, OpenAI takes up to 30 days to actually purge them.
And in 2025, a federal court order forced OpenAI to preserve everything, including conversations users had already deleted, for months while the NYT lawsuit played out.
That customer list you pasted in March? It might still exist on OpenAI's servers because of a lawsuit you're not even a party to.
Your SaaS Tools Added AI Without Telling You
It goes further.
Dozens of SaaS tools you already use — project management, support platforms, HR systems — have quietly added OpenAI as a subprocessor. Your data is flowing to OpenAI through apps that didn’t use AI when you signed up.
The terms changed. The subprocessor list updated. But nobody sent you an email saying “hey, we’re now sending your data to OpenAI.”
That Notion doc. That Zendesk ticket. That performance review in your HR platform. Check the subprocessor lists. You might be surprised what’s flowing where.
Enterprise vs. Reality
Enterprise and API accounts are different. No training on your data by default. Configurable retention. Zero-data-retention options for eligible API use.
But most employees aren’t on enterprise accounts.
They’re on personal ChatGPT tabs at work, pasting in whatever they need summarized or rewritten. They’re using the free tier because IT never provisioned an enterprise account. They’re using Claude or Gemini or Copilot through personal logins because it’s faster than filing a ticket.
That gap between “what the enterprise plan protects” and “what people actually do at their desks” is where sensitive data leaks. At every company. Every day.
What’s Actually Getting Pasted
We’ve scanned environments where employees paste data into AI tools. Here’s what shows up:
- Customer lists with names, emails, phone numbers, account values
- Internal financials being summarized for presentations
- Support tickets with full customer PII in the body
- Code snippets with hardcoded credentials and API keys
- HR documents with employee SSNs and salary data
- Legal drafts with confidential deal terms
- Medical records when healthcare workers need quick summaries
Most of this happens on personal accounts with training enabled. None of it shows up in your DLP logs because it’s browser-based, copy-paste, no file transfer.
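For a rough sense of what a scan like that looks for, here's a minimal sketch in Python. The pattern names and regexes are simplified illustrations, not our production detectors; a real scanner pairs patterns with validation (checksum tests, entropy scoring, keyword proximity) to cut false positives.

```python
import re
from pathlib import Path

# Illustrative, simplified detectors for the categories above.
# Real DLP patterns are far more robust and paired with validation.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "hardcoded_secret": re.compile(r"(?i)\b(?:api[_-]?key|secret|password)\s*[:=]\s*\S+"),
}

def scan_file(path: Path) -> dict[str, int]:
    """Return a count of hits per pattern for one text file."""
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return {}
    return {name: len(rx.findall(text))
            for name, rx in PATTERNS.items() if rx.search(text)}

def scan_tree(root: str) -> None:
    """Walk a directory tree and report files with likely sensitive data."""
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in {".txt", ".csv", ".md", ".json", ".py"}:
            hits = scan_file(path)
            if hits:
                print(f"{path}: {hits}")

if __name__ == "__main__":
    scan_tree(".")  # point this at a file share or repo checkout
```

Even a crude pass like this over a shared drive usually surfaces more credential and PII sprawl than anyone expects.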
The Question You Can’t Answer
Do you know how many of your employees are pasting company data into AI tools on personal accounts right now?
Not a guess. An actual number.
Most security teams can’t answer this. They have endpoint agents that watch file transfers. They have email DLP that scans attachments. They have CASB that monitors sanctioned SaaS apps.
But someone pasting a spreadsheet into chat.openai.com from their browser? Invisible.
What To Do About It
1. Inventory what's already out there. Before you can control future leakage, understand what sensitive data exists in files that could get pasted; a pattern scan like the sketch in the previous section is one starting point. If you don't know where your customer lists and credentials live, you can't protect them.
2. Check your SaaS subprocessor lists. Go through your critical vendors. Look at their subprocessor pages. You might find OpenAI, Anthropic, or Google listed on tools that didn't have AI features when you signed up. (A rough sketch for automating this check follows this list.)
3. Provision enterprise AI accounts. If people are going to use AI (they are), give them a sanctioned way to do it. Enterprise tiers with training disabled and configurable retention. Make the secure path the easy path.
4. Train on what not to paste. Most employees don’t know the difference between enterprise and personal AI accounts. They don’t know training is on by default. A five-minute explanation prevents a lot of exposure.
5. Accept you can’t block everything. Browser-based AI usage is nearly impossible to fully block without breaking productivity. Focus on reducing what’s sensitive enough to matter, not on achieving perfect control.
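On point 2, checking subprocessor lists is tedious enough that it's worth scripting. Here's a minimal sketch: fetch each vendor's subprocessor page and flag mentions of AI providers. The vendor names and URLs are placeholders (every vendor publishes this list somewhere different, and some only distribute it by email), so treat it as a starting point, not a finished tool.

```python
import urllib.request

# Placeholder vendors and URLs -- substitute your own critical
# vendors' actual subprocessor pages.
SUBPROCESSOR_PAGES = {
    "ExampleVendorA": "https://example-vendor-a.com/legal/subprocessors",
    "ExampleVendorB": "https://example-vendor-b.com/trust/subprocessors",
}

# Strings that suggest customer data may flow to an AI provider.
AI_PROVIDERS = ["openai", "anthropic", "google", "azure openai", "cohere"]

def check_vendor(name: str, url: str) -> list[str]:
    """Fetch one subprocessor page and return AI providers it mentions."""
    req = urllib.request.Request(url, headers={"User-Agent": "subprocessor-check/0.1"})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            page = resp.read().decode("utf-8", errors="ignore").lower()
    except OSError as exc:
        print(f"{name}: fetch failed ({exc})")
        return []
    return [p for p in AI_PROVIDERS if p in page]

if __name__ == "__main__":
    for vendor, url in SUBPROCESSOR_PAGES.items():
        found = check_vendor(vendor, url)
        if found:
            print(f"{vendor}: mentions {', '.join(found)}")
```

Run it on a schedule and diff the output. Subprocessor lists change quietly, and a new AI vendor appearing on one is exactly the kind of change nobody emails you about.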
The AI data governance problem isn’t hypothetical. It’s happening right now, in browser tabs across your company, with data you can’t see leaving.
Risk Finder scans your environment for sensitive data — PII, PHI, credentials, financial records — so you know what’s at risk before it gets pasted somewhere it shouldn’t be.