Direct Answer
The default answer at most organizations is “don’t do it” — which employees ignore because it makes their jobs harder without offering an alternative. A better approach is a tiered data policy that is specific enough to follow: define exactly which data classifications are permitted in which tools, under which contractual conditions, with what redaction requirements. Policy without a sanctioned workflow drives shadow usage, which is far more dangerous than governed usage.
Deeper Answer
The core problem is that most AI data policies were written by legal teams for the 2023 threat model — employees naively pasting customer PII into public ChatGPT. The threat model has since evolved. Employees are now using AI inside Microsoft 365, Salesforce, Slack, and dozens of other platforms where the data processing terms are different from consumer AI tools. A policy that says “no customer data in AI tools” without distinguishing between consumer-grade tools and enterprise-contracted tools creates a false binary that kills productivity and drives workarounds.
The right framework has four tiers:
- Tier one: public or anonymized internal data. May be used in any approved tool.
- Tier two: internal confidential data (internal strategy documents, financial forecasts, internal communications). Permitted only in tools with enterprise data processing agreements under which your data is not used for model training.
- Tier three: customer data. Permitted only in tools specifically contracted for that purpose, with documented data residency, retention limits, and breach notification terms.
- Tier four: regulated data (PHI, PII, financial account data, legally privileged material). Requires explicit legal review before any AI processing, regardless of tool.
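To make the tiers concrete, here is a minimal policy-as-code sketch in Python. The ToolProfile attributes (enterprise_dpa_no_training, contracted_for_customer_data) and the is_permitted helper are illustrative names, not any specific governance product; they simply encode the contractual conditions described above.

```python
from dataclasses import dataclass
from enum import IntEnum

class DataTier(IntEnum):
    PUBLIC_OR_ANONYMIZED = 1   # tier one: public or anonymized internal data
    INTERNAL_CONFIDENTIAL = 2  # tier two: strategy docs, forecasts, internal comms
    CUSTOMER = 3               # tier three: customer data
    REGULATED = 4              # tier four: PHI, PII, account data, privileged material

@dataclass
class ToolProfile:
    name: str
    approved: bool                      # on the approved tool list at all
    enterprise_dpa_no_training: bool    # enterprise DPA; data excluded from training
    contracted_for_customer_data: bool  # residency, retention, breach terms documented

def is_permitted(tool: ToolProfile, tier: DataTier) -> bool:
    """Return True if the tool may process data at this tier.

    Tier four always requires explicit legal review, so it is never
    auto-approved here regardless of the tool's contract status.
    """
    if not tool.approved:
        return False
    if tier == DataTier.PUBLIC_OR_ANONYMIZED:
        return True
    if tier == DataTier.INTERNAL_CONFIDENTIAL:
        return tool.enterprise_dpa_no_training
    if tier == DataTier.CUSTOMER:
        return tool.contracted_for_customer_data
    return False  # DataTier.REGULATED: route to legal review instead

# Example: a consumer-grade tool with no enterprise DPA may handle public
# data but not internal confidential data.
consumer_tool = ToolProfile("public-chat", approved=True,
                            enterprise_dpa_no_training=False,
                            contracted_for_customer_data=False)
assert is_permitted(consumer_tool, DataTier.PUBLIC_OR_ANONYMIZED)
assert not is_permitted(consumer_tool, DataTier.INTERNAL_CONFIDENTIAL)
```

Encoding the policy this way has a side benefit: the same check can gate tooling programmatically instead of living only in a PDF nobody reads.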
Redaction is not a sufficient substitute for tier discipline. Asking employees to manually redact names and account numbers before pasting is error-prone and will fail at scale. The better control is tool selection: use AI tools that connect to your systems of record via API with role-based access controls, so employees never need to paste anything. When the data stays in the system and the AI comes to it, you eliminate the paste problem entirely.
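As a sketch of the "AI comes to the data" pattern, the following hypothetical Python shows an assistant fetching a record from a system of record using the requesting user's own credentials, so the source system's role-based access controls decide what reaches the model. The endpoint, fields, and function names are invented for illustration.

```python
import requests

CRM_BASE_URL = "https://crm.example.internal/api"  # placeholder endpoint

def fetch_account_summary(account_id: str, user_token: str) -> dict:
    """Fetch an account with the requesting user's token.

    The CRM applies its own RBAC: fields the user cannot see are simply
    absent from the response, so they can never reach the model.
    """
    resp = requests.get(
        f"{CRM_BASE_URL}/accounts/{account_id}",
        headers={"Authorization": f"Bearer {user_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

def build_prompt(account: dict) -> str:
    # The prompt is assembled server-side from whatever fields the API
    # chose to return; nothing is hand-pasted from a browser tab.
    return (
        "Summarize the renewal risk for this account:\n"
        f"Name: {account.get('name', 'N/A')}\n"
        f"Segment: {account.get('segment', 'N/A')}\n"
        f"Open tickets: {account.get('open_ticket_count', 'N/A')}"
    )
```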
Approved tool lists need maintenance schedules, not just initial publication. A tool that was safe to use six months ago may have changed its data processing terms, been acquired, or expanded its data retention scope. Review your approved AI tool list quarterly. Require vendors to notify you of material changes to their data processing agreements. Add this review to your standard vendor management cycle.
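One lightweight way to operationalize the quarterly review is a machine-readable registry with review dates. The sketch below uses hypothetical tool names and a roughly 91-day interval standing in for "quarterly"; it flags entries whose review is overdue.

```python
from dataclasses import dataclass
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=91)  # roughly quarterly

@dataclass
class ApprovedTool:
    name: str
    dpa_version: str     # version of the data processing agreement on file
    last_reviewed: date

def tools_due_for_review(registry: list[ApprovedTool], today: date) -> list[ApprovedTool]:
    """Return tools whose quarterly review is overdue."""
    return [t for t in registry if today - t.last_reviewed > REVIEW_INTERVAL]

registry = [
    ApprovedTool("copilot-suite", dpa_version="2024-03", last_reviewed=date(2025, 1, 15)),
    ApprovedTool("crm-assistant", dpa_version="2023-11", last_reviewed=date(2024, 6, 1)),
]
for tool in tools_due_for_review(registry, date.today()):
    print(f"Review overdue: {tool.name} (DPA {tool.dpa_version}, last {tool.last_reviewed})")
```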
The audit requirement is non-negotiable. Any AI tool that processes customer data should produce logs showing what data was submitted, by whom, when, and what output was generated. Without that audit trail, you cannot respond to a data subject access request, cannot demonstrate compliance to a regulator, and cannot reconstruct what happened after an incident. If a vendor cannot provide this, the tool is not compliant for customer data use regardless of what the contract says.
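A minimal audit record capturing the four elements above (what was submitted, by whom, when, and what was generated) might look like the following sketch, assuming an append-only JSON Lines log. The field names are illustrative; here payloads are stored as hashes, with the full inputs and outputs retained in a secured store if your retention obligations (for example, responding to a data subject access request) require the actual content.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIAuditRecord:
    timestamp: str    # when the interaction occurred (UTC, ISO 8601)
    user_id: str      # by whom
    tool: str         # which approved tool handled the request
    data_tier: int    # classification of the submitted data (1-4)
    input_hash: str   # fingerprint of what was submitted
    output_hash: str  # fingerprint of what was generated

def log_ai_interaction(path: str, record: AIAuditRecord) -> None:
    """Append one interaction to an append-only JSON Lines audit log."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_ai_interaction("ai_audit.jsonl", AIAuditRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    user_id="u-1042",
    tool="crm-assistant",
    data_tier=3,
    input_hash="sha256:ab12...",   # placeholder digest
    output_hash="sha256:cd34...",  # placeholder digest
))
```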
Related Reading
- What AI red lines should we set today? — the non-negotiable boundaries every organization needs
- AI Board Governance Scorecard — assess data governance and policy controls across your AI program