I’ve seen organisations that:
– Block Copilot entirely (whilst staff use ChatGPT on personal accounts)
– Prohibit pasting customer names into prompts (but allow full document uploads)
– Require VPN access for AI tools (that store data in UK/EU cloud regions anyway)
This is security theatre- visible measures that create a feeling of safety without addressing actual risks.
However, what AI security actually requires is:
1. Data classification, not blanket bans. Not all data is equal. Publicly available information carries different risk than PII or commercially sensitive data. Classify it, then permit AI use accordingly.
2. Contractual controls with vendors. Where is data processed? How long is it retained? Is it used for model training? These questions matter more than whether you can access the tool through SSO.
3. Technical guardrails, not just policies. Policies get ignored when they’re inconvenient. Better: configure Azure AI Studio to block PII in prompts. Use data loss prevention tools. Deploy on-premises models for sensitive use cases.
4. Zero data retention agreements where possible. Microsoft offers ZDR for Copilot. OpenAI offers enterprise agreements with no training on your data. Anthropic’s Claude supports similar terms. Use them.
The threat model you should be worrying about:
– Prompt injection attacks (malicious instructions in user-supplied content)
– Data exfiltration through carefully crafted queries
– Model inversion (extracting training data)
– Unauthorised access to AI-generated insights
– Legitimate use of enterprise AI tools, properly configured, is usually lower risk than unmanaged shadow AI.
What’s your organisation’s approach to AI security?
