Microsoft blocks sensitive data from Copilot prompts with new tool

Microsoft has made its Purview Data Loss Prevention (DLP) tool generally available for Microsoft 365 Copilot and Copilot Chat. The system scans user prompts in real time for sensitive information types, such as personal data or financial details, using both built-in and custom definitions. If it detects a risk, it stops the AI from processing the prompt, querying Microsoft Graph, or searching the web. The protection also covers agents created in Copilot Studio and published to Microsoft 365 Copilot. Administrators must configure it in the Purview portal or admin center, starting in simulation mode to monitor matches without blocking. A public preview adds blocking of web searches triggered by sensitive prompts.
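To make the mechanism concrete, the logic described above can be sketched in a few lines of Python. This is a hypothetical illustration only, not Microsoft's implementation: the pattern names, the Luhn check, and the `simulate`/`enforce` mode strings are all assumptions chosen to mirror the real product's behaviour of detecting sensitive information types and either logging (simulation mode) or blocking (enforcement).

```python
import re

# Illustrative sensitive information types (assumed patterns, not
# Purview's actual definitions).
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def luhn_valid(number: str) -> bool:
    """Checksum used to cut false positives on card-like digit runs."""
    digits = [int(d) for d in re.sub(r"\D", "", number)]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def scan_prompt(prompt: str, mode: str = "simulate") -> dict:
    """Detect sensitive types in a prompt; block only in enforce mode."""
    hits = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        for match in pattern.finditer(prompt):
            if name == "credit_card" and not luhn_valid(match.group()):
                continue        # digit run fails the checksum; ignore it
            hits.append(name)
    blocked = bool(hits) and mode == "enforce"
    return {"detections": sorted(set(hits)), "blocked": blocked}
```

In simulation mode a risky prompt is detected and logged but still allowed through, which is exactly why the rollout guidance below starts there: administrators can review what would have been blocked before flipping to enforcement.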
Before this, many mid-sized companies paused Copilot rollouts over fears that prompts could accidentally expose confidential files through Graph or leak PII via web grounding. Users such as team leads stuck to manual work or sneaked over to ChatGPT, leaving adoption below 10 percent despite the licences already bought. Now configurable DLP acts as a precise guardrail, blocking only risky prompts while letting safe ones through. That shifts the balance in firms without dedicated compliance teams, turning governance from a total blocker into a quick IT checkbox that unlocks reliable daily use across Word, Teams, and agents.
Analysis
This governance fix nukes the top excuse for Copilot's low adoption in your shop – no more 'what if it leaks our data' hand-wringing. Grab admin access to Purview today, turn on simulation mode for credit-card detection, test a fake-sensitive prompt in Copilot Chat, then email your boss the log proving it's safe to mandate team trials.
Citation
This executive briefing was curated and analyzed by Collab365. To reference this analysis, please attribute: "This briefing is available on Collab365 Spaces (spaces.collab365.com)".