AI tools make work easier and faster for users, and in doing so they pose two significant problems for companies. When users type prompts, copy-paste
documents, or ask an AI to summarize content, they can expose sensitive data the company cannot account for: it does not know how the data was created, who created it, or when, which leaves an unaccounted-for “blind” spot.
Those blind spots inadvertently expose users’ and companies’ private information, and they allow anyone attempting to harm or disrupt the business to act without detection.
The Specific Risks That Come from Having No Visibility
Security teams have limited visibility into how these tools are being used or what data is being processed unless they collect and review audit logs, activity signals, and related records from Microsoft Purview and the Copilot audit trail.
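As a rough starting point, the sketch below shows one way a team might sift an exported Purview audit log for Copilot-related activity and group it by user. It assumes a CSV export with columns named CreationDate, UserIds, Operations, and AuditData, and a file called audit_export.csv; those names vary by export method and tenant, so treat them as placeholders to verify, not a guaranteed schema.

```python
# Minimal sketch: filter an exported Microsoft Purview unified audit log
# for Copilot-related activity and count interactions per user.
# Assumptions (verify against your own export): the CSV has columns named
# CreationDate, UserIds, Operations, and AuditData (a JSON blob).
import csv
import json
from collections import defaultdict

def copilot_events(path: str):
    """Yield (user, timestamp, operation, details) for Copilot-related rows."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            operation = row.get("Operations", "")
            if "copilot" not in operation.lower():
                continue  # keep only Copilot-related operations
            try:
                details = json.loads(row.get("AuditData", "{}"))
            except json.JSONDecodeError:
                details = {}
            yield row.get("UserIds", "unknown"), row.get("CreationDate", ""), operation, details

def activity_by_user(path: str) -> dict:
    """Count Copilot interactions per user so reviewers can spot heavy or unusual use."""
    counts = defaultdict(int)
    for user, _ts, _op, _details in copilot_events(path):
        counts[user] += 1
    return dict(counts)

if __name__ == "__main__":
    for user, count in sorted(activity_by_user("audit_export.csv").items(), key=lambda x: -x[1]):
        print(f"{user}: {count} Copilot interactions")
```

Even a simple per-user count like this turns “we have logs somewhere” into something a reviewer can actually look at each week.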
1. Sensitive data gets fed into tools with no data retention guarantees
Employees typically do not read the terms and conditions attached to the AI applications they use. Many AI services keep a history of user prompts, and some go further and use that history to train and improve their models. Without visibility into how a tool is being used, your security team may not know how much of your data has been exposed to an external AI provider, especially if users paste sensitive content into tools that retain prompts, responses, or other interaction data.
2. You cannot distinguish between accidental exposure and intentional exfiltration
A departing employee who uses an AI application to extract and summarize key client data before leaving will look just as legitimate as an employee pulling the same summary to prepare for a genuine meeting.
Without visibility into the context and history of the activity, it can be hard to tell which of the two scenarios is taking place. Security teams can narrow that gap by tracking unusual access patterns, user behavior, and AI-related activity with the right tools.
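For illustration, here is a minimal sketch of that idea: compare a user’s recent AI activity volume against their own baseline and flag large spikes, the pattern you would expect from a departing employee bulk-summarizing client data. The thresholds, field names, and sample numbers are illustrative assumptions, not tuned detection logic.

```python
# Minimal sketch: flag users whose AI-related activity today is far above
# their own historical baseline. Thresholds are illustrative, not tuned rules.
from dataclasses import dataclass

@dataclass
class UserActivity:
    user: str
    baseline_daily_avg: float   # average AI events per day over a trailing window
    events_today: int           # AI events observed today

def flag_spikes(activity: list[UserActivity], multiplier: float = 3.0, floor: int = 10) -> list[str]:
    """Return users whose activity exceeds both an absolute floor and a
    multiple of their own baseline."""
    flagged = []
    for a in activity:
        if a.events_today >= floor and a.events_today > multiplier * max(a.baseline_daily_avg, 1.0):
            flagged.append(a.user)
    return flagged

if __name__ == "__main__":
    sample = [
        UserActivity("analyst@contoso.com", baseline_daily_avg=4.0, events_today=5),
        UserActivity("departing@contoso.com", baseline_daily_avg=3.0, events_today=42),
    ]
    print(flag_spikes(sample))  # ['departing@contoso.com']
```

A spike alone does not prove intent, but it tells an investigator where to look first.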
3. Privileged users become a higher risk
Privileged users already have broad access, and AI tools add another layer of complexity because security teams may have only limited visibility into how those tools are being used or what data is being processed. Microsoft 365 Copilot and Microsoft Purview do provide audit logs, activity signals, and related records, but those controls still need to be collected, reviewed, and tied to user behavior before they become useful in an investigation.
4. Compliance obligations become impossible to meet
Organizations must be able to identify where personal or regulated information is shared or processed in order to comply with regulations such as GDPR and HIPAA. If teams feed personal or regulated data into external AI tools without security’s knowledge, meeting those obligations becomes a guessing game.
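As a simple illustration of the kind of pre-flight check that helps here, the sketch below scans text bound for an external AI tool for obviously regulated patterns. The patterns are deliberately crude examples; a real control would lean on a proper DLP engine, such as Purview sensitive information types, rather than a handful of regexes.

```python
# Minimal sketch: a lightweight pre-flight check for obvious regulated data in
# text headed to an external AI tool. Patterns are illustrative only; production
# controls should use a real DLP engine, not ad-hoc regexes.
import re

PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def find_regulated_data(text: str) -> dict[str, int]:
    """Return a count of matches per pattern so callers can block or warn."""
    return {name: len(p.findall(text)) for name, p in PATTERNS.items() if p.search(text)}

if __name__ == "__main__":
    prompt = "Summarize this record: John Doe, SSN 123-45-6789, john.doe@contoso.com"
    hits = find_regulated_data(prompt)
    if hits:
        print("Blocked: prompt appears to contain regulated data:", hits)
```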
What kind of AI activity should security teams watch?
Security teams need to monitor the parts of AI use that touch data. Examples include prompt generation, output generation, file access tied to AI usage, agent activity, connector usage, and the export of data from an AI tool to email, chat, or a storage application. Microsoft states that Copilot interaction data includes prompts and outputs, and administrators can control which agents are permitted in the organization. In other words, AI usage is something you can govern rather than something you simply hope is secure.
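One lightweight way to reason about those categories is to normalize AI activity into a small event schema, as in the sketch below. The event shape and field names are assumptions for illustration; in practice the signals would come from audit logs or a monitoring product, not a hand-rolled structure.

```python
# Minimal sketch: normalize AI activity into the categories worth watching.
# The event shape is an assumption for illustration only.
from dataclasses import dataclass
from enum import Enum

class AIActivityType(Enum):
    PROMPT = "prompt"
    RESPONSE = "response"
    FILE_ACCESS = "file_access"
    AGENT_ACTION = "agent_action"
    CONNECTOR_CALL = "connector_call"
    EXPORT = "export"          # data leaving the AI tool (email, chat, storage)

@dataclass
class AIActivityEvent:
    user: str
    activity: AIActivityType
    resource: str              # file, connector, or destination involved
    sensitive: bool            # set by classification/labeling upstream

def needs_review(event: AIActivityEvent) -> bool:
    """Exports and agent/connector activity touching sensitive data get reviewed first."""
    risky = {AIActivityType.EXPORT, AIActivityType.AGENT_ACTION, AIActivityType.CONNECTOR_CALL}
    return event.sensitive and event.activity in risky
```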
The more useful question is not “Is anyone using AI?” but “What data goes into AI, what comes out, and who will access it next?” That interaction point is where insider risk lives. Data loss prevention, risk visibility, and sensitivity labeling are the primary controls that can surface, and stop, risky data moving across applications and services. Microsoft Purview’s security guidance for Copilot is built on those three controls, which makes it a practical foundation for improving an AI governance program.
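To make the labeling piece concrete, here is a minimal sketch of a label-aware decision about whether AI output may leave the tenant. The label names, their ordering, and the function itself are assumptions for illustration, not Purview’s actual policy model.

```python
# Minimal sketch: gate AI output on its sensitivity label before it leaves the tenant.
# Label names and strictness order are illustrative assumptions.
LABEL_RANK = {"Public": 0, "General": 1, "Confidential": 2, "Highly Confidential": 3}

def allow_ai_output(label: str, destination: str, max_label_for_external: str = "General") -> bool:
    """Allow output to an external destination only if its label is at or below the threshold."""
    rank = LABEL_RANK.get(label, max(LABEL_RANK.values()))  # unknown labels treated as most restrictive
    if destination == "external":
        return rank <= LABEL_RANK[max_label_for_external]
    return True  # internal destinations fall under normal access controls

if __name__ == "__main__":
    print(allow_ai_output("Confidential", "external"))  # False -> block or warn
    print(allow_ai_output("General", "external"))       # True
```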
How do weak controls make AI risk worse?
Weak controls usually fail in the same few ways. A company turns on Copilot or another AI tool before cleaning up old permissions. Sensitive files remain open to too many people. No one classifies the data properly. Logging is fragmented. Then, when someone uses AI to search, rewrite, or summarize a file, the tool can surface content that was never meant to be so easy to reach. Lepide’s own Copilot security page warns that in permission-heavy environments, sensitive HR, finance, or legal data can surface through a simple prompt.
Another weak point to understand is how agents and connectors are used in your environment. Microsoft states that Copilot uses agents and Graph connectors, and that administrators can restrict which agents are allowed. Every connector is an additional path to data, and each path needs its own rules, reviews, and logs. Treating agents as just another feature shifts risk for a critical part of the security infrastructure onto something most employees will click through without a second thought. That rarely ends well; it is the digital equivalent of leaving the office keys in the door.
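A simple way to keep connectors honest is to treat each one as a path that needs an owner, logging, and a periodic review, as in the hypothetical inventory check below. The field names and review interval are assumptions; the point is that “enabled” is not the same as “governed”.

```python
# Minimal sketch: flag agents/connectors that lack an owner, logging, or a recent review.
# Field names and the 180-day interval are illustrative assumptions.
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class Connector:
    name: str
    owner: Optional[str]
    logging_enabled: bool
    last_review: Optional[date]

def ungoverned(connectors: list[Connector], review_interval_days: int = 180) -> list[str]:
    """Return connectors missing an owner, logging, or an up-to-date review."""
    stale_before = date.today() - timedelta(days=review_interval_days)
    return [
        c.name for c in connectors
        if c.owner is None
        or not c.logging_enabled
        or c.last_review is None
        or c.last_review < stale_before
    ]
```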
What should a good AI visibility setup include?
A good setup starts with four things. First, it must log prompts and responses. Second, it must tie those actions to users. Third, it must show which files were touched. Fourth, it must let security teams act when a prompt or reply looks risky. Microsoft Purview’s guidance around Copilot security, audit, and insider risk points in this direction: visibility first, then control, then response.
It should also help teams tell the difference between normal use and risky use. A marketing user asking Copilot to rewrite a public blog draft is not the same as a finance user asking for a summary of payroll files. Good visibility is not about watching everything with the same level of panic. It is about seeing enough to know what matters. That is what turns AI from a blind spot into a managed tool.
How can teams reduce insider risk without slowing work down?
The goal is not to stop AI. The goal is to stop careless or harmful use. Microsoft’s security guidance for Copilot points to the right mix: risk visibility, sensitivity protection, DLP, audit, and governance. Those controls let teams keep the speed gain from AI while still watching for oversharing, odd access, and bad prompts.
A well-thought-out implementation of AI should accomplish three things. It should make it harder to paste sensitive data into the wrong place; show when sensitive data is accessed by AI; and enable a rapid response to anomalies. Microsoft’s insider risk guidance also notes that these controls should include “privacy by design,” role-based access control, and audit logs, so that the people investigating a possible insider risk do not create a new problem while resolving the first one. That is a rare instance of common sense amid the general confusion surrounding this topic.
How does Lepide help?
Lepide’s Microsoft 365 Copilot Security solution is built to give teams clearer visibility into Copilot use. It provides complete visibility into Copilot permissions and usage, and surfaces when sensitive data is accessed through high-risk prompts. It also lets you drill into conversations to see which prompts led to sensitive file access, which is exactly the kind of trace security teams need when they are trying to figure out whether something was a mistake or a threat.
It also gives teams a Copilot security dashboard, access monitoring, and real-time alerts for risky natural-language queries. It can show who is using Copilot, how often they use it, and what sensitive files were accessed, while also helping with classification, access control, auditing, and threat detection across Microsoft 365. That makes it useful for both day-to-day control and incident review.
Conclusion
Insider risk existed long before AI, but AI has made it faster, quieter, and harder to identify. When someone uses an AI tool, the way data, confidential or otherwise, flows is not easily captured by traditional controls. If you cannot see the prompts, the responses, or the data underneath them, you are not managing risk effectively; you are guessing.
The solution to this problem is fairly simple. You need to make AI use visible, associate it with the user of the system, and associate it with the data that they are using (or creating). Once you implement those actions correctly, AI will remain a productivity tool; if you do not do those things correctly, you are likely creating your largest blind spot.
To see how organizations are gaining visibility into Copilot activity and reducing insider risk with Lepide, request a demo or download the free trial today.
