Microsoft 365 (M365) is Microsoft's product suite of productivity, collaboration and security tools designed to support enterprises, small businesses and individual users. The suite brings cloud-based applications such as Word, Excel, PowerPoint, Outlook, Teams, SharePoint and OneDrive together under a unified platform. Depending on the license level, Microsoft 365 also includes Microsoft Purview for data compliance and Microsoft Defender for security monitoring.
M365 Copilot is an AI-powered productivity assistant integrated into the Microsoft 365 suite, designed to streamline and enhance work within everyday business applications like Word, Excel, PowerPoint, Outlook, Teams and more. By leveraging large language models together with Microsoft Graph, Copilot provides real-time content generation, intelligent drafting, automated summarization, advanced search, data-driven insights and personalized recommendations. As seen in the screenshot below, the Copilot integration with Microsoft Outlook can be used to summarize emails and attachments.
M365 Copilot goes beyond simple task automation; it understands organizational context by accessing emails, chats, files, and business data (while respecting access controls) to deliver tailored responses and suggestions. It also allows organizations to create custom agents for process-specific queries and automation, amplifying efficiency across departments.
With such access, Microsoft Copilot introduces several risks that organizations must address to ensure data security, privacy and regulatory compliance. One of these risks is over-permissioning: Copilot inherits a user's access to all Microsoft 365 content, so if permissions are too broad or poorly managed, the AI can surface and expose confidential or regulated data such as intellectual property, financial records or personal information. Copilot's integration with several Microsoft 365 services also creates a new attack surface, as vulnerabilities in those services could extend to the AI and vice versa.
Another risk involves AI-specific threats and compliance. Copilot can be susceptible to prompt injection attacks, where adversaries manipulate prompt inputs to access, exfiltrate or socially engineer sensitive data, bypassing intended controls. An example of this has been highlighted in the exploitation of Copilot-related technologies such as Copilot Studio, in which agents have inadvertently shared sensitive data, resulting in regulatory violations, data exfiltration and unauthorized disclosure.
Fortunately, there are ways to implement monitoring and security measures in Microsoft 365 Copilot.
In this blog we are going to focus on audit logs and Office 365 activity logs using the Splunk Add-on for Microsoft Office 365. This add-on allows Splunk to pull service status, service messages and management activity logs from the Office 365 Management API. From there, we will get the metadata for Copilot usage from desktop apps or the browser.
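Once the add-on is ingesting data, a search along the following lines surfaces Copilot interaction records. This is a minimal sketch: the sourcetype o365:management:activity is the add-on's default for management activity events, but the index name here is a placeholder, and the exact Operation value should be confirmed against your own data.

```spl
index=o365 sourcetype="o365:management:activity" Operation="CopilotInteraction"
| table _time, UserId, ClientIP, Workload, Operation
| sort - _time
```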
Then we are going to look at prompt logs collected from Purview eDiscovery. Microsoft Purview eDiscovery is a tool that helps organizations identify, preserve, collect, review, analyze, and export electronic data across Microsoft 365 to support legal and compliance investigations.
It is important to understand that there are controls in place for prompt and response monitoring. Microsoft has deliberately designed the Microsoft 365 Unified Audit Log to record metadata only, not the actual Copilot prompt or response text. This is by design, for privacy and security reasons, just as the audit log does not capture the subject line or body of an email, or the contents of a Teams chat.
Privacy: User prompts can contain sensitive, confidential or personal information. Exposing this data in a general audit log accessible by administrators would be a major privacy violation.
Security: If an administrator's account were ever compromised, an attacker could potentially read the audit logs to harvest vast amounts of sensitive company data, intellectual property and user intentions.
This is what you can get from the audit logs once we are able to ingest them into Splunk.
What the log includes:
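Here is an illustrative sketch of such a sign-in event. The field names follow the Microsoft Graph sign-in schema referenced throughout this section; the values are placeholders, and the exact shape of your events will depend on how they are ingested:

```json
{
  "userPrincipalName": "user@example.com",
  "appDisplayName": "Microsoft 365 Chat",
  "resourceDisplayName": "Microsoft Copilot",
  "conditionalAccessStatus": "notApplied",
  "authenticationRequirement": "multiFactorAuthentication",
  "ipAddress": "198.51.100.10",
  "deviceDetail": {
    "browser": "Edge 127.0.0",
    "isManaged": false,
    "isCompliant": false
  },
  "location": {
    "city": "Austin",
    "countryOrRegion": "US"
  },
  "status": {
    "errorCode": 0
  }
}
```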
By looking at this example log we can get the following information:
Successful SSO with MFA: The sign-in was successful and properly leveraged a previous MFA authentication, which is a good sign of a well-functioning SSO environment.
Unmanaged Device Access: The user accessed a sensitive application (Copilot) from a device that is not managed or compliant. This is a significant security observation. Your organization may want to consider creating a Conditional Access policy to block access from unmanaged/non-compliant devices or to require MFA for every sign-in from such devices.
No Conditional Access Applied: The fact that no CA policy was applied, especially given the unmanaged device, is a point of interest. An administrator should review if this is intentional or if a policy needs to be adjusted to cover this scenario.
The above provides information about Copilot interactions; however, as can be seen, it does not contain prompt content.
Here are two more examples of Audit logs related to Copilot interactions:
Here are some items we can take from the metadata of this interaction with Microsoft Copilot:
We can now do some basic analytics with this metadata. In the following search we can see a distribution of origin of access, conditional access status and the reasons for that status. A conditionalAccessStatus of notApplied is a strong indicator that there may be a gap in the organization's security policies. An administrator should investigate why policies are not being triggered, especially for access from non-compliant and unmanaged devices. Are there users interacting with Copilot from non-managed or non-compliant devices? Are there discrepancies in the origins of users and devices interacting with Copilot? Notice that some additional details are very verbose.
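A sketch of that distribution search, assuming the sign-in events use the Microsoft Graph field names shown earlier and live in a placeholder index named azuread (adjust both to match your ingestion):

```spl
index=azuread
| stats count by location.countryOrRegion, deviceDetail.isManaged, deviceDetail.isCompliant, conditionalAccessStatus
| sort - count
```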
Here is another SPL search analytic that shows origin of devices, user principal name, browser name and version, resource display name and application name. This can give you a good picture of how Copilot is being accessed, from where, via which app and from which region.
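A minimal sketch of such a search, under the same index and field-name assumptions as above:

```spl
index=azuread
| stats count by userPrincipalName, deviceDetail.browser, resourceDisplayName, appDisplayName, location.countryOrRegion
| sort - count
```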
In the following search we add _time to complete the picture of devices, region, user, application and time of usage.
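For example (again a sketch under the same assumptions):

```spl
index=azuread
| table _time, userPrincipalName, appDisplayName, resourceDisplayName, deviceDetail.browser, location.countryOrRegion
| sort - _time
```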
Here is another analytic to detect session origin anomalies:
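One hedged approach is to flag users whose sessions originate from more than one country within the search window, which can point to impossible-travel or token-theft scenarios. The threshold and time range are starting points to tune, and the field names are the same assumptions as above:

```spl
index=azuread
| stats dc(location.countryOrRegion) as distinct_countries, values(location.countryOrRegion) as countries, values(ipAddress) as source_ips by userPrincipalName
| where distinct_countries > 1
```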
And now that we have looked at the access metadata, let's take a look at what we can get from the actual prompt logs. As stated before, the best way to get these logs is via Purview eDiscovery, which requires specific privileges and access in order to retrieve this information.
Here are some examples of the prompt content you can obtain from eDiscovery while interacting with M365 applications. You first have to create a search that targets AI interactions with Copilot; the results can then be retrieved and exported. Here is how they look:
In this very detailed export, you can get all of the prompt-related information from interactions with M365. Here is a sample of the actual prompts executed and how they were recorded.
You can even export them individually and see prompt and response like this:
You can get a very detailed picture of these prompts, but it requires several steps to set up and execute. The following are some examples of these logs exported to Splunk. Notice how the interactions are broken down by specific application, user and the actual prompt content in the Subject_Title field:
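A sketch of a search over such an export, assuming it has been ingested into a placeholder index named ediscovery with a placeholder sourcetype, and that Subject_Title is extracted as a field; the user and application field names are likewise assumptions to map onto your own export:

```spl
index=ediscovery sourcetype="purview:ediscovery:export"
| table _time, user, application, Subject_Title
| sort - _time
```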
In the following query, we can see specifically what the conversation was and which files were created as a result of the prompt interaction:
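A hedged sketch of that query, filtering on prompt text that asks Copilot to create something; the file_names field is hypothetical and stands in for whatever attachment or artifact field your export carries:

```spl
index=ediscovery sourcetype="purview:ediscovery:export"
| search Subject_Title="*create*"
| table _time, user, application, Subject_Title, file_names
| sort - _time
```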
Now that we have established that we can obtain this prompt content in detail, let's run some security prompt testing and see what we get back from Copilot. We will then search in Splunk for possible LLM attacks, focusing on the following categories:
Jailbreaking: Bypassing safety constraints of LLM
Poisoning: Promoting dangerous, hateful, or illegal content
Illicit Impersonation: Taking on harmful, illegal, unethical, or explicit personas
Info Extraction: Attempting to elicit maximum "knowledge," steps, or details, often with "tell me everything," "explain step-by-step," or "hidden/information resource"
Agentic Attacks: Enabling multi-turn, reward cycles, memory, persistent or escalating prompt chains
Here is a comparison table of the prompts that were selected:
I selected a few prompts that showcase each category of attack from this list: https://github.com/JailbreakBench/jailbreakbench.
Now let's see some Splunk detections for these prompt attacks.
Jailbreak
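A detection sketch over the exported prompt content, matching common jailbreak phrasings; the index and sourcetype are the same placeholders as above, and the pattern list is illustrative rather than exhaustive:

```spl
index=ediscovery sourcetype="purview:ediscovery:export"
| regex Subject_Title="(?i)(ignore (all|previous|prior) (instructions|rules)|jailbreak|developer mode|do anything now|without (any )?restrictions)"
| table _time, user, application, Subject_Title
```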
Role playing / Impersonation attack
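A similar sketch for persona and impersonation attempts, again with an illustrative pattern list to extend from your own testing:

```spl
index=ediscovery sourcetype="purview:ediscovery:export"
| regex Subject_Title="(?i)(pretend (to be|you are)|act as (a|an)|roleplay|impersonate|you are now (a|an))"
| table _time, user, application, Subject_Title
```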
Model Poisoning
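For poisoning-style prompts that try to promote dangerous, hateful or illegal content or to persist instructions, a keyword sketch is a crude starting point; in practice you would curate the term list from the prompt sets you actually tested:

```spl
index=ediscovery sourcetype="purview:ediscovery:export"
| regex Subject_Title="(?i)(from now on|always (respond|answer) with|treat this as (fact|truth)|promote|glorify)"
| table _time, user, application, Subject_Title
```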
Agentic Attack
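Agentic attacks tend to unfold across multiple turns, so this sketch both matches persistence-oriented phrasing and counts matching prompts per user to surface escalating chains; the threshold is arbitrary and should be tuned:

```spl
index=ediscovery sourcetype="purview:ediscovery:export"
| regex Subject_Title="(?i)(remember this for (later|all future)|in our next (turn|conversation)|continue (the|our) (plan|chain)|reward)"
| stats count, values(Subject_Title) as prompts by user, application
| where count > 3
```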
Information extraction
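And for information extraction, the phrasings called out earlier in this post translate directly into a pattern sketch:

```spl
index=ediscovery sourcetype="purview:ediscovery:export"
| regex Subject_Title="(?i)(tell me everything|explain step[- ]by[- ]step|list (all|every)|hidden (information|resource))"
| table _time, user, application, Subject_Title
```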
Using Splunk with Microsoft Copilot can be very powerful for monitoring AI interactions and possible attacks. These exports are quite verbose and extensive, and Splunk provides the means to manage that detail and make sense of the interactions between users and Microsoft Copilot.
These interactions are another item in the attack surface, yet today's massive usage and deployment of this type of technology largely goes unsupervised and unmonitored, and most of the security frameworks targeting these technologies are still in development. It is fundamental to capture these interactions and monitor them, as we have seen in this blog. It is possible to exploit AI with significant risk to organizations' data confidentiality, integrity and availability. This blog has shown how to approach these interactions by looking first at the metadata and then at the actual detailed content of the prompts.