New Copilot Coming April 1
It seems like every second or third word emanating from Microsoft these days is “Copilot.” There’s Copilot for Microsoft 365, of course, plus Copilot for Sales, Copilot in Outlook (and Excel, and Word, and PowerPoint); Copilot Studio; for all I know, Copilot for Xbox may be waiting in the wings. What all these products have in common is that they’re based on the same underlying generative AI technology, which Microsoft licenses from OpenAI. The OpenAI-powered large language models (LLMs) that Microsoft relies on generate content, of course, but they can also be used for summarization and pattern detection.
In the various Copilot offerings, Microsoft is combining generation (such as Outlook’s “Draft with Copilot” feature), natural language commanding (such as giving plain-English instructions to Copilot for Excel to apply conditional formatting), and summarization (such as the Teams Premium feature that can generate a summary of a meeting from its transcript). Copilot for Security (which is sometimes called “Security Copilot” in various Microsoft materials) claims to help improve your security by combining commanding and summarization with some additional analytics features.
Now that Microsoft has announced that Copilot for Security will become commercially available on April 1, what does that mean, and what will it actually help you do? Let’s dig in.
What Copilot for Security Does
The basic idea behind Security Copilot is that it ingests signals from other Microsoft security products (including Defender XDR, Defender for Office 365, Sentinel, and Entra ID) and then provides various types of tooling and analytics for working with them. Microsoft’s demos have focused heavily on incident detection and response scenarios. For example, this demo video starts with a natural-language query asking for a summary of an incident (say, “Show me the top 5 device compliance alerts that I should prioritize today”). These summaries combine data from various sources into a clear account of what happened, what the scope of the incident was, and so on. The incident summary in the demo also ties into Defender Threat Intelligence to highlight the threat actors that were likely involved, then shows off Security Copilot’s ability to give additional detail about the threat and threat actors identified.
The value of this use case isn’t in providing the information: the Sentinel and Defender logs, the contents of Defender Threat Intelligence, and other data sources are already directly available. However, Microsoft makes two strong claims that I think are correct. First, Copilot for Security can collate, analyze, and summarize that data faster than most human analysts can. Decades of security experience tell us that humans are bad at parsing logs and at paying sustained attention to things that must be watched. Second, Microsoft claims that the Copilot-based technologies can provide a uniformly high level of knowledge and expertise, based on Microsoft’s own security data, to every user, not just to super-experienced security analysts.
Incident response isn’t the only use case, though. Microsoft highlights other summarization-based features, such as the ability to summarize a set of device access policies from Intune, or explain why a specific conversation was flagged by Communications Compliance. Many of these features are integrated directly into their host products; for example, the Communications Compliance dashboard has a Copilot for Security chat interface embedded into it so you can ask questions or start actions directly from that application’s context. When you have an active Copilot for Security subscription, these embedded features are supposed to just magically appear and be usable.
Early Signs of Improvement
At the same time Microsoft announced a GA date for Copilot for Security, they announced five new “skills” based on Entra ID data. Each skill combines a data source with the vocabulary and extensions you need to make use of it. For example, Microsoft says this about the Sign-in logs skill:
Sign-in logs can highlight information about sign-in logs and conditional access policies applied to your tenant to assist with identity investigations and troubleshooting. Admins must simply instruct their Copilot to “show me recent sign-ins for this user”, “show me the sign-ins from this IP address”, or “show me the failed sign-ins for this user.”
Microsoft hasn’t said much publicly about how skills work, but it seems reasonable to expect that more skills based on data sources from Microsoft’s own security offerings will follow. In addition, they have announced skill support from SGNL, Cyware, Tanium, Valence, and Netskope.
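To put the sign-in skill in perspective, here’s a rough sketch of what a prompt like “show me recent sign-ins for this user” abstracts away: querying the Entra ID sign-in logs yourself through the Microsoft Graph API. The token acquisition and the user principal name are placeholders, and the field selection is just one illustration of what you might pull out.

```python
import requests

# Placeholder: acquire a token with the AuditLog.Read.All permission via your usual flow (e.g., MSAL)
ACCESS_TOKEN = "<bearer token>"
GRAPH = "https://graph.microsoft.com/v1.0"

def recent_sign_ins(upn: str, count: int = 10) -> list[dict]:
    """Return recent sign-in events for a user from the Entra ID sign-in logs."""
    resp = requests.get(
        f"{GRAPH}/auditLogs/signIns",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        params={
            "$filter": f"userPrincipalName eq '{upn}'",
            "$top": count,  # newest events are returned first
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("value", [])

# Hypothetical user, purely for illustration
for event in recent_sign_ins("megan.bowen@contoso.com"):
    print(event["createdDateTime"], event["appDisplayName"], event["status"]["errorCode"])
```

The point isn’t that this is hard to write; it’s that the skill lets an admin who has never seen the Graph API (or KQL) get the same answer by asking in plain English.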
An Unusual Approach to Pricing
Products based on generative AI can be difficult to price. That’s because a single output (such as an image or a page of text) may have a significant cost to produce, not to mention the cost of the specialized infrastructure required to power the generative AI models in the first place. Microsoft chose to license Microsoft 365 Copilot at a flat rate of US$30/user/month, arguing that every user would see more than $30/month of productivity benefit. For Copilot for Security, they’re taking a different tack.
For fun, I asked Copilot how much Copilot for Security costs. The answer didn’t inspire much confidence. On the other hand, this was an answer from the free version of Copilot (figure 1).
Thankfully, we know a more precise answer: Microsoft’s announced pricing is per hour. More precisely, they’re using a pay-as-you-go (PAYG) model where you get a monthly bill based on the number of Security Compute Units (SCUs) you consume. SCUs are priced at US$4/hour. There isn’t any published guidance on exactly how SCU usage is computed, but more complex queries, or actions taken over larger data sets, will presumably burn more SCUs than simple or limited actions. Microsoft touts this model as more flexible and scalable than flat-rate pricing, but it also opens the door to nasty billing surprises if you burn through a pile of SCUs and don’t realize it until the bill arrives. You’ll have to specify the number of SCUs you want provisioned; Directions on Microsoft says they were told customers should start by provisioning 3 SCUs/hour and then adjust as necessary. We’ll have to wait and see what happens if you don’t provision enough SCUs for whatever tasks you’re asking Copilot to perform.
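As a back-of-the-envelope exercise (my assumptions, not Microsoft guidance), here’s what the published US$4/SCU/hour rate works out to if you keep a fixed number of SCUs provisioned around the clock:

```python
# Rough Copilot for Security cost estimate, assuming SCUs stay provisioned 24x7
SCU_RATE_USD_PER_HOUR = 4
HOURS_PER_MONTH = 730          # average month: 8,760 hours per year / 12

def monthly_cost(provisioned_scus: int) -> float:
    """Estimated monthly bill for a constant SCU provisioning level."""
    return provisioned_scus * SCU_RATE_USD_PER_HOUR * HOURS_PER_MONTH

print(f"3 SCUs:  ${monthly_cost(3):,.0f}/month")    # $8,760 at the suggested starting point
print(f"10 SCUs: ${monthly_cost(10):,.0f}/month")   # $29,200 if you scale up for a busy month
```

In other words, even the suggested 3-SCU starting point is a meaningful line item, which is why watching actual consumption once the product is live will matter.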
What Comes Next
We’ll know a lot more in a few weeks when Microsoft formally launches the product and everyone interested in it can get their hands on it for actual testing. It certainly looks promising, but in this case, a saying popular in the Copilot for Excel team comes to mind: “99% correct is 100% wrong.” That is, Copilot for Security must consistently produce reproducible, accurate, and complete results to be useful, and Microsoft has a high bar to clear to deliver the capabilities it has promised across the trillions of signals the product will have access to.