A Pilot’s Thoughts on Copilot
As a pilot, when I see terms like “copilot” and “autopilot” I expect them to mean certain things. Microsoft has done a pretty good job of meeting my expectations with their Copilot-branded products, meaning both “I expected them to brand things confusingly” (they did) and “a copilot assists you with tasks but requires supervision and training” (it does). Because this column isn’t called Practical Marketing, I’ll save my thoughts about the Copilot branding mess for bar discussions at ESPC or the MVP Summit and focus instead on the “supervision and training” aspects of Copilot deployment, with a security lean.
You might be excused for thinking that you don’t have to be concerned about Copilot given the bad press Microsoft has generated around pricing and licensing… and that may be true for Microsoft 365 Copilot. However, Microsoft is carpet-bombing us with so many other Copilot solutions that you’ll probably end up with some of them in your tenant estate whether or not you buy the Microsoft 365 Copilot licenses. Then, of course, there are other vendors using “Copilot” or “copilot” in the names of their own products, such as Salesforce’s Einstein Copilot. So I’m just going to use “copilot” or “copilot-ish” to describe any tool built on large language model (LLM) generative AI, whether or not Microsoft makes it and whatever its formal name is.
Ready or Not, Here It Comes
What’s the first thing you should do? That’s easy: decide, enact, and broadcast your organizational policy for using these tools. This is a little more complicated than it might sound at first because not every copilot tool works the same way. Microsoft has done a pretty good job of describing the architecture behind Microsoft 365 Copilot, including its use of two sets of models (one generic set trained by Microsoft on public data, and one tenant-specific set trained on data from, and only usable within, your tenant). It may be more helpful to think of copilots as falling into two categories: those that use your organizational or personal data and those that do not.
For example, Windows Copilot can help you with Windows configuration tasks, and it incorporates research and summarization features from Bing, but it can’t summarize your Teams meeting transcripts for you. On the other hand, Copilot for Word will happily read, summarize, edit, or rewrite your organization’s Word documents if you give it access.
Your policy should outline who’s allowed to use copilot-based tools, what tasks may and may not be given to copilots, and what restrictions you want to apply. In some senses, this policy is analogous to an Internet-access policy: both exist not to prevent people from doing bad stuff but to educate them before they do it and to give you recourse if they do it anyway.
You might argue that writing a policy isn’t the first thing you should do. That’s a fair assertion, but since you can’t actually buy and/or deploy most of the announced Copilot products yet, now’s a great time to get out in front by putting down some ground rules for your users.
Deployment and Adoption
Much and more has been written about planning for the adoption of Microsoft’s various copilots. I won’t try to repeat it here, but if the topic’s new to you, start with Tony’s guide to deciding who needs paid Microsoft 365 Copilot licenses. Instead, I want to focus on the practical security aspects of these deployments.
Probably the most important thing to remember is that Copilot services are like vampires: you must invite them in. Like Office Delve, copilots will only surface data the requesting user already has permission to access. Whether copilots respect existing permissions is a common enough question that it’s worth answering plainly here: they do. If you don’t feel like you have good visibility or control over information sharing, data classification, and DLP in your existing environment, it would be a really good idea to get that effort underway before you start buying Copilot licenses for your users. For example, if you don’t want users applying Microsoft 365 Copilot to certain types of documents, you can apply sensitivity labels that block access to them.
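If you want a quick, do-it-yourself read on how widely the content in a given library is shared before Copilot gets anywhere near it, the Microsoft Graph permissions API is one place to look. The sketch below is a minimal example rather than an official readiness check: it assumes you’ve already acquired a Graph access token with Files.Read.All, the drive ID is a placeholder, and my definition of “broadly shared” (any anonymous or organization-wide sharing link) is a simplification you should adjust to your own policy.

```python
"""Rough sketch: find broadly shared files in one document library before a
Copilot rollout. Assumes you already have a Microsoft Graph access token with
Files.Read.All; TOKEN and DRIVE_ID below are placeholders, and "broadly
shared" is simplified here to mean any anonymous or organization-wide
sharing link."""
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access token acquired via MSAL or similar>"     # placeholder
DRIVE_ID = "<drive ID of the library you want to check>"  # placeholder
HEADERS = {"Authorization": f"Bearer {TOKEN}"}
BROAD_SCOPES = {"anonymous", "organization"}  # sharing-link scopes to flag


def get_all(url):
    """Yield every object from a Graph collection, following @odata.nextLink."""
    while url:
        resp = requests.get(url, headers=HEADERS)
        resp.raise_for_status()
        data = resp.json()
        yield from data.get("value", [])
        url = data.get("@odata.nextLink")


def children_url(item_id):
    """The root folder uses a slightly different path than ordinary items."""
    if item_id == "root":
        return f"{GRAPH}/drives/{DRIVE_ID}/root/children"
    return f"{GRAPH}/drives/{DRIVE_ID}/items/{item_id}/children"


def walk(item_id="root", path=""):
    """Recursively list the drive and report items with broad sharing links.
    (One permissions call per item, so this is slow on large libraries.)"""
    for child in get_all(children_url(item_id)):
        child_path = f"{path}/{child['name']}"
        perms = get_all(f"{GRAPH}/drives/{DRIVE_ID}/items/{child['id']}/permissions")
        broad = [p for p in perms if p.get("link", {}).get("scope") in BROAD_SCOPES]
        if broad:
            scopes = ", ".join(p["link"]["scope"] for p in broad)
            print(f"{child_path}: shared via {scopes} link(s)")
        if "folder" in child:  # descend into subfolders
            walk(child["id"], child_path)


if __name__ == "__main__":
    walk()
```

In a real tenant you’d want to batch the calls and feed the results into whatever review process your sharing policy defines, but even a crude walk like this will tell you which libraries need attention first.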
Depending on how you plan to use Copilot-based tools, you may decide you want to audit their use, which Microsoft has thoughtfully provided for. Each of the copilot-based tools offers some way to control whether users can access it. Access usually hinges on assigning licenses or service plans to users, so think about how you’ll handle user requests (and, if you haven’t already, consider turning off user self-service purchases so that when Microsoft starts allowing users to buy their own Microsoft 365 Copilot licenses you’ll be ahead of the game).
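As a hedged illustration of the license side of that, here’s one way to see who currently holds a given SKU using the Microsoft Graph subscribedSkus and users endpoints. The SKU part number, the token acquisition, and the reporting format are all placeholders or assumptions on my part; check GET /subscribedSkus in your own tenant for the exact identifier of the Copilot SKU you’ve purchased. (The self-service purchase switch itself is managed separately, through the MSCommerce PowerShell module.)

```python
"""Rough sketch: list the users who currently hold a particular license SKU,
as a starting point for tracking Copilot license assignments. Assumes an
access token with User.Read.All and Organization.Read.All; TOKEN is a
placeholder, and SKU_PART_NUMBER is an assumption -- check the output of
GET /subscribedSkus in your own tenant for the exact value."""
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access token acquired via MSAL or similar>"  # placeholder
HEADERS = {"Authorization": f"Bearer {TOKEN}"}
SKU_PART_NUMBER = "Microsoft_365_Copilot"  # assumption: confirm in your tenant


def get_all(url):
    """Yield every object from a Graph collection, following @odata.nextLink."""
    while url:
        resp = requests.get(url, headers=HEADERS)
        resp.raise_for_status()
        data = resp.json()
        yield from data.get("value", [])
        url = data.get("@odata.nextLink")


# Step 1: translate the human-readable SKU part number into its skuId GUID.
skus = {s["skuPartNumber"]: s["skuId"] for s in get_all(f"{GRAPH}/subscribedSkus")}
sku_id = skus.get(SKU_PART_NUMBER)
if sku_id is None:
    raise SystemExit(f"SKU {SKU_PART_NUMBER!r} not found; tenant has: {sorted(skus)}")

# Step 2: walk the user list and report everyone with that SKU assigned.
users_url = f"{GRAPH}/users?$select=displayName,userPrincipalName,assignedLicenses&$top=999"
for user in get_all(users_url):
    if any(lic["skuId"] == sku_id for lic in user.get("assignedLicenses", [])):
        print(f"{user['displayName']} <{user['userPrincipalName']}>")
```

The same assignedLicenses data includes each user’s disabledPlans list, so you can also use it to confirm that a specific service plan is switched off where your policy says it should be.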
Copilots and Security
I’m probably more intrigued by the possibilities of Security Copilot than by any of Microsoft’s other offerings. One of the biggest problems we face in the security world is making sense of the flood of logs and signals that modern security tools capture, and a copilot that can review and synthesize that flood and highlight the important parts seems like exactly the kind of thing a security practitioner would want. I’ll be writing more about how to apply Security Copilot and its siblings in practice as Microsoft makes them more widely available. Normally I’d close my column by encouraging you to go out and play with the tools to learn them better; since they’re not widely available yet, the best I can do is tell you to start thinking about how you might use them and to review Microsoft’s growing library of documentation so you’re ready for their inevitable arrival.