Digital Defense Month

Once again, October brings the happy news that Microsoft has a new edition of their annual Digital Defense Report ready to read. Although, sadly, this is not yet a formal holiday, I remain hopeful that it will get there one day. Microsoft has a webcast coming up on October 30th where they will discuss some of the findings, and afterward, they will produce several different summaries targeted at CISOs and other roles. Instead of waiting, though, I wanted to highlight a few of the noteworthy things about this year’s report and contrast it to what I said about the 2023 report. Grab your bowl of Halloween candy and settle in.

Data Sources and Methods

In the intelligence world, “sources and methods” means exactly what the name implies; it refers to where the data comes from and how it’s gathered and processed. It’s interesting to look at where Microsoft gets the data they analyze to produce this report. While many security companies produce similar reports, Microsoft’s huge portfolio of public cloud services (including Azure, Entra ID, Microsoft 365, Dynamics 365, Xbox, consumer Windows, and various others) gives them a unique view, with more breadth, depth, and resolution than any other single vendor.

The MDDR (as the report is commonly abbreviated) incorporates data from all these services, along with telemetry from the Defender ecosystem, information gathered during incident investigations and remediations for both Microsoft and customers, and data collected while addressing vulnerabilities in their software and services. Microsoft says they have collected more than 78 trillion signals (up about 20% over last year’s number), which is a testament both to the growth in usage of their services and to their ability to vacuum up huge amounts of data, process it, and extract meaning from it.

A Few Highs (and Lows)

Of course, Microsoft has to be very selective in the numbers they highlight in the MDDR. There’s too much data for them to just give it all to us, but the editorial choices in what they highlight are pretty interesting. Here are a few of the things that I thought were most noteworthy:

  • 389 healthcare institutions in the US were hit by ransomware. I suspect the actual number is higher, since there are undoubtedly doctors’ offices, clinics, and small hospitals that are too small to register on Microsoft’s radar. The old-school gentlemen’s agreement that ransomware actors wouldn’t attack healthcare providers is obviously long gone.
  • Microsoft is seeing more than 600 million identity attacks per day. Last year Microsoft said they blocked about 6,000 password-based attacks per second; now they’re blocking more than 7,000 per second, which works out to just over 600 million per day. This is a worrisome increase, especially when you consider that MFA is slowly becoming more widely adopted.
  • Speaking of MFA: Microsoft says 41% of their enterprise-customer users are now protected by MFA. In 2014, when Microsoft first offered MFA, adoption was a minuscule 0.7%, and in 2022 it was 37%. The trend’s moving in the right direction, but there are still too many administrators using non-phishing-resistant MFA methods such as SMS.
  • “Human-operated” ransomware attacks nearly tripled. Endpoint detection and edge blocking of malware are much less effective when a human attacker working hands-on inside the network plants the ransomware.
  • The US and Israel each face just over twice as many nation-state attacks as Ukraine, which itself faces nearly double the number of attacks that Taiwan sees. These countries are the most frequently attacked in their respective geographic regions. This isn’t surprising, but it is interesting in the context of what those countries are doing in response (both publicly and covertly).

Apart from the numbers, Microsoft also made a fascinating editorial choice by including a section called “Early insights: AI’s impact on cybersecurity.” Much of what we read about AI in the cloud software world is just hot air, but Microsoft has highlighted a few areas where AI is making (or is reasonably expected to make) a real difference. One obvious one is that AI tools help attackers craft more believable phishing and spear-phishing campaigns; another is that AI-enabled deepfakes pose a significant risk both for attacks and for influence operations. There are other risks that perhaps you haven’t considered, like automated generation of command-and-control infrastructure for malware; it’s worth reading this section as a roadmap to potential threats that may emerge in the future as much as for its descriptions of the technology involved. The good news is that AI is also useful for defenders; one example Microsoft gives is training AI models on “endpoint stories” (signals gathered from Defender for Endpoint) that represent normal behavior so that the models can flag anomalous behavior.

Things You Should Be Doing

Microsoft has tons of recommendations scattered throughout the report, including many that apply mostly to specific industries or geographies. I think the five items below are a good representation of the general steps you should be taking to protect against the current and emerging threats that Microsoft talks about.

First, take a look at the pyramid on page 60. It’s a cybersecurity-themed version of Maslow’s hierarchy of needs. The base of the pyramid is labeled “protect identities.” If you’re not confident that your user, service, and machine identities are well protected against credential theft, token stealing, and other attacks, this is the first and most important place to focus your effort.

Second, as part of protecting your identities: if you have already deployed MFA, move on to deploying phishing-resistant MFA. If you haven’t, get busy.
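
If your identities live in Entra ID, one concrete way to act on this is a Conditional Access policy that requires the built-in “Phishing-resistant MFA” authentication strength for your most sensitive accounts. Here’s a minimal sketch in Python against the Microsoft Graph API; it assumes a hypothetical app registration with the Policy.ReadWrite.ConditionalAccess application permission, and the role and authentication-strength IDs shown are the commonly documented built-in values, so verify everything against the current Graph documentation before relying on it:

```python
# Minimal sketch: create a report-only Conditional Access policy that requires
# phishing-resistant MFA for Global Administrators, via Microsoft Graph.
# Assumes a (hypothetical) app registration with the
# Policy.ReadWrite.ConditionalAccess application permission; the tenant ID,
# client ID, and secret below are placeholders.
import msal
import requests

TENANT_ID = "<your-tenant-id>"
CLIENT_ID = "<your-app-client-id>"
CLIENT_SECRET = "<your-app-secret>"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

policy = {
    "displayName": "Require phishing-resistant MFA for Global Administrators",
    # Start in report-only mode so you can watch the impact before enforcing.
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "applications": {"includeApplications": ["All"]},
        # Global Administrator role template ID (verify in your tenant).
        "users": {"includeRoles": ["62e90394-69f5-4237-9190-012177145e10"]},
    },
    "grantControls": {
        "operator": "OR",
        # Built-in "Phishing-resistant MFA" authentication strength
        # (verify the ID in your tenant before using).
        "authenticationStrength": {"id": "00000000-0000-0000-0000-000000000004"},
    },
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {token['access_token']}"},
    json=policy,
)
resp.raise_for_status()
print("Created policy", resp.json()["id"])
```

Starting the policy in report-only mode lets you see which sign-ins would have been affected before you flip it to enforced.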

Third, consider expanding (or beginning) your use of risk-based conditional access policies to require additional authentication for suspicious requests. This will help reduce the possibility of token theft and account takeover. 
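
To make that concrete, here’s what a risk-based policy payload might look like in the same Graph conditionalAccessPolicy format used in the previous sketch: a report-only policy that requires MFA whenever the sign-in risk is rated medium or high. Risk-based policies require Entra ID P2 licensing, and the break-glass account ID below is a placeholder you’d fill in yourself:

```python
# Minimal sketch: a risk-based Conditional Access policy payload that requires
# MFA for medium- or high-risk sign-ins. Post it to the same
# /identity/conditionalAccess/policies endpoint shown in the previous example.
risk_policy = {
    "displayName": "Require MFA for risky sign-ins",
    "state": "enabledForReportingButNotEnforced",  # report-only to start
    "conditions": {
        "signInRiskLevels": ["medium", "high"],
        "applications": {"includeApplications": ["All"]},
        "users": {
            "includeUsers": ["All"],
            # Always exclude a break-glass account so you can't lock yourself out.
            "excludeUsers": ["<break-glass-account-object-id>"],
        },
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}
```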

Fourth, Microsoft recommends that every organization focus on three things before deploying Copilot: labeling and classifying data, implementing access controls to keep nosy AI systems from seeing things they shouldn’t, and educating users on how data classification and protection tools work. This is excellent advice, and you can implement it at low cost. Even if you don’t foresee broad use of Copilot, data classification is a very useful way to identify high-value data and get rid of old or redundant data.

Finally, take a look at the graph on page 48. It shows average preparedness against cloud identity attacks, organized by industry sector. I won’t reproduce it here, but you should absolutely take a look and see how your preparedness compares to others in your industry. If your benchmark is lower than your peers’, put some effort into figuring out why, and then improve it.

About the Author

Paul Robichaux

Paul Robichaux, an Office Apps and Services MVP since 2002, works as the senior director of product management at Keepit, spending his time helping to make awesome data protection solutions for the multi-cloud world we’re all living in. Paul's unique background includes stints writing Space Shuttle payload software in FORTRAN, developing cryptographic software for the US National Security Agency, helping giant companies deploy Office 365 to their worldwide users, and writing about and presenting on Microsoft’s software and server products. Paul’s an avid (but slow) triathlete, an instrument-rated private pilot, and an occasional blogger (at http://www.paulrobichaux.com) and Tweeter (@paulrobichaux).
