
Takeaways from the CrowdStrike outage: third parties can pose risk

There is no easy fix for the kind of failure the CrowdStrike outage exposed, and we all know it. Nonetheless, we've put together some clear takeaways.

“You can’t start a fire without a spark,” so sings The Boss. Likewise, you can’t run an organization these days without digital dependencies. Nor, perhaps because of this, can we outrun digital failure.

That failure can strike on the global scale of CrowdStrike crashing 8.5 million devices and bringing entire industries to a halt, or as a smaller yet still costly issue, like the cart on your eCommerce site breaking because your cloud provider went down (read our perspective on both of Friday’s outages here).

Our Head of Operations reminded internal teams today that there is no easy fix to these kinds of issues, such as simply deciding to remove single points of failure. “True multi-vendor always sounds easier than it actually is,” he counseled, whether that means running Linux and Windows for the same functions or attempting to split endpoint security between two different vendors. Even harder are the issues now being hotly debated in the news around concentration and consolidation of large tech solutions.

One thing that is painfully clear, and easy to agree on, is that this outage has caused sleepless nights in many, many IT departments. Those teams merit our thanks!

[Image: Major IT outage hits banks, airlines, businesses worldwide. BSOD screens at an airport in New Delhi, India (Photo by Amarjeet Kumar Singh/Anadolu via Getty Images)]

Still in progress… 5 takeaways from the CrowdStrike outage

The lessons from the CrowdStrike outage are still unfolding in real time as recovery efforts, which could take weeks, continue (and yes, both CrowdStrike and Microsoft have quickly put out recovery guidance). Nonetheless, several general takeaways are already evident. Here are some of ours:

#1 - Everything is digital

As a result of CrowdStrike’s faulty software update, airlines were grounded (some still are), and banks, schools, governments, and businesses around the world were all impacted. When societal functions as important as healthcare and emergency services are underpinned by digital systems, and fail because of them, the need for digital resilience becomes startlingly clear.

#2 - We are all digitally interdependent (and can’t escape failure)

In Catchpoint’s Internet Resilience Report 2024, we interviewed over 300 global digital leaders about digital and Internet Resilience. One of the questions we put to the field was about their reliance on third-party providers. All but 1% of respondents said they rely on third-party platform technology providers, and 77% said this reliance is extremely or highly critical to their digital or Internet Resilience success.

We can’t remove these dependencies. They are too numerous and too intertwined in keeping our sites and applications running and our machines secure. We also know they will fail at some point because, as Werner Vogels, CTO of AWS, famously put it, “Everything fails all the time.” What your IT teams can do is chart those dependencies and monitor them as thoroughly as possible. Catchpoint’s Internet Stack Map can help you do that for your applications and services.
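To make “chart and monitor your dependencies” a little more concrete, here is a minimal, hypothetical sketch of the idea: a script that periodically checks the health of the third-party endpoints an application relies on and flags anything that stops responding. The dependency names and URLs are placeholders, not real services, and this is not Catchpoint’s product or API.

```python
# Hypothetical sketch only: the dependency names and URLs below are placeholders.
# List whatever third-party services your application actually relies on.
import urllib.error
import urllib.request

DEPENDENCIES = {
    "payments-api": "https://api.example-payments.com/health",
    "cdn": "https://cdn.example.com/ping.txt",
    "auth-provider": "https://auth.example.com/.well-known/openid-configuration",
}

def check(name: str, url: str, timeout: float = 5.0) -> bool:
    """Return True if the dependency answers with an HTTP success status in time."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, TimeoutError) as exc:
        print(f"[ALERT] {name} unreachable: {exc}")
        return False

if __name__ == "__main__":
    failing = [name for name, url in DEPENDENCIES.items() if not check(name, url)]
    if failing:
        print(f"{len(failing)} dependency check(s) failing: {', '.join(failing)}")
    else:
        print("All monitored dependencies responding.")
```

In practice you would run checks like this continuously, from multiple vantage points, and wire the results into alerting; that is the gap dedicated monitoring platforms are designed to fill.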

Seemingly the smallest of issues can cause big failures. It happened to us a few years ago with an unforeseen problem triggered by an update to our Let’s Encrypt certificates. It could just as easily be a BGP, CDN or DNS issue you’re unaware of.

#3 - Be prepared (as best you can) for failure

If you know that a key provider is planning a major system update in a week or two, make sure you can stay resilient and bounce back quickly if something goes wrong. To do this, ready your teams with:

  1. A crisis call plan (who will be on the call, what steps should likely be taken, who should be contacted for specific issues, etc.)
  2. A clear understanding of the consequences of any failure
  3. A monitoring and observability plan that covers all bases
  4. A communications plan and easy-to-populate templates for how to share difficult information with customers, users and the wider public
  5. A process in place for effective postmortems

“There will be a lot of conversations around ‘trust but verify’ on allowing auto-updating of tools and OSes. I know security teams would love it, but in terms of preventing this type of outage in particular, a staged rollout with verification before continuing should be the modus operandi goal.”

Tony Ferrelli, VP, Operations, Catchpoint

#4 - Test, baby, test… and monitor continuously!

When making software updates or configuration changes, test extensively on a variety of systems before deployment. Testing should happen in environments that emulate real-world scenarios, including the older systems your clients might still be running. Take the unconfirmed reports that Southwest’s planes stayed aloft, unaffected by the global outage, because the airline still runs Windows 3.1; that may equally well be a joke, especially considering Windows 3.1 dates from 1992.

At any rate, the outage highlights a painful gap in CrowdStrike’s testing and validation processes.  

Put monitoring tools in place that can detect issues early. Test before, during and after deployment, and monitor what happens so you have clear benchmarks against which to judge exactly what is going on. Robust monitoring crucially reduces MTTR, bringing down the cost to systems, people and business. Catchpoint solutions support this process across the DevOps lifecycle.

Don’t forget to run experiments. After you’ve QA’ed everything you can think of, one practice that is extremely helpful is a slow push in production. Can you push the change to <1% of your fleet? Then slowly ramp up to 10%, then 50%, then 100%? Make sure to put error checking and validation in place to see how well your experiment is doing, as in the sketch below.
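Here is a minimal sketch of that staged-rollout loop. The stage percentages, error budget, soak time and the deploy_to_fraction, observed_error_rate and rollback helpers are all hypothetical placeholders for whatever your own deployment and monitoring tooling provides.

```python
# Hypothetical staged-rollout sketch: push a change to a growing fraction of the
# fleet, verify health after each stage, and stop (or roll back) if errors spike.
import time

STAGES = [0.01, 0.10, 0.50, 1.00]   # <1%, then 10%, 50%, 100% of the fleet
ERROR_BUDGET = 0.001                # abort if more than 0.1% of checks fail
SOAK_SECONDS = 15 * 60              # let each stage run before judging it

def deploy_to_fraction(fraction: float) -> None:
    """Placeholder: hand the new version to `fraction` of hosts via your own tooling."""
    print(f"Deploying to {fraction:.0%} of the fleet...")

def observed_error_rate() -> float:
    """Placeholder: pull the current error rate from your monitoring system."""
    return 0.0

def rollback() -> None:
    """Placeholder: revert all hosts to the previous known-good version."""
    print("Rolling back to the previous version.")

def staged_rollout() -> bool:
    for fraction in STAGES:
        deploy_to_fraction(fraction)
        time.sleep(SOAK_SECONDS)    # soak: give problems time to surface
        rate = observed_error_rate()
        if rate > ERROR_BUDGET:
            print(f"Error rate {rate:.3%} exceeds budget at {fraction:.0%}; halting.")
            rollback()
            return False
        print(f"Stage {fraction:.0%} healthy (error rate {rate:.3%}).")
    return True

if __name__ == "__main__":
    staged_rollout()
```

The exact stage sizes, soak time and error budget are illustrative; the point is that no stage proceeds without verification before continuing.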

#5 - Organizations must prioritize resilience

As organizations increasingly depend on complex, interdependent IT systems, it is essential that they prioritize resilience. That means a plan for digital resilience at every level of the organization, along with the follow-through to ensure it is actually implemented.

One way of doing this for large and medium-sized orgs, which we’ve written about elsewhere, is to appoint a Chief Resilience Officer, or CRO. Another, for smaller orgs to consider, is to build a team inside your company responsible for resilience and, alongside it, institute a C-suite sponsor. Unless resilience is given executive-level importance and continuous focus, the risk posed by a poorly run change and incident management process will only worsen. Make the effort now, before failure happens, to ensure that resilience is embedded into every aspect of your organization. The goal: to make sure your organization can withstand disruptions and recover from them quickly, safeguarding its reputation, the bottom line, and your ability to restore service to your users as fast as possible.

As well as building IT resilience into your infrastructure, applications and services, you also need to actively foster cultural resilience within your teams.

Employees need regular training on best practices in change management and incident response. Clear, well-documented processes need to be in place – and followed – for handling updates and changes. A just, blameless culture is also important. As we saw in The SRE Report 2023, this isn’t just a matter of engendering a just culture for its own sake: our data revealed that the majority of Elite organizations (per DORA) are “very” or “extremely” blameless.

The scale of this outage is going to prompt a lot of internal debate, and IT/business pendulums will very likely swing to extremes in an effort to stem future ramifications at scale. It will be useful to have a focused team of tech and business leaders in place to keep internal teams from swinging too far in any direction and to focus jointly on what makes sense for your specific business and area of operations.

Don’t take digital resilience for granted

As we said, the ramifications of this outage are still unfolding. Nonetheless, we all know one thing: there will be another not far behind. Digital resilience cannot be taken for granted.

Assess the current state of digital and Internet Resilience in Catchpoint’s inaugural report: https://www.catchpoint.com/asset/internet-resilience-report-2024 (No registration required)

Featured image source: Smishra1, CC BY-SA 4.0, via Wikimedia Commons

