Category Archives: Incident

CrowdStrike IT Outage Explained by a Windows Developer

Understanding the CrowdStrike IT Outage: Insights from a Former Windows Developer

Introduction 

Hey, I’m Dave. Welcome to my shop.

I’m Dave Plummer, a retired software engineer from Microsoft, going back to the MS-DOS and Windows 95 days. Thanks to my time as a Windows developer, today I’m going to explain what the CrowdStrike issue actually is, the key difference in kernel mode, and why these machines are bluescreening, as well as how to fix it if you come across one.

Now, I’ve got a lot of experience waking up to bluescreens and having them set the tempo of my day, but this Friday was a little different. First off, I’m retired now, so I don’t debug a lot of daily blue screens. And second, I was traveling in New York City, which left me temporarily stranded while the airlines sorted out the digital carnage.

But that downtime gave me plenty of time to pull out the old MacBook and figure out what was happening to all the Windows machines around the world. As far as we know, the CrowdStrike bluescreens that we have been seeing around the world for the last several days are the result of a bad update to the CrowdStrike software. But why? Today I want to help you understand three key things.

Key Points

  • Why the CrowdStrike software is on the machines at all.
  • What happens when a kernel driver like CrowdStrike fails.
  • Precisely why the CrowdStrike code faults and brings the machines down, and how and why this update caused so much havoc.

Handling Crashes at Microsoft 

As systems developers at Microsoft in the 1990s, we handled crashes like this as part of our normal bread and butter. Every dev at Microsoft, at least in my area, had two machines. For example, when I started in Windows NT, I had a Gateway 486 DX2/50 as my main dev machine, and then some old 386 box as the debug machine. Normally you would run your test or debug bits on the debug machine while connected to it as the debugger from your good machine.

Anti-Stress Process 

On nights and weekends, however, we did something far more interesting. We ran a process called Anti-Stress. Anti-Stress was a bundle of tests that would automatically download to the test machines and run under the debugger. So every night, every test machine, along with all the machines in the various labs around campus, would run Anti-Stress and put it through the gauntlet.

The stress tests were normally written by our test engineers, who were software developers specially employed back in those days to find and catch bugs in the system. For example, they might write a test to simply allocate and use as many GDI brush handles as possible. If doing so causes the drawing subsystem to become unstable or causes some other program to crash, then it would be caught and stopped in the debugger immediately.

The following day, all of the crashes and assertions would be tabulated and assigned to an individual developer based on the area of code in which the problem occurred. As the developer responsible, you would then use something like Telnet to connect to the target machine, debug it, and sort it out.

Debugging in Assembly Language 

All this debugging was done in assembly language, whether it was Alpha, MIPS, PowerPC, or x86, and with minimal symbol table information. So it’s not like we had Visual Studio connected. Still, it was enough information to sort out most crashes, find the code responsible, and either fix it or at least enter a bug to track it in our database.

Kernel Mode versus User Mode 

The hardest issues to sort out were the ones that took place deep inside the operating system kernel, which executes at ring zero on the CPU. The operating system uses a ring system to bifurcate code into two distinct modes: kernel mode for the operating system itself and user mode, where your applications run. Kernel mode does tasks such as talking to the hardware and the devices, managing memory, scheduling threads, and all of the really core functionality that the operating system provides.

Application code never runs in kernel mode, and kernel code never runs in user mode. Kernel mode is more privileged, meaning it can see the entire system memory map and what’s in memory at any physical page. User mode only sees the memory map pages that the kernel wants you to see. So if you’re getting the sense that the kernel is very much in control, that’s an accurate picture.

Even if your application needs a service provided by the kernel, it won’t be allowed to just run down inside the kernel and execute it. Instead, your user thread will reach the kernel boundary and then raise an exception and wait. A kernel thread on the kernel side then looks at the specified arguments, fully validates everything, and then runs the required kernel code. When it’s done, the kernel thread returns the results to the user thread and lets it continue on its merry way.
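
To make that hand-off concrete, here is a minimal, hypothetical sketch of the kind of check a kernel-side handler performs before it trusts anything a user-mode caller passed in. ProbeForRead and the __try/__except pattern are standard Windows Driver Kit idioms; the function name and arguments are invented for illustration.

```c
// Hypothetical sketch: how a kernel-side handler might validate a buffer
// handed in from user mode before acting on it. ProbeForRead and the
// __try/__except pattern are standard WDK idioms; everything else is invented.
#include <ntddk.h>

NTSTATUS HandleUserRequest(PVOID UserBuffer, SIZE_T Length)
{
    if (UserBuffer == NULL || Length == 0) {
        return STATUS_INVALID_PARAMETER;   // fail the call, don't crash the box
    }

    __try {
        // Raises an exception if the range isn't readable user-mode memory.
        ProbeForRead(UserBuffer, Length, sizeof(UCHAR));

        // Only after validation would the kernel copy the data into its own
        // buffer and do the real work.
    }
    __except (EXCEPTION_EXECUTE_HANDLER) {
        return GetExceptionCode();         // a bad pointer becomes a failed call
    }

    return STATUS_SUCCESS;
}
```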

Why Kernel Crashes Are Critical 

There is one other substantive difference between kernel mode and user mode. When application code crashes, the application crashes. When kernel mode crashes, the system crashes. It crashes because it has to. Imagine a case where you had a really simple bug in the kernel that freed memory twice. When the kernel code detects that it’s about to free already-freed memory, it knows something has gone critically wrong, and so it blue screens the system, because the alternatives could be worse.

Consider a scenario where the code that just double-freed memory is allowed to continue, maybe with an error message, maybe even allowing you to save your work. The problem is that things are so corrupted at this point that saving your work could do more damage, erasing or corrupting the file beyond repair. Worse, since it’s the kernel itself that’s experiencing the issue, application programs are no longer protected from one another in the same way. The last thing you want is solitaire triggering a kernel bug that damages your git enlistment.
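
As a rough sketch of that fail-fast philosophy, here is what the decision looks like in code. KeBugCheckEx is the real routine a driver or kernel component calls to halt the system, and bug check 0xC2 (BAD_POOL_CALLER) is the real code the pool manager raises for a double free; the little tracking structure around it is invented for illustration.

```c
// Illustrative only: kernel-style "fail fast" when bookkeeping no longer
// matches reality. KeBugCheckEx and bug check 0xC2 (BAD_POOL_CALLER) are
// real; the TRACKED_BLOCK structure is invented for this example.
#include <ntddk.h>

typedef struct _TRACKED_BLOCK {
    PVOID   Address;
    BOOLEAN InUse;
} TRACKED_BLOCK;

VOID FreeTrackedBlock(TRACKED_BLOCK *Block)
{
    if (!Block->InUse) {
        // Freeing memory that is already free means the allocator's state is
        // corrupt; continuing could trash unrelated data, so halt the machine.
        KeBugCheckEx(0xC2 /* BAD_POOL_CALLER */, 0x7 /* double free */,
                     (ULONG_PTR)Block->Address, 0, 0);
    }

    Block->InUse = FALSE;
    ExFreePool(Block->Address);
}
```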

And that’s why when an unexpected condition occurs in the kernel, the system is just halted. This is not a Windows thing by any stretch. It is true for all modern operating systems like Linux and macOS as well. In fact, the biggest difference is the color of the screen when the system goes down. On Windows, it’s blue, but on Linux it’s black, and on macOS, it’s usually pink. But as on all systems, a kernel issue is a reboot at a minimum.

What Runs in Kernel Mode 

Now that we know a bit about kernel mode versus user mode, let’s talk about what specifically runs in kernel mode. And the answer is very, very little. The only things that go into kernel mode are things that have to, like the thread scheduler and the heap manager and functionality that must access the hardware, such as the device driver that talks to a GPU across the PCIe bus. And so the totality of what runs in kernel mode really comes down to the operating system itself and device drivers.

And that’s where CrowdStrike enters the picture with their Falcon sensor. Falcon is a security product, and while it’s not simply an antivirus, it’s not that far off the mark to think of it as anti-malware for the server. But rather than just matching files against known definitions, it analyzes a wide range of application behavior so that it can try to proactively detect new attacks before they’re categorized and listed in a formal definition.

CrowdStrike Falcon Sensor 

To be able to see that application behavior from a clear vantage point, that code needed to be down in the kernel. Without getting too far into the weeds of what CrowdStrike Falcon actually does, suffice it to say that it has to be in the kernel to do it. And so CrowdStrike wrote a device driver, even though there’s no hardware device that it’s really talking to. But by writing their code as a device driver, it lives down with the kernel in ring zero and has complete and unfettered access to the system, data structures, and the services that they believe it needs to do its job.
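
For context, this is roughly what the skeleton of a software-only kernel driver looks like. It is the generic entry point any Windows driver exposes, shown for illustration only and in no way CrowdStrike’s code; the point is that once loaded, there is no hardware behind it, yet it runs in ring zero like everything else in the kernel.

```c
// Generic skeleton of a software-only kernel driver (no hardware behind it).
// This is the standard WDM entry point, shown for illustration only.
#include <ntddk.h>

DRIVER_UNLOAD DriverUnload;

VOID DriverUnload(PDRIVER_OBJECT DriverObject)
{
    UNREFERENCED_PARAMETER(DriverObject);
    // Clean up anything the driver created at load time.
}

NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
    UNREFERENCED_PARAMETER(RegistryPath);

    // No PCIe device, no interrupts: once this returns success, the code
    // simply lives in ring zero with full access to kernel data structures.
    DriverObject->DriverUnload = DriverUnload;
    return STATUS_SUCCESS;
}
```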

Everybody at Microsoft and probably at CrowdStrike is aware of the stakes when you run code in kernel mode, and that’s why Microsoft offers the WHQL certification, which stands for Windows Hardware Quality Labs. Drivers labeled as WHQL certified have been thoroughly tested by the vendor and then have passed the Windows Hardware Lab Kit testing on various platforms and configurations and are signed digitally by Microsoft as being compatible with the Windows operating system. By the time a driver makes it through the WHQL lab tests and certifications, you can be reasonably assured that the driver is robust and trustworthy. And when it’s determined to be so, Microsoft issues that digital certificate for that driver. As long as the driver itself never changes, the certificate remains valid.

CrowdStrike’s Agile Approach 

But what if you’re CrowdStrike and you’re agile, ambitious, and aggressive, and you want to ensure that your customers get the latest protection as soon as new threats emerge? Every time something new pops up on the radar, you could make a new driver and put it through the Hardware Quality Labs, get it certified, signed, and release the updated driver. And for things like video cards, that’s a fine process. I don’t actually know what the WHQL turnaround time is like, whether that’s measured in days or weeks, but it’s not instant, and so you’d have a time window where a zero-day attack could propagate and spread simply because of the delay in getting an updated CrowdStrike driver built and signed.

Dynamic Definition Files 

What CrowdStrike opted to do instead was to include definition files that are processed by the driver but not actually included with it. So when the CrowdStrike driver wakes up, it enumerates a folder on the machine looking for these dynamic definition files, and it does whatever it is that it needs to do with them. But you can already perhaps see the problem. Let’s speculate for a moment that the CrowdStrike dynamic definition files are not merely malware definitions but complete programs in their own right, written in a p-code that the driver can then execute.

In a very real sense, then, the driver could take the update and execute the p-code within it in kernel mode, even though that update itself has never been signed. The driver becomes the engine that runs the code, and since the driver hasn’t changed, the certificate is still valid for the driver. But the update changes the way the driver operates by virtue of the p-code contained in the definitions, and what you’ve got then is unsigned code of unknown provenance running in full kernel mode.

All it would take is a single little bug like a null pointer reference, and the entire temple would be torn down around us. Put more simply, while we don’t yet know the precise cause of the bug, executing untrusted p-code in the kernel is risky business at best and could be asking for trouble.
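
To picture what that would mean, here is a purely speculative sketch of a tiny “p-code” interpreter of the kind described above. The opcodes and layout are invented and have nothing to do with CrowdStrike’s actual file format; the point is simply that whatever bytes arrive in the content file end up steering code that runs in kernel mode.

```c
// Purely speculative sketch of the "p-code" idea: a signed driver
// interpreting opcodes from an unsigned content file. The opcode set is
// invented; the point is that the file's bytes steer kernel-mode execution.
#include <stddef.h>

enum { OP_END = 0, OP_CHECK_PROCESS = 1, OP_CHECK_REGISTRY = 2 };

void RunDefinitionProgram(const unsigned char *prog, size_t len)
{
    size_t ip = 0;

    while (ip < len) {
        switch (prog[ip++]) {
        case OP_END:
            return;
        case OP_CHECK_PROCESS:
            // ...inspect process behavior, advance ip past any operands...
            break;
        case OP_CHECK_REGISTRY:
            // ...inspect registry activity, advance ip past any operands...
            break;
        default:
            // Unknown opcode: unsigned data is now effectively deciding what
            // kernel-mode code does next. Without rigorous validation, this
            // is exactly where a malformed file can lead straight to a crash.
            return;
        }
    }
}
```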

Post-Mortem Debugging 

We can get a better sense of what went wrong by doing a little post-mortem debugging of our own. First, we need to access a crash dump report, the kind you used to get in the good old NT days but which is now hidden behind the happy-face blue screen. Depending on how your system is configured, though, you can still get the crash dump info, and so there was no real shortage of dumps around to look at. Here’s an example from Twitter, so let’s take a look. About a third of the way down, you can see the offending instruction that caused the crash.

It’s an attempt to move data into register R9 by loading it from a memory pointer held in register R8. Couldn’t be simpler. The only problem is that the pointer in R8 is garbage. It’s not a memory address at all but a small integer, 9C hex, which is likely the offset of the field they’re actually interested in within the data structure. They almost certainly started with a null pointer, added 9C to it, and then just dereferenced it.
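
In C terms, the pattern looks something like the sketch below. The structure and field names are invented; the only real detail is the 0x9C offset visible in the dump, which is what you would expect if a field roughly 0x9C bytes into a structure were read through a pointer that was never checked for NULL.

```c
// Invented reconstruction of the faulting pattern seen in the dump: reading
// a field at offset 0x9C through a pointer that was never checked for NULL.
#pragma pack(push, 1)
typedef struct _DEF_ENTRY {
    unsigned char header[0x9C];   // padding so the next field sits at offset 0x9C
    void *Target;                 // hypothetical field the driver wants
} DEF_ENTRY;
#pragma pack(pop)

void *GetTarget(const DEF_ENTRY *entry)
{
    // Missing "if (entry == NULL) return NULL;" guard. With entry == NULL the
    // compiler computes NULL + 0x9C into a register and dereferences it,
    // which matches the tiny 0x9C "pointer" sitting in R8 in the crash dump.
    return entry->Target;
}
```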

CrowdStrike driver woes

Now, debugging something like this is often an incremental process where you wind up establishing, “Okay, so this bad thing happened, but what happened upstream beforehand to cause the bad thing?” And in this case, it appears that the cause is the dynamic data file downloaded as a .sys file. Instead of containing p-code or a malware definition or whatever was supposed to be in the file, it was all just zeros.

We don’t know yet how or why this happened, as CrowdStrike hasn’t publicly released that information yet. What we do know to an almost certainty at this point, however, is that the CrowdStrike driver that processes and handles these updates is not very resilient and appears to have inadequate error checking and parameter validation.

Parameter validation means checking to ensure that the data and arguments being passed to a function, and in particular to a kernel function, are valid and good. If they’re not, it should fail the function call, not cause the entire system to crash. But in the CrowdStrike case, they’ve got a bug they don’t protect against, and because their code lives in ring zero with the kernel, a bug in CrowdStrike will necessarily bug check the entire machine and deposit you into the very dreaded recovery bluescreen.
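
Here is what that kind of validation might look like for the content files themselves. The header layout is entirely hypothetical, since CrowdStrike has not published its format; the point is that a file of all zeros should fail cheap sanity checks and be rejected long before anything dereferences a pointer derived from it.

```c
// Hypothetical sketch only: the header layout is invented to illustrate
// defensive validation of a content/definition file before it is trusted.
#include <stddef.h>

typedef struct _DEF_FILE_HEADER {
    unsigned int Magic;            // expected signature value
    unsigned int Version;
    unsigned int EntryCount;
    unsigned int EntryTableOffset;
} DEF_FILE_HEADER;

#define DEF_FILE_MAGIC 0x46454443u /* invented "CDEF" tag for this example */

int ValidateDefinitionFile(const unsigned char *data, size_t size)
{
    const DEF_FILE_HEADER *hdr = (const DEF_FILE_HEADER *)data;

    // A file of all zeros fails every one of these checks and is rejected;
    // nothing downstream ever sees a bogus offset or pointer.
    if (data == NULL || size < sizeof(DEF_FILE_HEADER))
        return 0;
    if (hdr->Magic != DEF_FILE_MAGIC || hdr->EntryCount == 0)
        return 0;
    if (hdr->EntryTableOffset >= size ||
        (size_t)hdr->EntryCount * sizeof(unsigned int) > size - hdr->EntryTableOffset)
        return 0;

    return 1;   /* valid: only now is the file safe to hand to the parser */
}
```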

Windows Resilience 

Even though this isn’t a Windows issue or a fault with Windows itself, many people have asked me why Windows itself isn’t just more resilient to this type of issue. For example, if a driver fails during boot, why not try to boot next time without it and see if that helps?

And Windows, in fact, does offer a number of facilities like that, going back as far as booting NT with the last known good registry hive. But there’s a catch, and that catch is that CrowdStrike marked their driver as what’s known as a boot-start driver. A boot-start driver is a device driver that must be installed to start the Windows operating system.

Most boot-start drivers are included in driver packages that ship in the box with Windows, and Windows automatically installs these boot-start drivers during the first boot of the system. My guess is that CrowdStrike decided they didn’t want you booting at all without the protection provided by their system, but when it crashes, as it does now, your system is completely borked.

Fixing the Issue 

Fixing a machine with this issue is fortunately not a great deal of work, but it does require physical access to the machine. To fix a machine that’s crashed due to this issue, you need to boot it into safe mode, because safe mode only loads a limited set of drivers and mercifully can still get by without this boot-start driver.

You’ll still be able to get into at least a limited system. Then, to fix the machine, use the console or File Explorer and go to C:\Windows\System32\drivers\CrowdStrike. In that folder, find the file matching the pattern C-00000291*.sys and delete it, along with anything else that has the 291 and that run of zeros in its name. When you reboot, your system should come up completely normal and operational.

The absence of the update file fixes the issue and does not cause any additional ones. It’s a fair bet that the update 291 won’t ever be needed or used again, so you’re fine to nuke it.


Further references 

CrowdStrike IT Outage Explained by a Windows Developer – Dave’s Garage (YouTube)

The Aftermath of the World’s Biggest IT Outage

The Great Digital Blackout: Fallout from the CrowdStrike-Microsoft Outage

i. Introduction 

On a seemingly ordinary Friday morning, the digital world shuddered. A global IT outage, unprecedented in its scale, brought businesses, governments, and individuals to a standstill. The culprit: a faulty update from cybersecurity firm CrowdStrike, clashing with Microsoft Windows systems. The aftershocks of this event, dubbed the “Great Digital Blackout,” continue to reverberate, raising critical questions about our dependence on a handful of tech giants and the future of cybersecurity.

ii. The Incident

A routine software update within Microsoft’s Azure cloud platform inadvertently triggered a cascading failure across multiple regions. This outage, compounded by a simultaneous breach of CrowdStrike’s security monitoring systems, created a perfect storm of disruption. Within minutes, critical services were rendered inoperative, affecting millions of users and thousands of businesses worldwide. The outage persisted for 48 hours, making it one of the longest and most impactful in history.

iii. Initial Reports and Response

The first signs that something was amiss surfaced around 3:00 AM UTC, when users began reporting issues accessing Microsoft Azure and Office 365 services. Concurrently, CrowdStrike’s Falcon platform started exhibiting anomalies. By 6:00 AM UTC, both companies acknowledged the outage, attributing the cause to a convergence of system failures and a sophisticated cyber attack exploiting vulnerabilities in their systems.

CrowdStrike and Microsoft activated their incident response protocols, working around the clock to mitigate the damage. Microsoft’s global network operations team mobilized to isolate affected servers and reroute traffic, while CrowdStrike’s cybersecurity experts focused on containing the breach and analyzing the attack vectors.

iv. A Perfect Storm: Unpacking the Cause

A. The outage stemmed from a seemingly innocuous update deployed by CrowdStrike, a leading provider of endpoint security solutions. The update, intended to bolster defenses against cyber threats, triggered a series of unforeseen consequences. It interfered with core Windows functionalities, causing machines to enter a reboot loop, effectively rendering them unusable.

B. The domino effect was swift and devastating. Businesses across various sectors – airlines, hospitals, banks, logistics – found themselves crippled. Flights were grounded, financial transactions stalled, and healthcare operations were disrupted.

C. The blame game quickly ensued. CrowdStrike, initially silent, eventually acknowledged their role in the outage and apologized for the inconvenience. However, fingers were also pointed at Microsoft for potential vulnerabilities in their Windows systems that allowed the update to wreak such havoc.

v. Immediate Consequences (Businesses at a Standstill)

The immediate impact of the outage was felt by businesses worldwide. 

A. Microsoft: Thousands of companies dependent on Microsoft’s Azure cloud services found their operations grinding to a halt. Websites went offline, e-commerce platforms experienced massive downtime and lost revenue by the minute, financial transactions were halted, communication channels were disrupted, and hospital systems relying on cloud-based records faced critical disruptions that compromised patient care.

B. CrowdStrike: Similarly, CrowdStrike’s clientele, comprising numerous Fortune 500 companies, grappled with the fallout. Their critical security monitoring and threat response capabilities were significantly hindered, leaving them vulnerable.

vi. Counting the Costs: Beyond Downtime

The human and economic toll of the Great Digital Blackout is still being calculated. Initial estimates pointed to billions of dollars in lost productivity, and preliminary figures suggest global economic losses exceeding $200 billion, but the true cost extends far beyond the financial numbers. Businesses across sectors reported significant revenue losses, with SMEs particularly hard hit. Recovery and mitigation efforts further strained financial resources, and insurance claims surged as businesses sought to recoup their losses.

  • Erosion of Trust: The incident exposed the fragility of our increasingly digital world, eroding trust in both CrowdStrike and Microsoft. Businesses and organizations now question the reliability of security solutions and software updates.
  • Supply Chain Disruptions: Globally interconnected supply chains were thrown into disarray. Manufacturing, shipping, and logistics faced delays due to communication breakdowns and the inability to process orders electronically.
  • Cybersecurity Concerns: The outage highlighted the potential for cascading effects in cyberattacks. A seemingly minor breach in one system can have a devastating ripple effect across the entire digital ecosystem.

vii. Reputational Damage

Both Microsoft and CrowdStrike suffered severe reputational damage. Trust in Microsoft’s Azure platform and CrowdStrike’s cybersecurity solutions was shaken. Customers, wary of future disruptions, began exploring alternative providers and solutions. The incident underscored the risks of over-reliance on major service providers and ignited discussions about diversifying IT infrastructure.

viii. Regulatory Scrutiny

In the wake of the outage, governments and regulatory bodies worldwide called for increased oversight and stricter regulations. The incident highlighted the need for robust standards to ensure redundancy, effective backup systems, and rapid recovery protocols. In the United States, discussions about enhancing the Cybersecurity Maturity Model Certification (CMMC) framework gained traction, while the European Union considered expanding the scope of the General Data Protection Regulation (GDPR) to include mandatory resilience standards for IT providers.

ix. Data Security and Privacy Concerns

One of the most concerning aspects of the outage was the potential exposure of sensitive data. Both Microsoft and CrowdStrike store vast amounts of critical and confidential data. Although initial investigations suggested that the attackers did not exfiltrate data, the sheer possibility raised alarms among clients and regulatory bodies worldwide.

Governments and compliance agencies intensified their scrutiny, reinforcing the need for robust data protection measures. Customers demanded transparency about what data, if any, had been compromised, leading to an erosion of trust in cloud services.

x. Root Causes and Analysis

Following the containment of the outage, both CrowdStrike and Microsoft launched extensive investigations to determine the root causes. Preliminary reports cited a combination of factors:

A. Zero-Day Exploits: The attackers leveraged zero-day vulnerabilities in both companies’ systems, which had not been previously detected or patched.   

B. Supply Chain Attack: A key supplier providing backend services to both companies was compromised, allowing the attackers to penetrate deeper into their networks.

C. Human Error: Configuration errors and lack of stringent security checks at critical points amplified the impact of the vulnerabilities.

D. Coordinated Attack: Cybersecurity analysts suggested that the attack bore the hallmarks of a highly coordinated and well-funded group, potentially a nation-state actor, given the sophistication and scale. The alignment of the outage across multiple critical services pointed to a deliberate and strategic attempt to undermine global technological infrastructure.

xi. Response Strategies

A. CrowdStrike’s Tactics

  • Swift Containment: Immediate action was taken to contain the breach. CrowdStrike’s incident response teams quickly identified and isolated the compromised segments of their network to prevent further penetration.
  • Vulnerability Mitigation: Patches were rapidly developed and deployed to close the exploited security gaps. Continuous monitoring for signs of lingering threats or additional vulnerabilities was intensified.
  • Client Communication: Transparency became key. CrowdStrike maintained open lines of communication with its clients, providing regular updates, guidance on protective measures, and reassurance to mitigate the trust deficit.

B. Microsoft’s Actions

  • Global Response Scaling: Leveraging its extensive resources, Microsoft scaled up its global cybersecurity operations. Frantic efforts were made to stabilize systems, restore services, and strengthen defenses against potential residual threats.
  • Service Restoration: Microsoft prioritized the phased restoration of services. This approach ensured that each phase underwent rigorous security checks to avoid reintroducing vulnerabilities.
  • Collaboration and Information Sharing: Recognizing the widespread impact, Microsoft facilitated collaboration with other tech firms, cybersecurity experts, and government agencies. Shared intelligence helped in comprehending the attack’s full scope and in developing comprehensive defense mechanisms.

xii. Broad Implications 

A. Evolving Cyber Threat Landscape

  • Increased Sophistication: The attack underscored the evolving sophistication of cyber threats. Traditional security measures are proving insufficient against highly organized and well-funded adversaries.
  • Proactive Security Posture: The event emphasized the need for a proactive security stance, which includes real-time threat intelligence, continuous system monitoring, and regular vulnerability assessments.

B. Trust in Cloud Computing

  • Cloud Strategy Reevaluation: The reliance on cloud services came under scrutiny. Organizations began rethinking their cloud strategies, weighing the advantages against the imperative of reinforcing security protocols.
  • Strengthened Security Measures: There is a growing emphasis on bolstering supply chain security. Companies are urged to implement stringent controls, cross-verify practices with their vendors, and engage in regular security audits.

xiii. A Catalyst for Change: Lessons Learned

The Great Digital Blackout serves as a stark reminder of the need for a comprehensive reevaluation of our approach to cybersecurity and technology dependence. Here are some key takeaways:

  • Prioritize Security by Design: Software development and security solutions need to prioritize “security by design” principles. Rigorous testing and vulnerability assessments are crucial before deploying updates.
  • Enhanced Cybersecurity: The breach of CrowdStrike’s systems highlighted potential vulnerabilities in cybersecurity frameworks. Enhanced security measures and continuous monitoring are vital to prevent similar incidents.
  • Diversity and Redundancy: Over-reliance on a few tech giants can be a vulnerability. Diversifying software and service providers, coupled with built-in redundancies in critical systems, can mitigate the impact of such outages.
  • Redundancy and Backup: The incident underscored the necessity of having redundant systems and robust backup solutions. Businesses are now more aware of the importance of investing in these areas to ensure operational continuity during IT failures.
  • Disaster Recovery Planning: Effective disaster recovery plans are critical. Regular drills and updates to these plans can help organizations respond more efficiently to disruptions.
  • Communication and Transparency: Swift, clear communication during disruptions is essential. Both CrowdStrike and Microsoft initially fell short in this area, causing confusion and exacerbating anxieties.
  • Regulatory Compliance: Adhering to evolving regulatory standards and being proactive in compliance efforts can help businesses avoid penalties and build resilience.
  • International Collaboration: Cybersecurity threats require an international response. Collaboration between governments, tech companies, and security experts is needed to develop robust defense strategies and communication protocols.

xiv. The Road to Recovery: Building Resilience

The path towards recovery from the Great Digital Blackout is multifaceted. It involves:

  • Post-Mortem Analysis: Thorough investigations by CrowdStrike, Microsoft, and independent bodies are needed to identify the root cause of the outage and prevent similar occurrences.
  • Investing in Cybersecurity Awareness: Educating businesses and individuals about cyber threats and best practices is paramount. Regular training and simulation exercises can help organizations respond more effectively to future incidents.
  • Focus on Open Standards: Promoting open standards for software and security solutions can foster interoperability and potentially limit the impact of individual vendor issues.

xv. A New Era of Cybersecurity: Rethinking Reliance

The Great Digital Blackout serves as a wake-up call. It underscores the need for a more robust, collaborative, and adaptable approach to cybersecurity. By diversifying our tech infrastructure, prioritizing communication during disruptions, and fostering international cooperation, we can build a more resilient digital world.

The event also prompts a conversation about our dependence on a handful of tech giants. While these companies have revolutionized our lives, the outage highlighted the potential pitfalls of such concentrated power.

xvi. Conclusion 

The future of technology may involve a shift towards a more decentralized model, with greater emphasis on data sovereignty and user control. While the full impact of the Great Digital Blackout is yet to be fully understood, one thing is certain – the event has irrevocably altered the landscape of cybersecurity, prompting a global conversation about how we navigate the digital age with greater awareness and resilience.

This incident serves as a stark reminder of the interconnected nature of our digital world. As technology continues to evolve, so too must our approaches to managing the risks it brings. The lessons learned from this outage will undoubtedly shape the future of IT infrastructure, making it more robust, secure, and capable of supporting the ever-growing demands of the digital age.


ISO/IEC 27001 and ISO/IEC 27035: Building a Resilient Cybersecurity Strategy

Building a Resilient Cybersecurity Strategy with ISO/IEC 27001 and ISO/IEC 27035

ISO/IEC 27001 (ISMS – Information Security Management System) and ISO/IEC 27035 (Information Security Incident Management) are two key standards in the ISO 27000 family that provide a robust and effective framework for setting up and managing cybersecurity. 

They assist organizations in building a resilient cybersecurity strategy.

i. Here’s how the two standards can be used to build a robust cybersecurity strategy:

A. ISO/IEC 27001:

a. Establish, Implement, and Operate an ISMS: ISO/IEC 27001 provides a systematic approach for establishing, implementing, operating, monitoring, maintaining, and improving an ISMS. The ISMS is a set of policies and procedures that includes all legal, physical, and technical controls involved in an organization’s information risk management processes.

b. Regular Risk Assessments: The standard encourages regular information security risk assessments to identify cybersecurity risks and set control objectives.

c. Compliance with Laws and Regulations: ISO/IEC 27001 can help organizations stay compliant with regulations as they relate to data protection and cybersecurity. 

d. Continual Improvement: The standard follows the Plan-Do-Check-Act (PDCA) model, which means that the ISMS should continually be reviewed and improved upon.

B. ISO/IEC 27035:

a. Manage Security Incidents: ISO/IEC 27035 provides guidelines for the process of managing information security incidents, including identification, reporting, assessment, response, and learning from incidents to prevent them from recurring.

b. Improved Incident Response: The implementation of ISO/IEC 27035 helps organizations improve their response to incidents, leading to reduced damage, improved recovery time, and increased ability to provide necessary evidence for any legal action that may be required.

c. Proactive and Reactive Management: The standard allows for both reactive and proactive management of incidents.

ii. This is where the synergy of ISO/IEC 27001 and ISO/IEC 27035 comes in.

A. ISO/IEC 27001: The Foundation for Information Security Management

This internationally recognized standard provides a framework for establishing an Information Security Management System (ISMS). It helps you identify and analyze your organization’s information security risks, implement appropriate controls, and continuously improve your security posture. 

Key benefits of ISO/IEC 27001:

o Systematic approach: Creates a structured framework for managing information security across all departments.

o Proactive risk management: Identifies and mitigates potential threats before they can cause harm.

o Improved compliance: Aligns with a wide range of regulations and industry best practices.

o Enhanced stakeholder confidence: Demonstrates your commitment to information security.

B. ISO/IEC 27035: Incident Response Excellence

This standard complements ISO/IEC 27001 by providing a robust framework for incident response. It outlines the processes and procedures for detecting, responding to, and recovering from security incidents effectively.

Key benefits of ISO/IEC 27035:

o Reduced impact of incidents: Minimizes damage and downtime caused by cyberattacks.

o Faster recovery times: Enables a swift and coordinated response to security incidents.

o Improved communication: Clearly defines roles and responsibilities for incident response activities.

o Lessons learned: Helps you learn from incidents and improve your security posture.

iii. Synergy for a Resilient Strategy:

Combining the proactive risk management of ISO/IEC 27001 with the incident response capabilities of ISO/IEC 27035 creates a holistic and resilient cybersecurity strategy. This integrated approach offers several advantages:

o Comprehensive risk mitigation: Proactive controls prevent incidents while effective response minimizes their impact.

o Enhanced preparedness: Defined processes ensure a coordinated and efficient response to security threats.

o Continuous improvement: Lessons learned from incidents inform future risk management efforts.

iv. Building a resilient cybersecurity strategy with ISO/IEC 27001 and ISO/IEC 27035 involves the following steps:

A. ISO/IEC 27001 Implementation:

   o Identify and assess information assets and associated risks.

   o Develop an Information Security Management System (ISMS) based on ISO/IEC 27001 standards.

   o Establish and document security policies, procedures, and controls.

B. Risk Management:

   o Perform a thorough risk assessment using ISO/IEC 27001 guidelines.

   o Mitigate identified risks by implementing appropriate controls.

   o Regularly review and update risk assessments to adapt to changing threats.

C. Incident Response Planning (ISO/IEC 27035):

   o Develop an incident response plan aligned with ISO/IEC 27035 standards.

   o Establish an incident response team and define roles and responsibilities.

   o Conduct regular drills and simulations to ensure preparedness for cyber incidents.

D. Continuous Monitoring:

   o Implement continuous monitoring mechanisms to detect and respond to security incidents promptly.

   o Use security information and event management (SIEM) tools to monitor and analyze system activities.

E. Training and Awareness:

   o Provide comprehensive training on ISO/IEC 27001 and ISO/IEC 27035 principles for employees involved in security functions.

   o Foster a culture of cybersecurity awareness across the organization.

F. Compliance Management:

   o Ensure ongoing compliance with ISO/IEC 27001 requirements and other relevant regulations.

   o Regularly conduct internal audits to assess adherence to established standards.

G. Documentation and Records:

   o Maintain detailed documentation of security policies, procedures, and incident response plans.

   o Keep records of security incidents, investigations, and corrective actions taken.

H. Third-Party Collaboration:

   o Engage with external stakeholders, suppliers, and partners to align cybersecurity practices.

   o Include third-party risk assessments within your overall risk management strategy.

I. Review and Improvement:

   o Conduct regular reviews of your cybersecurity strategy, considering lessons learned from incidents and audits.

   o Implement improvements based on emerging threats and organizational changes.

v. To leverage these standards in building a resilient cybersecurity strategy:

o Integrate Both Standards: ISO/IEC 27001 and ISO/IEC 27035 should be integrated, using the broader security management controls of 27001 to support the incident management processes of 27035.

o Holistic Approach: Employ both standards for a holistic approach to cybersecurity that covers prevention, detection, response, and post-incident actions.

o Periodic Reviews: Implement periodic reviews and updates of policies, controls, plans, and procedures to ensure they are current and in alignment with these standards.

o Conduct thorough risk assessments.

o Ensure there’s leadership commitment and adequate resources available.

o Certification and Training: Consider achieving certification for both standards, which can increase stakeholder confidence and may provide a competitive advantage. Staff training in these standards can increase organizational resilience and readiness.

o Continuously monitor and improve upon your information security controls and responses.

vi. Conclusion: 

By building a cybersecurity strategy around ISO/IEC 27001 and ISO/IEC 27035, organizations can ensure they are well prepared not only to protect their information assets but also to handle and recover from security incidents effectively. This approach positions an organization to better navigate the complexities of information security risk and the ever-evolving cybersecurity threat landscape.

Remember, securing your organization is an ongoing journey. By leveraging the combined power of ISO/IEC 27001 and 27035, you can build a resilient cybersecurity strategy that protects your assets, safeguards your operations, and fosters trust in the digital age.

vii. Additional Resources:

o International Organization for Standardization (ISO): [https://www.iso.org/home.html]

o International Electrotechnical Commission (IEC): [https://www.iec.ch/homepage]

o PECB: [https://pecb.com/en/education-and-certification-for-individuals/iso-iec-27001]


CyBOK’s Security Operations & Incident Management Knowledge Area

The Security Operations & Incident Management Knowledge Area in the Cyber Security Body of Knowledge (CyBOK) covers the essential procedures, technologies, and principles related to managing and responding to security incidents to limit their impact and prevent them from recurring.

i. Core Concepts:

   A. Monitor, Analyze, Plan, Execute (MAPE-K) Loop: The SOIM KA utilizes the MAPE-K loop, in which the four activities operate over a shared body of knowledge, as a foundational principle. This cyclical process continuously gathers information, assesses threats, plans responses, and executes actions, adapting to the evolving security landscape; a minimal code sketch of this loop appears right after this list.

   B. Security Architecture: It emphasizes the importance of a well-defined security architecture with concepts like network segmentation, security zones, and data classification for effective monitoring and incident response.

   C. Incident Management: This is the core focus of the KA, outlining established frameworks like NIST SP 800-61 and best practices for detection, containment, eradication, recovery, and reporting of security incidents.
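
As a minimal illustration of the MAPE-K idea from item A, the sketch below wires the four stages around a shared knowledge structure. The logic is placeholder only; CyBOK describes the concept, not this implementation.

```c
// Minimal, self-contained sketch of a MAPE-K style loop. The logic is
// placeholder only; CyBOK describes the concept, not this implementation.
#include <stdbool.h>
#include <stdio.h>

typedef struct { int open_alerts; } Knowledge;           // the shared "K"

static int monitor(void) { return 1; }                    // pretend one new event arrived

static bool analyze(int events, Knowledge *k)             // decide if something looks wrong
{
    if (events > 0) { k->open_alerts++; return true; }
    return false;
}

static const char *plan(const Knowledge *k)               // choose a response
{
    return (k->open_alerts > 3) ? "contain" : "triage";
}

static void execute(const char *action)                   // carry the response out
{
    printf("executing: %s\n", action);
}

int main(void)
{
    Knowledge k = { 0 };
    for (int cycle = 0; cycle < 5; cycle++) {              // bounded stand-in for a continuous loop
        int events = monitor();                            // Monitor
        if (analyze(events, &k)) {                         // Analyze
            execute(plan(&k));                             // Plan + Execute, informed by K
        }
    }
    return 0;
}
```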

ii. Here is an outline of the key topics addressed within this area:

A. Security Operations Center (SOC): A central unit that deals with security issues on an organizational and technical level. The SOC team is responsible for the ongoing, operational component of enterprise information security.

B. Monitoring and Detection: This covers the fundamental concepts of cybersecurity monitoring and the techniques and systems used to detect abnormal behavior or transactions that may indicate a security incident.

C. Incident Detection and Analysis: Techniques for identifying suspicious activity, analyzing logs and alerts, and determining the scope and nature of incidents are explored.

D. Incident Response: A planned approach to managing the aftermath of a security breach or cyber attack, also known as an IT incident, computer incident, or security incident. The goal is to handle the situation in a way that limits damage and reduces recovery time and costs.

E. Forensics: This part involves investigation and analysis techniques to gather and preserve evidence from a particular computing device in a way that is suitable for presentation in a court of law.

F. Security Information and Event Management (SIEM): SIEM is an approach to security management that combines SIM (security information management) and SEM (security event management) functions into one security management system.

G. Business Continuity and Disaster Recovery (BCDR): The KA emphasizes the importance of robust BCDR plans to ensure operational continuity and data recovery in case of security incidents or other disruptions. These are the processes that an organization implements to recover and protect its business IT infrastructure in the event of a disaster; business continuity planning (BCP) aims to ensure that an organization can continue to function during and after a disaster.

H. Threat Intelligence: Gathering and analyzing threat intelligence plays a crucial role in proactive defense. The KA covers various sources of threat intelligence and its integration into security operations. This includes the collection and analysis of information regarding emerging or existing threat actors and threats to understand their motives, intentions, and methods.

iii. Benefits of Utilizing the SOIM KA:

A. Standardized Knowledge and Skills: The KA provides a common language and framework for security professionals, facilitating improved communication and collaboration within security teams.

B. Effective Incident Response: Implementing the principles and strategies outlined in the KA leads to more efficient and effective incident response, minimizing damage and downtime.

C. Cybersecurity Maturity: Integrating the SOIM KA into organizational security practices contributes to overall cybersecurity maturity, enhancing the organization’s resilience against cyber threats.

iv. Resources:

   o The CyBOK SOIM KA document is available for free download on the CyBOK website: [https://www.cybok.org/knowledgebase1_1/]

   o Additional resources like presentations, webinars, and training materials are also available on the website.

The Security Operations & Incident Management Knowledge Area of CyBOK is essential to anyone responsible for maintaining an organization’s security posture and responding to security incidents.

By leveraging the CyBOK SOIM KA, cybersecurity professionals can gain valuable knowledge and skills to enhance their incident response capabilities, protect critical information, and ensure the resilience of their organizations in the face of ever-evolving cyber threats.

https://www.cybok.org/media/downloads/Security_Operations_Incident_Management_v1.0.2.pdf
