By David Franklin, MPIA, PSP | Co-Founder, Convoy Group
Many high-tech organizations fail to understand the complex systems they operate or are trying to influence. Many more fail to identify and stop catastrophes that could be averted through careful analysis. Intelligence scholars such as Richard K. Betts have argued that many of these failures are inevitable and, indeed, natural, while the sociologist Charles Perrow demonstrated that a system’s characteristics – how its components interact with one another – can produce what he termed “normal accidents.”
The use of false analogies in decision-making led to the disastrous Bay of Pigs Invasion; “high complexity and tight coupling” were among the underlying causes of the Fukushima Daiichi disaster; and in August 2025, an explosion at the Clairton Coke Works just outside Pittsburgh killed two workers and injured eleven – initial reporting indicates that U.S. Steel “did not have a specific procedure” in place for the critical function involved.
After years of conducting post-hoc security audits and rebuilding safety programs for organizations across the globe, I have learned that some of the most catastrophic organizational failures emerge from the complex interactions of systems the organizations thought they understood, revealing vulnerabilities they never knew existed. Most of the time, these failures were not due to a lack of organizational resources available to identify vulnerabilities; they were due to a lack of effort and, to draw a parallel with another foreseeable catastrophe that could have been averted, a “failure of imagination.”
For technology companies and their trusted logistics partners, this is an impossible reality to ignore, and the ostrich approach to risk management (yes, I know ostriches don’t actually bury their heads in the sand) only works until something goes wrong. The Covid-19 pandemic painfully demonstrated that, as our systems grow more interconnected and our operational tempo accelerates, we create the perfect conditions for normal accidents – failures that are virtually inevitable given the complex, tightly coupled nature of modern technological systems and supply chains. Understanding these obscure yet identifiable risks is what separates future-focused, resilient organizations from ones that lose 400 million euros because a supplier’s semiconductor plant was struck by lightning.
Most enterprise risk management frameworks excel at identifying surface-level threats. Cybersecurity vulnerabilities get patched. Financial controls prevent fraud. Compliance programs address regulatory exposure. Physical access control technology gets installed. But these approaches assume the organization knows where to look. Moreover, the organization’s security architecture often lacks the integration with other risk areas necessary for a holistic assessment. For example, cybersecurity is often siloed from physical security, neither regularly syncs operations with on-the-ground facility managers, and none of the vice presidents or directors understand how an irregular conflict in the Suez Canal will continue to strain their just-in-time supply chains, let alone what to do about it.
Perrow’s work on Normal Accident Theory is significant in this regard. His analysis of the Three Mile Island nuclear accident revealed that the disaster wasn’t caused by a single catastrophic failure, but by unexpected interactions between multiple small failures in a complex system. The operators made reasonable decisions based on the information available to them, yet the system still failed catastrophically. This is the essence of a “normal accident” – not a failure of individual components or of one human’s judgment, but an inevitable consequence of system characteristics: in this case, interactive complexity and tight coupling.
For context, interactive complexity refers to unfamiliar or unplanned sequences of events in a system that are not visible or immediately comprehensible to operators. Tight coupling means there is little slack between components, processes are highly time-dependent with invariant sequences, and a change in one part rapidly affects the others. When both conditions exist simultaneously, accidents become inevitable because the potential for them is built into the system’s architecture.
We use a concept called "center of gravity analysis" to identify the sources of strength that, if compromised, would lead to systemic failure. The RAND Corporation's Vulnerability Assessment Method (VAM) provides a framework for this analysis, examining not just individual vulnerabilities but how they interconnect within broader systems.
For technology companies, your operational center of gravity (COG) might be cloud infrastructure, a critical vendor relationship, or a key technical team. But identifying the COG isn't enough – you must also understand the critical requirements that enable it to function and the critical vulnerabilities that threaten those requirements.
The VAM methodology forces you to map these relationships systematically, revealing vulnerabilities that wouldn't surface in traditional risk assessments. It asks uncomfortable questions. For example: What happens when your redundant systems fail simultaneously because they share hidden dependencies?
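To make the dependency-mapping step concrete, here is a minimal sketch in Python – not drawn from the VAM documentation, and using hypothetical system and component names – of how a mapped set of dependencies might be queried for components that several nominally redundant systems silently share.

```python
from collections import defaultdict

# Hypothetical dependency map for illustration: each operational system maps to
# the components it relies on (vendors, teams, infrastructure). In a real
# engagement this map would be built from interviews, architecture reviews,
# and supplier audits rather than hard-coded.
DEPENDENCIES = {
    "primary-cloud-region":  {"dns-provider-x", "identity-team", "payment-vendor-a"},
    "failover-cloud-region": {"dns-provider-x", "identity-team", "payment-vendor-b"},
    "warehouse-operations":  {"regional-facility-manager", "erp-platform"},
    "erp-platform":          {"identity-team", "lone-database-admin"},
}

def shared_dependencies(dep_map):
    """Return components that more than one system relies on.

    Nominally redundant systems that appear together here share a hidden
    single point of failure - a candidate critical vulnerability."""
    dependents = defaultdict(set)
    for system, components in dep_map.items():
        for component in components:
            dependents[component].add(system)
    return {c: systems for c, systems in dependents.items() if len(systems) > 1}

if __name__ == "__main__":
    for component, systems in sorted(shared_dependencies(DEPENDENCIES).items()):
        print(f"{component} is shared by: {', '.join(sorted(systems))}")
```

Even this toy example surfaces the pattern the question above points at: the primary and failover regions are redundant only on paper, because both sit behind the same DNS provider and the same identity team. A real assessment would also walk transitive dependencies (warehouse operations rely on the ERP platform, which relies on a single database administrator), which is where the deeper, non-obvious couplings tend to hide.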
Carl von Clausewitz argued that in any system there is usually one element with enough ‘centripetal force’ to hold it together – “the one element within… [an] entire structure or system that has the necessary centripetal force to hold that structure together.” For organizations, this center of gravity isn’t always what you’d expect. It is the focal point where sufficient interdependence exists among the various parts to form an overarching system that acts with substantial unity. Applied to organizational risk, this means identifying critical assets as well as the dependencies and interconnections that enable your entire operational system – or at the very least a critical component of that system – to function cohesively. What makes this especially significant in modern organizations is that low-level dependencies such as a single vendor relationship, a key technical process, or an individual whose institutional knowledge cannot be quickly replaced can function as the organizational center of gravity, meaning their compromise produces strategic-level effects far beyond what their apparent importance would suggest.
Organizations that excel at identifying and mitigating obscure risks gain a significant competitive advantage: fewer catastrophic failures, faster recovery from the incidents that do occur, and stakeholder confidence earned through demonstrated resilience. More importantly, they develop organizational learning capabilities that enable continuous adaptation as new threats emerge.
This is the essence of what should be brought to security consulting engagements. Strictly formulaic, one-size-fits-all security audits rarely capture an organization’s most critical vulnerabilities. Yes, standardized audits are an important part of security risk management, but flexibility, creativity, and critical thinking are what differentiate valuable deliverables from paperweight reports.
The most dangerous organizational risks are almost always hidden dependencies, unanticipated interactions, and small deviations that compound into systemic failures. Building the capability to identify these obscure risks is essential for organizations operating in high-complexity, high-consequence environments.