A couple of months ago, someone asked me what the best risk management strategy I had come across was. Pat came my reply: MITRE ATT&CK.
The person on the other end was surprised.
The chasm between enterprise risk management and threat detection is so wide and deep that people, including top CISOs, somehow treat them as two separate disciplines.
I am not an auditor. But I have been a silent witness to the fact that our current approaches to evaluating risk make no sense at all. The “GRC methodology,” at whose altar we all kneel, still lacks a sound foundation in quantifiable and qualifiable markers derived from at least somewhat objective observables, rather than wholly subjective or experiential ones.
Any organisation worth getting hacked has been hacked — no matter what their control frameworks were spouting. It is almost as if risk had diverged from the threat.
My understanding of risk has been shaped by Dan Geer.
But let me be clear about one thing that may make cybersecurity different than all else and that is that we have sentient opponents. The physicist does not. The chemist does not. Not even the economist has sentient opponents. We do… There is something different when what we can detect and from which we can then infer is even partly under the control of people at cross purposes to our purposes.
And he also leads us to the perfect definition of what a risk control aims to achieve:
I also want it because of my own definition of security: The absence of unmitigatable surprise. As always in cybersecurity, we are now talking tradeoffs. One of those is in deciding how many failures is the right number of failures. It can’t be unbounded; that’s obvious. It can’t be zero, either, as zero quite likely means that you are overspending and, in any case, learning from failure is especially crisp; as Francis Bacon said “Truth emerges more readily from error than from confusion.”
Defining a state of security as the absence of unmitigatable surprise mirrors what I’ve come to call “the availability calculus,” namely that we can get 100% availability by driving the time between failures to infinity or by driving the time to repair to zero. I am searching for prediction because I want to drive to infinity the time between failures for which I have no mitigation and drive to zero the time to repair failures for which I do have mitigation(s). That is where a focus on control leads you, or so I now think.
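The availability calculus Geer describes can be sketched numerically. In the standard steady-state formulation, availability is the mean time between failures divided by that figure plus the mean time to repair; pushing MTBF toward infinity or MTTR toward zero both push availability toward 100%. A minimal sketch (the function name and the toy numbers are mine, not Geer’s):

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: A = MTBF / (MTBF + MTTR).

    Driving MTBF toward infinity, or MTTR toward zero, drives A toward 1.
    """
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Shrinking time-to-repair: same failure rate, faster recovery.
print(availability(1000, 10))  # ~0.990
print(availability(1000, 1))   # ~0.999
print(availability(1000, 0))   # 1.0 -- "drive time to repair to zero"

# Stretching time-between-failures: same repair time, rarer failures.
print(availability(100_000, 10))  # ~0.9999
```

Read against Geer’s point: for failures you can mitigate, invest in the repair term; for failures you cannot, the only lever left is the time-between-failures term, which is why he wants prediction.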
This school of thought is now going official. ATT&CK has just announced security control framework mappings for NIST SP 800-53, expressed, moreover, in machine-readable STIX 2.0 Domain and Relationship objects.
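To make the shape of those mappings concrete, here is a hedged sketch built as plain JSON dictionaries rather than with any particular library: a control modeled as a STIX 2.0 course-of-action (a Domain Object), a technique as an attack-pattern, and the mapping itself as a Relationship Object. The UUIDs and the AC-2 → T1078 pairing are illustrative, not copied from the published mapping files.

```python
import json

# Illustrative NIST 800-53 control as a STIX 2.0 Domain Object.
control = {
    "type": "course-of-action",
    "id": "course-of-action--11111111-1111-4111-8111-111111111111",
    "name": "AC-2",
    "description": "Account Management",
}

# Illustrative ATT&CK technique as a STIX 2.0 Domain Object.
technique = {
    "type": "attack-pattern",
    "id": "attack-pattern--22222222-2222-4222-8222-222222222222",
    "name": "Valid Accounts",
    "external_references": [
        {"source_name": "mitre-attack", "external_id": "T1078"}
    ],
}

# The mapping itself: a STIX 2.0 Relationship Object tying the two.
mapping = {
    "type": "relationship",
    "id": "relationship--33333333-3333-4333-8333-333333333333",
    "relationship_type": "mitigates",
    "source_ref": control["id"],
    "target_ref": technique["id"],
}

bundle = {
    "type": "bundle",
    "id": "bundle--44444444-4444-4444-8444-444444444444",
    "spec_version": "2.0",
    "objects": [control, technique, mapping],
}
print(json.dumps(bundle, indent=2))
```

Tooling that already ingests ATT&CK’s STIX content can then walk `mitigates` relationships from a control to every technique it covers, which is precisely what turns a control framework into something you can evaluate against observed adversary behaviour.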