Cyber security is a highly technical and largely technological subject. That disguises the fact that we still practise it as a craft, not a science. We have a series of ‘recipes’ (we call them Best Practices and international standards) that usually work well enough, but they have been compiled on the basis of common sense and experience, not analysis. That doesn’t stop us building acceptably secure cyber systems but it limits our ability to adapt and innovate.
It is as if we were a Master Baker in an age preceding chemistry. Consider that baker’s predicament. They have a number of recipes for making different types of bread, and they can reliably produce good loaves day after day provided they are careful how they follow the recipes, always use the same ingredients, and stick with the same oven. But they have no understanding of the basic chemistry that is going on as they make each loaf. They know all their recipes involve sugar and salt, flour and water, and a little of that fragile magic ingredient called yeast, but they don’t understand what is going on at a chemical level as the dough sits there rising. That limits their ability to innovate.
It means the only way they could bake better bread, or adapt their recipes to a different grain or oven, is trial and error. They could try adding a bit more water to see what that does to the loaf. They see it makes the loaf sink, so they try less and less water until they are back to getting a good result. They could try using less salt to see if that lets them cut down on the amount of sugar they need, but they just end up with a bland-tasting loaf nobody wants to buy. Some strains of wheat produce a nicely risen loaf while others always lead to flatbreads. Why is that?
If, instead, they had an understanding of the underlying chemistry that was going on, of how yeast, sugar, water and gluten work when mixed together, they could optimise the quantities of each ingredient, adapt their recipes for different grains and equipment, and save time and cost all in one go. No more failed loaves and no more wasted ingredients to explain away.
And that is how it is with cyber security. The Best Practices and international standards we use today provide us with an uncertain but probably sufficient level of security provided we are careful how we follow them and provided we are not operating in an unusually high threat environment or with technologies that suffer from lots of easily exploitable vulnerabilities. But we can’t easily optimise those practices to suit our particular threat and technology situation or to maximise cost efficiency.
And there is more. When baking bread, at least, one can measure the success or failure of a recipe immediately: a loaf that doesn’t look right or doesn’t taste right tells its own story. And if it fails, we can bake another loaf to see whether the fault lay with the recipe or with something we did. All we will have wasted is a few affordable ingredients and a little time.
We can’t do that with our cyber security recipes. We don’t have any way to measure the amount of security protection a given set of practices provides. Protection (or risk, if you prefer) is an intangible. We can’t see it. We can’t touch it. All we can do is apply a set of security practices and wait for something to fail. And when we get a security failure, do we try the same recipe again to see if it was the recipe or something we did that was wrong? We don’t want to take risks with our recipes, because security failures can be disruptive and expensive, so we stick with the recipes despite the failures. We end up with a set of security recipes we cannot easily adjust, no way to measure how much protection we get from those recipes, inconsistent results from one enterprise to the next, and no certainty as to why they fail when they do.
We have learned to live with these shortcomings and limitations. But it doesn’t have to be this way. Look at how medical science has improved healthcare beyond anything doctors could have imagined even half a century ago. Look at how materials science has enabled engineers to build bridges over huge expanses of water. Brunel could never have done that. I won’t claim that treating cyber security as a science would save lives in the way medical science has but it could certainly revolutionise the way we practise cyber security and enable us to innovate in ways we can’t today.
We currently practise cyber security as a craft. This makes outcomes uncertain and limits our ability to adapt and innovate. Instead, we should try treating cyber security as a science. Then we could measure the inputs (threats, vulnerabilities), measure the outputs (protection or risk), set and adjust our controls to get the outputs we desire given the inputs and resource constraints we are dealing with, and manage and control the whole process with transparency and confidence.
We do know at least one way to treat cyber security as a science. It is called TBSE. TBSE might turn out not to be the only way, but at present it is the only approach I am aware of anyone proposing.
TBSE is a paradigm, a conceptual way of thinking about what is going on between threats, vulnerabilities and controls when threats engage with a system and give rise to risk. This paradigm works on the basis of these interactions being stochastic rather than deterministic.
This is key. Over the past five decades, many bright people have attempted to analyse security interactions deterministically. The reason why, despite all this effort, they have not been able to find any objective way to measure security protection or calculate security risk is that the processes that take place when risk is created are fundamentally stochastic. Deterministic analysis is simply the wrong way to go about analysing these dynamics and trying to calculate results. It is a bit like trying to explain disease in terms of the imbalance of humours rather than infection by pathogens. It is fundamentally at odds with how the thing works.
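To make the distinction concrete, here is a minimal sketch of my own (this is an illustration of stochastic thinking in general, not the TBSE formalism itself, and every number in it is invented). A deterministic view asks whether a control blocks an attack, yes or no. A stochastic view treats each engagement as a random trial, so the output is a distribution of outcomes, including the tail cases that actually drive risk:

```python
import random

def simulate_breaches(attempts, block_prob, trials=10_000, seed=42):
    """Monte Carlo estimate of how many attacks get through a control
    that blocks each attempt independently with probability block_prob."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(trials):
        # Each attempt is a random trial, not a fixed yes/no outcome.
        breaches = sum(1 for _ in range(attempts) if rng.random() >= block_prob)
        outcomes.append(breaches)
    return outcomes

# Even a control that blocks 99% of attempts leaks some of 500 attacks,
# and occasionally leaks far more than the average suggests.
outcomes = simulate_breaches(attempts=500, block_prob=0.99)
mean_breaches = sum(outcomes) / len(outcomes)
print(f"expected breaches per run: {mean_breaches:.1f}")
print(f"worst simulated run:       {max(outcomes)}")
```

The point of the sketch is that the answer is a spread of outcomes, not a single number, which is exactly what a deterministic analysis cannot express.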
I am not the first person to make this point. Indeed, there has been plenty of work done over the past twenty years, mostly under the heading “The Economics of Information Security” that adopts this perspective. Many papers have been published showing how to analyse specific interactions non-deterministically to answer specific security questions. At least one paper has indicated how a group of three such analyses could be joined together like the carriages of a train to answer a slightly broader security question than the questions addressed by each of the three constituent analyses.
However, none of these papers has provided a fully general framework for doing a broad spectrum of these types of analysis or shown how multiple links can be brought together into a chain that, conceptually, can cover the whole process from the origin of a threat through to the operational outcomes and harms that that threat might cause.
This is what TBSE does and how it serves to turn cyber security into a science. TBSE is a stochastic paradigm that covers the full gamut of what goes on from the origin of a threat through to the material harms that threat causes. It provides a framework that allows us to analyse any risk-relevant interaction using stochastic modelling methods, and to calculate the effect of that interaction on the progress of the threat as it works its way towards causing harm. Being a paradigm rather than just an approach, TBSE shows us not just how to analyse individual risk-relevant interactions but how to combine a series of such analyses into a chain that, if followed to the end, provides an objective analytical result for the amount of risk a threat has created.
Making use of TBSE does not have to be a huge task. It doesn’t mean abandoning anything you are already doing or replacing practices and solutions you have invested in. You can apply it one analysis at a time, lightly or with full analytical weight, as you choose. And you can take it as far and as fast as you wish.
The most straightforward place to start is security metrics. Many organisations already operate a number of security metrics. They measure various aspects of the implementation of their security solutions or activities and report those measurements against policy to drive implementation improvements. This is known as “security verification”. Verification is about showing that security controls are being applied as they should. It is not the same as “security validation”, which is about showing that those controls are actually providing the desired amount of security protection.
Security verification serves as a proxy for security validation. It works on the premise that if security controls are being applied, operated and managed to the standard required by policy, then one can presume they are providing security protection to the level desired by the owners of those policies. In the absence of a scientific security methodology, this is about as much as one can normally do. It leads to metrics that lend themselves to dashboard RAG (red/amber/green) diagrams and regular periodic tracking, but it doesn’t necessarily help you understand the things you need to understand to stay secure.
To move beyond security verification to security validation requires a conceptual understanding of how security controls provide security protection. This is where TBSE comes in. You can take an individual component of your security armoury, apply the TBSE paradigm to create a conceptual understanding of the dynamics going on there, and from that you can identify what you would need to measure or analyse to answer the risk management questions you want your metrics to answer.
For each security component you want to explore, you start by building a relatively simple stochastic model for how that activity or solution provides security protection. That will show you what data you need to gather upstream of that component, what data downstream, and what data you need about the implementation or operation of the component itself. You will gather that data to whatever level of precision you can achieve easily, and that will give you an initial understanding of the dynamics at work for that component.
That might be sufficiently complete to show you what you need to change about your security design to make a worthwhile improvement in the amount of security it provides. And if not, then it will show you where you need to add more detail (to the model or the data you gather) so you can answer the question at hand. As with any type of modelling, the general rule of thumb is the more detailed you can make the model and input data, the more detail you will get (and the more confidence you can have) in the results.
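As a hedged sketch of what such a metric might look like once the model has told you what to measure (the phishing scenario, the observation counts, and the use of a simple normal-approximation interval are all my own invented illustration; a real analysis would use the distributions from your own stochastic model):

```python
import math

def block_rate_estimate(blocked, total, z=1.96):
    """Estimate a control's block probability from observed data,
    with a normal-approximation 95% confidence interval."""
    p = blocked / total
    half_width = z * math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Invented observations: of 1,200 simulated phishing emails,
# 1,140 were blocked or reported before anyone clicked.
p, lo, hi = block_rate_estimate(blocked=1_140, total=1_200)
print(f"estimated protection: {p:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

Unlike a verification metric (“the phishing filter is switched on everywhere”), this kind of figure is a validation metric: an estimate, with stated uncertainty, of how much protection the control is actually providing.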
You can apply TBSE one step at a time. You take the analysis as far as you need to get the answers you want, you set up the measurement and reporting processes you need for your metric, and you start to operate the metric. When ready, you move on to the next aspect of your security armoury you want to improve. At your own speed, you develop a new set of metrics that lend themselves to dashboard RAG diagrams and regular periodic tracking, except that this time they do help you understand the things you need to understand so you can stay secure.
I have helped clients develop security metrics across a range of security areas, including: resistance to malware; resistance to phishing; staff security awareness; end-user password strength; perimeter resistance to intrusion attacks; software vulnerability removal; DDoS protection; and others.
Exciting opportunities await those who treat cyber security as a science. If you would like to talk about any aspect of this in more detail, please get in touch. Email me at email@example.com or call 07734 311567 (+44 7734 311567).