What does ‘Good’ look like and am I doing it?

For years, the answer to “What does ‘Good’ look like?” has been either your preferred statement of Best Practices or a national or international standard – in each case a one-size-fits-all, comprehensive set of canonical controls. We have long known that this was not a good answer, but we haven’t had an alternative that was as simple and easy to state. Nobody likes “It depends” for an answer.

On top of that, the answer to “Am I doing it?” involves assessing compliance against that chosen set of controls. That usually takes a huge amount of effort and can be very complex. You will always get push-back.

I have three responses to this problem:

  • Like it or not, “It depends” is the correct answer to “What does ‘Good’ look like?” However, see that as a positive opportunity, not a bottomless pit of complexity.
  • Assessing “Am I doing it?” doesn’t have to be huge and complex. It will always involve assessment and measurement, but that can be made simple and efficient.
  • This is one of those situations where ‘Perfect’ can be the enemy of ‘Good’. Having a straightforward methodology that provides a meaningful indication of your security posture is far better than trying to build a comprehensive and detailed GRC system and failing.

Let me outline how. And, by the way, building this type of lightweight methodology and a spreadsheet-based tool to go with it is the most popular of all the tasks clients ask me to undertake at present.

  • Step 1: Build a list of the systems that are most important to you. Most organisations know which systems those are, and you can start off with a shortlist and add more systems as time goes by.
  • Step 2: Build a simple ‘Protection’ classification scheme. This is essentially a BIA-lite (a lightweight Business Impact Analysis) that shows, for each system you classify, what level and type of protection it needs. Most clients use C, I and A (Confidentiality, Integrity and Availability) as the security attributes that need protecting (keeping it simple), and have either three or four levels in the hierarchy. Sometimes P (for Personal Data) gets added as a fourth security attribute.
  • Step 3: Build a simple ‘Exposure’ classification scheme. With this you classify each system in terms of how exposed it is to threats. Usually this has two dimensions (one for internal threats, one for external threats) and three levels in the hierarchy (H, M, L). The first sketch after this list shows one way to record both classification schemes.
  • Step 4: Build a structured controls catalogue. This is your preferred comprehensive set of canonical controls, but structured in a way that reflects your organisation’s business and technology model. Often it is structured into something that looks like: Controls applied at the organisational level; Controls applied at the infrastructure level; Controls applied at the application level; Controls applied at the Data Centre/Cloud level. Typically, you can expect around seven major groups of controls like this in your controls catalogue.
  • Step 5: Map each control in the catalogue to your two classification schemes. That way, a system’s Protection and Exposure classifications determine which controls in the catalogue apply to it and which don’t. Systems with low classifications attract only a small number of basic controls. Those with high classifications attract a larger number of controls, with the additional controls reflecting the system’s needs. The second sketch after this list shows one way to implement this mapping.
  • Step 6: Map each control in the catalogue to the Top Threats it protects against. Most organisations these days have a list of their Top Security Threats. Where they sometimes go wrong is to make those threats too generic. For example, “External threats against system availability” is too broad to be truly useful; “Ransomware” is more specific and will probably be of more interest to the Leadership Team. Do any further mappings that interest you; for example, you could map controls against your ‘Defence in Depth’ categories. The third sketch after this list shows this threat mapping as a simple lookup table.
  • Step 7: For each of the non-system-level controls in your catalogue, do the required assessment. Assess the organisation as a whole against the organisation-level controls. Assess each in-house data centre, each WAN, and each cloud provider you use against the relevant level of controls. You will need to do each of these assessments only once. Each system you assess that runs in an assessed data centre or cloud environment will ‘inherit’ those assessment results. You won’t need to repeat these shared component assessments each time. And remember, ‘Perfect’ is the enemy of ‘Good’. These assessments are not meant to be as detailed as audits, otherwise the whole methodology will break down under its own weight.
  • Step 8: For each of your important exposed systems, assess it against the system-level controls that are left. Combine its results with the results for the shared components it uses and plot them on a 2D Heat Map to show security posture against risk appetite. You can show overall results, results per threat, results per security attribute (C, I and A), results for each category in your ‘Defence in Depth’ architecture – whichever results you want to show. You will see immediately which systems are within appetite and which are not, from each of the perspectives you show. The fourth sketch after this list illustrates this inheritance and scoring.
  • Step 9: For each system that is outside appetite or doesn’t have a good balance of controls, build a mitigation plan and use the tool to see in advance what that system’s security posture would be if that plan were applied. Adjust the mitigation plan until it brings the system into appetite. The final sketch after this list shows this kind of what-if preview.

Rinse and repeat the last two steps.
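
To make some of these steps concrete, here are a few minimal sketches in Python. I use code here only because it is easier to show on a page than a spreadsheet; every system name, control ID, level and score in the sketches is an illustrative assumption, not part of the methodology itself. The first sketch records the Step 2 and Step 3 classification schemes:

```python
# Step 2: Protection classification -- one level per security attribute.
# The attributes (C, I, A) and the three levels here are illustrative.
protection = {
    "Payments": {"C": "High", "I": "High", "A": "Medium"},
    "Intranet": {"C": "Low",  "I": "Low",  "A": "Low"},
}

# Step 3: Exposure classification -- one internal and one external
# dimension, each rated H/M/L. The system names are invented examples.
exposure = {
    "Payments": {"internal": "M", "external": "H"},
    "Intranet": {"internal": "M", "external": "L"},
}
```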
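
The Step 5 mapping can then be expressed as a simple applicability rule: each control names its Step 4 group and the lowest Protection and Exposure classifications that trigger it. A sketch, with invented groups, control IDs and thresholds:

```python
# Ordinal ranking of levels so classifications can be compared to thresholds.
RANK = {"Low": 0, "L": 0, "Medium": 1, "M": 1, "High": 2, "H": 2}

# Step 4/5: each control carries its group and its applicability thresholds.
catalogue = [
    {"id": "ORG-01", "group": "organisational",
     "name": "Security policy", "min_protection": "Low", "min_exposure": "L"},
    {"id": "INF-03", "group": "infrastructure",
     "name": "Network segregation", "min_protection": "Medium", "min_exposure": "M"},
    {"id": "APP-12", "group": "application",
     "name": "Independent penetration test", "min_protection": "High", "min_exposure": "H"},
]

def applicable_controls(protection, exposure):
    """Controls a system attracts, given its two classifications (Step 5)."""
    p = max(RANK[level] for level in protection.values())
    e = max(RANK[level] for level in exposure.values())
    return [c for c in catalogue
            if p >= RANK[c["min_protection"]] and e >= RANK[c["min_exposure"]]]

# A highly classified, highly exposed system attracts all three controls;
# a Low/Low system would attract only the baseline ORG-01.
ids = [c["id"] for c in applicable_controls(
    {"C": "High", "I": "High", "A": "Medium"},
    {"internal": "M", "external": "H"})]
print(ids)  # ['ORG-01', 'INF-03', 'APP-12']
```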
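
The Step 6 mapping is just another lookup table, and it is what makes the per-threat views in Step 8 possible. Threat names and control IDs below are invented:

```python
# Step 6: map each control to the Top Threats it protects against.
control_threats = {
    "APP-07": ["Ransomware", "Credential theft"],
    "INF-03": ["Ransomware"],
    "APP-12": ["Credential theft"],
}

def controls_for_threat(threat):
    """Invert the mapping: which controls defend against a given threat?"""
    return [cid for cid, threats in control_threats.items() if threat in threats]

print(controls_for_threat("Ransomware"))  # ['APP-07', 'INF-03']
```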
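
Steps 7 and 8 hinge on assessing shared components once and letting each system inherit those results. The sketch below assumes a very simple scoring convention (the fraction of applicable controls that are fully effective) and a per-system appetite threshold; neither is prescribed, and a real tool would break the results down per threat and per security attribute as well:

```python
# Step 7: one-off assessments of the shared components, keyed by control ID.
# 1.0 means a control is fully effective, 0.0 that it is absent; all values
# here are invented.
shared_results = {
    "DC-London": {"ORG-01": 1.0, "INF-03": 0.8},
    "Cloud-AWS": {"ORG-01": 1.0, "INF-03": 1.0},
}

def posture(system_results, hosting, applicable_ids):
    """Step 8: combine a system's own results with the ones it inherits."""
    combined = dict(shared_results[hosting])  # inherited, never re-assessed
    combined.update(system_results)           # this system's own assessment
    return sum(combined.get(c, 0.0) for c in applicable_ids) / len(applicable_ids)

# Appetite as a simple per-system threshold; anything below it is out of appetite.
appetite = {"Payments": 0.9}
score = posture({"APP-07": 1.0, "APP-12": 0.5}, "Cloud-AWS",
                ["ORG-01", "INF-03", "APP-07", "APP-12"])
status = "within" if score >= appetite["Payments"] else "OUTSIDE"
print(f"Payments posture {score:.0%} - {status} appetite")
```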
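
The Step 9 what-if analysis then falls out naturally: model a mitigation plan as the control results it would change, and preview the resulting posture before committing to the work. The plan below is hypothetical and reuses the scoring convention from the previous sketch:

```python
def score(results, applicable_ids):
    """Fraction of applicable controls in place (same convention as above)."""
    return sum(results.get(c, 0.0) for c in applicable_ids) / len(applicable_ids)

def what_if(current_results, plan):
    """Step 9: project the results a mitigation plan would produce."""
    projected = dict(current_results)
    projected.update(plan)
    return projected

applicable = ["ORG-01", "INF-03", "APP-07", "APP-12"]
current = {"ORG-01": 1.0, "INF-03": 1.0, "APP-07": 1.0, "APP-12": 0.5}

# Hypothetical plan: finish implementing the partially effective control.
plan = {"APP-12": 1.0}
print(f"before: {score(current, applicable):.0%}, "
      f"after: {score(what_if(current, plan), applicable):.0%}")
# Adjust the plan until the projected posture comes back within appetite.
```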

As I mentioned, building this type of lightweight methodology and a spreadsheet-based tool to go with it is the most popular of all the tasks clients ask me to undertake at present. If you would like to find out more about this, please get in touch. Email me at john.leach@jlis.co.uk or call 07734 311567 (+44 7734 311567).

© Copyright 2020 JLIS Ltd